The very best educational and news content for Stable Diffusion, AI, LLMs, SDXL, LoRA, DreamBooth AI Lectures, Voice Cloning, DeepFake, Tech, Tips
Featured
Mind-Blowing Deepfake Tutorial: Turn Anyone into Your Fav Movie Star! Better than Roop & Face Fusion
#Rope is the newest, easiest-to-use, and most advanced open-source 1-Click DeepFake application, published just yesterday. In this tutorial I will show you how to use the Rope Pearl DeepFake application. Rope is far better than Roop, #Roop Unleashed, and #FaceFusion. It supports multi-face Face Swapping and makes amazing DeepFake videos easily with 1 click: select a video, select faces, and generate your DeepFake 4K ultra-HD video.
1-Click Rope Installers Scripts ⤵️
https://www.patreon.com/posts/most-advanced-1-105123768
How To Install Requirements Tutorial (Python, Git, FFmpeg, CUDA, C++ Tools) ⤵️
https://youtu.be/-NjNy7afOQ0
Official Rope GitHub Repository ⤵️
https://github.com/Hillobar/Rope
Rope's Author Donation Link - Support Him For Better APP ⤵️
https://www.paypal.com/donate/?hosted_button_id=Y5SB9LSXFGRF2
0:00 Example Deepfake video from movie Inglourious Basterds 2009
0:21 Introduction to the easiest-to-use and most advanced 1-Click Deepfake application, Rope Pearl
0:53 How to download 1-Click installer scripts and start installing Rope Pearl
1:34 What are the requirements of Deepfake app Rope Pearl and how to check and install them
1:44 How to check and verify your Python, Git, CUDA and FFmpeg installations
3:42 Example images and a test video that I prepared and am sharing
4:10 How to start Rope Deepfake application after the installation has been completed
4:27 How to use Rope Pearl Deepfake application - first select videos and images folders
5:00 How to refresh and re-populate selected videos and faces folders
5:26 How to set the outputs folder where the Deepfake videos and images will be saved
5:45 How Rope Pearl, the most advanced Deepfake application, works - select the input video and target faces
6:34 How to make swapped, deepfaked faces HD from low resolution
7:01 How to further improve face quality with face restoration AI models automatically
7:49 How to make additional changes to fix artifacts and mistakes in the Deepfaked video
8:27 Support link for the developer of Rope
8:37 How to test and see each change's effect immediately
9:00 The tests and configurations I have pre-prepared for you
9:19 How to use Face Parser to fix the mouth movement
9:53 How to reduce VRAM usage and increase processing speed with number of threads
10:13 How to export and save Deepfake applied new video
12:12 Where the output / exported video will be saved
12:33 Important face detection models - RetinaFace, YOLO and SCRFD - try them if face detection fails
13:34 How to understand when the Deepfake video processing is completed
13:59 Properties of the generated Deepfake video, e.g. resolution, bitrate
14:24 How to Deep Fake / Face Swap images not videos
15:30 How to save deep faked images
15:43 What is auto swap and how to use it
16:10 How to find the best working face before starting to process the video
17:13 How to automatically install and use Rope DeepFake AI on a Linux system
Deepfake Tutorial: Rope-Pearl Application for Face Swapping in Videos and Images
Installation
Download the installer files from the provided link in the video description
Extract the files to your desired installation location (e.g., rope_ai folder)
Ensure you have the necessary prerequisites installed:
Python 3.10.11
Git
FFmpeg
CUDA
Run the install.bat file to start the installation process
The installer will download the necessary models and set up a virtual environment
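The prerequisite checks in the installation steps above can also be automated. Below is a minimal, illustrative Python sketch (not part of the actual installer) that parses the version strings these tools print, e.g. from `python --version` or `git --version`:

```python
import re

def parse_tool_version(output):
    """Extract the first dotted version number from a tool's version
    output, e.g. 'Python 3.10.11' -> (3, 10, 11). Returns None if no
    version number is found."""
    m = re.search(r"(\d+)\.(\d+)(?:\.(\d+))?", output)
    # Missing patch component defaults to 0, e.g. '2.39' -> (2, 39, 0)
    return tuple(int(g) for g in m.groups("0")) if m else None

# The tutorial requires Python 3.10.11; compare tuples to enforce it:
assert parse_tool_version("Python 3.10.11") == (3, 10, 11)
assert parse_tool_version("git version 2.39.2") >= (2, 0, 0)
```

In practice you would feed this the captured output of each `--version` command and fail early with a clear message if a requirement is missing.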
Using Rope-Pearl for Video Face Swapping
Open Rope-Pearl by double-clicking the windows_start.bat file
Select the videos folder containing your input video
Select the faces folder containing the face images you want to use for swapping
Click "Start Rope" to refresh the interface with the latest files
Select the output folder where the processed video will be saved
Select the video you want to modify
Click "Find Faces" to detect faces in the video
Select the face you want to replace and the face you want to replace it with
Adjust the Swapper Resolution to enhance the quality (up to 512 pixels)
Enable the restorer and choose GPEN512 for best results
Fine-tune the blend ratio to make the face swap look more natural
Enable strength and adjust size border distance to fix errors
Use the Occluder and Face Parser to improve mouth movements and fix other issues
Set the number of threads based on your GPU's capabilities
Choose the output video quality
Click the record icon and then play to start processing the video with the face swap
Using Rope-Pearl for Image Face Swapping
Switch to the image tab in Rope-Pearl
Select your source image and click "Find Faces"
Select the face you want to replace and the target face
Enable "Swap Faces" and adjust settings as needed (Swapper Resolution, Restorer, etc.)
Use the "Auto Swap" feature to automatically apply the selected face to new images
Click "Save Image" to save the face-swapped image to the output folder
Additional Tips and Information
Try different face detection models (RetinaFace, YOLOv8, SCRFD)
V-Express: 1-Click AI Avatar Talking Heads Video Animation Generator - D-ID Alike - Free Open Source
YouTube Tutorial : https://youtu.be/xLqDTVWUSec
Ever wished your static images could talk like magic? Meet V-Express, the groundbreaking open-source and free tool that breathes life into your photos! Whether you have an audio clip or a video, V-Express animates your images to create stunning talking avatars. Just like the acclaimed D-ID Avatar, Wav2Lip, and Avatarify, V-Express turns your still photos into dynamic, speaking personas, but with a twist—it's completely open-source and free to use! With seamless audio integration and the ability to mimic video expressions, V-Express offers an unparalleled experience without any cost or restrictions. Experience the future of digital avatars today—let's dive into how you can get started with V-Express and watch your images come alive!
1-Click V-Express Installers Scripts ⤵️
https://www.patreon.com/posts/105251204
Requirements Step by Step Tutorial ⤵️
https://youtu.be/-NjNy7afOQ0
Massed Compute Register and Login ⤵️
https://vm.massedcompute.com/signup?linkId=lp_034338&sourceId=secourses&tenantId=massed-compute
Official V-Express GitHub Repository ⤵️
https://github.com/tencent-ailab/V-Express
SECourses Discord Channel to Get Full Support ⤵️
https://discord.com/servers/software-engineering-courses-secourses-772774097734074388
0:00 Introduction to the V-Express with demo showcase
1:23 The features of the V-Express talking avatars app
2:02 How to download and install V-Express on Windows
3:29 Which requirements are necessary and how to install and verify them
4:56 How to uninstall apps installed by my scripts
5:35 How to save installation logs to send me in case of any error
6:05 How to start using V-Express Gradio app after installation and the settings of the app
8:14 Explanation of auto cropping
9:05 Generating the first example video, how much VRAM it uses, and how long it takes
10:57 Where generated videos are saved
Transforming Static Images into Dynamic Videos: A Comprehensive Guide
In the evolving landscape of digital content creation, transforming static images into dynamic, talking avatars is no longer a complex task reserved for professionals. With advancements in AI technology, applications like Tencent AI Lab's V-Express, D-ID, and other commercial tools have made this process accessible to everyone. This article delves into the functionalities of these applications, focusing on how they can be utilized to create engaging video content from static images, thereby enhancing your content's SEO and overall impact.
Introduction to Tencent AI Lab V-Express
Tencent AI Lab V-Express is an innovative open-source application designed to convert static images into talking avatars. This tool supports both audio and video inputs, making it versatile for various content creation needs. Here's a step-by-step guide on how to install and use V-Express on Windows.
Installation Guide
Preparation: Download the V-Express zip files and demo images from the provided links. Avoid using space characters in folder names to prevent path handling issues.
Extraction: Extract the downloaded zip files into your chosen directory.
Installation: Double-click the windows_install.bat file. This will install the application into a virtual environment, ensuring it doesn’t conflict with other applications.
Configuration: Verify the installation of Python 3.10.11, Git, FFmpeg, CUDA 11.8, and C++ tools by running specific commands in CMD.
Execution: Once installed, double-click the windows_start.bat file to start the application.
Using V-Express
Upload: Upload a static image and an audio or video file.
Settings: Configure settings like retarget strategy, video width, and height, VRAM usage, and face focus expansion.
Generation: Click generate to create the video. The application will save the output in the specified folder.
Exploring D-ID and Other Commercial Apps
D-ID
D-ID is a commercial application known for its advanced capabilities in transforming static images into videos. It offers features like:
Realistic Animations: Creates highly realistic talking avatars.
Customization: Allows users to customize facial expressions and movements.
Ease of Use: User-friendly interface suitable for non-technical users.
Other Notable Apps
Synthesia: Specializes in creating AI-generated videos with human-like avatars. It’s widely used for corporate training and marketing.
Reallusion iClone: Offers robust tools for 3D animation and character creation, making it ideal for professional animators.
DeepBrain: Focuses on converting text to speech with animated avatars, perfect for educational content.
Testing Stable Diffusion Inference Performance with Latest NVIDIA Driver including TensorRT ONNX
1-Click fresh Automatic1111 SD Web UI Installer Script with TensorRT and more ⤵️
https://www.patreon.com/posts/86307255
🚀 UNLOCK INSANE SPEED BOOSTS with NVIDIA's Latest Driver Update or not? 🚀 Are you ready to turbocharge your AI performance? Watch me compare the brand-new NVIDIA 555 driver against the older 552 driver on an RTX 3090 TI for #StableDiffusion. Discover how TensorRT and ONNX models can skyrocket your speed! Don't miss out on these game-changing results!
0:00 Introduction to the NVIDIA newest driver update performance boost claims
0:25 What I am going to test and compare in this video
1:11 How to install latest version of Automatic1111 Web UI
1:40 The very best sampler of Automatic1111 for Stable Diffusion image generation / inference
1:57 Automatic1111 SD Web UI default installation versions
2:12 RTX 3090 TI image generation / inference speed for SDXL model with default Automatic1111 SD Web UI installation
2:22 How to see your NVIDIA driver version and many more info with nvitop library
2:40 Default installation speed for NVIDIA 551.23 driver
2:53 How to update Automatic1111 SD Web UI to the latest Torch and xFormers
3:05 Which CPU and RAM were used to conduct these speed tests - CPU-Z results
3:54 nvitop status while generating an image with Stable Diffusion XL - SDXL on Automatic1111 Web UI
4:10 The new generation speed after updating Torch (2.3.0) and xFormers (0.0.26) to the latest version
4:20 How to install TensorRT extension on Automatic1111 SD Web UI
5:28 How to generate a TensorRT ONNX model for huge speed up during image generation / inference
6:39 How to enable SD Unet selection to be able to use TensorRT generated model
7:13 TensorRT pros and cons
7:38 TensorRT image generation / inference speed results
8:09 How to download and install the latest NVIDIA driver properly and cleanly on Windows
9:03 Repeating all the testing again on the newest NVIDIA driver (555.85)
10:06 Comparison of other optimizations such as SDP attention or doggettx
10:35 Conclusion of the tutorial
NVIDIA's Latest Driver: Does It Really Deliver?
In this video, we dive deep into NVIDIA's newest driver update, comparing the performance of driver versions 552 and 555 on an RTX 3090 TI running Windows 10. We'll explore the claims of speed improvements, particularly with #ONNX runtime and TensorRT integration, using the popular Automatic1111 Web UI.
What You'll Learn:
Driver Comparison: Direct performance comparison between NVIDIA drivers 552 and 555.
Setup and Installation: Step-by-step guide on setting up a fresh #Automatic1111 Web UI installation, including the latest versions of Torch and xFormers.
ONNX and TensorRT Models: Detailed testing of default and TensorRT-generated models to measure speed differences.
Hardware Specifications: Insights into the hardware used for testing, including CPU and memory configurations.
Testing Procedure:
Initial Setup:
Fresh installation using a custom installer script which includes necessary models and styles.
Initial speed test with default settings and configurations.
Driver 552 Performance:
Speed testing on driver 552 with default models and configurations.
Detailed performance metrics and image generation speed analysis.
Upgrading to Latest Torch and xFormers:
Updating to the latest versions of Torch (2.3.0) and xFormers (0.0.26).
Performance testing after updates and comparison with initial setup.
TensorRT Installation and Testing:
Installing TensorRT extension and generating TensorRT models.
Overcoming common installation errors and optimizations.
Speed testing with TensorRT models and analysis of performance improvements.
Upgrading to Driver 555:
Step-by-step guide on downloading and installing NVIDIA driver 555.
Performance comparison between driver 552 and 555.
Analyzing the impact on speed and efficiency.
Results and Conclusions:
Performance Metrics: Detailed analysis of speed improvements (or lack thereof) with the newest NVIDIA driver.
TensorRT Benefits: How TensorRT models significantly boost performance.
Driver Update Impact: Understanding the real-world impact of updating to the latest NVIDIA driver.
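When repeating speed tests like these across drivers and builds, consistent wall-clock timing matters. Below is a generic, illustrative timing helper - not taken from the video, which reads the it/s figure from the Web UI itself - that you could wrap around any generation call:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label, results):
    """Store the wall-clock seconds of the enclosed block in results[label]."""
    start = time.perf_counter()
    yield
    results[label] = time.perf_counter() - start

results = {}
with timed("warmup", results):
    sum(range(100_000))  # stand-in for one image-generation call
print(f"warmup took {results['warmup']:.4f}s")
```

Running a warmup pass first, then timing several repeated runs and averaging, is what makes driver-to-driver comparisons like 552 vs 555 meaningful.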
How Good is RTX 3060 for ML AI Deep Learning Tasks and Comparison With GTX 1050 Ti and i7 10700F CPU
If you are wondering which graphics card to purchase to run recent Artificial Intelligence (#AI), Machine Learning (#ML), and Deep Learning models on your GPU with CUDA, then this is the right video for you.
I have purchased the #RTX3060, the cheapest GPU that nonetheless offers a large 12 GB of VRAM.
In this video I am going to compare the performance of Gainward RTX 3060 Ghost 12 GB GPU with MSI GTX 1050 Ti OC 4 GB GPU model and with my CPU which is Core i7 10700F running at 4.59 GHz.
For performance tests, I will use Whisper, OpenAI's newest AI model release.
So this is a video of GTX 1050 Ti vs Core i7 10700F vs RTX 3060 in terms of Machine Learning applications performance.
Whisper is used for transcribing speech into text in 99 languages.
You can check out my tutorial educational video regarding Whisper here: https://youtu.be/msj3wuYf3d8
Also, in this video, I am doing a box opening of Gainward RTX 3060 Ghost. Moreover, I do a physical comparison of RTX 3060 with GTX 1050 Ti.
Furthermore, I use an AC power meter plug (digital wattmeter - watt energy meter) to calculate GTX 1050 Ti, RTX 3060, and Core i7 10700F power consumption.
I am very satisfied with the performance of the RTX 3060. Moreover, it is even able to run Whisper's large model, the best model released.
Please join Our Discord server for asking questions and discussions: https://discord.gg/rfttctFewW
Please follow us on Twitter: https://twitter.com/SECourses
Please follow us on Facebook: https://www.facebook.com/OfficialSECourses
If you are interested in programming our playlists will teach you how to program and code from scratch: https://www.youtube.com/c/SECourses/playlists
[1] Introduction to Programming Full Course with C# playlist
[2] Advanced Programming with C# Full Course Playlist
[3] Object Oriented Programming Full Course with C# playlist
[4] Asp.NET Core V5 - MVC Pattern - Bootstrap V5 - Responsive Web Programming with C# Full Course Playlist
[5] Artificial Intelligence (AI) and Machine Learning (ML) Full Course with C# Examples playlist
[6] Software Engineering Full Course playlist
[7] Security of Information Systems Full Course playlist
Thumbnail source : https://www.freepik.com/free-vector/isometric-computer-hardware-parts-set-with-monitor-system-unit-electronic-components-details-isolated_9647137.htm
How to do Free Speech-to-Text Transcription Better Than Google Premium API with OpenAI Whisper Model
If you want to transcribe your videos and audio into text for free but with high quality, you have come to the correct video.
In this tutorial video, I will guide you on how to use #OpenAI #Whisper model. I will show you how to install and run Open AI's Whisper from scratch. I will demonstrate to you how to convert audio/speech into text.
Whisper is a general-purpose speech recognition model released for free by OpenAI. I claim that Whisper is the best Speech-to-Text model (Natural Language Processing - #NLP) available for public use, beating even premium paid services such as Amazon Web Services, Microsoft Azure Cloud Platform, or the Google Cloud API. And Whisper is free to use.
I will show you how to install the necessary Python code and the dependent libraries. I will show you how to download a video from YouTube with YT-DLP, how to cut certain parts of the video with LosslessCut, and how to extract the audio of a video with FFMPEG. I will show you how to do a transcription of a video or a sound. I will show you how to generate subtitles for any video. Finally, I will show you how to generate translated transcription and subtitles of any language video.
With the translation feature of the Whisper model, you can watch a video in any language (Whisper supports 99 languages) with English subtitles. Let's say you can't find English subtitles for your favorite video in German or Japanese or Arabic. It is not a problem. Just follow my tutorial and generate English-translated subtitles.
Actually, to be precise, Whisper is able to transcribe speech to text in all the following languages, and therefore translate all of these languages into English:
{af,am,ar,as,az,ba,be,bg,bn,bo,br,bs,ca,cs,cy,da,de,el,en,es,et,eu,fa,fi,fo,fr,gl,gu,ha,haw,hi,hr,ht,hu,hy,id,is,it,iw,ja,jw,ka,kk,km,kn,ko,la,lb,ln,lo,lt,lv,mg,mi,mk,ml,mn,mr,ms,mt,my,ne,nl,nn,no,oc,pa,pl,ps,pt,ro,ru,sa,sd,si,sk,sl,sn,so,sq,sr,su,sv,sw,ta,te,tg,th,tk,tl,tr,tt,uk,ur,uz,vi,yi,yo,zh,Afrikaans,Albanian,Amharic,Arabic,Armenian,Assamese,Azerbaijani,Bashkir,Basque,Belarusian,Bengali,Bosnian,Breton,Bulgarian,Burmese,Castilian,Catalan,Chinese,Croatian,Czech,Danish,Dutch,English,Estonian,Faroese,Finnish,Flemish,French,Galician,Georgian,German,Greek,Gujarati,Haitian,Haitian Creole,Hausa,Hawaiian,Hebrew,Hindi,Hungarian,Icelandic,Indonesian,Italian,Japanese,Javanese,Kannada,Kazakh,Khmer,Korean,Lao,Latin,Latvian,Letzeburgesch,Lingala,Lithuanian,Luxembourgish,Macedonian,Malagasy,Malay,Malayalam,Maltese,Maori,Marathi,Moldavian,Moldovan,Mongolian,Myanmar,Nepali,Norwegian,Nynorsk,Occitan,Panjabi,Pashto,Persian,Polish,Portuguese,Punjabi,Pushto,Romanian,Russian,Sanskrit,Serbian,Shona,Sindhi,Sinhala,Sinhalese,Slovak,Slovenian,Somali,Spanish,Sundanese,Swahili,Swedish,Tagalog,Tajik,Tamil,Tatar,Telugu,Thai,Tibetan,Turkish,Turkmen,Ukrainian,Urdu,Uzbek,Valencian,Vietnamese,Welsh,Yiddish,Yoruba}
The links and the commands I have shown in the video below:
Open AI Whisper : https://openai.com/blog/whisper/
Whisper Code : https://github.com/openai/whisper
Python : https://www.python.org/downloads/release/python-399/
Whisper install : pip install git+https://github.com/openai/whisper.git
How to install CUDA support for using GPU when doing transcription of audio :
First, delete existing Pytorch : pip3 uninstall torch
Then install Pytorch with CUDA support : pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
FFMPEG : https://github.com/BtbN/FFmpeg-Builds/releases
LosslessCut : https://github.com/mifi/lossless-cut/releases
How to extract sound of any video with FFMPEG : ffmpeg -i "test_video.webm" -q:a 0 -map a test_video.mp3
How to transcribe an English video : whisper "C:\speech to text\test_video.mp3" --language en --model base.en --device cpu --task transcribe
How to transcribe an English video with CUDA support : whisper "C:\speech to text\test_video.mp3" --language en --model base.en --device cuda --task transcribe
How to transcribe a Turkish video : whisper "C:\speech to text\test_video.mp3" --language tr --model base --device cpu --task transcribe (note: the multilingual base model, since base.en is English-only)
How to transcribe a Turkish video with translation : whisper "C:\speech to text\test.mp3" --language tr --model small --device cuda -o "C:\speech to text" --task translate
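The commands above can also be assembled programmatically, which is handy for batch transcription of many files. Here is a minimal Python sketch that only builds the same command lines shown above - file paths are hypothetical, and you would pass each list to subprocess.run to actually execute it:

```python
def ffmpeg_extract_cmd(video, audio):
    """FFmpeg command to extract a video's audio track to MP3
    (mirrors the ffmpeg command shown above)."""
    return ["ffmpeg", "-i", video, "-q:a", "0", "-map", "a", audio]

def whisper_cmd(audio, language="en", model="base.en",
                device="cpu", task="transcribe"):
    """Whisper CLI command with the flags used throughout this tutorial."""
    return ["whisper", audio, "--language", language,
            "--model", model, "--device", device, "--task", task]

# Hypothetical paths; run each list with subprocess.run(cmd, check=True):
print(ffmpeg_extract_cmd("test_video.webm", "test_video.mp3"))
print(whisper_cmd("test_video.mp3", device="cuda"))
```

Looping these helpers over every file in a folder gives you a simple batch pipeline: extract the audio, then transcribe or translate it.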
Our Discord for SECourses : https://discord.gg/rfttctFewW
If you are interested in programming but you lack experience and skills I suggest you watch our playlists: https://www.youtube.com/c/SECourses/playlists
[1] Introduction to Programming Full Course with C# playlist
[2] Advanced Programming with C# Full Course Playlist
[3] Object Oriented Programming Full Course with C# playlist
[4] Asp.NET Core V5 - MVC Pattern - Bootstrap V5 - Responsive Web Programming with C# Full Course Playlist
[5] Artificial Intelligence (AI) and Machine Learning (ML) Full Course with C# Examples playlist
[6] Software Engineering Full Course playlist
[7] Security of Information Systems Full Course playlist
40
views
1
comment
How to Setup Private IKEv2 / IPSec MSCHAPv2 VPN on Windows Server to Connect From Android 12+ Phone
✔️ If you are frustrated because #L2TP/PPTP is gone after the MIUI 13 update, or after your phone's / tablet's / device's Android version update, then this full guide tutorial is for you. If your phone, tablet, or mobile device's Android version is above 11 and you can't find the #PPTP VPN protocol to connect to your private #VPN, then don't worry: in this tutorial guide I explain the easiest way to set up your VPN so you can connect from your device.
✔️ Point-to-Point Tunneling Protocol (PPTP) was so easy to set up on Windows Server and you were able to connect your private VPN easily through your phone. But this is not possible anymore since PPTP is removed from the majority of phones and mobile devices.
✔️ So instead of setting up our private VPN through features of Windows Server, we are going to use open source #SoftEther VPN Project.
✔️ In this video I will show you thoroughly from scratch:
1: Generate a new virtual server on Hyper-V and install Windows Server 2019 evaluation version.
2: Install SoftEther VPN Project on Windows Server 2019.
3: Make the necessary configuration of SoftEther.
4: Generate and export the #OpenVPN configuration file.
5: Modify the OpenVPN configuration file which ends with the .ovpn extension.
6: Install the OpenVPN app through Google Play Market and import the .ovpn configuration.
7: Connect to your VPN from your phone. I demonstrate this with my Xiaomi Poco X3 Pro - Android 12
8: With this methodology, we don't have to deal with complex and very hard-to-set-up IKEv2 / #IPSec #MSCHAPv2, #IKEv2 / IPSec #PSK, and IKEv2 / IPSec #RSA VPN protocols. These are the only available protocols on my mobile device.
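Step 5 above edits the exported .ovpn configuration file. The lines you typically change are the protocol and the server address; the fragment below is purely illustrative with hypothetical values - your exported file's hostname and port will differ:

```
# Point the client at your server's public IP or DDNS name
# (replace with the address of your own SoftEther server).
proto udp
remote vpn.example.com 1194
```

Both `#` and `;` mark comments in OpenVPN config files, so you can keep the original exported lines commented out for reference while testing your edits.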
0:00 Introduction
1:17 New Virtual Machine
3:28 Setting up Windows Server 2019
7:20 SoftEther Download & Installation
11:56 How to Setup OpenVPN on the Phone and Use VPN
✔️ The reason I made this video is that this setup was so hard and there wasn't any up-to-date guide/tutorial for setting up your private VPN and connecting from your mobile phone.
✔️ The subtitles of the video are manually corrected, so please watch with subtitles.
✔️ Please join Our Discord server for asking questions and have discussions: 🔗 https://discord.gg/rfttctFewW
✔️ Please follow us on Twitter: 🔗 https://twitter.com/SECourses
✔️ Please follow us on Facebook: 🔗 https://www.facebook.com/OfficialSECourses
✔️ If you are interested in programming our playlists will teach you how to program and code from scratch: 🔗 https://www.youtube.com/c/SECourses/playlists
1️⃣ Introduction to Programming Full Course with C# playlist ⭐⭐⭐⭐⭐
2️⃣ Advanced Programming with C# Full Course Playlist ⭐⭐⭐⭐⭐
3️⃣ Object Oriented Programming Full Course with C# playlist ⭐⭐⭐⭐⭐
4️⃣ Asp NETCore V5 - MVC Pattern - Bootstrap V5 - Responsive Web Programming with C# Full Course Playlist ⭐⭐⭐⭐⭐
5️⃣ Artificial Intelligence (AI) and Machine Learning (ML) Full Course with C# Examples playlist ⭐⭐⭐⭐⭐
6️⃣ Software Engineering Full Course playlist ⭐⭐⭐⭐⭐
7️⃣ Security of Information Systems Full Course playlist ⭐⭐⭐⭐⭐
Thumbnail : freepik : Gradient vpn illustration