Machine Learning

AI and its applications have become an integral part of our daily lives, amazing us every day with new capabilities and products such as autonomous cars and smart homes. As has been aptly said, "AI is the new electricity", and its contributions are writing a new chapter of human history. Our growing dependence on AI, however, can easily backfire if AI fails. Our group focuses on two security aspects of ML: Adversarial Machine Learning and the detection of fake media created by artificial intelligence (deepfakes).

Adversarial Machine Learning

Despite the state-of-the-art performance of neural networks on image classification tasks, deep neural network-based image classifiers can be fooled by adding a well-crafted perturbation to the input, such that the original image and the perturbed image look visually indistinguishable. A sample crafted by adding such a perturbation to the original input, with the aim of fooling the classifier, is known as an adversarial sample, and the process of using adversarial samples to fool a machine learning-based system is known as an adversarial attack. There are several adversarial attacks, classified by the attacker's goal, and they pose a huge threat to machine learning models. Therefore, the aim here is not only to build a model but to protect it as well. We also explore vision-based machine learning models from an adversarial perspective, so that studying attacks and defenses sheds light on the feature space and the types of attack a model may face.
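As an illustration of how an adversarial sample can be crafted, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) applied to a toy logistic-regression classifier. The model, weights, and epsilon are hypothetical placeholders for illustration only, not part of our actual work:

```python
import numpy as np

def fgsm_attack(w, b, x, y, epsilon=0.1):
    """FGSM against a toy logistic-regression classifier.

    The gradient of the logistic loss with respect to the input x is
    (sigmoid(w.x + b) - y) * w; FGSM takes one small step along the
    sign of that gradient, which maximally increases the loss while
    keeping the perturbation visually negligible."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))   # model's prediction
    grad_x = (p - y) * w                            # loss gradient w.r.t. x
    # Perturb, then clip back to the valid input range [0, 1].
    return np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

# Hypothetical classifier and clean input.
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.5, 0.5]), 1
x_adv = fgsm_attack(w, b, x, y, epsilon=0.1)
```

Here the adversarial input differs from the original by at most 0.1 per feature, yet the classifier's score for the true label drops, which is exactly the indistinguishable-but-misleading behavior described above.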

Deepfake Detection

Notable fakes that are too real to be recognized as false are now being produced, thanks to recent advancements in machine learning and Generative Adversarial Networks (GANs). Deepfake, a deep learning-based technology, is one of the most popular methods for creating fake media capable of readily misleading viewers. Deepfakes distinguish themselves from hand-crafted video manipulation techniques by producing realistic results using deep learning models. Unlike conventional image manipulation software, deepfake videos are incredibly convincing and much simpler to create. Anyone can use deepfakes to spread hoaxes or blackmail somebody with fabricated videos, making it a lethal technology. Designing a model capable of detecting deepfakes is therefore an urgent necessity.

Power Side-Channel Attack

Electromagnetic (EM) side-channel analysis is another side channel that can be used to launch a power or fault attack on a chip. Using an EM probe, the circuit's power consumption can be tapped, from which a differential power analysis (DPA) attack can be launched. Along the same lines, dedicated EM probes can be used to induce precise faults into the circuit and launch a fault analysis attack. Our team focuses on both EM fault analysis and power analysis of IoT devices and smart cards.
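As a toy illustration of how tapped power consumption can be turned into key recovery, the following sketch simulates Hamming-weight leakage of a 4-bit S-box (PRESENT's, used here purely as an example target) and correlates it against every key guess, in the style of a correlation power analysis attack. The traces and noise model are simulated assumptions, not real measurements:

```python
import numpy as np

# 4-bit S-box of the PRESENT cipher (illustrative target).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def hw(v):
    """Hamming weight: number of set bits, a common power-leakage model."""
    return bin(v).count("1")

def cpa_recover_key(plaintexts, traces):
    """For each key guess, predict the Hamming-weight leakage of the
    S-box output and correlate it with the measured traces; the guess
    with the highest correlation is taken as the key."""
    best, best_corr = None, -1.0
    for guess in range(16):
        model = np.array([hw(SBOX[p ^ guess]) for p in plaintexts], float)
        corr = abs(np.corrcoef(model, traces)[0, 1])
        if corr > best_corr:
            best, best_corr = guess, corr
    return best

# Simulated measurements: leakage of the true key plus Gaussian noise.
rng = np.random.default_rng(0)
true_key = 0xA
pts = rng.integers(0, 16, 200)
traces = np.array([hw(SBOX[p ^ true_key]) for p in pts], float)
traces += rng.normal(0, 0.5, 200)
```

With a couple of hundred noisy "traces", the correct nibble of the key stands out by correlation alone, which is the core idea behind DPA/CPA on real hardware.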

Fault-based Cryptanalysis

Fault-based cryptanalysis is a potent threat to modern-day crypto primitives. A cipher that is unbreakable by classical cryptanalysis techniques can be broken in a fraction of a second using a few fault injections during the cipher's execution, with catastrophic effects on data security, privacy, and integrity. My PhD work focused on fault-based cryptanalysis of different block cipher structures. I first studied the strength of different cipher structures against fault attacks by developing state-of-the-art attacks, and subsequently analyzed the theoretical limits of those attacks. The research output helps designers select the strongest cipher and implement it in a way that violates the attack conditions, thus protecting the crypto-system. I have contributed to this area with two journal papers, seven conference papers, and a book chapter. The papers are highly cited by the research community; one of my papers on fault-based cryptanalysis of AES has received more than one hundred citations.


Secure Design for Testability

After manufacturing, a chip has to be tested for possible manufacturing-related faults. Scan-based design-for-testability (DfT) is the most widely used test infrastructure for enhancing access to internal state and, thus, testability. However, for security-critical chips, the same test infrastructure can be misused to leak secret information through the chip's test responses. Recently, we identified a list of vulnerabilities in scan obfuscation-based defense mechanisms and showed that most of them are vulnerable to differential scan attacks. We have also proposed a new cost-effective defense mechanism that leverages the S-box Hamming-weight model.
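A minimal sketch of the differential scan-attack idea described above: the attacker applies a plaintext and a bit-flipped variant, scans out the round register through the test infrastructure, and keeps only the key guesses consistent with the observed Hamming distance. The 4-bit S-box, secret key, and chosen plaintexts are illustrative assumptions, not our actual attack:

```python
# 4-bit S-box of the PRESENT cipher (illustrative target).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def hw(v):
    """Hamming weight: number of set bits."""
    return bin(v).count("1")

def candidate_keys(p, delta, observed_hd):
    """Keys consistent with the observed Hamming distance between the
    scanned-out S-box outputs S(p ^ k) and S((p ^ delta) ^ k)."""
    return [k for k in range(16)
            if hw(SBOX[p ^ k] ^ SBOX[p ^ delta ^ k]) == observed_hd]

# Simulate the attack: the chip holds an unknown key; the attacker
# observes only Hamming distances of scan responses and intersects
# the consistent key sets across several plaintext/difference pairs.
secret = 0x9
cands = set(range(16))
for p in range(8):                # a few chosen plaintexts
    for delta in (0x1, 0x2):      # two single-/double-bit input flips
        hd = hw(SBOX[p ^ secret] ^ SBOX[p ^ delta ^ secret])
        cands &= set(candidate_keys(p, delta, hd))
```

Each observed Hamming distance prunes the key space, and intersecting a handful of such observations narrows the candidates down dramatically. This is exactly why unprotected scan chains are dangerous, and why the Hamming-weight behavior of the S-box is also the right place to anchor a defense.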

EM Analysis

EM analysis techniques are side-channel attacks that extract information from a noisy signal. They have been used to retrieve sensitive data or encryption keys from computer systems ranging from wireless devices to large servers in a data center. The standard way to deal with these security problems is encryption, where a well-known encryption algorithm (or repeated hashing, such as the double SHA-256 used in Bitcoin) masks the message transmitted between two parts of a system. A receiver who holds the key and knows the algorithm used to generate the encrypted message can decode it and recover the sensitive data.
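The repeated hashing mentioned above, Bitcoin's double SHA-256, can be sketched in a few lines with Python's standard hashlib; the input string here is an arbitrary example:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin-style repeated hashing: SHA-256 applied twice.
    The inner digest (32 raw bytes) is fed back into SHA-256."""
    inner = hashlib.sha256(data).digest()
    return hashlib.sha256(inner).digest()

digest = double_sha256(b"hello")   # 32-byte digest
```

Note that hashing is one-way and keyless, unlike the keyed encryption also discussed above: it protects integrity rather than confidentiality, which is why Bitcoin uses it for block and transaction identifiers.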


Room No - 206, IIT Bhilai,
Sejbahar, GEC Campus,
Raipur - 492015


Email: subidh[at]iitbhilai[dot]ac[dot]in
Phone: +91-771-255-1300 Extn.No. 6122


Please send us your valuable feedback!