Search results for: urea deep placement
2111 Classification of IoT Traffic Security Attacks Using Deep Learning
Authors: Anum Ali, Kashaf ad Dooja, Asif Saleem
Abstract:
The future of smart cities points towards the Internet of Things (IoT), which creates dynamic connections in a ubiquitous manner. Smart cities offer ease and flexibility in daily life. With small devices connected to IoT-based cloud servers, network traffic between these devices is growing exponentially, and its security is a pressing concern, since the rising rate of cyber attacks can leave this traffic vulnerable. This paper discusses the latest machine learning approaches in related work; further, to tackle the increasing rate of cyber attacks, a machine learning algorithm is applied to IoT-based network traffic data. The proposed algorithm trains itself on the data and identifies different patterns of device interaction using supervised learning, acting as a classifier for each specific IoT device class. The simulation results clearly identify the attacks and produce fewer false detections.
Keywords: IoT, traffic security, deep learning, classification
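As a hedged illustration of the supervised-learning step this abstract describes, the sketch below trains a classical classifier on synthetic per-flow traffic features. The feature set, the binary attack labels, and the random-forest choice are all assumptions for illustration, not the paper's actual model.

```python
# Minimal sketch: supervised classification of IoT traffic flows.
# Features and labels are synthetic stand-ins for extracted flow statistics.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.random((1000, 4))      # e.g. packet rate, mean size, duration, port entropy
y = rng.integers(0, 2, 1000)   # 0 = benign, 1 = attack (illustrative)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))  # inspect false detections
```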
Procedia PDF Downloads 152
2110 Income and Factor Analysis of Small Scale Broiler Production in Imo State, Nigeria
Authors: Ubon Asuquo Essien, Okwudili Bismark Ibeagwa, Daberechi Peace Ubabuko
Abstract:
The broiler poultry subsector is dominated by small scale production with low aggregate output. The high cost of inputs currently experienced in Nigeria tends to aggravate the situation; hence many broiler farmers struggle to break even. This study was designed to examine income and input factors in small scale deep litter broiler production in Imo State, Nigeria. Specifically, the study examined the socio-economic characteristics of poultry farmers engaged in small scale deep litter broiler production; estimated the costs and returns of broiler production in the area; analysed input factors in broiler production in the area; and examined the marketing age and profitability of the enterprise. A multi-stage sampling technique was adopted in selecting 60 small scale broiler farmers who use the deep litter system from 6 communities, through the use of a structured questionnaire. The socio-economic characteristics of the broiler farmers and the profitability and marketing age of the birds were described using descriptive statistical tools such as frequencies, means and percentages. Gross margin analysis was used to analyse the costs and returns of broiler production, while a Cobb-Douglas production function was employed to analyse input factors in broiler production. The results of the study revealed that the cost of feed (P<0.1), deep litter material (P<0.05) and medication (P<0.05) had a significant positive relationship with the gross return of broiler farmers in the study area, while the costs of labour, fuel and day-old chicks were not significant. Furthermore, the gross profit margin of farmers who market their broilers at the 8th week of rearing was 80.7%, against 78.7% and 60.8% for farmers who market at the 10th and 12th weeks of rearing, respectively. The business is, therefore, profitable, but to varying degrees. Government and development partners should make deliberate efforts to curb the current rise in the prices of poultry feeds, drugs and timber materials used as bedding, so as to widen the profit margin and encourage more farmers to enter the business. The farmers equally need more technical assistance from extension agents with regard to timely and profitable marketing.
Keywords: broilers, factor analysis, income, small scale
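To clarify the Cobb-Douglas step, the function Y = A·x1^b1·x2^b2·… is conventionally linearised as ln(Y) = ln(A) + Σ bi·ln(xi) and fitted by ordinary least squares. The sketch below assumes synthetic farm data; the input names mirror the abstract but the numbers are fabricated for illustration.

```python
# Hedged sketch of Cobb-Douglas input-factor analysis via log-linear OLS.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60  # sixty farms, as in the sample
feed, litter, meds = rng.uniform(1, 10, (3, n))   # illustrative input costs
gross_return = 2.0 * feed**0.5 * litter**0.2 * meds**0.1 * rng.lognormal(0, 0.1, n)

X = sm.add_constant(np.log(np.column_stack([feed, litter, meds])))
model = sm.OLS(np.log(gross_return), X).fit()
print(model.params)    # elasticities of feed, deep-litter material, medication
print(model.pvalues)   # significance, cf. the P<0.1 and P<0.05 levels above
```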
Procedia PDF Downloads 80
2109 FMR1 Gene Carrier Screening for Premature Ovarian Insufficiency in Females: An Indian Scenario
Authors: Sarita Agarwal, Deepika Delsa Dean
Abstract:
Like the task of transferring photo images to artistic images, image-to-image translation aims to translate data into imitated data belonging to the target domain. Neural Style Transfer and CycleGAN are two well-known deep learning architectures used for photo-to-art image transfer. However, studies involving these two models concentrate on one-to-one domain translation, not one-to-many domain translation. Our study investigates deep learning architectures that can be controlled to yield multiple artistic style translations only by adding a conditional vector. We have expanded CycleGAN and constructed a Conditional CycleGAN for translation across 5 categories. Our study found that an architecture inserting the conditional vector into the middle layer of the generator could output multiple artistic images.
Keywords: genetic counseling, FMR1 gene, fragile x-associated primary ovarian insufficiency, premutation
Procedia PDF Downloads 130
2108 Long-Term Conservation Tillage Impact on Soil Properties and Crop Productivity
Authors: Danute Karcauskiene, Dalia Ambrazaitiene, Regina Skuodiene, Monika Vilkiene, Regina Repsiene, Ieva Jokubauskaite
Abstract:
The main ambition of present-day agriculture is to achieve an economically effective yield while securing the soil's ecological sustainability. According to their effect on the main soil quality indexes, tillage systems may be separated into two types: conventional and conservation tillage. The goal of this study was to determine the impact of conservation and conventional primary soil tillage methods and soil fertility improvement measures on soil properties and crop productivity. Methods: The soil of the experimental site is a Dystric Glossic Retisol (WRB 2014) with a sandy loam texture. The trial was established in 2003 in the experimental crop rotation field of the Vėžaičiai Branch of the Lithuanian Research Centre for Agriculture and Forestry. Trial factors and treatments: factor A, primary soil tillage (in autumn): deep ploughing (20-25 cm), shallow ploughing (10-12 cm), shallow ploughless tillage (8-10 cm); factor B, soil fertility improvement measures: plant residues, plant residues + straw, green manure 1st cut + straw, farmyard manure 40 t ha-1 + straw. The four-course crop rotation consisted of red clover, winter wheat, spring rape and spring barley with undersowing. Results: Tillage had no statistically significant effect on the topsoil (0-10 cm) pHKCl level, which was 5.5-5.7. Throughout the experimental period, the highest soil pHKCl level (5.65) was under shallow ploughless tillage. The organic fertilizers, particularly the grass biomass and farmyard manure, tended to increase soil pHKCl. The content of plant-available phosphorus and potassium increased significantly under shallow ploughing compared with the other tillage systems. Farmyard manure increased those elements in the whole arable layer. The dissolved organic carbon concentration was significantly higher in the 0-10 cm soil layer under shallow ploughless tillage compared with deep ploughing. After the incorporation of clover biomass and farmyard manure, the concentration of dissolved organic carbon increased in the top soil layer. Throughout the experimental period, the largest amount of water-stable aggregates was found in the soil under shallow ploughless tillage; it was 12% higher compared with deep ploughing. Soil moisture was likewise higher under shallow ploughing and shallow ploughless tillage (9-27%) compared to deep ploughing. The lowest CO2 emission was determined in the deep-ploughed soil, and the highest under shallow ploughless tillage. The addition of organic fertilisers tended to increase CO2 emission, but there was no statistically significant difference between the different types of organic fertilisers. The crop yield was larger in the deep-ploughed soil compared to the shallow and shallow ploughless tillage.
Keywords: reduced tillage, soil structure, soil pH, biological activity, crop productivity
Procedia PDF Downloads 267
2107 DEEPMOTILE: Motility Analysis of Human Spermatozoa Using Deep Learning in Sri Lankan Population
Authors: Chamika Chiran Perera, Dananjaya Perera, Chirath Dasanayake, Banuka Athuraliya
Abstract:
Male infertility is a major problem in the world and a neglected, sensitive health issue in Sri Lanka. It can be assessed by analyzing human semen samples, and sperm motility is one of many factors used to evaluate male fertility potential. In Sri Lanka, this analysis is performed manually. Manual methods are time-consuming and person-dependent, though they can be reliable in the hands of an expert. Machine learning and deep learning technologies are currently being investigated to automate spermatozoa motility analysis, but existing automatic methods are unreliable, tending to produce false positives and missed detections. Current automatic methods employ different techniques, and some of them are very expensive. Due to geographical variance in spermatozoa characteristics, current automatic methods are not reliable for motility analysis in Sri Lanka. The suggested system, DeepMotile, explores a method to analyze the motility of human spermatozoa automatically and presents it to andrology laboratories to overcome the current issues. DeepMotile is a novel deep learning method for analyzing spermatozoa motility parameters in the Sri Lankan population. To implement the current approach, Sri Lankan patient data were collected anonymously as a dataset, and glass slides were used as a low-cost technique to analyze semen samples. The problem was framed as microscopic object detection and tracking. YOLOv5 was customized and used as the object detector, achieving 94% mAP (mean average precision), 86% precision, and 90% recall on the gathered dataset. StrongSORT was used as the object tracker and was validated with andrology experts due to the unavailability of annotated ground-truth data. Furthermore, this research has identified many potential avenues for further investigation, and andrology experts can use this system to analyze motility parameters with realistic accuracy.
Keywords: computer vision, deep learning, convolutional neural networks, multi-target tracking, microscopic object detection and tracking, male infertility detection, motility analysis of human spermatozoa
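Downstream of a detector/tracker pipeline such as YOLOv5 + StrongSORT, motility parameters are computed from per-frame centroid tracks. The sketch below uses standard CASA-style measures (VCL, VSL, LIN) as a plausible stand-in; the paper does not specify its exact parameter set, and the frame rate, scale, and track here are synthetic.

```python
# Hedged sketch: CASA-style motility parameters from one tracked spermatozoon.
import numpy as np

def motility_params(track, fps=30.0, um_per_px=0.5):
    """track: (n_frames, 2) pixel coordinates of one cell over time."""
    track = np.asarray(track) * um_per_px
    step = np.linalg.norm(np.diff(track, axis=0), axis=1)   # frame-to-frame path
    duration = (len(track) - 1) / fps
    vcl = step.sum() / duration                              # curvilinear velocity
    vsl = np.linalg.norm(track[-1] - track[0]) / duration    # straight-line velocity
    lin = vsl / vcl if vcl > 0 else 0.0                      # linearity index
    return vcl, vsl, lin

rng = np.random.default_rng(2)
track = np.cumsum(rng.normal(0, 2, (90, 2)), axis=0)         # 3 s of jittery motion
print("VCL=%.1f um/s  VSL=%.1f um/s  LIN=%.2f" % motility_params(track))
```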
Procedia PDF Downloads 106
2106 Continual Learning Using Data Generation for Hyperspectral Remote Sensing Scene Classification
Authors: Samiah Alammari, Nassim Ammour
Abstract:
When a massive number of tasks is presented successively to a deep learning process, good model performance requires preserving the data of previous tasks and retraining the model for each upcoming classification; otherwise, the model performs poorly due to the catastrophic forgetting phenomenon. To overcome this shortcoming, we developed a successful continual learning deep model for the classification of remote sensing hyperspectral image regions. The proposed neural network architecture encapsulates two trainable subnetworks. The first module adapts its weights by minimizing the discrimination error between the land-cover classes during new task learning, and the second module learns to replicate the data of previous tasks by discovering the latent data structure of the new task dataset. We conduct experiments on the Indian Pines HSI dataset. The results confirm the capability of the proposed method.
Keywords: continual learning, data reconstruction, remote sensing, hyperspectral image segmentation
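The abstract describes its data-replication subnetwork only loosely, so the sketch below uses a small VAE as one common stand-in for generative replay: after each task, the frozen old generator supplies pseudo-samples labelled by the old classifier, and they are mixed into the next task's training batches. All dimensions, task splits, and data are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of continual learning with generative (VAE) replay on spectra.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Stand-in for the paper's data-replication subnetwork."""
    def __init__(self, d=200, z=16):
        super().__init__()
        self.enc, self.dec, self.z = nn.Linear(d, 2 * z), nn.Linear(z, d), z
    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        zs = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(zs), mu, logvar
    def sample(self, n):
        return self.dec(torch.randn(n, self.z))

def train_task(clf, vae, x, y, old_vae=None, old_clf=None, steps=200):
    opt = torch.optim.Adam(list(clf.parameters()) + list(vae.parameters()), 1e-3)
    for _ in range(steps):
        xb, yb = x, y
        if old_vae is not None:                      # replay previous tasks
            xr = old_vae.sample(len(x)).detach()
            yr = old_clf(xr).argmax(1).detach()      # pseudo-labels from old model
            xb, yb = torch.cat([x, xr]), torch.cat([y, yr])
        recon, mu, logvar = vae(xb)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
        loss = F.cross_entropy(clf(xb), yb) + F.mse_loss(recon, xb) + 1e-3 * kl
        opt.zero_grad(); loss.backward(); opt.step()

clf, vae = nn.Linear(200, 16), VAE()                 # 200 bands, 16 classes
x1, y1 = torch.randn(128, 200), torch.randint(0, 16, (128,))
train_task(clf, vae, x1, y1)                         # task 1: no replay yet
old_clf, old_vae = copy.deepcopy(clf), copy.deepcopy(vae)
x2, y2 = torch.randn(128, 200), torch.randint(0, 16, (128,))
train_task(clf, vae, x2, y2, old_vae, old_clf)       # task 2: with replay
```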
Procedia PDF Downloads 266
2105 Recovery of Fried Soybean Oil Using Bentonite as an Adsorbent: Optimization, Isotherm and Kinetics Studies
Authors: Prakash Kumar Nayak, Avinash Kumar, Uma Dash, Kalpana Rayaguru
Abstract:
Soybean oil is one of the most widely consumed cooking oils worldwide. Deep-fat frying of foods at high temperatures adds a unique flavour, golden brown colour and crispy texture to foods, but it also brings various changes to the oil, such as hydrolysis, oxidation, hydrogenation and thermal alteration. The peroxide value (PV) is one of the most important factors affecting the quality of deep-fat fried oil. Using bentonite as an adsorbent, the PV can be reduced, thereby improving the quality of the soybean oil. In this study, operating parameters such as oil heating time (10, 15, 20, 25 & 30 h), contact time (5, 10, 15, 20, 25 h) and adsorbent concentration (0.25, 0.5, 0.75, 1.0 and 1.25 g/100 ml of oil) were optimized by response surface methodology (RSM), with the percentage reduction of PV as the response. Adsorption data were analysed by fitting the Langmuir and Freundlich isotherm models. The results show that the Langmuir model fits the data better than the Freundlich model. The adsorption process was also found to follow a pseudo-second-order kinetic model.
Keywords: bentonite, Langmuir isotherm, peroxide value, RSM, soybean oil
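The isotherm-fitting step can be made concrete as below: the Langmuir form q = qm·K·C/(1 + K·C) and the Freundlich form q = Kf·C^(1/n) are fitted by non-linear least squares and compared by R². The data points are synthetic placeholders, not the paper's measurements.

```python
# Hedged sketch: fit Langmuir and Freundlich isotherms and compare R².
import numpy as np
from scipy.optimize import curve_fit

C = np.array([0.25, 0.5, 0.75, 1.0, 1.25])     # equilibrium concentration (toy)
q = np.array([12.0, 18.5, 22.0, 24.0, 25.1])   # adsorbed amount (toy)

langmuir = lambda C, qm, K: qm * K * C / (1 + K * C)
freundlich = lambda C, Kf, n: Kf * C ** (1.0 / n)

def r2(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

for name, f, p0 in [("Langmuir", langmuir, (30, 1)),
                    ("Freundlich", freundlich, (20, 2))]:
    popt, _ = curve_fit(f, C, q, p0=p0)
    print(name, popt.round(3), "R2=%.3f" % r2(q, f(C, *popt)))
```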
Procedia PDF Downloads 375
2104 Vector-Based Analysis in Cognitive Linguistics
Authors: Chuluundorj Begz
Abstract:
This paper presents a dynamic, psycho-cognitive approach to the study of human verbal thinking on the basis of typologically different languages (Mongolian, English and Russian). Topological equivalence in verbal communication serves as a basis for the universality of mental structures and, therefore, of deep structures. The mechanism of verbal thinking consists, at the deep level, of basic concepts, rules for integration and classification, and neural networks of vocabulary. In the neurocognitive study of language, the neural architecture and neuropsychological mechanisms of verbal cognition form the basis of vector-based modeling. Verbal perception and interpretation of the infinite set of meanings and propositions in the mental continuum can be modeled by applying tensor methods. Euclidean and non-Euclidean spaces are applied to the description of the human semantic vocabulary and higher-order structures.
Keywords: Euclidean spaces, isomorphism and homomorphism, mental lexicon, mental mapping, semantic memory, verbal cognition, vector space
Procedia PDF Downloads 519
2103 Current Methods for Drug Property Prediction in the Real World
Authors: Jacob Green, Cecilia Cabrera, Maximilian Jakobs, Andrea Dimitracopoulos, Mark van der Wilk, Ryan Greenhalgh
Abstract:
Predicting drug properties is key in drug discovery to enable de-risking of assets before expensive clinical trials and to find highly active compounds faster. Interest from the machine learning community has led to the release of a variety of benchmark datasets and proposed methods. However, it remains unclear to practitioners which method or approach is most suitable, as different papers benchmark on different datasets and methods, leading to varying conclusions that are not easily compared. Our large-scale empirical study links together numerous earlier works on different datasets and methods, thus offering a comprehensive overview of the existing property classes, datasets, and their interactions with different methods. We emphasise the importance of uncertainty quantification and of the time and, therefore, cost of applying these methods in the drug development decision-making cycle. To the best of the authors' knowledge, the optimal approach varies depending on the dataset, and engineered features with classical machine learning methods often outperform deep learning. Specifically, QSAR datasets are typically best analysed with classical methods such as Gaussian Processes, while ADMET datasets are sometimes better described by trees or by deep learning methods such as Graph Neural Networks or language models. Our work highlights that practitioners do not yet have a straightforward, black-box procedure to rely on and sets a precedent for creating practitioner-relevant benchmarks. Deep learning approaches must be proven on these benchmarks to become the practical method of choice in drug property prediction.
Keywords: activity (QSAR), ADMET, classical methods, drug property prediction, empirical study, machine learning
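A hedged sketch of the classical baseline the study found strong on QSAR tasks is shown below: Gaussian Process regression on Morgan fingerprints, whose predictive standard deviation provides the uncertainty quantification emphasised above. The SMILES strings and activities are toy placeholders, and the RBF kernel on bit vectors is one simple choice, not the study's exact setup.

```python
# Sketch: GP regression on Morgan fingerprints (toy data, assumed kernel).
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

smiles = ["CCO", "CCN", "CCC", "CCCl", "c1ccccc1", "CC(=O)O"]
activity = np.array([0.2, 0.5, 0.1, 0.7, 0.9, 0.4])    # toy pIC50-like values

def fingerprint(s):
    mol = Chem.MolFromSmiles(s)
    return np.array(list(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024)))

X = np.stack([fingerprint(s) for s in smiles])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0), alpha=1e-2)
gp.fit(X, activity)
mean, std = gp.predict(X, return_std=True)   # std = uncertainty quantification
print(mean.round(2), std.round(2))
```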
Procedia PDF Downloads 81
2102 Parkinson’s Disease Hand-Eye Coordination and Dexterity Evaluation System
Authors: Wann-Yun Shieh, Chin-Man Wang, Ya-Cheng Shieh
Abstract:
This study aims to develop an objective scoring system to evaluate hand-eye coordination and hand dexterity in Parkinson's disease. The system contains three boards, each implemented with sensors that sense a user's finger operations. The operations include a peg test, a block test, and a blind block test. A user has to use vision, hearing, and tactile abilities to finish these operations, and the board records the results automatically. These results can help physicians evaluate a user's reaction, coordination, and dexterity. The results are collected in a cloud database for further analysis and statistics, from which a researcher can obtain systematic, graphical reports for an individual or a group of users. In particular, a deep learning model is developed to learn the features of the data from different users. This model will help physicians assess Parkinson's disease symptoms with a more intelligent algorithm.
Keywords: deep learning, hand-eye coordination, reaction, hand dexterity
Procedia PDF Downloads 66
2101 An Adaptive Conversational AI Approach for Self-Learning
Authors: Airy Huang, Fuji Foo, Aries Prasetya Wibowo
Abstract:
In recent years, the focus of Natural Language Processing (NLP) development has been gradually shifting from the semantics-based approach to the deep learning one, which performs faster with fewer resources. Although it performs well in many applications, the deep learning approach, due to its lack of semantic understanding, has difficulty noticing and expressing a novel business case outside a pre-defined scope. In order to meet the requirements of specific robotic services, the deep learning approach is very labor-intensive and time-consuming. It is very difficult to improve the capabilities of a conversational AI in a short time, and even more difficult to make it self-learn from experience to deliver the same service in a better way. In this paper, we present an adaptive conversational AI algorithm that combines semantic knowledge and deep learning to address this issue by learning new business cases through conversations. After self-learning from experience, the robot adapts to business cases originally out of scope. The idea is to build new or extended robotic services in a systematic and fast-training manner with self-configured programs and constructed dialog flows. In every cycle in which a chat bot (conversational AI) delivers a given set of business cases, it is prompted to self-measure its performance and reconsider every unknown dialog flow to improve the service by retraining on those new business cases. If the training process reaches a bottleneck and encounters difficulties, human personnel are informed and can give further instructions: they may retrain the chat bot with newly configured programs or new dialog flows for new services. Our approach employs semantic analysis to learn the dialogues for new business cases and then establishes the necessary ontology for the new service. With the newly learned programs, it completes the understanding of the reaction behavior and finally uses dialog flows to connect all the understanding results and programs, achieving the goal of the self-learning process, as sketched below. We have developed a chat bot service mounted on a kiosk, with a camera for facial recognition and a directional microphone array for voice capture; the chat bot serves as a concierge holding polite conversations with visitors. As a proof of concept, it completed 90% of reception services with limited self-learning capability.
Keywords: conversational AI, chatbot, dialog management, semantic analysis
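The self-measuring cycle can be illustrated with a compact loop: low-confidence utterances are queued as unknown dialog flows and, once labelled (by a human or by semantic analysis), folded back in by retraining. A TF-IDF intent classifier stands in here for the chat bot's NLU; the utterances, intents, and threshold are assumptions for illustration.

```python
# Hedged sketch of the self-learning cycle: detect out-of-scope, queue, retrain.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["where is the elevator", "book a meeting room", "what time do you open"]
intents = ["directions", "booking", "hours"]
nlu = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, intents)

review_queue = []  # unknown dialog flows awaiting labels / new flows

def handle(utterance, threshold=0.6):
    proba = nlu.predict_proba([utterance])[0]
    if proba.max() < threshold:          # self-measure: likely out of scope
        review_queue.append(utterance)
        return "fallback"
    return nlu.classes_[proba.argmax()]

print(handle("where can I charge my car"))   # queued for retraining
# after labelling: texts += queued items; intents += labels; nlu.fit(texts, intents)
```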
Procedia PDF Downloads 136
2100 Prosthetically Oriented Approach for Determination of Fixture Position for Facial Prostheses Retention in Cases with Atypical and Combined Facial Defects
Authors: K. A. Veselova, N. V. Gromova, I. N. Antonova, I. N. Kalakutskii
Abstract:
There are many diseases and incidents that may result in facial defects and deformities: cancer, trauma, burns, congenital anomalies, and autoimmune diseases. In some cases, a patient may acquire an atypically extensive facial defect, involving more than one anatomical region, or, by contrast, an atypically small defect (e.g., a partial auricular defect). Anaplastology gives us the opportunity to help patients with facial disfigurement in cases where plastic surgery is contraindicated. The use of implant retention for facial prostheses is strongly recommended because it improves both aesthetic and functional results and makes wearing the prosthesis more comfortable. A prosthetically oriented fixture position is extremely important for long-term aesthetic and functional results; however, the optimal site for fixture placement is not clear in cases with an atypical configuration of the facial defect. The objective of this report is to demonstrate the challenges in fixture position determination we have faced and to offer a solution. In this report, four cases of implant-supported facial prostheses are described. Extra-oral implants of four-millimetre length were used in all cases. The decision regarding the number of surgical stages was based on the anamnesis of the disease. The facial prostheses were manufactured according to conventional technique. Clinical and technological difficulties and mistakes are described, and a prosthetically oriented approach for determination of fixture position is demonstrated. In a case with an atypically large combined orbital and nasal defect resulting from an arteriovenous malformation, correct positioning of the artificial eye was impossible due to the wrong position of the fixture (with suprastructure) located in the medial aspect of the supraorbital rim. The suprastructure was unfixed, and this fixture was not used for retention, in order to achieve appropriate artificial eye placement and a better aesthetic result. In another case, with a small partial auricular defect (only the helix and antihelix were absent) caused by squamous cell carcinoma T1N0M0, a surgical template was used to avoid such difficulties. To achieve a prosthetically oriented fixture position in this case of an extremely small defect, the template was made on a preliminary cast using the vacuum thermoforming method. Two radiopaque markers were incorporated into the template in positions preferable for fixture placement, taking into account the future prosthesis configuration. The template was placed on the remaining ear, and cone-beam CT was performed to ensure that the amount of bone was sufficient for implant insertion in the preferable position. Before the surgery, the radiopaque markers were extracted and the template was perforated to guide the drill. Fabrication of implant-retained facial prostheses gives us the opportunity to improve aesthetics, retention and patients' quality of life, but every inaccuracy in planning leads to challenges at the surgical and prosthetic stages. Moreover, in cases with atypically small or extended facial defects, a prosthetically oriented approach to determination of the fixture position is strongly required. The approach, including surgical template fabrication, is an effective, easy and cheap way to avoid mistakes and unpredictable results.
Keywords: anaplastology, facial prosthesis, implant-retained facial prosthesis, maxillofacial prosthesis
Procedia PDF Downloads 114
2099 Identifying Factors of Wellbeing in Russian Orphans
Authors: Alexandra Telitsyna, Galina Semya, Elvira Garifulina
Abstract:
Introduction: Since 2012, Russia has pursued a deinstitutionalization policy, and the main indicator of success is now the number of children living in institutions. The active family placement process has meant that residents of institutions now mainly consist of adolescents with behavioral and emotional problems, children with disabilities, and groups of siblings. Purpose: The purpose of this research is to identify factors of a child's wellbeing during a temporary stay in an orphanage, together with children's subjective assessment of their level of wellbeing (psychological wellbeing). Methods: The data for this project were collected through a questionnaire of 72 indicators, a tool for monitoring the behavior of children and caregivers, an additional questionnaire for children, and a wellbeing assessment questionnaire containing 10 scales for three age groups, from preschoolers to older adolescents. In 2016-2018, the research was conducted in 1,873 institutions in 85 regions of Russia. In each region, a team of academics, specialists from non-profits and independent experts was created, and training was conducted for team members through a series of webinars prior to undertaking the assessment. Results: To ensure the wellbeing of the children, the following conditions are necessary: 1- life of children in the institution is organised according to the principles of family care (including the creation of conditions for attachment to be formed); 2- contribution to finding family-based placements for children (including reintegration into the primary family); 3- work with parents of children who are placed in an organization at the request of parents; 4- children attend schools according to their needs; 5- training of staff and volunteers; 6- a special environment and services for children with special needs and children with disabilities; 7- cooperation with NGOs; 8- openness and accessibility of the organization. Conclusion: A study of the psychological wellbeing of children showed that the questions most emotionally stressful for children concerned the presence and frequency of contact with relatives, and that the level of wellbeing is higher in the presence of a trusted adult and respect for rights. The greatest contributors to distress are the time the child has spent in the orphanage, the lack of contact with parents and relatives, and the uncertainty of the future.
Keywords: identifying factors, orphans, Russia, wellbeing
Procedia PDF Downloads 128
2098 SNR Classification Using Multiple CNNs
Authors: Thinh Ngo, Paul Rad, Brian Kelley
Abstract:
Noise estimation is essential in today's wireless systems for power control, adaptive modulation, interference suppression and quality of service. Deep learning (DL) has already been applied in the physical layer for modulation and signal classification. However, an unacceptably low accuracy of less than 50% is found to undermine the traditional application of DL classification to SNR prediction. In this paper, we use a divide-and-conquer algorithm and a classifier fusion method to simplify SNR classification and thereby enhance DL learning and prediction. Specifically, multiple CNNs are used for classification rather than a single CNN. Each CNN performs a binary classification for a single SNR level with two labels: less than, or greater than or equal to, that level. Together, the multiple CNNs are combined to effectively classify over a range of SNR values from −20 ≤ SNR ≤ 32 dB. We use pre-trained CNNs to predict SNR over a wide range of joint channel parameters, including multiple Doppler shifts (0, 60, 120 Hz), power-delay profiles, and signal modulation types (QPSK, 16-QAM, 64-QAM). The approach achieves an individual SNR prediction accuracy of 92%, a composite accuracy of 70%, and prediction convergence one order of magnitude faster than that of traditional estimation.
Keywords: classification, CNN, deep learning, prediction, SNR
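The divide-and-conquer fusion can be made concrete as below: each binary model k answers "is SNR ≥ t_k?", and the fused class is the highest threshold still answered 'yes'. Dummy sigmoid outputs stand in for the per-level CNNs; the threshold grid is an illustrative assumption.

```python
# Hedged sketch: fusing per-threshold binary classifiers into one SNR class.
import numpy as np

thresholds = np.arange(-20, 33, 4)            # candidate SNR levels, dB

def fuse(binary_probs, thresholds):
    """binary_probs[k] = P(SNR >= thresholds[k]) from the k-th CNN."""
    votes = np.asarray(binary_probs) >= 0.5
    if not votes.any():
        return thresholds[0]                  # below the lowest level
    return thresholds[np.max(np.nonzero(votes))]

# a well-behaved classifier bank for a true SNR of 10 dB:
true_snr = 10
probs = 1 / (1 + np.exp(-(true_snr - thresholds)))   # monotone dummy outputs
print(fuse(probs, thresholds))                # -> 8, the level just below 10 dB
```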
Procedia PDF Downloads 133
2097 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines
Authors: Alexander Guzman Urbina, Atsushi Aoyama
Abstract:
The sustainability of the traditional technologies employed in energy and chemical infrastructure poses a big challenge for our society. In making decisions related to the safety of industrial infrastructure, accidental risk values are becoming relevant points of discussion; however, the challenge is the reliability of the models employed to obtain the risk data. Such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome those problems are built using artificial intelligence (AI), and more specifically hybrid systems such as neuro-fuzzy algorithms. Therefore, this paper aims to introduce a hybrid algorithm for risk assessment trained using near-miss accident data. Beyond the sustainability challenge, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management: the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition, considering the industrial sector to be critical infrastructure, given its large impact on the economy in case of failure, industrial safety has become a critical issue for today's society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in attempts to accurately evaluate the probabilities of failure of the infrastructure and the consequences associated with those failures. However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper introduces a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with this complexity and uncertainty. The advantage of deep learning on near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of the near-miss deep learning approach for neuro-fuzzy risk assessment in pipelines is to improve the validity of the risk values by learning from near-miss accidents and imitating the human expertise of scoring risks and setting tolerance levels. In summary, the method involves a regression analysis called the group method of data handling (GMDH), which consists in determining the optimal configuration of the risk assessment model and its parameters employing polynomial theory.
Keywords: deep learning, risk assessment, neuro fuzzy, pipelines
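One GMDH layer can be sketched as follows: every pair of inputs feeds a quadratic polynomial neuron fitted by least squares, and neurons are ranked by validation error, the best ones feeding the next layer. The risk-factor data here are random placeholders, and the single-layer structure is a simplification of a full GMDH network.

```python
# Hedged sketch: one layer of the group method of data handling (GMDH).
import numpy as np
from itertools import combinations

def poly_design(xi, xj):
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=3):
    scored = []
    for i, j in combinations(range(X_tr.shape[1]), 2):
        w, *_ = np.linalg.lstsq(poly_design(X_tr[:, i], X_tr[:, j]), y_tr, rcond=None)
        err = np.mean((poly_design(X_va[:, i], X_va[:, j]) @ w - y_va) ** 2)
        scored.append((err, i, j, w))
    scored.sort(key=lambda t: t[0])
    return scored[:keep]        # best polynomial neurons feed the next layer

rng = np.random.default_rng(3)
X = rng.random((100, 5))                       # e.g. near-miss risk factors (toy)
y = 2 * X[:, 0] * X[:, 1] + X[:, 2] ** 2 + rng.normal(0, 0.05, 100)
best = gmdh_layer(X[:70], y[:70], X[70:], y[70:])
print([(round(e, 4), i, j) for e, i, j, _ in best])
```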
Procedia PDF Downloads 292
2096 Credit Card Fraud Detection with Ensemble Model: A Meta-Heuristic Approach
Authors: Gong Zhilin, Jing Yang, Jian Yin
Abstract:
The purpose of this paper is to develop a novel system for credit card fraud detection based on sequential modeling of data using hybrid deep learning models. The proposed model encapsulates five major phases: pre-processing, imbalanced-data handling, feature extraction, optimal feature selection, and fraud detection with an ensemble classifier. The collected raw data (input) are pre-processed to enhance data quality by alleviating missing data, noisy data and null values. The pre-processed data are class-imbalanced in nature and are therefore handled with a K-means clustering-based SMOTE model. From the balanced-class data, the most relevant features are extracted: improved Principal Component Analysis (PCA) features, statistical features (mean, median, standard deviation) and higher-order statistical features (skewness and kurtosis). Among the extracted features, the most optimal ones are selected with the Self-Improved Arithmetic Optimization Algorithm (SI-AOA), a conceptual improvement of the standard Arithmetic Optimization Algorithm. Detection uses the deep learning models Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and an optimized Quantum Deep Neural Network (QDNN). The LSTM and CNN are trained on the extracted optimal features, and their outcomes enter as input to the optimized QDNN, which provides the final detection outcome. Since the QDNN is the ultimate detector, its weight function is fine-tuned with the SI-AOA.
Keywords: credit card, data mining, fraud detection, money transactions
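The imbalance-handling and feature steps can be sketched with off-the-shelf tools: imbalanced-learn's KMeansSMOTE for the K-means clustering-based SMOTE, then PCA plus the statistical moments named above. The transaction matrix is synthetic, and the KMeansSMOTE settings are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch: K-means SMOTE rebalancing + PCA and statistical features.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.decomposition import PCA
from imblearn.over_sampling import KMeansSMOTE

rng = np.random.default_rng(4)
X = rng.normal(0, 1, (1000, 10))
y = (rng.random(1000) < 0.05).astype(int)        # ~5% fraud: heavily imbalanced

X_bal, y_bal = KMeansSMOTE(random_state=0,
                           cluster_balance_threshold=0.01).fit_resample(X, y)

pca_feats = PCA(n_components=5).fit_transform(X_bal)
stats = np.column_stack([X_bal.mean(1), np.median(X_bal, 1), X_bal.std(1),
                         skew(X_bal, 1), kurtosis(X_bal, 1)])
features = np.hstack([pca_feats, stats])         # input to the LSTM/CNN stage
print(features.shape, np.bincount(y_bal))
```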
Procedia PDF Downloads 131
2095 Mechanical Properties of D2 Tool Steel Cryogenically Treated Using Controllable Cooling
Authors: A. Rabin, G. Mazor, I. Ladizhenski, R. Shneck, Z.
Abstract:
The hardness and hardenability of AISI D2 cold work tool steel under conventional quenching (CQ), deep cryogenic quenching (DCQ) and rapid deep cryogenic quenching heat treatments, the last enabled by a temporary porous coating based on magnesium sulfate, were investigated. Each of the cooling processes was examined from the perspective of full-process efficiency and heat flux in the austenite-martensite transformation range, followed by characterization of the temporary porous magnesium sulfate layer using confocal laser scanning microscopy (CLSM) and of surface and core hardness and hardenability using the Vickers hardness technique. The results show that the cooling rate (CR) in the austenite-martensite transformation range has a strong influence on the hardness of the studied steel.
Keywords: AISI D2, controllable cooling, magnesium sulfate coating, rapid cryogenic heat treatment, temporary porous layer
Procedia PDF Downloads 137
2094 Ruminal VFA of Beef Fed Different Protein
Authors: P. Paengkoum, S. C. Chen, S. Paengkoum
Abstract:
Six male growing Thai-indigenous beef cattle with a body weight (BW) of 154±13.2 kg were randomly assigned in a replicated 3×3 Latin square design and fed total mixed ration (TMR) diets with different levels of crude protein (CP). The CP levels in the diets were 4.3%, 7.3% and 10.3% on a dry matter (DM) basis. Ruminal ammonia nitrogen (NH3-N) and blood urea nitrogen (BUN) concentrations increased (P<0.01) with increasing CP levels. Moreover, there was a positive relationship between BUN and ruminal NH3-N. Rumen pH, total volatile fatty acids (VFA), and the molar proportions of acetate, propionate and butyrate were not affected by CP level (P>0.05).
Keywords: Thai-indigenous beef cattle, crude protein, volatile fatty acid (VFA), total mixed ration (TMR) diets
Procedia PDF Downloads 281
2093 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks
Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi
Abstract:
Brain-computer interfaces are a growing research field producing many implementations that find use in different fields, for both research and practical purposes. Despite the popularity of implementations using non-invasive neuroimaging methods, a radical improvement in channel bandwidth and, thus, decoding accuracy is only possible with invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, whose effective analysis requires machine learning methods able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that learn representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and the corresponding electrical brain activity, a series of experiments was carried out in which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from electrode strips implanted over the contralateral sensorimotor cortex. The multichannel ECoG signals were then used to track the finger movement trajectory characterized by the accelerometer signal. This was carried out both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data were split into training and testing sets containing continuous non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained using 1-second segments of ECoG data from the training dataset as input. To assess decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After optimization of hyperparameters and training, the deep learning model achieved reasonably accurate causal decoding of finger movement with a correlation coefficient of r = 0.8. In contrast, the classical Wiener-filter-like approach achieved only 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached an accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. A sensitivity analysis shows physiologically plausible pictures of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study demonstrate that a combination of a minimally invasive neuroimaging technique such as ECoG with advanced machine learning approaches allows movement decoding with high accuracy. Such a setup provides means for the control of devices with a large number of degrees of freedom, as well as for exploratory studies of the complex neural processes underlying movement execution.
Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex
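A minimal decoder in the spirit described above is sketched below: a 1D CNN maps a 1-second multichannel ECoG window to the current acceleration sample, and accuracy is scored with the correlation coefficient r. The channel count, sampling rate, architecture, and data are illustrative assumptions, not the study's network.

```python
# Hedged sketch: 1D CNN regression from ECoG windows to acceleration, scored by r.
import numpy as np
import torch
import torch.nn as nn

n_ch, win = 32, 512                              # channels, ~1 s at 512 Hz (toy)
model = nn.Sequential(
    nn.Conv1d(n_ch, 16, kernel_size=9, stride=4), nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=9, stride=4), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(1),
)
opt = torch.optim.Adam(model.parameters(), 1e-3)

x = torch.randn(256, n_ch, win)                  # ECoG windows (synthetic)
y = torch.randn(256, 1)                          # accelerometer targets (synthetic)
for _ in range(20):                              # toy training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward(); opt.step()

pred = model(x).detach().numpy().ravel()
r = np.corrcoef(pred, y.numpy().ravel())[0, 1]   # the decoding-accuracy metric
print(f"r = {r:.2f}")
```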
Procedia PDF Downloads 177
2092 Lower Limb Oedema in Beckwith-Wiedemann Syndrome
Authors: Mihai-Ionut Firescu, Mark A. P. Carson
Abstract:
We present a case of inferior vena cava agenesis (IVCA) associated with bilateral deep venous thrombosis (DVT) in a patient with Beckwith-Wiedemann syndrome (BWS). In adult patients with BWS presenting with bilateral lower limb oedema, specific aetiological factors should be considered, including cardiomyopathy and intra-abdominal tumours. Congenital malformations of the IVC, by causing relative venous stasis, can lead to lower limb oedema either directly or indirectly, by favouring lower limb venous thromboembolism; however, they have yet to be reported as an associated feature of BWS. Given its life-threatening potential, prompt initiation of treatment for bilateral DVT is paramount, but in BWS patients this can prove more complicated. Due to overgrowth, above-average birth weight can continue throughout childhood; in this case, the patient's weight reached 170 kg, impacting the choice of anticoagulant, as direct oral anticoagulants have a limited evidence base in patients with a body mass above 120 kg. Furthermore, the presence of IVCA leads to a long-term increased venous thrombosis risk. Patients with IVCA and bilateral DVT therefore warrant specialist consideration and may benefit from multidisciplinary team management, with haematology and vascular surgery input. Conclusion: Here, we showcased a rare cause of bilateral lower limb oedema, namely bilateral deep venous thrombosis complicating IVCA in a patient with Beckwith-Wiedemann syndrome. The importance of this case lies in its novelty, as the association between IVC agenesis and BWS has not yet been described. Furthermore, the treatment of DVT in such situations requires special consideration, taking into account the patient's weight and the presence of a significant predisposing vascular abnormality.
Keywords: Beckwith-Wiedemann syndrome, bilateral deep venous thrombosis, inferior vena cava agenesis, venous thromboembolism
Procedia PDF Downloads 235
2091 Layersomes for Oral Delivery of Amphotericin B
Authors: A. C. Rana, Abhinav Singh Rana
Abstract:
Layer-by-layer coating with biocompatible polyelectrolytes converts liposomes into a stabilized version, i.e., 'layersomes'. This system was further used to deliver amphotericin B through the oral route. Extensive optimization of different process variables resulted in the formation of layersomes with a particle size of 238.4±5.1, a PDI of 0.24±0.16, a zeta potential of 34.6±1.3, and an entrapment efficiency of 71.3±1.2. TEM analysis further confirmed the formation of spherical particles. Trehalose (10% w/w) resulted in the formation of a fluffy, easy-to-redisperse cake upon freeze-drying of the layersomes. Controlled release of up to 50% within 24 h was observed for the layersomes. The layersomes were found to be stable in simulated biological fluids and resulted in 3.59-fold higher bioavailability in comparison to free Amp-B. Furthermore, the developed formulation was found to be safe in comparison to Fungizone, as indicated by blood urea nitrogen (BUN) and creatinine levels.
Keywords: amphotericin B, layersomes, liposomes, toxicity
Procedia PDF Downloads 527
2090 Medical Diagnosis of Retinal Diseases Using Artificial Intelligence Deep Learning Models
Authors: Ethan James
Abstract:
Over one billion people worldwide suffer from some level of vision loss or blindness as a result of progressive retinal diseases. Many patients, particularly in developing areas, are incorrectly diagnosed or not diagnosed at all due to unconventional diagnostic tools and screening methods. Artificial intelligence (AI) based on deep learning (DL) convolutional neural networks (CNN) has recently gained high interest in ophthalmology for computer-aided imaging diagnosis, disease prognosis, and risk assessment. Optical coherence tomography (OCT) is a popular imaging technique used to capture high-resolution cross-sections of retinas. In ophthalmology, DL has been applied to fundus photographs, optical coherence tomography, and visual fields, achieving robust classification performance in the detection of various retinal diseases, including macular degeneration, diabetic retinopathy, and retinitis pigmentosa. However, there is no complete diagnostic model analyzing these retinal images that provides a diagnostic accuracy above 90%. Thus, the purpose of this project was to develop an AI model that utilizes machine learning techniques to automatically diagnose specific retinal diseases from OCT scans. The algorithm consists of a neural network architecture based on residual neural networks with cyclic pooling, trained on a dataset of over 20,000 real-world OCT images. This DL model can ultimately aid ophthalmologists in diagnosing patients with these retinal diseases more quickly and more accurately, thereby facilitating earlier treatment, which results in improved post-treatment outcomes.
Keywords: artificial intelligence, deep learning, imaging, medical devices, ophthalmic devices, ophthalmology, retina
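A transfer-learning setup commonly used for OCT classification is sketched below: a pretrained residual backbone with a new classification head. The number of disease classes, the ResNet-18 choice, and the head-only fine-tuning schedule are assumptions for illustration; the abstract's cyclic pooling is not reproduced here.

```python
# Hedged sketch: residual-network transfer learning for OCT disease classes.
import torch
import torch.nn as nn
from torchvision import models

n_classes = 4                                    # e.g. CNV, DME, drusen, normal (assumed)
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
net.fc = nn.Linear(net.fc.in_features, n_classes)

for p in net.parameters():                       # freeze the pretrained trunk
    p.requires_grad = False
for p in net.fc.parameters():                    # train only the new head first
    p.requires_grad = True

opt = torch.optim.Adam(net.fc.parameters(), lr=1e-3)
x = torch.randn(8, 3, 224, 224)                  # a batch of OCT scans (synthetic)
loss = nn.functional.cross_entropy(net(x), torch.randint(0, n_classes, (8,)))
loss.backward(); opt.step()
print(loss.item())
```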
Procedia PDF Downloads 181
2089 Optimal Placement of the Unified Power Controller to Improve the Power System Restoration
Authors: Mohammad Reza Esmaili
Abstract:
One of the most important parts of the restoration process of a power network is the synchronization of its subsystems. In this situation, the biggest concern of system operators is the reduction of the standing phase angle (SPA) between the endpoints of the two islands. To this end, system operators perform various actions and maneuvers so that synchronization of the subsystems is successfully carried out and the system finally reaches acceptable stability. The most common of these actions include load control, generation control and, in some cases, changing the network topology. Although these maneuvers are simple and common, restoration proceeds slowly due to the weak network and extreme load changes. One of the best ways to control the SPA is to use FACTS devices. By applying a soft control signal, these devices can reduce the SPA between two subsystems with more speed and accuracy, so the synchronization process can be completed in less time. The unified power flow controller (UPFC), a series-parallel compensator capable of changing the transmission line power and properly adjusting the phase angle, is the device proposed in this research. With the optimal placement of the UPFC in a power system, in addition to improving the normal conditions of the system, it is expected to be effective in reducing the SPA during power system restoration. This paper therefore provides a structure to coordinate three problems: improving the division into subsystems, reducing the SPA, and optimal power flow, with the aim of determining the optimal location of the UPFC and the optimal subsystems. The proposed objective functions include maximizing the quality of the subsystems, reducing the SPA at the endpoints of the subsystems, and reducing the losses of the power system. Since simultaneous optimization of the proposed objective functions may create contradictions, the problem is formulated as a non-linear multi-objective problem, and the Pareto optimization method is used to solve it, as sketched below. The technique proposed to implement the optimization process is the water cycle algorithm (WCA). The IEEE 39-bus power system is used to evaluate the proposed method.
Keywords: UPFC, SPA, water cycle algorithm, multi-objective problem, Pareto
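The Pareto step that reconciles the three objectives can be sketched as a non-dominated filter over candidate UPFC placements. The objective values below are random stand-ins, and all three objectives are expressed as minimizations (quality negated) for illustration.

```python
# Hedged sketch: keep only non-dominated candidate placements (Pareto front).
import numpy as np

def pareto_front(F):
    """F: (n_candidates, n_objectives), smaller is better on every column."""
    n = len(F)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                keep[i] = False          # candidate i is dominated by j
                break
    return np.nonzero(keep)[0]

rng = np.random.default_rng(5)
F = rng.random((50, 3))   # columns: -quality, SPA, losses (synthetic)
print("non-dominated placements:", pareto_front(F))
```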
Procedia PDF Downloads 66
2088 Deep Learning-Based Approach to Automatic Abstractive Summarization of Patent Documents
Authors: Sakshi V. Tantak, Vishap K. Malik, Neelanjney Pilarisetty
Abstract:
A patent is an exclusive right granted for an invention. It can be a product or a process that provides an innovative way of doing something or offers a new technical perspective or solution to a problem. A patent is obtained by making the technical information and details about the invention publicly available. The patent owner has the exclusive right to prevent or stop anyone from using the patented invention commercially: any commercial usage, distribution, import or export of a patented invention or product requires the patent owner's consent. It has been observed that the central and important parts of patents are written in idiosyncratic and complex linguistic structures that can be difficult for the masses to read, comprehend or interpret. The abstracts of these patents tend to obfuscate the precise nature of the patent instead of clarifying it via direct and simple linguistic constructs. This makes efficient access to this knowledge via concise and transparent summaries necessary. However, as mentioned above, due to complex and repetitive linguistic constructs and extremely long sentences, common extraction-oriented automatic text summarization methods cannot be expected to perform well when applied to patent documents. Other, more content-oriented or abstractive summarization techniques are able to perform much better and generate more concise summaries. This paper proposes an efficient summarization system for patents using artificial intelligence, natural language processing and deep learning techniques to condense the knowledge and essential information from a patent document into a single summary that is easier to understand, without redundant formatting or difficult jargon.
Keywords: abstractive summarization, deep learning, natural language processing, patent document
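As a hedged illustration of abstractive summarization (not the paper's actual system), the sketch below uses an off-the-shelf transformer summarizer; pipeline("summarization") downloads a default distilled BART checkpoint, and the naive word-count chunking only respects the model's input limit on long patent bodies. A patent-specific checkpoint could be substituted where available.

```python
# Hedged sketch: chunked abstractive summarization with a default transformer.
from transformers import pipeline

summarizer = pipeline("summarization")

def summarize_patent(text, chunk_words=400):
    words = text.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    parts = [summarizer(c, max_length=80, min_length=20)[0]["summary_text"]
             for c in chunks]
    return " ".join(parts)

claim = ("A method for wireless charging comprising a resonant coil assembly, "
         "wherein the assembly adjusts its resonant frequency in response to "
         "detected load impedance, thereby improving power transfer efficiency.")
print(summarize_patent(claim))
```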
Procedia PDF Downloads 123
2087 A Comprehensive Study and Evaluation on Image Fashion Features Extraction
Authors: Yuanchao Sang, Zhihao Gong, Longsheng Chen, Long Chen
Abstract:
Clothing fashion represents a human's aesthetic appreciation of everyday outfits and appetite for fashion, and it reflects developments in society, humanity, and economics. However, modelling fashion by machine is extremely challenging because fashion is too abstract to be efficiently described by machines; even human beings can hardly reach a consensus about fashion. In this paper, we are dedicated to answering a fundamental fashion-related question: what image feature best describes clothing fashion? To address this issue, we have designed and evaluated various image features, ranging from traditional low-level hand-crafted features, through mid-level style-awareness features, to various currently popular deep neural network-based features, which have shown state-of-the-art performance in various vision tasks. In summary, we tested the following 9 feature representations: color, texture, shape, style, convolutional neural networks (CNNs), CNNs with distance metric learning (CNNs&DML), AutoEncoder, CNNs with multiple layer combination (CNNs&MLC) and CNNs with dynamic feature clustering (CNNs&DFC). Finally, we validated the performance of these features on two publicly available datasets. Quantitative and qualitative experimental results on both intra-domain and inter-domain fashion clothing image retrieval show that deep learning-based feature representations far outperform traditional hand-crafted ones. Additionally, among all the deep learning-based methods, CNNs with explicit feature clustering perform best, which shows that feature clustering is essential for discriminative fashion feature representation.
Keywords: convolutional neural network, feature representation, image processing, machine modelling
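The base deep representation used by the winning family can be sketched as follows: a pretrained CNN trunk (its classifier removed) embeds each image, and retrieval ranks gallery items by cosine similarity. The ResNet-50 choice and random tensors are assumptions for illustration, and the dynamic feature clustering refinement (CNNs&DFC) is omitted.

```python
# Hedged sketch: CNN feature extraction + cosine-similarity fashion retrieval.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
embedder = nn.Sequential(*list(backbone.children())[:-1]).eval()  # drop the fc head

@torch.no_grad()
def embed(images):                        # images: (n, 3, 224, 224)
    feats = embedder(images).flatten(1)   # (n, 2048) pooled features
    return nn.functional.normalize(feats, dim=1)

query = torch.randn(1, 3, 224, 224)       # stand-ins for real clothing photos
gallery = torch.randn(100, 3, 224, 224)
scores = embed(query) @ embed(gallery).T  # cosine similarity on unit vectors
print("top-5 matches:", scores.topk(5).indices.tolist())
```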
Procedia PDF Downloads 139
2086 Experimental Study of Hyperparameter Tuning a Deep Learning Convolutional Recurrent Network for Text Classification
Authors: Bharatendra Rai
Abstract:
The sequence of words in text data exhibits long-term dependencies and is known to suffer from vanishing gradient problems when developing deep learning models. Although recurrent networks such as long short-term memory networks help to overcome this problem, achieving high text classification performance remains challenging. Convolutional recurrent networks, which combine the advantages of long short-term memory networks and convolutional neural networks, can improve text classification performance. However, arriving at suitable hyperparameter values for convolutional recurrent networks is still a challenging task in which fitting a model requires significant computing resources. This paper illustrates the advantages of using convolutional recurrent networks for text classification with the help of statistically planned computer experiments for hyperparameter tuning.
Keywords: long short-term memory networks, convolutional recurrent networks, text classification, hyperparameter tuning, Tukey honest significant differences
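A minimal convolutional recurrent network of the kind discussed above is sketched below in Keras: convolution and pooling shorten the sequence before the LSTM, easing long-term-dependency learning. All sizes (vocabulary, filters, kernel, pool, units) are typical tunable hyperparameters assumed for illustration, not the paper's chosen settings.

```python
# Hedged sketch: a Conv1D + LSTM text classifier with tunable hyperparameters.
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, seq_len = 20000, 400
model = tf.keras.Sequential([
    layers.Input(shape=(seq_len,), dtype="int32"),
    layers.Embedding(vocab_size, 128),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=4),        # 4x shorter sequence into the LSTM
    layers.LSTM(64),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),   # binary text classification
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# tuning candidates: filters, kernel_size, pool_size, LSTM units, dropout rate
```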
Procedia PDF Downloads 129
2085 Colorectal Resection in Endometriosis: A Study on Conservative Vascular Approach
Authors: A. Zecchin, E. Vallicella, I. Alberi, A. Dalle Carbonare, A. Festi, F. Galeone, S. Garzon, R. Raffaelli, P. Pomini, M. Franchi
Abstract:
Introduction: Severe endometriosis is a multiorgan disease that involves the bowel in 31% of cases. Disabling symptoms and deep infiltration can lead to bowel obstruction, so surgical bowel treatment may be needed. In these cases, colorectal segment resection is usually performed via inferior mesenteric artery ligation, as radically as in oncological surgery. This study examined surgery based on preservation of the intestinal vascular axis. Postoperative complication risks (mainly the rate of dehiscence of the intestinal anastomoses) were assessed, and the results were compared with those reported in the literature for classical colorectal resection. Materials and methods: This was a retrospective study of 62 patients with deep infiltrating endometriosis of the bowel who underwent segmental resection with preservation of the intestinal vascular axis between 2013 and 2016. Complications related to the intervention were assessed both during hospitalization and 30-60 days after resection, with particular attention to the presence of anastomotic dehiscence. 52 patients were finally interviewed by telephone in order to investigate the presence or absence of intestinal constipation. Results and conclusion: The segmental intestinal resection performed in this study ensured a more conservative vascular approach, with a lower rate of anastomotic dehiscence (1.6%) compared to classical literature data (10.0% to 11.4%). No complications were observed regarding spontaneous recovery of intestinal motility and bladder emptying. Constipation in some patients, even years after the intervention, could not be assessed in the absence of a preoperative constipation assessment.
Keywords: anastomotic dehiscence, deep infiltrating endometriosis, colorectal resection, vascular axis preservation
Procedia PDF Downloads 204
2084 Multi-Impairment Compensation Based Deep Neural Networks for 16-QAM Coherent Optical Orthogonal Frequency Division Multiplexing System
Authors: Ying Han, Yuanxiang Chen, Yongtao Huang, Jia Fu, Kaile Li, Shangjing Lin, Jianguo Yu
Abstract:
In long-haul, high-speed optical transmission systems, the orthogonal frequency division multiplexing (OFDM) signal suffers from various linear and non-linear impairments. In recent years, researchers have proposed compensation schemes for specific impairments with remarkable effect; however, applying different impairment compensation algorithms in sequence increases transmission delay. With the widespread application of deep neural networks (DNN) in communication, DNN-based multi-impairment compensation is a promising scheme. In this paper, we propose and apply a DNN to compensate for multiple impairments of a 16-QAM coherent optical OFDM signal, thereby improving the performance of the transmission system. The trained DNN models are applied in the offline digital signal processing (DSP) module of the transmission system: they optimize the constellation mapping signals at the transmitter and compensate for multiple impairments of the decoded OFDM signal at the receiver. Furthermore, the models reduce the peak-to-average power ratio (PAPR) of the transmitted OFDM signal and the bit error rate (BER) of the received signal. We verify the effectiveness of the proposed scheme for 16-QAM coherent optical OFDM signals and demonstrate and analyze the transmission performance in different transmission scenarios. The experimental results show that the PAPR and BER of the transmission system are significantly reduced after using the trained DNN, showing that a DNN with a specific loss function and network structure can optimize the transmitted signal, learn the channel features, and effectively compensate for multiple impairments in fiber transmission.
Keywords: coherent optical OFDM, deep neural network, multi-impairment compensation, optical transmission
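The receiver-side idea can be sketched with a small network that maps distorted 16-QAM I/Q samples back toward the ideal constellation, learning linear and non-linear effects jointly. The toy channel (gain, cubic non-linearity, noise), the MLP sizes, and the training schedule are illustrative assumptions, not the paper's model.

```python
# Hedged sketch: DNN equalizer pulling distorted 16-QAM symbols back to ideal.
import torch
import torch.nn as nn

levels = torch.tensor([-3., -1., 1., 3.])
ideal = torch.cartesian_prod(levels, levels)          # 16-QAM constellation (I, Q)
idx = torch.randint(0, 16, (4096,))
tx = ideal[idx]

rx = 0.9 * tx + 0.05 * tx.pow(3) + 0.05 * torch.randn_like(tx)  # toy impairments

net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), 1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(rx), tx)        # learn the inverse channel
    loss.backward(); opt.step()
print("residual MSE:", loss.item())
```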
Procedia PDF Downloads 143
2083 Correlation of SPT N-Value and Equipment Drilling Parameters in Deep Soil Mixing
Authors: John Eric C. Bargas, Maria Cecilia M. Marcos
Abstract:
One of the most common ground improvement techniques is Deep Soil Mixing (DSM). As the technique has progressed, development is still lacking when it comes to depth control; this was the issue experienced during the installation of DSM in one of the national projects in the Philippines. This study assesses the feasibility of using equipment drilling parameters, such as hydraulic pressure, drilling speed and rotational speed, to determine the Standard Penetration Test (SPT) N-value of a specific soil. Hydraulic pressure and drilling speed, at a constant rotational speed of 30 rpm, have a positive correlation with the SPT N-value for cohesive soil and sand. A linear trend was observed for cohesive soil: the correlation of SPT N-value and hydraulic pressure yielded an R² = 0.5377, while the correlation of SPT N-value and drilling speed yielded an R² = 0.6355. The best-fitted model for sand is a polynomial trend: the correlation of SPT N-value and hydraulic pressure yielded an R² = 0.7088, while the correlation of SPT N-value and drilling speed yielded an R² = 0.4354. The low correlation may be attributed to the behavior of sand when the auger penetrates: sand tends to follow the rotation of the auger rather than resisting it, as was observed for very loose to medium dense sand. Specific energy and the product of hydraulic pressure and drilling speed yielded the same R² with a positive correlation, with a linear trend observed for cohesive soil and a polynomial trend for sand. Cohesive soil yielded an R² = 0.7320, a strong relationship; sand also yielded a strong relationship, with a coefficient of determination R² = 0.7203. It is therefore feasible to use hydraulic pressure and drilling speed to estimate the SPT N-value of a soil, and the product of hydraulic pressure and drilling speed can substitute for specific energy when estimating the SPT N-value. However, additional considerations are necessary to account for other influencing factors such as groundwater and the physical and mechanical properties of the soil.
Keywords: ground improvement, equipment drilling parameters, standard penetration test, deep soil mixing
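The curve-fitting step can be reproduced in outline: a linear trend for cohesive soil and a quadratic (polynomial) trend for sand, each scored with R². The drilling-parameter readings below are fabricated for illustration, not the study's measurements.

```python
# Hedged sketch: linear vs polynomial trends of SPT N-value, scored by R².
import numpy as np

def r_squared(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

pressure = np.array([40, 55, 70, 85, 100, 115], dtype=float)  # hydraulic pressure (toy)
n_cohesive = np.array([4, 7, 9, 13, 15, 18], dtype=float)     # SPT N-values (toy)
n_sand = np.array([3, 5, 10, 18, 27, 40], dtype=float)

lin = np.polyfit(pressure, n_cohesive, 1)      # linear trend (cohesive soil)
quad = np.polyfit(pressure, n_sand, 2)         # polynomial trend (sand)
print("cohesive R2 = %.3f" % r_squared(n_cohesive, np.polyval(lin, pressure)))
print("sand     R2 = %.3f" % r_squared(n_sand, np.polyval(quad, pressure)))
```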
Procedia PDF Downloads 47
2082 Use of Generative Adversarial Networks (GANs) in Neuroimaging and Clinical Neuroscience Applications
Authors: Niloufar Yadgari
Abstract:
GANs are a potent class of deep learning models that have found success in various fields. They are part of the larger group of generative techniques, which aim to produce authentic data using a probabilistic model that learns distributions from actual samples. In clinical settings, GANs have demonstrated improved ability to capture spatially intricate, nonlinear, and possibly subtle disease effects compared with conventional generative techniques. This review critically evaluates the current research on how GANs are being used in imaging studies of different neurological conditions, such as Alzheimer's disease, brain tumors, brain aging, and multiple sclerosis. We offer a clear explanation of the different GAN techniques for each use case in neuroimaging and delve into the key hurdles, unanswered questions, and potential advancements in utilizing GANs in this field. Our goal is to connect advanced deep learning techniques with neurology studies, showcasing how GANs can assist in clinical decision-making and enhance our comprehension of the structural and functional aspects of brain disorders.
Keywords: GAN, pathology, generative adversarial network, neuroimaging
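As background for the adversarial mechanism the review surveys, a minimal GAN training step is sketched below: the generator maps noise to synthetic feature vectors while the discriminator learns to tell them from real ones. Real neuroimaging-derived vectors are replaced by random data, and all sizes are illustrative.

```python
# Hedged sketch: one alternating GAN training loop on feature vectors.
import torch
import torch.nn as nn

d, z = 64, 16
G = nn.Sequential(nn.Linear(z, 128), nn.ReLU(), nn.Linear(128, d))
D = nn.Sequential(nn.Linear(d, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), 2e-4)
opt_d = torch.optim.Adam(D.parameters(), 2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(256, d)                     # stand-in for real samples
for _ in range(100):
    fake = G(torch.randn(256, z))
    # discriminator step: push real -> 1, fake -> 0
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(256, 1)) +
              bce(D(fake.detach()), torch.zeros(256, 1)))
    d_loss.backward(); opt_d.step()
    # generator step: fool the discriminator
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(256, 1))
    g_loss.backward(); opt_g.step()
print(d_loss.item(), g_loss.item())
```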
Procedia PDF Downloads 32