Search results for: urea deep placement
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2648

2258 Vitamin C Supplementation Modulates Zinc Levels and Antioxidant Values in Blood and Tissues of Diabetic Rats Fed Zinc-Deficient Diet

Authors: W. Fatmi, F. Kriba, Z. Kechrid

Abstract:

The aim of this study was to investigate the effect of vitamin C on blood biochemical parameters, tissue zinc, and antioxidant enzymes in diabetic rats fed a zinc-deficient diet. For that purpose, alloxan-induced diabetic rats were divided into four groups. The first group was fed a zinc-sufficient diet, while the second group was fed a zinc-deficient diet. The third and fourth groups received zinc-sufficient or zinc-deficient diets plus oral vitamin C (1 mg/l) for 27 days. Body weight and food intake were recorded regularly during the 27 days. On day 28, the animals were killed, and glucose, total lipids, triglycerides, protein, urea, serum zinc and tissue zinc concentrations, liver glycogen, GSH and TBARS concentrations, serum GOT, GPT, ALP and LDH activities, and liver GSH-Px, GST and catalase activities were determined. Body weight gain and food intake of zinc-deficient diabetic animals at the end of the experimental period were significantly lower than those of zinc-adequate diabetic animals. Dietary zinc deficiency significantly increased glucose, lipid, triglyceride, urea, and liver TBARS levels in diabetic rats. In contrast, serum zinc, tissue zinc, protein, liver glycogen, and GSH levels were decreased. Consumption of the zinc-deficient diet also led to an increase in serum GOT and GPT and liver GST activities, accompanied by a decrease in serum ALP and LDH and liver GSH-Px and CAT activities. Meanwhile, vitamin C treatment restored all the previous parameters approximately to their normal levels. Vitamin C supplementation presumably acts as an antioxidant and probably improves insulin activity, which significantly reduced the severity of zinc deficiency in diabetes.

Keywords: antioxidant, experimental diabetes, liver enzymes, vitamin C, zinc deficiency

Procedia PDF Downloads 352
2257 Mixed Hydrotropic Zaleplon Oral Tablets: Formulation and Neuropharmacological Effect on Plasma GABA Level

Authors: Ghada A. Abdelbary, Maha M. Amin, Mostafa Abdelmoteleb

Abstract:

Zaleplon (ZP) is a poorly soluble non-benzodiazepine hypnotic drug indicated for the short-term treatment of insomnia, with a bioavailability of about 30%. The aim of the present study is to enhance the solubility, and consequently the bioavailability, of ZP using hydrotropic agents (HA). Phase solubility diagrams of ZP in the presence of different molar concentrations of HA (sodium benzoate, urea, ascorbic acid, resorcinol, nicotinamide, and piperazine) were constructed. ZP/sodium benzoate and ZP/resorcinol microparticles were prepared by melt, solvent evaporation, and melt-solvent evaporation techniques, followed by XRD analysis. Directly compressed mixed hydrotropic ZP tablets containing sodium benzoate and resorcinol in different weight ratios were prepared and evaluated against the commercially available tablets (Sleep aid® 5 mg). The effect of shelf and accelerated stability storage (40°C ± 2°C/75% RH ± 5% RH) on the optimum tablet formula (F5) over six months was studied. The enhancement of ZP solubility follows the order: resorcinol > sodium benzoate > ascorbic acid > piperazine > urea > nicotinamide, with about 350- and 2000-fold increases using 1 M sodium benzoate and resorcinol, respectively. ZP/HA microparticles follow the order: solvent evaporation > melt-solvent evaporation > melt > physical mixture, which was further confirmed by the complete conversion of ZP into the amorphous form. The mixed hydrotropic tablet formula (F5), composed of ZP/(resorcinol:sodium benzoate 4:1 w/w) microparticles prepared by solvent evaporation, exhibits in-vitro dissolution of 31.7 ± 0.11% after five minutes (Q5min), compared to 10.0 ± 0.10% for Sleep aid® (5 mg). F5 showed a significantly higher plasma GABA concentration of 122.5 ± 5.5 mg/mL, compared to 118 ± 1.00 and 27.8 ± 1.5 mg/mL for Sleep aid® (5 mg) and a saline-only control, respectively, suggesting a higher neuropharmacological effect of ZP following hydrotropic solubilization.

Keywords: zaleplon, hydrotropic solubilization, plasma GABA level, mixed hydrotropy

Procedia PDF Downloads 430
2256 Deepnic, A Method to Transform Each Variable into Image for Deep Learning

Authors: Nguyen J. M., Lucas G., Brunner M., Ruan S., Antonioli D.

Abstract:

Deep learning based on convolutional neural networks (CNNs) is a very powerful technique for classifying information from an image. We propose a new method, DeepNic, to transform each variable of a tabular dataset into an image in which each pixel represents a set of conditions that allow the variable to make an error-free prediction. The contrast of each pixel is proportional to its prediction performance, and the color of each pixel corresponds to a sub-family of NICs. NICs are probabilities that depend on the number of inputs to each neuron and the range of coefficients of the inputs. Each variable can therefore be expressed as a function of a matrix of two vectors corresponding to an image whose pixels express predictive capability. Our objective is to transform each variable of tabular data into its own image that can be analysed by CNNs, unlike other methods, which use all the variables to construct a single image. We analyse the NIC information of each variable and express it as a function of the number of neurons and the range of coefficients used. The predictive value and the category of the NIC are expressed by the contrast and the color of the pixel. We have developed a pipeline to implement this technology and have successfully applied it to genomic expression data from an Affymetrix chip.

Keywords: tabular data, deep learning, perfect trees, NICs

Procedia PDF Downloads 72
2255 Comparison of Health Related Quality of Life in End Stage Renal Diseases Undergoing Twice and Thrice Hemodialysis

Authors: Anamika A. Sharma, Arezou Ahmadi R. A., Narendra B. Parihar, Manjusha Sajith

Abstract:

Introduction: Hemodialysis is the most effective therapeutic technique for patients with ESRD, second only to renal transplantation. However, it is a lifelong therapy that requires frequent visits to hospitals or dialysis centers, mainly twice or thrice weekly, and thus considerably changes the patient's normal way of living. This study therefore aimed to assess health-related quality of life in end-stage renal disease (ESRD) patients undergoing twice-weekly and thrice-weekly hemodialysis. Method: A prospective, observational, cross-sectional study was carried out from September 2016 to April 2017 in end-stage renal disease patients undergoing hemodialysis. Socio-demographic and clinical details of patients were obtained from the medical records. The WHOQOL-BREF questionnaire was used to assess health-related quality of life. Quality-of-life scores for twice-weekly and thrice-weekly hemodialysis were analyzed using the Kruskal-Wallis test. Results: The majority of respondents were male (72.55%), married (89.31%), employed (58.02%), belonged to the middle class (71.00%), and resided in rural areas (58.78%). The mean ages of patients undergoing twice-weekly and thrice-weekly hemodialysis were 51.89 ± 15.64 years and 51.33 ± 15.70 years, respectively. The average quality-of-life scores observed for twice-weekly and thrice-weekly hemodialysis were 52.07 ± 13.30 (p=0.0037) and 52.87 ± 13.47 (p=0.0004), respectively. The hemoglobin of thrice-weekly dialysis patients (10.28 gm/dL) was higher than that of twice-weekly dialysis patients (9.23 gm/dL). Patients undergoing thrice-weekly dialysis had improved serum urea and serum creatinine values (95.85 mg/dL, 8.32 mg/dL) compared to twice-weekly hemodialysis (104.94 mg/dL, 8.68 mg/dL). Conclusion: Our study concluded that there was no significant difference in overall health-related quality of life between twice-weekly and thrice-weekly hemodialysis. More frequent hemodialysis was associated with improved control of hypertension, serum urea, and serum creatinine levels.
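As an aside for readers unfamiliar with the test used above, a minimal sketch of the Kruskal-Wallis H statistic (with tie correction) is shown below; the score arrays are hypothetical, not the study's data:

```python
import numpy as np

def average_ranks(values):
    # Assign 1-based ranks, averaging tied values.
    values = np.asarray(values, dtype=float)
    order = np.argsort(values, kind="stable")
    sorted_vals = values[order]
    ranks = np.empty(len(values))
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and sorted_vals[j + 1] == sorted_vals[i]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1  # mean of 1-based ranks i+1..j+1
        i = j + 1
    return ranks

def kruskal_wallis_h(*groups):
    # Rank all observations jointly, then compare mean ranks across groups.
    all_vals = np.concatenate(groups)
    n = len(all_vals)
    ranks = average_ranks(all_vals)
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        h += r.sum() ** 2 / len(g)
        start += len(g)
    h = 12.0 / (n * (n + 1)) * h - 3 * (n + 1)
    # Correction for ties in the pooled sample.
    _, counts = np.unique(all_vals, return_counts=True)
    correction = 1 - (counts ** 3 - counts).sum() / (n ** 3 - n)
    return h / correction if correction > 0 else h
```

For identical group distributions H is 0; larger H means stronger evidence that the groups differ (p-values then come from a chi-squared approximation, e.g. via `scipy.stats.kruskal`).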

Keywords: end stage renal disease, health related quality of life, twice weekly hemodialysis, thrice weekly hemodialysis

Procedia PDF Downloads 162
2254 Online Yoga Asana Trainer Using Deep Learning

Authors: Venkata Narayana Chejarla, Nafisa Parvez Shaik, Gopi Vara Prasad Marabathula, Deva Kumar Bejjam

Abstract:

Yoga is an advanced, well-recognized discipline with roots in Indian philosophy. It benefits both the body and the psyche: regular practice helps people relax and sleep better while also enhancing their balance, endurance, and concentration. Yoga can be learned in a variety of settings, including at home with the aid of books and the internet, as well as in yoga studios under the guidance of an instructor. Self-learning does not teach the proper yoga poses, and performing them without the right instruction can result in serious injuries. We developed "Online Yoga Asana Trainer Using Deep Learning" so that people can practice yoga without a teacher. Our project is developed using TensorFlow, MoveNet, and Keras. The system makes use of a Kaggle dataset that includes 25 different yoga poses. The first stage applies the MoveNet model to extract the 17 body keypoints from the dataset images; the next stage involves preprocessing and building a pose classification model using neural networks. The system achieves a 98.3% accuracy rate and is designed to work with live video.
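A common preprocessing step for keypoint-based pose classifiers like the one described is to make the 17 extracted keypoints invariant to position and scale before feeding them to the network; a minimal numpy sketch (an illustration, not the authors' code) is:

```python
import numpy as np

def normalize_keypoints(kp):
    """Center 17 (x, y) keypoints on their centroid and scale to unit size,
    so the pose classifier sees position- and scale-invariant input."""
    kp = np.asarray(kp, dtype=float)      # shape (17, 2)
    centered = kp - kp.mean(axis=0)       # translate to the centroid
    scale = np.linalg.norm(centered)      # overall pose size (assumed nonzero)
    return (centered / scale).ravel()     # flatten to a 34-d feature vector
```

The flattened 34-dimensional vector is then what a small dense or convolutional classifier would be trained on.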

Keywords: yoga, deep learning, movenet, tensorflow, keras, CNN

Procedia PDF Downloads 219
2253 Object-Scene: Deep Convolutional Representation for Scene Classification

Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang

Abstract:

Traditional image classification is based on encoding schemes (e.g., Fisher Vector, Vector of Locally Aggregated Descriptors) over low-level image features (e.g., SIFT, HoG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. In scene classification, scattered objects differ in size, category, layout, and number, so it is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while accounting for object-centric and scene-centric information. First, to exploit object-centric and scene-centric information, two CNNs trained separately on the ImageNet and Places datasets are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of the two CNNs at multiple scales, we find that each CNN works better in a different scale range. A scale-wise CNN adaptation is reasonable, since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and the per-scale representations are then merged into a single vector using a post-processing step called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences; hence, scale-wise normalization followed by average pooling balances the influence of each scale, since different numbers of features are extracted at each. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear Support Vector Machine, a simple yet efficient way to classify the scene categories. Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which suggests that the representation can be applied to other visual recognition tasks.
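The scale-wise normalization step described above can be sketched as follows: each scale's Fisher Vector is power- and L2-normalized independently before average pooling, so no scale dominates merely because it contributed more features (an illustrative sketch, not the authors' implementation):

```python
import numpy as np

def scale_wise_pool(fvs):
    """fvs: list of Fisher-vector arrays, one per scale.
    Each scale is normalized independently (signed square root followed by
    L2 normalization, as is standard for Fisher Vectors), then the scales
    are average-pooled into a single global representation."""
    normed = []
    for fv in fvs:
        fv = np.sign(fv) * np.sqrt(np.abs(fv))   # power normalization
        fv = fv / (np.linalg.norm(fv) + 1e-12)   # per-scale L2 normalization
        normed.append(fv)
    return np.mean(normed, axis=0)               # average pooling across scales
```

The pooled vector is then the input to the linear SVM classifier.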

Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization

Procedia PDF Downloads 313
2252 Optimal Capacitors Placement and Sizing Improvement Based on Voltage Reduction for Energy Efficiency

Authors: Zilaila Zakaria, Muhd Azri Abdul Razak, Muhammad Murtadha Othman, Mohd Ainor Yahya, Ismail Musirin, Mat Nasir Kari, Mohd Fazli Osman, Mohd Zaini Hassan, Baihaki Azraee

Abstract:

Energy efficiency can be realized by minimizing power loss while a sufficient amount of energy is delivered in an electrical distribution system. In this report, a detailed analysis of the energy efficiency of an electrical distribution system was carried out with an implementation of optimal capacitor placement and sizing (OCPS). Particle swarm optimization (PSO) is used to determine the optimal locations and sizes of the capacitors, whereby minimizing energy consumption and power losses improves energy efficiency. In addition, a certain number of busbars or locations are identified in advance, before the PSO is performed, to solve the OCPS problem; in this case study, candidate busbars or locations are pre-selected using techniques such as the power-loss index (PLI). The PSO is designed to provide a new population with improved sizing and placement of capacitors. The total cost of power losses, energy consumption, and capacitor installation are the components considered in the objective and fitness functions of the proposed optimization technique. Voltage magnitude limits, the total harmonic distortion (THD) limit, the power factor limit, and capacitor size limits are the constraints of the proposed optimization technique. The proposed methodologies, implemented in MATLAB® software, transfer information to, execute the three-phase unbalanced load-flow solution in, and retrieve results from the three-phase unbalanced electrical distribution systems modeled in SIMULINK® software. The effectiveness of the proposed methods in improving energy efficiency has been verified through several case studies, with results obtained from the IEEE 13-bus unbalanced electrical distribution test system and from a practical electrical distribution system model of the Sultan Salahuddin Abdul Aziz Shah (SSAAS) government building in Shah Alam, Selangor.
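A minimal, generic PSO sketch illustrating the optimizer at the heart of the method is shown below; the toy quadratic objective stands in for the actual cost of power losses, energy consumption, and capacitor installation, which would require the three-phase load-flow solution described above:

```python
import random

def pso_minimize(loss, n_dims, bounds, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization: each particle is a candidate
    capacitor-size vector; velocities are pulled toward the particle's own
    best position and the swarm's global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(n_dims)] for _ in range(n_particles)]
    vel = [[0.0] * n_dims for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_dims):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))  # clamp to bounds
            val = loss(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the real problem, `loss` would run the SIMULINK® load flow and add penalty terms for the voltage, THD, power factor, and capacitor size constraints.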

Keywords: particle swarm optimization, pre-determine of capacitor locations, optimal capacitors placement and sizing, unbalanced electrical distribution system

Procedia PDF Downloads 414
2251 An Ensemble Deep Learning Architecture for Imbalanced Classification of Thoracic Surgery Patients

Authors: Saba Ebrahimi, Saeed Ahmadian, Hedie Ashrafi

Abstract:

Selecting appropriate patients for surgery is one of the main issues in thoracic surgery (TS): both the short-term and long-term risks and benefits of surgery must be considered in the patient selection criteria. Existing datasets of TS patients have limitations, including missing attribute values and an imbalanced distribution of survival classes. In this study, a novel ensemble architecture of deep learning networks, based on stacking different linear and non-linear layers, is proposed to deal with imbalanced datasets. The categorical and numerical features are processed in separate layers, with the ability to shrink away unnecessary features. Then, after extracting insight from the raw features, a novel biased-kernel layer is applied to reinforce the gradient of the minority class, causing the network to train better than current methods. Finally, the performance and advantages of our proposed model over existing models are examined by predicting patient survival after thoracic surgery using real-life clinical data for lung cancer patients.
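The biased-kernel layer itself is specific to the paper, but a standard way to reinforce the gradient of a minority class, shown here purely as an illustration, is inverse-frequency class weighting of the loss:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency so that a rare class
    (e.g. non-survivors) contributes as much total gradient as a common one."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * cnt) for cls, cnt in counts.items()}
```

The resulting per-class weights would multiply the per-sample loss terms during training.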

Keywords: deep learning, ensemble models, imbalanced classification, lung cancer, TS patient selection

Procedia PDF Downloads 122
2250 Code Embedding for Software Vulnerability Discovery Based on Semantic Information

Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson

Abstract:

Deep learning methods have seen increasing application to the long-standing security research goal of automatic vulnerability detection in source code. Attention, however, must still be paid to the task of producing vector representations of source code (code embeddings) as input to these deep learning models. Graphical representations of code, most prominently Abstract Syntax Trees and Code Property Graphs, have recently received some use for this task; however, for very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning the input to only vulnerability-relevant information, but little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph, at the expense of the information contained in the graph's nodes. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model that uses semantic-based feature selection for its vulnerability classification. It uses information from the nodes as well as the structure of the code graph to select the features that are most indicative of the presence or absence of vulnerabilities. The model is implemented and experimentally tested on the SARD Juliet vulnerability test suite to determine its efficacy. It improves on existing code-graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.
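As a toy illustration of node-level (rather than purely structural) code features, the sketch below embeds a snippet as a histogram of its Abstract Syntax Tree node types using Python's `ast` module; SCEVD's actual feature selection is far richer than this:

```python
import ast
from collections import Counter

def node_type_histogram(source):
    """Embed a code snippet as a bag of AST node types - a crude
    node-level feature vector in the spirit of graph-based code embeddings."""
    tree = ast.parse(source)
    return Counter(type(node).__name__ for node in ast.walk(tree))

snippet = "def f(x):\n    return x * 2\n"
hist = node_type_histogram(snippet)
```

A real pipeline would combine such node information with graph structure before feature selection.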

Keywords: code representation, deep learning, source code semantics, vulnerability discovery

Procedia PDF Downloads 138
2249 A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators

Authors: Wei Zhang

Abstract:

With the rapid development of deep learning, neural networks and deep learning algorithms play a significant role in various practical applications. Due to their high accuracy and good performance, convolutional neural networks (CNNs) in particular have become a research hot spot in the past few years. However, network sizes have become increasingly large to meet the demands of practical applications, which poses a significant challenge for constructing high-performance implementations of deep neural networks. Meanwhile, many application scenarios also place strict requirements on the performance and power consumption of hardware devices. It is therefore critical to choose a suitable computing platform for hardware acceleration of CNNs. This article surveys recent advances in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of FPGA-based accelerators across different devices and network models are reviewed and compared against Graphics Processing Unit (GPU), Application-Specific Integrated Circuit (ASIC), and Digital Signal Processor (DSP) implementations, with our own critical analysis and comments. Finally, we discuss these acceleration and optimization methods on FPGA platforms from different perspectives to explore the opportunities and challenges for future research, and we offer a prospect for the future development of FPGA-based accelerators.

Keywords: deep learning, field programmable gate array, FPGA, hardware accelerator, convolutional neural networks, CNN

Procedia PDF Downloads 105
2248 Satellite Imagery Classification Based on Deep Convolution Network

Authors: Zhong Ma, Zhuping Wang, Congxin Liu, Xiangzeng Liu

Abstract:

Satellite imagery classification is a challenging problem with many practical applications. In this paper, we designed a deep convolutional neural network (DCNN) to classify satellite imagery. The contributions of this paper are twofold. First, to cope with the large-scale variance in satellite images, we introduced the inception module, which has multiple filters of different sizes at the same level, as the building block of our DCNN model. Second, we proposed a genetic-algorithm-based method to efficiently search for the best hyper-parameters of the DCNN in a large search space. The proposed method is evaluated on a benchmark database. The results show that the proposed hyper-parameter search method guides the search towards better regions of the parameter space. Based on the hyper-parameters found, we built our DCNN models and evaluated their performance on satellite imagery classification; the results show that the classification accuracy of the proposed models outperforms the state-of-the-art method.
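A generic genetic-algorithm hyper-parameter search of the kind described can be sketched as follows; the discrete search space and fitness function here are toy stand-ins for the real DCNN training-and-validation loop:

```python
import random

def ga_search(fitness, space, pop_size=12, generations=15, seed=1):
    """Tiny genetic algorithm over a discrete hyper-parameter space.
    `space` maps each hyper-parameter name to its candidate values;
    `fitness` scores a configuration (higher is better)."""
    rng = random.Random(seed)
    keys = sorted(space)
    pop = [{k: rng.choice(space[k]) for k in keys} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]                  # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = {k: rng.choice((a[k], b[k])) for k in keys}  # uniform crossover
            if rng.random() < 0.2:                       # point mutation
                k = rng.choice(keys)
                child[k] = rng.choice(space[k])
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy stand-in: pretend the "best" DCNN uses lr=0.01 and 64 filters.
space = {"lr": [0.1, 0.01, 0.001], "filters": [16, 32, 64]}
def fitness(cfg):
    return int(cfg["lr"] == 0.01) + int(cfg["filters"] == 64)

best = ga_search(fitness, space)
```

In the paper's setting, `fitness` would train a candidate DCNN and return its validation accuracy.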

Keywords: satellite imagery classification, deep convolution network, genetic algorithm, hyper-parameter optimization

Procedia PDF Downloads 278
2247 Accuracy Improvement of Traffic Participant Classification Using Millimeter-Wave Radar by Leveraging Simulator Based on Domain Adaptation

Authors: Tokihiko Akita, Seiichi Mita

Abstract:

Millimeter-wave radar is the sensor most robust to adverse environments, making it essential for environment recognition in automated driving. However, its reflection signal is sparse and unstable, so it is difficult to obtain high recognition accuracy. Deep learning can provide high recognition accuracy even for such signals, but it requires large-scale datasets with ground truth, and annotating millimeter-wave radar data is particularly costly. A simulator that can generate a large annotated dataset is an effective solution. However, radar simulation is harder to match to real-world data than camera imagery, and recognition by deep learning with higher-order features from the simulator deviates further. We have attempted to improve the accuracy of traffic participant classification by fusing simulator and real-world data with a domain adaptation technique. Experimental results with the domain adaptation network we created show that classification accuracy can be improved even with only a small amount of real-world data.

Keywords: millimeter-wave radar, object classification, deep learning, simulation, domain adaptation

Procedia PDF Downloads 74
2246 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra

Authors: Bitewulign Mekonnen

Abstract:

Context: This paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models, including support vector machine regression (SVMR), partial least squares regression, extra tree regression (ETR), random forest regression, extreme gradient boosting, and principal component analysis-neural network (PCA-NN), are employed to predict glucose concentration. The NIR spectra data are randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²) > 0.985. The deep learning model achieves high macro-averaged scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy.
Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of otherwise indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding reference glucose concentrations are measured in increments of 20 mg/dl. The data are randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.
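As a self-contained illustration of regressing concentration from spectra (ordinary least squares on synthetic data, standing in for the six models and the dataset of the study):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for NIR spectra: concentrations in 20 mg/dl steps,
# each "spectrum" a noisy linear response over 50 wavelength channels.
conc = np.repeat(np.arange(20, 420, 20), 5).astype(float)    # mg/dl, 100 samples
response = rng.normal(0.0, 1.0, 50)                          # channel sensitivities
spectra = conc[:, None] * response + rng.normal(0.0, 5.0, (conc.size, 50))

# Ordinary least squares with an intercept column.
X = np.hstack([spectra, np.ones((spectra.shape[0], 1))])
coef, *_ = np.linalg.lstsq(X, conc, rcond=None)
pred = X @ coef

# Correlation coefficient between predicted and true concentration.
r = np.corrcoef(pred, conc)[0, 1]
```

In the study, this fit would be replaced by the listed regressors and evaluated on held-out splits rather than in-sample.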

Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network

Procedia PDF Downloads 70
2245 Analysis of Facial Expressions with Amazon Rekognition

Authors: Kashika P. H.

Abstract:

The development of computer vision systems has been greatly aided by the efficient and precise detection of images and videos. Although the ability to recognize and comprehend images is a strength of the human brain, tackling this task with technology is exceedingly challenging. In the past few years, the use of deep learning algorithms for object detection has expanded dramatically. One of the key issues in the realm of image recognition is the recognition and detection of particular notable people in randomly acquired photographs. Face recognition identifies, assesses, and compares faces for a variety of purposes, including user identification, user counting, and classification. With the aid of an accessible deep-learning-based API, this article aims to recognize the faces of various people and their facial descriptors more accurately. The purpose of this study is to locate specific individuals and deliver accurate information about them by using the Amazon Rekognition system to identify a given person within a vast image dataset. We chose the Amazon Rekognition system, which allows for more accurate face analysis, face comparison, and face search, to tackle this challenge.
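The sketch below shows the general shape of a DetectFaces-style response and how one might pull out the dominant emotion per face; the response here is a hand-written, abbreviated sample rather than real API output (a live call would go through boto3's `detect_faces`):

```python
# Abbreviated sample of the JSON shape returned by Rekognition's DetectFaces
# (real responses carry many more attributes per face).
sample_response = {
    "FaceDetails": [
        {
            "BoundingBox": {"Width": 0.25, "Height": 0.4, "Left": 0.1, "Top": 0.2},
            "Emotions": [
                {"Type": "HAPPY", "Confidence": 92.1},
                {"Type": "CALM", "Confidence": 5.3},
            ],
            "Confidence": 99.8,
        }
    ]
}

def dominant_emotions(response, min_face_confidence=90.0):
    """Return the highest-confidence emotion label for each
    confidently detected face."""
    labels = []
    for face in response["FaceDetails"]:
        if face["Confidence"] >= min_face_confidence:
            top = max(face["Emotions"], key=lambda e: e["Confidence"])
            labels.append(top["Type"])
    return labels
```

The parsing logic is independent of AWS and works on any response of this shape.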

Keywords: Amazon Rekognition, API, deep learning, computer vision, face detection, text detection

Procedia PDF Downloads 92
2244 The Rigor and Relevance of the Mathematics Component of the Teacher Education Programmes in Jamaica: An Evaluative Approach

Authors: Avalloy McCarthy-Curvin

Abstract:

For over fifty years, there has been widespread dissatisfaction with the teaching of mathematics in Jamaica. Studies done in the Jamaican context highlight that teachers at the end of training do not have a deep understanding of the mathematics content they teach. Little research has been done in the Jamaican context that advances contextual knowledge of the problem in order ultimately to provide a solution. The aim of this study is to identify what influences this outcome of teacher education in Jamaica so as to remedy the problem. The study formatively evaluated the curriculum documents, assessments, and curriculum delivery used in teacher training institutions in Jamaica to determine their rigor (the extent to which the written documents, instruction, and assessments focus on enabling pre-service teachers to develop a deep understanding of mathematics) and relevance (the extent to which they focus on developing the requisite knowledge for teaching mathematics). The findings show that neither the curriculum documents, the instruction, nor the assessments ensure rigor or enable pre-service teachers to develop the knowledge and skills they need to teach mathematics effectively.

Keywords: relevance, rigor, deep understanding, formative evaluation

Procedia PDF Downloads 218
2243 A Deep Learning Approach to Detect Complete Safety Equipment for Construction Workers Based on YOLOv7

Authors: Shariful Islam, Sharun Akter Khushbu, S. M. Shaqib, Shahriar Sultan Ramit

Abstract:

In the construction sector, ensuring worker safety is of the utmost significance. In this study, a deep learning-based technique is presented for identifying safety gear worn by construction workers, such as helmets, goggles, jackets, gloves, and footwear. The suggested method precisely locates these safety items using the YOLOv7 (You Only Look Once) object detection algorithm. The dataset utilized in this work consists of labeled images split into training, testing, and validation sets. Each image has bounding-box labels that indicate where the safety equipment is located within the image. The model is trained on this custom labeled dataset through an iterative training approach to identify and categorize the safety equipment. Our trained model performed admirably, with good precision, recall, and F1-score for safety equipment recognition, and its evaluation produced encouraging results, with a mAP@0.5 score of 87.7%. The model performs effectively, making it possible to quickly identify safety-equipment violations on building sites. A thorough evaluation of the outcomes reveals the model's advantages and points out potential areas for development. By offering an automatic and trustworthy method for safety equipment detection, this research contributes to the fields of computer vision and workplace safety. The proposed deep learning-based approach will increase safety compliance and reduce the risk of accidents in the construction industry.
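Downstream of the detector, compliance checking reduces to set logic over the predicted classes; a sketch follows, where the class names and confidence threshold are illustrative rather than taken from the paper:

```python
# Required PPE classes (illustrative names matching the abstract's list).
REQUIRED_PPE = {"helmet", "goggles", "jacket", "gloves", "footwear"}

def missing_equipment(detections, min_confidence=0.5):
    """Given detector output as (class_name, confidence) pairs for one
    worker, report which required safety items were not confidently seen."""
    seen = {cls for cls, conf in detections if conf >= min_confidence}
    return sorted(REQUIRED_PPE - seen)

# Example: gloves detected below threshold count as missing.
detections = [("helmet", 0.91), ("jacket", 0.88), ("gloves", 0.47)]
```

An empty result would mean the worker is fully compliant.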

Keywords: deep learning, safety equipment detection, YOLOv7, computer vision, workplace safety

Procedia PDF Downloads 54
2242 Automatic Product Identification Based on Deep-Learning Theory in an Assembly Line

Authors: Fidel Lòpez Saca, Carlos Avilés-Cruz, Miguel Magos-Rivera, José Antonio Lara-Chávez

Abstract:

Automated object recognition and identification systems are widely used throughout the world, particularly in assembly lines, where they perform quality control and automatic part selection tasks. This article presents the design and implementation of an object recognition system for an assembly line. The proposed shape-and-color recognition system is based on deep learning theory, using a specially designed convolutional network architecture. The methodology involves stages such as image capture, color filtering, location of object mass centers, detection of horizontal and vertical object boundaries, and object clipping. Once the objects are cut out, they are sent to a convolutional neural network, which automatically identifies the type of figure. The identification system works in real time. The implementation was done on a Raspberry Pi 3 system and on a Jetson Nano device. The proposal is used in an assembly course of a bachelor's degree in industrial engineering. The results presented include a study of the recognition efficiency and processing time.
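The mass-center and boundary stages described above can be sketched in a few lines of numpy, assuming the color-filtering step has already produced a binary mask (an illustrative sketch, not the authors' implementation):

```python
import numpy as np

def locate_object(mask):
    """From a binary segmentation mask, compute the object's mass center
    and its vertical/horizontal boundaries for clipping."""
    ys, xs = np.nonzero(mask)                      # coordinates of object pixels
    center = (ys.mean(), xs.mean())                # mass center (row, col)
    bounds = (ys.min(), ys.max(), xs.min(), xs.max())  # top, bottom, left, right
    return center, bounds

# A 2x4 rectangular "object" in a 6x6 frame.
mask = np.zeros((6, 6), dtype=int)
mask[2:4, 1:5] = 1
```

The bounding coordinates are then used to clip the object before it is passed to the CNN.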

Keywords: deep-learning, image classification, image identification, industrial engineering

Procedia PDF Downloads 142
2241 A Survey of Skin Cancer Detection and Classification from Skin Lesion Images Using Deep Learning

Authors: Joseph George, Anne Kotteswara Roa

Abstract:

Skin disease is one of the most common health issues faced by people nowadays. Skin cancer (SC) is one among them, and its detection relies on skin biopsy outputs and the expertise of doctors, which consumes time and can produce inaccurate results. Skin cancer detection at an early stage is a challenging task; left undetected, the cancer easily spreads to the whole body and leads to an increase in the mortality rate, whereas it is curable when detected early. In order to classify skin cancer correctly and accurately, the critical task is skin cancer identification and classification, which is largely based on disease features such as shape, size, color, and symmetry. Many skin diseases share similar characteristics, which makes it challenging to select important features from skin cancer dataset images. An automated skin cancer detection and classification framework is therefore required to improve diagnostic accuracy and to address the scarcity of human experts. Recently, deep learning techniques such as the convolutional neural network (CNN), deep belief network (DBN), artificial neural network (ANN), recurrent neural network (RNN), and long short-term memory (LSTM) have been widely used for the identification and classification of skin cancers. This survey reviews different DL techniques for skin cancer identification and classification. Performance metrics such as precision, recall, accuracy, sensitivity, specificity, and F-measure are used to evaluate the effectiveness of SC identification using DL techniques. By using these DL techniques, classification accuracy increases while computational complexity and time consumption are mitigated.
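The performance metrics named in the survey are all derived from confusion-matrix counts. A minimal sketch of their standard definitions (the counts below are hypothetical, not values from any surveyed paper):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard evaluation metrics from true/false positive and negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)              # also called sensitivity
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "accuracy": accuracy,
            "f_measure": f_measure}
```

For example, a classifier with 8 true positives, 2 false positives, 85 true negatives, and 5 false negatives has a precision of 0.8 and an accuracy of 0.93, but a recall of only about 0.62, which illustrates why accuracy alone is an insufficient measure for imbalanced medical datasets.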

Keywords: skin cancer, deep learning, performance measures, accuracy, datasets

Procedia PDF Downloads 108
2240 Testing Nitrogen and Iron Based Compounds as an Environmentally Safer Alternative to Control Broadleaf Weeds in Turf

Authors: Simran Gill, Samuel Bartels

Abstract:

Turfgrass is an important component of urban and rural lawns and landscapes. However, broadleaf weeds such as dandelions (Taraxacum officinale) and white clovers (Trifolium repens) pose major challenges to the health and aesthetics of turfgrass fields. Chemical weed control methods, such as 2,4-D weedicides, have been widely deployed; however, their safety and environmental impacts are often debated. Alternative, environmentally friendly control methods have been considered, but experimental tests of their effectiveness have been limited. This study investigates the use and effectiveness of nitrogen and iron compounds as nutrient management methods of weed control. The first phase of a two-phase experiment was conducted under controlled greenhouse conditions on a blend of cool-season turfgrasses (perennial ryegrass (Lolium perenne), Kentucky bluegrass (Poa pratensis), and creeping red fescue (Festuca rubra)) grown in plastic containers. It involved the application of nitrogen (urea and ammonium sulphate) and iron (chelated iron and iron sulphate) compounds and their combinations (urea × chelated iron, urea × iron sulphate, ammonium sulphate × chelated iron, ammonium sulphate × iron sulphate), contrasted with a chemical 2,4-D weedicide and a control (no application) treatment. There were three replicates of each treatment, resulting in a total of 30 treatment combinations. The parameters assessed during weekly data collection included a visual quality rating of weeds (nominal scale of 0-9), number of leaves, longest leaf span, number of weeds, chlorophyll fluorescence of grass, a visual quality rating of grass (0-9), and the weight of dried grass clippings.
The results of the experiment, conducted over a period of 12 weeks with three applications at intervals of 4 weeks, showed that the combination of ammonium sulphate and iron sulphate was most effective in halting the growth and establishment of dandelions and clovers while also improving turf health. The second phase of the experiment, which involved the ammonium sulphate × iron sulphate, weedicide, and control treatments, was conducted outdoors on already established perennial turf with weeds under natural field conditions. After 12 weeks of observation, the treatments were comparable in terms of weed control, but the ammonium sulphate × iron sulphate treatment fared much better in terms of improved visual quality of the turf and other quality ratings. Preliminary results from these experiments thus suggest that nutrient management based on nitrogen and iron compounds could be a useful environmentally friendly alternative for controlling broadleaf weeds and improving the health and quality of turfgrass.

Keywords: broadleaf weeds, nitrogen, iron, turfgrass

Procedia PDF Downloads 49
2239 Integrating Knowledge Distillation of Multiple Strategies

Authors: Min Jindong, Wang Mingxia

Abstract:

With the widespread use of artificial intelligence in daily life, computer vision, and especially deep convolutional neural network models, has developed rapidly. As the complexity of real visual target detection tasks and the required recognition accuracy increase, target detection network models also become very large. A huge deep neural network model is not conducive to deployment on edge devices with limited resources, and the timeliness of its inference is poor. In this paper, knowledge distillation is used to compress a huge and complex deep neural network model, and the knowledge contained in the complex network model is comprehensively transferred to another lightweight network model. Different from traditional knowledge distillation methods, we propose a novel knowledge distillation that incorporates multi-faceted features, called M-KD. When training and optimizing the deep neural network model for target detection, the soft target output of the teacher network, the relationships between the layers of the teacher network, and the feature attention maps of the hidden layers of the teacher network are all transferred to the student network as knowledge. At the same time, we also introduce an intermediate transition layer, that is, an intermediate guidance layer, between the teacher network and the student network to bridge the large gap between them. Finally, this paper adds an exploration module to the traditional teacher-student knowledge distillation model, so that the student network not only inherits the knowledge of the teacher network but also explores some new knowledge and characteristics.
Comprehensive experiments in this paper using different distillation parameter configurations across multiple datasets and convolutional neural network models demonstrate that our proposed new network model achieves substantial improvements in speed and accuracy performance.
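The soft-target transfer mentioned above is conventionally implemented as a temperature-softened KL divergence between teacher and student outputs. The following plain-Python sketch shows that standard loss (in the style of Hinton's original formulation); it is not the paper's M-KD implementation, which additionally transfers layer relationships and attention maps:

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax over raw logits, softened by dividing by a temperature."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def soft_target_loss(student_logits, teacher_logits, temperature=4.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradient magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl
```

A higher temperature flattens the teacher's distribution, exposing the relative similarities among non-target classes ("dark knowledge") that a hard one-hot label would hide.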

Keywords: object detection, knowledge distillation, convolutional network, model compression

Procedia PDF Downloads 261
2238 Spontaneous and Posed Smile Detection: Deep Learning, Traditional Machine Learning, and Human Performance

Authors: Liang Wang, Beste F. Yuksel, David Guy Brizan

Abstract:

A computational model of affect that can distinguish between spontaneous and posed smiles with no errors on a large, popular data set using deep learning techniques is presented in this paper. A Long Short-Term Memory (LSTM) classifier, a type of recurrent neural network, is utilized and compared to human classification. Results showed that while human classification (mean accuracy of 0.7133) was above chance, the LSTM model was more accurate than both human classification and other comparable state-of-the-art systems. Additionally, a high accuracy rate was maintained with small amounts of training videos (70 instances). Important features were derived and analyzed to further understand the success of the computational model, and it was inferred that thousands of pairs of points within the eyes and mouth are important throughout all time segments of a smile. This suggests that distinguishing between a posed and a spontaneous smile is a complex task, which may account for the difficulty and lower accuracy of human classification compared to machine learning models.

Keywords: affective computing, affect detection, computer vision, deep learning, human-computer interaction, machine learning, posed smile detection, spontaneous smile detection

Procedia PDF Downloads 109
2237 Influence of Improved Roughage Quality and Period of Meal Termination on Digesta Load in the Digestive Organs of Goats

Authors: Rasheed A. Adebayo, Mehluli M. Moyo, Ignatius V. Nsahlai

Abstract:

Ruminants are known to relish roughage for productivity, but the effect of roughage quality on digesta load in the rumen, omasum, abomasum, and other distal organs of the digestive tract is yet unknown. Reticulorumen fill is a strong indicator for long-term control of intake in ruminants. As such, the measurement and prediction of digesta load in these compartments may be crucial to productivity in the ruminant industry. The current study aimed at determining the effect of (a) diet quality on digesta load in the digestive organs of goats, and (b) period of meal termination on reticulorumen fill and digesta load in other distal compartments of the digestive tract of goats. Goats were fed urea-treated hay (UTH), urea-sprayed hay (USH) or non-treated hay (NTH). At the end of an eight-week feeding trial period, upon termination of a meal in the morning, afternoon or evening, all goats were slaughtered in random groups of three per day to measure reticulorumen fill and digesta loads in other distal compartments of the digestive tract. Both diet quality and period affected (P < 0.05) the measure of reticulorumen fill. However, reticulorumen fill in the evening was larger (P < 0.05) than in the afternoon, while the afternoon fill was similar (P > 0.05) to the morning fill. Also, diet quality affected (P < 0.05) the wet omasal digesta load and the wet abomasum, dry abomasum and dry caecum digesta loads, but did not affect (P > 0.05) wet or dry digesta loads in other compartments of the digestive tract. Period of measurement did not affect (P > 0.05) the wet omasal digesta load or the wet and dry digesta loads in other compartments of the digestive tract, except the wet abomasum digesta load (P < 0.05) and the dry caecum digesta load (P < 0.05). Both wet and dry reticulorumen fill were correlated (P < 0.05) with omasal digesta load (r = 0.623 and r = 0.723, respectively).
In conclusion, the reticulorumen fill of goats decreased with improving roughage quality, and the period of meal termination and measurement of the fill is a key factor in the quantity of digesta load.

Keywords: digesta, goats, meal termination, reticulo-rumen fill

Procedia PDF Downloads 354
2236 Neural Network and Support Vector Machine for Prediction of Foot Disorders Based on Foot Analysis

Authors: Monireh Ahmadi Bani, Adel Khorramrouz, Lalenoor Morvarid, Bagheri Mahtab

Abstract:

Background: Foot disorders are common musculoskeletal problems. Plantar pressure distribution measurement is one of the most important parts of foot disorder diagnosis for quantitative analysis. However, the association between plantar pressure and foot disorders is not clear. With the growth of datasets and machine learning methods, the relationship between foot disorders and plantar pressures can be detected. Significance of the study: The purpose of this study was to predict the probability of common foot disorders based on peak plantar pressure distribution and center of pressure during walking. Methodology: 2323 participants were assessed in a foot therapy clinic between 2015 and 2021. Foot disorders were diagnosed by an experienced physician, and the participants were then asked to walk on a force plate scanner. After data preprocessing, to account for differences in walking time and foot size, we normalized the samples based on time and foot size. Some force plate variables were selected as input to a deep neural network (DNN), and the probability of each foot disorder was estimated. In the next step, we used a support vector machine (SVM) and ran the dataset for each foot disorder (yes/no classification). We compared the DNN and the SVM for foot disorder prediction based on plantar pressure distributions and center of pressure. Findings: The results demonstrated that the accuracy of the deep learning architecture is sufficient for most clinical and research applications in the study population. In addition, the SVM approach was more accurate, enabling applications for foot disorder diagnosis. The detection accuracy was 71% for the deep learning algorithm and 78% for the SVM algorithm. Moreover, peak plantar pressure distribution gave more accurate predictions than the center of pressure dataset.
Conclusion: Both algorithms, deep learning and SVM, will help therapists and patients to improve the data pool and enhance foot disorder prediction with less expense and error, once certain restrictions are properly removed.
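The time normalization described in the methodology can be done by resampling each variable-length walking trial to a fixed number of points. A hedged sketch using linear interpolation follows; this is an assumed implementation, since the abstract does not specify how the normalization was performed:

```python
def resample(series, n):
    """Linearly resample a variable-length series to exactly n points,
    so trials of different walking durations become comparable."""
    if len(series) == 1:
        return [series[0]] * n
    out = []
    step = (len(series) - 1) / (n - 1)  # spacing in original-sample units
    for i in range(n):
        t = i * step
        j = int(t)
        if j >= len(series) - 1:
            out.append(series[-1])      # clamp at the final sample
        else:
            frac = t - j
            out.append(series[j] * (1 - frac) + series[j + 1] * frac)
    return out
```

Amplitude normalization for foot size would then be a separate per-subject scaling applied to the resampled pressure values.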

Keywords: deep neural network, foot disorder, plantar pressure, support vector machine

Procedia PDF Downloads 326
2235 The Effectiveness of Summative Assessment in Practice Learning

Authors: Abdool Qaiyum Mohabuth, Syed Munir Ahmad

Abstract:

Assessment enables students to focus on their learning. It engages them to work hard and motivates them to devote time to their studies. Student learning is directly influenced by the type of assessment involved in the programme. Summative assessment aims at providing a measurement of student understanding. In fact, it is argued that summative assessment is used for reporting and reviewing, besides providing an overall judgement of achievement. While summative assessment is a well-defined process for learning that takes place in the classroom environment, its application within the practice environment is still being researched. This paper discusses findings from a mixed-method study exploring the effectiveness of summative assessment in practice learning. A survey questionnaire was designed to explore the perceptions of mentors and students about summative assessment in practice learning. The questionnaire was administered to University of Mauritius students and to the mentors who supervised students during their Work-Based Learning (WBL) practice at the respective placement settings. Some students who had undertaken their WBL practice were interviewed to capture their views and experiences of the application of summative assessment in practice learning. Semi-structured interviews were also conducted with three experienced mentors who have assessed students on practice learning. The findings reveal that, though learning in the workplace is entirely different from learning at the university, most students had positive experiences of their summative assessments in practice learning. They felt comfortable and confident being assessed by their mentors in their placement settings and wished the effort and time they devoted to their learning to be recognised and valued.
Mentors, on their side, confirmed that summative assessment is valid and reliable, enabling them to better monitor and coach students to achieve the expected learning outcomes.

Keywords: practice learning, judgement, summative assessment, knowledge, skills, workplace

Procedia PDF Downloads 326
2234 The Effects of Lipid Emulsion, Magnesium Sulphate and Metoprolol in Amitriptyline-Induced Cardiovascular Toxicity in Rats

Authors: Saylav Ejder Bora, Arife Erdogan, Mumin Alper Erdogan, Oytun Erbas, Ismet Parlak

Abstract:

Objective: The aim of this study was to evaluate the histological, electrical and biochemical effects of metoprolol, lipid emulsion and magnesium sulphate as alternative methods for preventing long QT, which is among the lethal consequences of amitriptyline toxicity. Methods: Thirty Sprague-Dawley male rats were included and randomly separated into 5 groups. The first group was administered saline only, while the remaining groups received amitriptyline 100 mg/kg plus saline, 5 mg/kg metoprolol, 20 ml/kg lipid emulsion, or 75 mg/kg magnesium sulphate (MgSO4) intraperitoneally. One hour after administration, ECG at the DI lead was recorded in all groups, and biochemical tests were performed following euthanasia. Cardiac tissues were removed, and sections were prepared and examined. Results: QTc values were significantly shorter in the treatment groups than in the amitriptyline + saline group. While lipid emulsion did not affect pro-BNP and troponin values biochemically compared to the control group, it was histologically associated with reduced caspase 3 expression. Though statistically insignificant in the context of biochemical changes, pro-BNP and urea levels were lower in the metoprolol group than in controls. Similarly, metoprolol had no statistically significant effect on histological caspase 3 expression in the group treated with amitriptyline + metoprolol. On the other hand, there was a statistically significant decrease in troponin, pro-BNP and urea levels, as well as a significant decline in histological caspase 3 expression, in the MgSO4 group compared to controls. Conclusion: As tricyclic antidepressant overdose is still a frequent cause of mortality in emergency units, administration of MgSO4, lipid emulsion and metoprolol might be beneficial in the alternative treatment of the resulting cardiovascular toxicity, whether intake is intentional or accidental.

Keywords: amitriptyline, cardiovascular toxicity, long QT, rat model

Procedia PDF Downloads 161
2233 Channel Estimation Using Deep Learning for Reconfigurable Intelligent Surfaces-Assisted Millimeter Wave Systems

Authors: Ting Gao, Mingyue He

Abstract:

Reconfigurable intelligent surfaces (RISs) are expected to be an important part of next-generation wireless communication networks due to their potential to reduce the hardware cost and energy consumption of millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) technology. However, owing to the lack of signal processing abilities of the RIS, perfect channel state information (CSI) in RIS-assisted communication systems is difficult to acquire. In this paper, uplink channel estimation for mmWave systems with a hybrid active/passive RIS architecture is studied. Specifically, a deep learning-based scheme is proposed to estimate the channel between the RIS and the user. In particular, the sparse structure of the mmWave channel is exploited to formulate channel estimation as a sparse reconstruction problem. To this end, the proposed approach learns the distribution of non-zero entries in the sparse channel. After that, the channel is reconstructed by utilizing the least-squares (LS) algorithm and compressed sensing (CS) theory. The simulation results demonstrate that the proposed channel estimation scheme is superior to existing solutions even in low signal-to-noise ratio (SNR) environments.
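The least-squares step mentioned above is, in its simplest form, the classical pilot-based estimator: for a single-tap model y = h·x + n, the LS estimate of h from known pilots is (xᴴy)/(xᴴx). A minimal sketch of that textbook step follows; it is not the paper's full RIS estimator, which operates on the sparse angular-domain support identified by the network:

```python
def ls_channel_estimate(pilots, received):
    """Least-squares estimate of a scalar complex channel gain h
    from known pilot symbols x and received samples y = h*x + n."""
    numerator = sum(x.conjugate() * y for x, y in zip(pilots, received))
    denominator = sum(abs(x) ** 2 for x in pilots)
    return numerator / denominator
```

In the noiseless case the estimate is exact; with noise, averaging over more pilots reduces the estimation error, which is why pilot overhead trades off against CSI quality in RIS-assisted systems.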

Keywords: channel estimation, reconfigurable intelligent surface, wireless communication, deep learning

Procedia PDF Downloads 124
2232 Xenografts: Successful Penetrating Keratoplasty Between Two Species

Authors: Francisco Alvarado, Luz Ramírez

Abstract:

Corneal diseases are among the main causes of visual impairment and affect almost 4 million people. This study assesses the effects of deep anterior lamellar keratoplasty (DALK) with porcine corneal stroma and postoperative topical treatment with tacrolimus in patients with infectious keratitis. No patient showed clinical graft rejection. Among the cases, 2 were positive on fungal culture, both with Aspergillus, and the other 8 cases were confirmed by bacteriological culture. Recipient bed diameters ranged from 7.00 to 9.00 mm. No incidents of Descemet's membrane perforation were observed during surgery. During the follow-up period, no corneal graft splitting, IOP increase, or intolerance to tacrolimus was observed. Deep anterior lamellar keratoplasty seems to be the best option to avoid xenograft rejection, and it could inform new surgical techniques in humans.

Keywords: ophthalmology, cornea, corneal transplant, xenografts, surgical innovations

Procedia PDF Downloads 67
2231 A Deep Learning Approach to Calculate Cardiothoracic Ratio From Chest Radiographs

Authors: Pranav Ajmera, Amit Kharat, Tanveer Gupte, Richa Pant, Viraj Kulkarni, Vinay Duddalwar, Purnachandra Lamghare

Abstract:

The cardiothoracic ratio (CTR) is the ratio of the diameter of the heart to the diameter of the thorax. An abnormal CTR, that is, a value greater than 0.55, is often an indicator of an underlying pathological condition. The accurate prediction of an abnormal CTR from chest X-rays (CXRs) aids in the early diagnosis of clinical conditions. We propose a deep learning-based model for automatic CTR calculation that can assist the radiologist with the diagnosis of cardiomegaly and optimize the radiology workflow. The study population included 1012 posteroanterior (PA) CXRs from a single institution. The Attention U-Net deep learning (DL) architecture was used for the automatic calculation of CTR, with a CTR of 0.55 used as the cut-off to categorize cardiomegaly as present or absent. An observer performance test was conducted to assess the radiologist's performance in diagnosing cardiomegaly with and without artificial intelligence (AI) assistance. The Attention U-Net model was highly specific in calculating the CTR, exhibiting a sensitivity of 0.80 [95% CI: 0.75, 0.85], a precision of 0.99 [95% CI: 0.98, 1], and an F1 score of 0.88 [95% CI: 0.85, 0.91]. During the analysis, we observed that 51 out of 1012 samples were misclassified by the model when compared to annotations made by the expert radiologist. We further observed that the sensitivity of the reviewing radiologist in identifying cardiomegaly increased from 40.50% to 88.4% when aided by the AI-generated CTR. Our segmentation-based AI model demonstrated high specificity and sensitivity for CTR calculation, and the performance of the radiologist on the observer performance test improved significantly with AI assistance. A DL-based segmentation model for rapid quantification of CTR therefore has significant potential for use in clinical workflows.
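Given heart and thorax segmentation masks such as those produced by an Attention U-Net, the CTR itself reduces to a ratio of maximal horizontal extents. An illustrative sketch of that post-processing step (an assumed implementation, not the authors' code):

```python
def max_horizontal_width(mask):
    """Widest horizontal extent (in pixels) of the nonzero region,
    taken over all rows of a binary mask given as a list of rows."""
    widest = 0
    for row in mask:
        cols = [x for x, v in enumerate(row) if v]
        if cols:
            widest = max(widest, cols[-1] - cols[0] + 1)
    return widest

def cardiothoracic_ratio(heart_mask, thorax_mask):
    """CTR = widest heart diameter / widest thoracic diameter."""
    return max_horizontal_width(heart_mask) / max_horizontal_width(thorax_mask)
```

A ratio above 0.55 would then be flagged as cardiomegaly, matching the cut-off used in the study.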

Keywords: cardiomegaly, deep learning, chest radiograph, artificial intelligence, cardiothoracic ratio

Procedia PDF Downloads 78
2230 Resourcing Remote Rural Social Enterprises to Foster Resilience and Regional Development

Authors: Heather Fulford, Melanie Liddell

Abstract:

The recruitment and retention of high-quality employees can prove challenging for social enterprises, particularly in core business support functions such as marketing, communications, IT and finance. This holds true for social enterprises in urban contexts, where roles with more attractive remuneration in these business functions can often be found quite readily in the private sector. For social enterprises situated in rural locations, the challenges of staff recruitment and retention are even more acute. Such challenges can lead to a skills deficit in rural social enterprises, which can, at best, hinder their growth potential and, at worst, jeopardise their chances of survival. This, in turn, can have a negative impact on the sustainability and resilience of the surrounding rural community in which the social enterprise is located. The purpose of this paper is to report on aspects of a collaborative initiative established to stimulate innovation and business growth in remote rural businesses in Scotland. Launched in 2010, this initiative was designed to encourage young students and graduates from the region to stay there upon completion of their studies, and to attract others from outside the region to relocate there post-university. To facilitate this, SMEs in the region were offered wage subsidies to encourage them to recruit a student or graduate on a work placement of up to one year to participate in an innovation or business growth-oriented project. A number of the employers offering work placements were social enterprises. Through analysis of the placement project and role specifications devised by the participating social enterprises, an overview is provided of their business development needs and the skills they require to stimulate innovation and growth.
Scrutiny of the reflective accounts compiled by the students and graduates at the close of their work placements highlights the benefits they derived from being able to put their academic knowledge and skills into action within a social enterprise. Examination of interviews conducted with a sample of placement employers reveals the contribution the students and graduates made during the business development projects with the social enterprises. The challenges of hosting such placements are also discussed. The paper concludes with indications of the lessons learned and an outline of the wider implications for other remote rural locations in which social enterprises play an important role in the local economy and life of the community.

Keywords: resilience, rural development, regeneration, regional development, recruitment, resource management, retention, remuneration

Procedia PDF Downloads 302
2229 The Use of Random Set Method in Reliability Analysis of Deep Excavations

Authors: Arefeh Arabaninezhad, Ali Fakher

Abstract:

Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods have been suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables that depend on ground behavior are required. The random set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the random set method, with a relatively small number of simulations compared to fully probabilistic methods, bounds on the extremes of system responses are obtained. The random set approach has therefore been proposed for reliability analysis in geotechnical problems. In the present study, the application of the random set method to the reliability analysis of deep excavations is investigated through three deep excavation projects that were monitored during the excavation process. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables, and subsequently reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables. The probability share of each finite element calculation is determined from the probabilities assigned to the input variable ranges present in each combination. The horizontal displacement of the top point of the excavation is considered the main system response. The result of the reliability analysis for each deep excavation is presented by constructing the belief and plausibility distribution functions (i.e., lower and upper bounds) of the system response obtained from the deterministic finite element calculations.
To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models was compared to the in situ measurements, and good agreement was observed. The comparison also showed that the random set finite element method is applicable for estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.
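The belief/plausibility construction described above can be sketched for a toy case: each input variable carries focal elements (an interval with a probability mass), combinations of focal elements are enumerated, the response is evaluated at the interval corners (assuming a monotone response, as is common in random set finite element practice), and masses are accumulated into the two bounds. This is an illustrative reconstruction in which the expensive finite element call is replaced by a cheap stand-in response function:

```python
from itertools import product

def belief_plausibility(variables, response, threshold):
    """variables: per-variable lists of ((lo, hi), mass) focal elements.
    response: function mapping a tuple of input values to a scalar.
    Returns (belief, plausibility) that the response exceeds threshold."""
    bel = pla = 0.0
    for combo in product(*variables):
        intervals = [interval for interval, _ in combo]
        mass = 1.0
        for _, m in combo:
            mass *= m  # joint mass of this combination of focal elements
        # Evaluate the response at every corner of the interval box
        vals = [response(corner) for corner in product(*intervals)]
        lo, hi = min(vals), max(vals)
        if hi > threshold:
            pla += mass  # exceedance is possible: counts toward plausibility
        if lo > threshold:
            bel += mass  # exceedance is certain: counts toward belief
    return bel, pla
```

Belief and plausibility then bound the (unknown) probability that the response, here standing in for the horizontal displacement of the top of the excavation, exceeds the threshold displacement.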

Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty

Procedia PDF Downloads 254