Search results for: aerial imaging and detection
2979 Investigation of Comfort Properties of Knitted Fabrics
Authors: Mehmet Karahan, Nevin Karahan
Abstract:
Water permeability, air permeability, and thermal resistance are important attributes that strongly influence the thermo-physiological comfort of sportswear fabrics under different environmental conditions. In this work, terry and fleece fabrics were developed by varying the fiber content and areal density of the fabrics. The thermo-physical properties of the developed fabrics, including air permeability, water vapor permeability, and thermal resistance, were then analyzed before and after washing. Multi-response optimization of the thermo-physiological comfort properties was carried out using principal component analysis (PCA) combined with the Taguchi signal-to-noise ratio (PCA-S/N ratio). It was found that the selected parameters had a significant effect on the thermo-physiological comfort properties of the knitted fabrics. The PCA analysis showed that, before washing, 100% cotton fabric with an areal weight of 220 g.m⁻² gave optimum values of thermo-physiological comfort.
Keywords: thermo-physiological comfort, fleece knitted fabric, air permeability, water vapor transmission, cotton/polyester
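The PCA-weighted Taguchi S/N optimization mentioned in this abstract can be sketched as follows; this is a minimal illustration, not the authors' implementation, and the fabric response values, the larger-is-better S/N formula, and the use of explained-variance weights are assumptions for demonstration only.

```python
# Hypothetical sketch of PCA-weighted Taguchi S/N multi-response optimization.
# The response values below are placeholders, not the study's measurements.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# rows = fabric variants, cols = [air permeability, water vapor permeability, 1/thermal resistance]
responses = np.array([
    [120.0, 820.0, 28.0],
    [95.0,  790.0, 25.0],
    [140.0, 860.0, 31.0],
])

# Larger-is-better Taguchi signal-to-noise ratio (single replicate per response shown).
snr = -10.0 * np.log10(1.0 / responses ** 2)

# PCA on the standardized S/N ratios; the explained-variance shares serve as weights
# for collapsing the three responses into one composite comfort index.
scaled = StandardScaler().fit_transform(snr)
pca = PCA().fit(scaled)
scores = scaled @ pca.components_.T
composite_index = scores @ pca.explained_variance_ratio_

best = int(np.argmax(composite_index))
print(f"Fabric variant {best} gives the optimum composite comfort index.")
```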
Procedia PDF Downloads 117
2978 Validating Condition-Based Maintenance Algorithms through Simulation
Authors: Marcel Chevalier, Léo Dupont, Sylvain Marié, Frédérique Roffet, Elena Stolyarova, William Templier, Costin Vasile
Abstract:
Industrial end-users are currently facing an increasing need to reduce the risk of unexpected failures and optimize their maintenance. This calls for both short-term analysis and long-term ageing anticipation. At Schneider Electric, we tackle those two issues using both machine learning and first-principles models. Machine learning models are incrementally trained from normal data to predict expected values and detect statistically significant short-term deviations. Ageing models are constructed by breaking down physical systems into sub-assemblies, then determining relevant degradation modes and associating each one with the right kinetic law. Validating such anomaly detection and maintenance models is challenging, both because actual incident and ageing data are rare and distorted by human interventions, and because incremental learning depends on human feedback. To overcome these difficulties, we propose to simulate physics, systems, and humans (including asset maintenance operations) in order to validate the overall approaches in accelerated time and possibly choose between algorithmic alternatives.
Keywords: degradation models, ageing, anomaly detection, soft sensor, incremental learning
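A minimal sketch of the kind of incrementally trained short-term deviation check described above is given below; the rolling window, z-score threshold, and example signal are illustrative assumptions, not Schneider Electric's actual models.

```python
# Hypothetical sketch: incrementally learn the expected value of a signal from normal
# data and flag statistically significant short-term deviations with a z-score test.
from collections import deque
import math

class IncrementalDeviationDetector:
    def __init__(self, window: int = 200, z_threshold: float = 4.0):
        self.history = deque(maxlen=window)   # rolling buffer of "normal" samples
        self.z_threshold = z_threshold

    def update(self, value: float) -> bool:
        """Return True if the new sample deviates significantly from expectation."""
        if len(self.history) >= 30:           # require enough data before testing
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / (len(self.history) - 1)
            std = math.sqrt(var) or 1e-9
            if abs(value - mean) / std > self.z_threshold:
                return True                   # anomaly: do not learn from this sample
        self.history.append(value)            # incremental training on normal data
        return False

detector = IncrementalDeviationDetector()
for t, reading in enumerate([20.1, 20.3, 19.9] * 20 + [35.0]):
    if detector.update(reading):
        print(f"Significant deviation at sample {t}: {reading}")
```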
Procedia PDF Downloads 126
2977 Clinical and Structural Differences in Knee Osteoarthritis with/without Synovial Hypertrophy
Authors: Gi-Young Park, Dong Rak Kwon, Sung Cheol Cho
Abstract:
Objective: The synovium is known to be involved in many characteristic pathological processes, and synovitis is common in advanced osteoarthritis. We aimed to evaluate the clinical, radiographic, and ultrasound findings in patients with knee osteoarthritis and to compare the clinical and imaging findings between knee osteoarthritis with and without synovial hypertrophy confirmed by ultrasound. Methods: One hundred knees (54 left, 46 right) in 95 patients (64 women, 31 men; mean age, 65.9 years; range, 43-85 years) with knee osteoarthritis were recruited. The Visual Analogue Scale (VAS) was used to assess the intensity of knee pain. The severity of knee osteoarthritis was classified according to the Kellgren-Lawrence (K-L) grade on radiographs. Ultrasound examination was performed by a physiatrist with 24 years of experience in musculoskeletal ultrasound. Ultrasound findings, including the thickness of joint effusion in the suprapatellar pouch, synovial hypertrophy, infrapatellar tendinosis, meniscal tear or extrusion, and Baker cyst, were measured and recorded. The thickness of knee joint effusion was measured at the maximal anterior-posterior diameter of the fluid collection in the suprapatellar pouch. Synovial hypertrophy was identified as soft tissue of variable echogenicity that is poorly compressible and not displaceable by compression with the ultrasound transducer. The knees were divided into two groups according to the presence of synovial hypertrophy, and the differences in clinical and imaging findings between the two groups were evaluated by independent t-test and chi-square test. Results: Synovial hypertrophy was detected on ultrasound in 48 of the 100 knees. There were no significant differences in demographic parameters or VAS score between the two groups, except for sex (P<0.05). Medial meniscal extrusion and tear were significantly more frequent in knees with synovial hypertrophy than in knees without it. K-L grade and joint effusion thickness were greater in patients with synovial hypertrophy than in patients without it (P<0.05). Conclusion: Synovial hypertrophy in knee osteoarthritis was associated with greater suprapatellar joint effusion and higher K-L grade and may be a characteristic ultrasound feature of late knee osteoarthritis. These results suggest that synovial hypertrophy on ultrasound can be regarded as a predictor of rapid progression in patients with knee osteoarthritis.
Keywords: knee osteoarthritis, synovial hypertrophy, ultrasound, K-L grade
Procedia PDF Downloads 75
2976 Dynamic Web-Based 2D Medical Image Visualization and Processing Software
Authors: Abdelhalim N. Mohammed, Mohammed Y. Esmail
Abstract:
In the course of recent decades, medical imaging was dominated by the use of costly film media for the review and archival of medical investigations; however, due to developments in network technologies and the widespread acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard, another approach based on the World Wide Web was developed. Web technologies have been used successfully in telemedicine applications, and here the combination of web technologies with DICOM is used to design a web-based, open-source DICOM viewer. The web server allows the query and retrieval of images, and the images are viewed and manipulated inside a web browser without the need to preinstall any software. The dynamic page for medical image visualization and processing was created using JavaScript and HTML5. The XAMPP Apache server is used to create a local web server for testing and deployment of the dynamic site. The web-based viewer is connected to multiple devices through a local area network (LAN) to distribute the images inside healthcare facilities. The system offers several advantages over ordinary picture archiving and communication systems (PACS): it is easy to install and maintain, platform-independent, allows images to be displayed and manipulated efficiently, and is user-friendly and easy to integrate with existing systems that already make use of web technologies. A wavelet-based image compression technique is applied, in which a 2-D discrete wavelet transform decomposes the image and the wavelet coefficients are thresholded and transmitted with entropy encoding to decrease transmission time, storage cost, and capacity. The performance of the compression was estimated using image quality metrics such as the mean square error (MSE), peak signal-to-noise ratio (PSNR) and compression ratio (CR), which reached 83.86% when the 'coif3' wavelet filter was used.
Keywords: DICOM, discrete wavelet transform, PACS, HIS, LAN
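The compression and quality-metric pipeline described above can be sketched as follows, assuming the PyWavelets library and a placeholder image; the 'coif3' filter and the metrics come from the abstract, while the threshold rule and the image itself are illustrative assumptions.

```python
# Hypothetical sketch: 2-D DWT with 'coif3', hard thresholding of coefficients,
# and MSE / PSNR / compression-ratio estimates (assumes numpy and pywt).
import numpy as np
import pywt

image = np.random.rand(256, 256) * 255                 # placeholder for a DICOM slice

coeffs = pywt.wavedec2(image, wavelet="coif3", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)

threshold = np.percentile(np.abs(arr), 90)             # keep only the largest 10% of coefficients
arr_thresh = np.where(np.abs(arr) >= threshold, arr, 0.0)

reconstructed = pywt.waverec2(
    pywt.array_to_coeffs(arr_thresh, slices, output_format="wavedec2"),
    wavelet="coif3")[:256, :256]

mse = np.mean((image - reconstructed) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)
cr = 100.0 * (1.0 - np.count_nonzero(arr_thresh) / arr_thresh.size)  # % of coefficients discarded

print(f"MSE={mse:.2f}  PSNR={psnr:.2f} dB  CR={cr:.1f}%")
```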
Procedia PDF Downloads 160
2975 An Approach to Autonomous Drones Using Deep Reinforcement Learning and Object Detection
Authors: K. R. Roopesh Bharatwaj, Avinash Maharana, Favour Tobi Aborisade, Roger Young
Abstract:
Presently, there are few cases of complete automation of drones and their allied intelligence capabilities; in essence, the potential of the drone has not yet been fully utilized. This paper presents feasible methods to build an intelligent drone with smart capabilities such as self-driving and obstacle avoidance. It does this through advanced reinforcement learning techniques and performs object detection using recent algorithms that are capable of processing lightweight models with fast training in real time. For the scope of this paper, after researching the various algorithms and comparing them, we implemented the Deep Q-Network (DQN) algorithm in the AirSim simulator. In future work, we plan to implement further advanced self-driving and object detection algorithms; we also plan to implement voice-based speech recognition for the entire drone operation, which would provide an option of speech communication between users and the drone in unavoidable circumstances, making drones interactive, intelligent, robotic voice-enabled service assistants. The proposed drone has a wide scope of usability and is applicable in scenarios such as disaster management, air transport of essentials, agriculture, manufacturing, monitoring people's movements in public areas, and defense. Also discussed is the entire drone communication based on satellite broadband Internet technology for faster computation and seamless communication service, providing an uninterrupted network during disasters and remote-location operations. This paper explains the feasible algorithms required to achieve this goal and serves as a reference for future researchers going down this path.
Keywords: convolution neural network, natural language processing, obstacle avoidance, satellite broadband technology, self-driving
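A compact sketch of the Deep Q-Network update referenced above is shown below, written against a generic observation/action interface rather than the AirSim simulator used by the authors; the network size, hyperparameters, and replay handling are illustrative assumptions.

```python
# Hypothetical DQN sketch (PyTorch) for a discrete-action navigation policy.
# Transitions are assumed to be appended to the replay buffer as
# (state, action, reward, next_state, done) tensors by the environment loop.
import random
from collections import deque
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_actions))
    def forward(self, x):
        return self.net(x)

obs_dim, n_actions, gamma = 8, 4, 0.99
policy, target = QNet(obs_dim, n_actions), QNet(obs_dim, n_actions)
target.load_state_dict(policy.state_dict())     # target net is re-synced periodically in practice
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)

def train_step(batch_size: int = 64):
    if len(replay) < batch_size:
        return
    s, a, r, s2, done = map(torch.stack, zip(*random.sample(replay, batch_size)))
    q = policy(s).gather(1, a.view(-1, 1)).squeeze(1)       # Q(s, a)
    with torch.no_grad():
        q_next = target(s2).max(dim=1).values               # max_a' Q_target(s', a')
        y = r + gamma * q_next * (1.0 - done)                # Bellman target
    loss = nn.functional.mse_loss(q, y)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```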
Procedia PDF Downloads 251
2974 Engineering of Reagentless Fluorescence Biosensors Based on Single-Chain Antibody Fragments
Authors: Christian Fercher, Jiaul Islam, Simon R. Corrie
Abstract:
Fluorescence-based immunodiagnostics are an emerging field in biosensor development and exhibit several advantages over traditional detection methods. While various affinity biosensors have been developed to generate a fluorescence signal upon sensing varying concentrations of analytes, reagentless, reversible, and continuous monitoring of complex biological samples remains challenging. Here, we aimed to genetically engineer biosensors based on single-chain antibody fragments (scFv) that are site-specifically labeled with environmentally sensitive fluorescent unnatural amino acids (UAA). A rational design approach resulted in quantifiable analyte-dependent changes in peak fluorescence emission wavelength and enabled antigen detection in vitro. Incorporation of a polarity indicator within the topological neighborhood of the antigen-binding interface generated a titratable wavelength blueshift with nanomolar detection limits. In order to ensure continuous analyte monitoring, scFv candidates with fast binding and dissociation kinetics were selected from a genetic library employing a high-throughput phage display and affinity screening approach. Initial rankings were further refined towards rapid dissociation kinetics using bio-layer interferometry (BLI) and surface plasmon resonance (SPR). The most promising candidates were expressed, purified to homogeneity, and tested for their potential to detect biomarkers in a continuous microfluidic-based assay. Variations of dissociation kinetics within an order of magnitude were achieved without compromising the specificity of the antibody fragments. This approach is generally applicable to numerous antibody/antigen combinations and currently awaits integration in a wide range of assay platforms for one-step protein quantification.
Keywords: antibody engineering, biosensor, phage display, unnatural amino acids
Procedia PDF Downloads 146
2973 Gold Nanoprobes Assay for the Identification of Foodborne Pathogens Such as Staphylococcus aureus, Listeria monocytogenes and Salmonella enteritis
Authors: D. P. Houhoula, J. Papaparaskevas, S. Konteles, A. Dargenta, A. Farka, C. Spyrou, M. Ziaka, S. Koussisis, E. Charvalos
Abstract:
Objectives: Nanotechnology is providing revolutionary opportunities for the rapid and simple diagnosis of many infectious diseases. Staphylococcus aureus, Listeria monocytogenes and Salmonella enteritis are important human pathogens, and diagnostic assays based on bacterial culture and identification are time-consuming and laborious, so there is an urgent need to develop rapid, sensitive, and inexpensive diagnostic tests. In this study, a gold nanoprobe strategy was developed that relies on the colorimetric differentiation of specific DNA sequences, based on differential aggregation profiles in the presence or absence of specific target hybridization. Method: Gold nanoparticles (AuNPs) were purchased from Nanopartz. They were conjugated with thiolated oligonucleotides specific for the femA gene for the identification of members of Staphylococcus aureus, the mecA gene for the differentiation of Staphylococcus aureus from methicillin-resistant Staphylococcus aureus (MRSA), the hly gene encoding the pore-forming cytolysin listeriolysin for the identification of Listeria monocytogenes, and the invA sequence for the identification of Salmonella enteritis. DNA isolation from Staphylococcus aureus, Listeria monocytogenes and Salmonella enteritis cultures was performed using the commercial kit NucleoSpin Tissue (Macherey-Nagel). Specifically, 20 μl of DNA was diluted in 10 mM PBS (pH 5). After denaturation for 10 min, 20 μl of AuNPs was added, followed by an annealing step at 58°C. The presence of a complementary target prevents aggregation upon the addition of acid and the solution remains pink, whereas in the opposite case it turns purple. The color change could be detected visually and was confirmed with an absorption spectrum. Results: DNA of St. aureus, L. monocytogenes and Salmonella enteritis at 0.123 μg/μl was serially diluted from 1:10 to 1:100. Blanks containing PBS buffer instead of DNA were used. The application of the proposed method to isolated bacteria produced positive results with all the species of St. aureus, L. monocytogenes and Salmonella enteritis using the femA, mecA, hly and invA genes, respectively. Below 0.2 ng/μL of bacterial DNA, the solution turned purple after the addition of HCl, defining the minimum detection limit of the assay at 0.2 ng/μL of DNA. None of the blank samples was positive, and the specificity was 100%. The proposed method produced exactly the same results every time (n = 4) the evaluation was repeated (100% repeatability) using the femA, hly and invA genes; using the mecA gene for the differentiation of Staphylococcus aureus from MRSA, the method had a repeatability of 50%. Conclusion: The proposed method could be used as a highly specific and sensitive screening tool for the detection and differentiation of Staphylococcus aureus, Listeria monocytogenes and Salmonella enteritis. The use of AuNPs for the colorimetric detection of DNA targets represents an inexpensive and easy-to-perform alternative to common molecular assays. The technology described here may develop into a platform that could accommodate detection of many bacterial species.
Keywords: gold nanoparticles, pathogens, nanotechnology, bacteria
Procedia PDF Downloads 341
2972 Seismic Perimeter Surveillance System (Virtual Fence) for Threat Detection and Characterization Using Multiple ML Based Trained Models in Weighted Ensemble Voting
Authors: Vivek Mahadev, Manoj Kumar, Neelu Mathur, Brahm Dutt Pandey
Abstract:
Perimeter guarding and protection of critical installations require prompt intrusion detection and assessment to take effective countermeasures. Currently, visual and electronic surveillance are the primary methods used for perimeter guarding. These methods can be costly and complicated, requiring careful planning according to the location and terrain; moreover, they often struggle to detect stealthy and camouflaged insurgents. The objective of the present work is to devise a surveillance technique using seismic sensors that overcomes the limitations of existing systems. The aim is to improve intrusion detection, assessment, and characterization by utilizing seismic sensors. Most similar systems have only two types of intrusion detection capability, viz. human or vehicle; in our work we could categorize further and identify types of intrusion activity such as walking, running, group walking, fence jumping, tunnel digging and vehicular movements. A virtual fence of 60 meters at GCNEP, Bahadurgarh, Haryana, India, was created by installing four underground geophones at a distance of 15 meters each. The signals received from these geophones are then processed to find unique seismic signatures called features. Various feature optimization and selection methodologies, such as LightGBM, Boruta, Random Forest, Logistic Regression, Recursive Feature Elimination, Chi-2 and Pearson ratio, were used to identify the best features for training the machine learning models. The trained models were developed using algorithms such as the supervised support vector machine (SVM) classifier, kNN, Decision Tree, Logistic Regression, Naïve Bayes, and Artificial Neural Networks. These models were then used to predict the category of events, employing weighted ensemble voting to analyze and combine their results. The models were trained with 1940 training events, and the results were evaluated with 831 test events. It was observed that using weighted ensemble voting increased the efficiency of the predictions. In this study we successfully developed and deployed the virtual fence using geophones. Since these sensors are passive, do not radiate any energy and are installed underground, it is impossible for intruders to locate and nullify them. Their flexibility, quick and easy installation, low cost, hidden deployment and unattended surveillance make such systems especially suitable for critical installations and remote facilities with difficult terrain. This work demonstrates the potential of utilizing seismic sensors for creating better perimeter guarding and protection systems using multiple machine learning models in weighted ensemble voting. In this study the virtual fence achieved an intruder detection efficiency of over 97%.
Keywords: geophone, seismic perimeter surveillance, machine learning, weighted ensemble method
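The weighted ensemble voting step described above can be sketched with scikit-learn as follows; the feature matrix, the six base learners, and the weights are placeholders standing in for the geophone features and the tuned values of the study.

```python
# Hypothetical sketch of weighted soft voting over the classifiers named in the abstract.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(2771, 20)                      # 1940 train + 831 test events (placeholder features)
y = np.random.randint(0, 6, size=2771)            # walking, running, group, fence jump, digging, vehicle
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=1940, random_state=0)

ensemble = VotingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("knn", KNeighborsClassifier()),
                ("tree", DecisionTreeClassifier()),
                ("logreg", LogisticRegression(max_iter=1000)),
                ("nb", GaussianNB()),
                ("ann", MLPClassifier(max_iter=500))],
    voting="soft",
    weights=[2, 1, 1, 1, 1, 2],                   # assumed per-model weights
)
ensemble.fit(X_tr, y_tr)
print("test accuracy:", ensemble.score(X_te, y_te))
```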
Procedia PDF Downloads 78
2971 Detection of Intravenous Infiltration Using Impedance Parameters in Patients in a Long-Term Care Hospital
Authors: Ihn Sook Jeong, Eun Joo Lee, Jae Hyung Kim, Gun Ho Kim, Young Jun Hwang
Abstract:
This study investigated intravenous (IV) infiltration using bioelectrical impedance in 27 hospitalized patients in a long-term care hospital. The impedance parameters showed significant differences before and after infiltration, as follows. First, the resistance (R) after infiltration decreased significantly compared to the initial resistance, indicating that the IV solution flowing from the vein due to infiltration accumulates in the extracellular fluid (ECF). Second, the relative resistance at 50 kHz was 0.94 ± 0.07 in the 9 subjects without infiltration and 0.75 ± 0.12 in the 18 subjects with infiltration. Third, the magnitude of the reactance (Xc) decreased after infiltration. This is because the IV solution and blood components released from the vein tend to aggregate at the cell membrane (acting analogously to a linear/parallel circuit), thereby increasing the capacitance (Cm) of the cell membrane and reducing the magnitude of the reactance. Finally, the data points plotted in the R-Xc graph were distributed on the upper right before infiltration but on the lower left after infiltration, indicating that the infiltration caused accumulation of fluid or blood components in the epidermal and subcutaneous tissues, resulting in reduced resistance and reactance and thereby lowering the integrity of the cell membrane. Our findings suggest that bioelectrical impedance is an effective method for the noninvasive and quantitative detection of infiltration.
Keywords: intravenous infiltration, impedance, parameters, resistance, reactance
Procedia PDF Downloads 182
2970 Field Emission Scanning Microscope Image Analysis for Porosity Characterization of Autoclaved Aerated Concrete
Authors: Venuka Kuruwita Arachchige Don, Mohamed Shaheen, Chris Goodier
Abstract:
Autoclaved aerated concrete (AAC) is known for its light weight, easy handling, high thermal insulation, and extremely porous structure. Investigation of pore behavior in AAC is crucial for characterizing the material, standardizing design and production techniques, enhancing the mechanical, durability, and thermal performance, studying the effectiveness of protective measures, and analyzing the effects of weather conditions. The fine details of pores are difficult to observe with acceptable accuracy. High-resolution field emission scanning electron microscope (FESEM) image analysis is a promising technique for investigating the pore behavior and density of AAC, and it is adopted in this study. Mercury intrusion porosimetry and gas pycnometry were employed to characterize the porosity distribution and density parameters. The analysis considered three different densities of AAC blocks and three layers in the altitude direction within each block. A set of procedures was presented to extract and analyze the details of pore shape, pore size, pore connectivity, and pore percentages from FESEM images of AAC, and average pore behavior outcomes per unit area were presented. Comparison of the porosity distribution and density parameters revealed significant variations. FESEM imaging offered unparalleled insights into porosity behavior, surpassing the capabilities of the other techniques. The multi-staged analysis provides the porosity percentage occupied by various pore categories, the total porosity, the variation of pore distribution across AAC densities and layers, the number of two-dimensional and three-dimensional pores, the variation of apparent and matrix densities with respect to pore behavior, the variation of pore behavior with respect to aluminum content, and the relationship among shape, diameter, connectivity, and percentage in each pore classification.
Keywords: autoclaved aerated concrete, density, imaging technique, microstructure, porosity behavior
Procedia PDF Downloads 69
2969 Hazardous Gas Detection Robot in Coal Mines
Authors: Kanchan J. Kakade, S. A. Annadate
Abstract:
This paper presents the design and development of an underground coal mine monitoring robot using an mbed ARM Cortex controller and ZigBee communication. A coal mine is a particularly dangerous type of mine, so safety is the most important requirement for a properly functioning coal industry, not only for employees and workers but also for the environment and the nation. Many coal-producing countries face frequent accidents during coal mine exploitation, such as gas explosions, floods, and fires. Emissions of various gases in coal mines therefore need to be detected, which can be done with the help of a robot. Coal is a combustible, sedimentary, organic rock made up mainly of carbon, hydrogen and oxygen. The coal mine detection robot mainly detects marsh gas and carbon monoxide. Marsh gas is the mixed gas, consisting mainly of methane, found underground in coal mine shafts, and it is sometimes simply referred to as methane. Coal is formed from vegetation that has been fused between other rock layers and altered by the combined effects of heat and pressure over millions of years to form coal beds. Coal has many important uses worldwide; the most significant are electricity generation, steel production, cement manufacturing and use as a liquid fuel.
Keywords: Zigbee communication, various sensors, hazardous gases, mbed arm cortex M3 core controller
Procedia PDF Downloads 468
2968 Test of Moisture Sensor Activation Speed
Authors: I. Parkova, A. Vališevskis, A. Viļumsone
Abstract:
Nocturnal enuresis, or bed-wetting, is intermittent incontinence during sleep in children after age 5 that may precipitate a wide range of behavioural and developmental problems. One of the non-pharmacological treatment methods is the use of a bed-wetting alarm system. In order to improve the comfort of a nocturnal enuresis alarm system, the modular moisture sensor should be replaced by a textile sensor. In this study, the behaviour and moisture detection speed of woven and sewn sensors were compared by analysing the change in electrical resistance after a solution (salt water) was dripped onto the sensor samples. The samples differ in material structure and yarn location, which affects the solution detection rate. A sensor system circuit was designed and two sensor tests were performed: a system activation test and a false alarm test, to determine the sensitivity of the system and its activation threshold. The sewn sensor gave a better result in the system activation test (faster reaction), while the woven sensor gave a better result in the false alarm test (it was less sensitive to simulated perspiration). After the experiments, it was found that the optimum switching threshold is 3 V for a 5 V input voltage, which provides protection against false alarms, for example during intensive sweating.
Keywords: conductive yarns, moisture textile sensor, industry, material
Procedia PDF Downloads 246
2967 Cognitive Radio in Aeronautics: Comparison of Some Spectrum Sensing Techniques
Authors: Abdelkhalek Bouchikhi, Elyes Benmokhtar, Sebastien Saletzki
Abstract:
The aeronautical field is experiencing RF spectrum congestion due to the constant increase in the number of flights, aircraft and on-board telecom systems. In addition, these systems are bulky in size, weight and energy consumption. Cognitive radio helps solve the spectrum congestion issue in particular through its capacity to detect idle frequency channels, allowing opportunistic exploitation of the RF spectrum. The present work aims to propose a new use case for aeronautical spectrum sharing and to study the performance of three different detection techniques, the energy detector, the matched filter and the cyclostationary detector, within this use case. The spectrum in the proposed cognitive radio is allocated dynamically, with each cognitive radio following a cognitive cycle. Spectrum sensing is a crucial step: the goal of sensing is to gather data about the surrounding environment, and a cognitive radio can use different sensors: antennas, cameras, accelerometers, thermometers, etc. In the IEEE 802.22 standard, for example, a primary user (PU) always has priority to communicate; when a frequency channel used by the primary user is idle, a secondary user (SU) is allowed to transmit in this channel. The Distance Measuring Equipment (DME) is composed of a UHF transmitter/receiver (interrogator) in the aircraft and a UHF receiver/transmitter on the ground, while future cognitive radio will be used jointly to alleviate the spectrum congestion issue in the aeronautical field. LDACS, for example, is a good candidate; it provides two isolated data links: ground-to-air and air-to-ground. The first contribution of the present work is a strategy for sharing the L-band. The adopted spectrum sharing strategy is as follows: the DME plays the role of the PU, which is the licensed user, and the LDACS1 systems are the SUs. The SUs may use the L-band channels opportunistically as long as they do not cause harmful interference that affects the QoS of the DME system. Spectrum sensing is a key step because it helps detect spectrum holes by determining whether the primary signal is present or not in a given frequency channel; a missed detection of the primary user's presence creates interference between PU and SU and will seriously affect the QoS of the legacy radio. In this study, brief definitions, concepts and the state of the art of cognitive radio are first presented. Then, a study of three communication channel detection algorithms in a cognitive radio context is carried out, from the point of view of functionality, hardware requirements and signal detection capability in the aeronautical field. The detection problem is modeled with the three different methods (energy, matched filter, and cyclostationary), and an algorithmic description of these detectors is given. We then study and compare the performance of the algorithms. Simulations were carried out using MATLAB, and we analyzed the results based on ROC curves for SNR between -10 dB and 20 dB. The three detectors have been tested with synthetic and real-world signals.
Keywords: aeronautic, communication, navigation, surveillance systems, cognitive radio, spectrum sensing, software defined radio
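Of the three detectors compared in this abstract, the energy detector is the simplest to illustrate; the sketch below decides channel occupancy from received-sample energy, with the sample count, noise power, and Gaussian-approximation threshold being illustrative assumptions rather than the study's settings.

```python
# Hypothetical energy-detector sketch: is a primary-user (e.g. DME) signal present?
import numpy as np
from scipy.stats import norm

def energy_detect(samples: np.ndarray, noise_power: float, pfa: float = 0.01) -> bool:
    """Return True if the channel is judged occupied by the primary user."""
    n = samples.size
    energy = np.sum(np.abs(samples) ** 2)
    # Threshold for a target false-alarm probability, using the Gaussian approximation
    # of the test statistic for complex noise and large n.
    threshold = noise_power * (n + norm.isf(pfa) * np.sqrt(n))
    return energy > threshold

rng = np.random.default_rng(0)
n, noise_power, snr_db = 1024, 1.0, -5.0
noise = (rng.normal(scale=np.sqrt(noise_power / 2), size=n)
         + 1j * rng.normal(scale=np.sqrt(noise_power / 2), size=n))
signal = np.sqrt(10 ** (snr_db / 10)) * np.exp(2j * np.pi * 0.1 * np.arange(n))

print("PU absent :", energy_detect(noise, noise_power))
print("PU present:", energy_detect(noise + signal, noise_power))
```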
Procedia PDF Downloads 175
2966 Performance and Damage Detection of Composite Structural Insulated Panels Subjected to Shock Wave Loading
Authors: Anupoju Rajeev, Joanne Mathew, Amit Shelke
Abstract:
In the current study, a new type of Composite Structural Insulated Panel (CSIP), which can replace conventional wooden structural materials, is developed and its performance against shock loading is investigated. The CSIP is made of a fibre cement board (FCB)/aluminum facesheet and an expanded polystyrene foam core. As tornadoes occur very often in western countries, it is advisable to monitor the health of CSIPs during their lifetime, so the composite structure is instrumented with three smart sensors located randomly at definite locations. Each smart sensor is fabricated with an embedded half stainless phononic crystal sensor attached to both ends of a nylon shaft that can resist shock and impact on the facesheet as well as the polystyrene foam core and safeguards the system. In addition to the granular crystal sensors, accelerometers are used in the horizontal and vertical spanning directions with a definite offset distance. To estimate the health and damage of the CSIP panel using the granular crystal sensors, shock wave loading experiments were conducted, during which the time-of-flight response from the granular sensors was measured. The main objective of conducting shock wave loading experiments on the CSIP panels is to study the effect on, and the sustaining capacity of, CSIP panels in extreme hazardous situations like tornadoes and hurricanes, which are very common in western countries. These effects were replicated using a shock tube, an instrument that can create the same wind and pressure intensity as a tornado for the experimental study. Numerous experiments were conducted to investigate the flexural strength of the CSIP. Furthermore, the study includes damage detection using the three smart sensors embedded in the CSIPs during the shock wave loading.
Keywords: composite structural insulated panels, damage detection, flexural strength, sandwich structures, shock wave loading
Procedia PDF Downloads 146
2965 Morpho-Dynamic Modelling of the Western 14 Km of the Togolese Coast
Authors: Sawsan Eissa, Omnia Kabbany
Abstract:
The coastline of Togo has been suffering from erosion for decades, which calls for a solution to help control and reduce the erosion and allow for the development of the coastal area. A morphodynamic model of the western 14 km of the Togolese coast was developed using the XBeach software, coupled with the hydrodynamic module of Delft3D (FLOW) and the wave module SWAN. The input data included a recent bathymetric survey, a recent shoreline topographic survey, aerial photographs, ERA5 water level and wave data, and recent test results of seabed samples. A number of scenarios were modeled: a do-nothing scenario, groynes, and detached breakwater systems with different crest levels and alignments. The findings showed that groynes are not expected to be effective for protection against erosion and that the best option is a system of detached breakwaters, partially emerged and partially submerged, coupled with periodic maintenance.
Keywords: hydrodynamics, morphology, Togo, Delft3D, SWAN, XBeach, coastal erosion, detached breakwaters
Procedia PDF Downloads 68
2964 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks
Authors: Mst Shapna Akter, Hossain Shahriar
Abstract:
One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. Open-source C and C++ code is now available for creating a large-scale, machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove pointless components and shorten dependencies. Moreover, we keep the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning networks such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model that can overcome issues associated with traditional neural networks. Evaluation metrics such as the F1 score, precision, recall, accuracy, and total execution time have been used to measure performance. We made a comparative analysis between results derived from features containing a minimal text representation and features containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when we use semantic and syntactic information as the features, but they require longer execution time, as the word embedding algorithm adds some complexity to the overall system.
Keywords: cyber security, vulnerability detection, neural networks, feature extraction
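One of the recurrent configurations listed above (a BiLSTM over token embeddings) can be sketched as follows; the vocabulary size, dimensions, and toy tokenization are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch: token embeddings of a minimal intermediate representation fed
# to a bidirectional LSTM for binary (vulnerable / not vulnerable) classification.
import torch
import torch.nn as nn

class LSTMVulnClassifier(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 100, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        emb = self.embed(token_ids)                   # (batch, seq, embed_dim)
        _, (h, _) = self.lstm(emb)                    # final hidden states of both directions
        features = torch.cat([h[-2], h[-1]], dim=1)   # (batch, 2 * hidden)
        return self.head(features)                    # class logits

# Toy usage: a batch of two tokenized functions, padded to length 6.
model = LSTMVulnClassifier(vocab_size=5000)
batch = torch.tensor([[12, 87, 431, 9, 0, 0],
                      [55, 2, 908, 17, 3, 1]])
labels = torch.tensor([1, 0])
loss = nn.functional.cross_entropy(model(batch), labels)
loss.backward()
```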
Procedia PDF Downloads 89
2963 Development, Evaluation and Scale-Up of a Mental Health Care Plan (MHCP) in Nepal
Authors: Nagendra P. Luitel, Mark J. D. Jordans
Abstract:
Globally, there is a significant gap between the number of individuals in need of mental health care and those who actually receive treatment. Evidence is accumulating that mental health services can be delivered effectively by primary health care workers through community-based programs and task-sharing approaches. Changing the role of specialist mental health workers from service delivery to building the clinical capacity of primary health care (PHC) workers could help reduce the treatment gap in low- and middle-income countries (LMICs). We developed a comprehensive mental health care plan in 2012 and evaluated its feasibility and effectiveness over the past three years. Initially, a mixed-methods formative study was conducted for the development of the mental health care plan (MHCP). Routine monitoring and evaluation data, including client flow and reports of satisfaction, were obtained from beneficiaries (n=135) during the pilot-testing phase. A repeated community survey (N=2040), a facility detection survey (N=4704) and a cohort study (N=576) were conducted for the evaluation of the MHCP. The resulting MHCP consists of twelve packages divided over the community, health facility, and healthcare organization platforms. Detection of mental health problems increased significantly after introducing the MHCP. Service implementation data support the real-life applicability of the MHCP, with reasonable treatment uptake. Currently, the MHCP has been implemented in the entire Chitwan district, where over 1400 people (438 people with depression, 406 people with psychosis, 181 people with epilepsy, 360 people with alcohol use disorder and 51 others) have received mental health services from trained health workers. Key barriers were identified and addressed, namely dissatisfaction with privacy, perceived burden among health workers, high drop-out rates and maintaining the supply of medicines. The results indicated that the involvement of PHC workers in the detection and management of mental health problems is an effective strategy to minimize the treatment gap in mental health care in Nepal.
Keywords: mental health, Nepal, primary care, treatment gap
Procedia PDF Downloads 295
2962 A New Family of Flying Wing Low Reynolds Number Airfoils
Authors: Ciro Sobrinho Campolina Martins, Halison da Silva Pereira, Vitor Mainenti Leal Lopes
Abstract:
Unmanned aerial vehicles (UAVs) have been used in a wide range of applications, from precise agriculture monitoring for irrigation and fertilization to military attack missions. Long-range performance is required for many of these applications. Tailless aircraft are commonly used as long-range configurations and, due to their small amount of stability, the airfoil shape design of their wings plays a central role in the performance of the airplane. In this work, a new family of flying wing airfoils is designed for low Reynolds number flows, typical of small and mid-size UAVs. Camber, thickness and their maximum positions along the chord are the variables used for the airfoil geometry optimization. Aerodynamic non-dimensional coefficients were obtained by the well-established panel method. Highly efficient airfoils with small pitch moment coefficients are obtained from the analysis described, and their aerodynamic polars are plotted.
Keywords: airfoil design, flying wing, low Reynolds number, tailless aircraft, UAV
Procedia PDF Downloads 629
2961 A New DIDS Design Based on a Combination Feature Selection Approach
Authors: Adel Sabry Eesa, Adnan Mohsin Abdulazeez Brifcani, Zeynep Orman
Abstract:
Feature selection has been used in many fields, such as classification, data mining and object recognition, and has proven to be effective for removing irrelevant and redundant features from the original data set. In this paper, a new design of a distributed intrusion detection system is presented, using a combination feature selection model based on the bees algorithm and decision trees. The bees algorithm is used as the search strategy to find the optimal subset of features, whereas a decision tree is used to judge the selected features. Both the produced features and the generated rules are used by a Decision Making Mobile Agent to decide whether or not there is an attack in the network. The Decision Making Mobile Agent migrates through the network, moving from node to node; if it finds that there is an attack on one of the nodes, it alerts the user through the User Interface Agent or takes some action through the Action Mobile Agent. The KDD Cup 99 data set is used to test the effectiveness of the proposed system. The results show that even if only four features are used, the proposed system gives better performance than the results obtained using all 41 features.
Keywords: distributed intrusion detection system, mobile agent, feature selection, bees algorithm, decision tree
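A heavily simplified sketch of the wrapper idea described above is given below: candidate feature subsets are explored by a bees-style neighbourhood search and judged by decision-tree cross-validation accuracy. The colony sizes, iteration counts, and demo dataset are assumptions; the actual bees algorithm used by the authors has more components (elite and selected sites, recruited foragers) than shown here.

```python
# Hypothetical, simplified bees-style wrapper feature selection judged by a decision tree.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def mask_of(idx, n):
    m = np.zeros(n, dtype=bool); m[idx] = True; return m

def mutate(idx, n):
    out = idx.copy()
    out[rng.integers(len(out))] = rng.integers(n)    # swap one feature (may duplicate; kept simple)
    return out

def fitness(mask, X, y):
    if not mask.any():
        return 0.0
    return cross_val_score(DecisionTreeClassifier(random_state=0), X[:, mask], y, cv=3).mean()

def bees_select(X, y, n_scouts=10, n_elite=3, n_iter=10, subset_size=4):
    n = X.shape[1]
    scouts = [rng.permutation(n)[:subset_size] for _ in range(n_scouts)]
    best_idx, best_fit = None, -1.0
    for _ in range(n_iter):
        scored = sorted(scouts, key=lambda idx: fitness(mask_of(idx, n), X, y), reverse=True)
        top_fit = fitness(mask_of(scored[0], n), X, y)
        if top_fit > best_fit:
            best_idx, best_fit = scored[0], top_fit
        # neighbourhood search around elites plus fresh random scouts
        scouts = [mutate(e, n) for e in scored[:n_elite]] + \
                 [rng.permutation(n)[:subset_size] for _ in range(n_scouts - n_elite)]
    return np.sort(best_idx), best_fit

X, y = load_breast_cancer(return_X_y=True)
selected, score = bees_select(X, y)
print("selected feature indices:", selected, "cv accuracy:", round(score, 3))
```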
Procedia PDF Downloads 408
2960 An Analytical Study of Small Unmanned Aerial Vehicle Dynamic Stability Characteristics
Authors: Abdelhakam A. Noreldien, Sakhr B. Abudarag, Muslim S. Eltoum, Salih O. Osman
Abstract:
This paper presents an analytical study of small unmanned aerial vehicle (SUAV) dynamic stability derivatives. Simulating SUAV dynamics and analyzing its behavior at the earliest design stages is an important and efficient aspect of the design process. The approach suggested in this paper uses wind tunnel experiments to collect the aerodynamic data and obtain the dynamic stability derivatives. AutoCAD software was used to draw the case study (a wildlife surveillance SUAV). The SUAV was scaled down to 0.25% of the real SUAV dimensions and converted to a wind tunnel model. The model was tested at three different speeds for three different attitudes: pitch, roll and yaw. The wind tunnel results were then used to determine the stability derivative values of the case study and hence to calculate the roots of the characteristic equation for both longitudinal and lateral motions. Finally, the characteristic equation roots were found and discussed for all possible cases.
Keywords: model, simulating, SUAV, wind tunnel
Procedia PDF Downloads 375
2959 Detection of PCD-Related Transcription Factors for Improving Salt Tolerance in Plant
Authors: A. Bahieldin, A. Atef, S. Edris, N. O. Gadalla, S. M. Hassan, M. A. Al-Kordy, A. M. Ramadan, A. S. M. Al-Hajar, F. M. El-Domyati
Abstract:
The idea of this work is based on an exciting natural phenomenon suggesting that suppression of genes related to the programmed cell death (PCD) mechanism might help plant cells efficiently tolerate abiotic stresses. The scope of this work was the detection of PCD-related transcription factors (TFs) that might also be related to salt stress tolerance in plants. Two model plants, tobacco and Arabidopsis, were utilized to investigate this phenomenon. The occurrence of PCD was first proven by Evans blue staining and DNA laddering after tobacco leaf discs were treated with oxalic acid (OA, 20 mM) for 24 h. A number of 31 TFs up-regulated after 2 h and co-expressed with genes harboring PCD-related domains were detected via RNA-Seq analysis and annotation. These TFs were knocked down via virus-induced gene silencing (VIGS), an RNA interference (RNAi) approach, and tested for their influence on triggering the PCD machinery. Then, Arabidopsis SALK knocked-out T-DNA insertion mutants in selected TFs analogous to those in tobacco were tested under salt stress (up to 250 mM NaCl) in order to detect the influence of the different TFs on conferring salt tolerance in Arabidopsis. The involvement of a number of candidate abiotic-stress-related TFs was investigated.
Keywords: VIGS, PCD, RNA-Seq, transcription factors
Procedia PDF Downloads 274
2958 Evaluation of Traumatic Spine by Magnetic Resonance Imaging
Authors: Sarita Magu, Deepak Singh
Abstract:
Study Design: This prospective study was conducted at the Department of Radiodiagnosis, Pt. B.D. Sharma PGIMS, Rohtak, in 57 patients with spine injury on radiographs, or radiographically normal patients with neurological deficits, presenting within 72 hours of injury. Aims: To evaluate the role of magnetic resonance imaging (MRI) in spinal trauma patients, to compare MRI findings with the clinical profile and neurological status of the patient, and to correlate the MRI findings with the neurological recovery of the patient and predict the outcome. Material and Methods: The neurological status of the patients was assessed at the time of admission and discharge in all patients, and at a long-term interval of six months to one year in 27 patients, as per the American Spinal Injury Association (ASIA) classification. On MRI, cord injury was categorized into cord hemorrhage, cord contusion, cord edema only, and normal cord. Quantitative assessment of injury on MRI was done using the mean canal compromise (MCC), mean spinal cord compression (MSCC) and lesion length. Neurological status at admission and neurological recovery at discharge and long-term follow-up were compared with the various qualitative cord findings and quantitative parameters on MRI. Results: Cord edema and normal cord were associated with a favorable neurological outcome. Cord contusion showed less neurological recovery compared to cord edema. Cord hemorrhage was associated with the worst neurological status at admission and poor neurological recovery. Mean MCC, MSCC, and lesion length values were higher in patients presenting with ASIA grade A injury and showed a decreasing trend towards ASIA grade E injury. Patients showing neurological recovery over the period of hospital stay and long-term follow-up had lower mean MCC, MSCC, and lesion length compared to patients showing no neurological recovery. The data were statistically significant with p value <.05. Conclusion: Cord hemorrhage and higher MCC, MSCC and lesion length have poor prognostic value in spine injury patients.
Keywords: spine injury, cord hemorrhage, cord contusion, MCC, MSCC, lesion length, ASIA grading
Procedia PDF Downloads 355
2957 Vibrational Spectroscopic Identification of Beta-Carotene in Usnic Acid and PAHs as a Potential Martian Analogue
Authors: A. I. Alajtal, H. G. M. Edwards, M. A. Elbagermi
Abstract:
Raman spectroscopy is currently part of the instrumentation suite of the ESA ExoMars mission for the remote detection of life signatures on the Martian surface and subsurface. Terrestrial analogues of Martian sites have been identified, and the biogeological modifications incurred as a result of extremophilic activity have been studied. Analytical instrumentation protocols for the unequivocal detection of biomarkers in suitable geological matrices are critical for future unmanned explorations, including the forthcoming ESA ExoMars mission to search for life on Mars scheduled for 2018, and Raman spectroscopy is currently part of the Pasteur instrumentation suite of this mission. Here, Raman spectroscopy using 785 nm excitation was evaluated for determining various concentrations of beta-carotene in admixture with polyaromatic hydrocarbons and usnic acid, which were investigated by Raman microspectrometry to determine the lowest levels detectable, in simulation of their potential remote identification under geobiological conditions in Martian scenarios. Information from this study will be important for the development of a miniaturized Raman instrument for targeting Martian sites where the biosignatures of relict or extant life could remain in the geological record.
Keywords: raman spectroscopy, mars-analog, beta-carotene, PAHs
Procedia PDF Downloads 338
2956 Poly (L-Lysine)-Coated Liquid Crystal Droplets for Sensitive Detection of DNA and Its Applications in Controlled Release of Drug Molecules
Authors: Indu Verma, Santanu Kumar Pal
Abstract:
Interactions between DNA and Poly (L-lysine) (PLL) adsorbed on liquid crystal (LC) droplets were investigated using polarizing optical microscopy (POM) and epi-fluorescence microscopy. Earlier, we demonstrated that adsorption of PLL at the LC/aqueous interface resulted in homeotropic orientation of the LC and thus a radial configuration of the LC confined within the droplets. Subsequent adsorption of DNA (single-stranded DNA/double-stranded DNA) on PLL-coated LC droplets was found to trigger an LC reorientation within the droplets, leading to a pre-radial/bipolar configuration of those droplets. To our surprise, subsequent exposure of complementary ssDNA (c-ssDNA) to ssDNA-adsorbed, PLL-modified LC droplets did not cause LC reorientation. This is likely due to the formation of polyplexes (DNA-PLL complexes), as confirmed by fluorescence microscopy and atomic force microscopy. In addition, dsDNA-adsorbed PLL droplets have been found to be effective in displacing (controlled release) propidium iodide (a model drug) encapsulated within the dsDNA over time. These observations suggest the potential for a label-free, droplet-based LC detection system that can respond to DNA and may provide a simple method to develop DNA-based drug nanocarriers.
Keywords: DNA biosensor, drug delivery, interfaces, liquid crystal droplets
Procedia PDF Downloads 298
2955 Contemporary Technological Developments in Urban Warfare
Authors: Mehmet Ozturk, Serdal Akyuz, Halit Turan
Abstract:
With evolving technology, the nature of war has changed over the course of history. In first-generation warfare, the bayonet came to the fore on battlefields; subsequently, second-generation warfare emphasized firepower and the third generation emphasized maneuver. Today, in the fourth generation, the fighters, the sides, and even the fighters' borders are unclear; consequently, battle lines have lost their significance. To change the balance in line with their interests, parties have used urban areas as battlefields. The main reason for using urban areas as battlefields is the imbalance between the parties, and exploiting technological developments is of the utmost importance for balancing power. Many newly developed technologies are used in urban warfare, such as changes in the size of unmanned aerial vehicles, increased use of unmanned ground vehicles (especially for supply and evacuation purposes), systems that show what is behind a wall, and simulations used for training purposes. This study focuses on the technological equipment being used for urban warfare.
Keywords: urban warfare, unmanned ground vehicles, technological developments, nature of the war
Procedia PDF Downloads 419
2954 Comparison of Real-Time PCR and FTIR with Chemometrics Technique in Analysing Halal Supplement Capsules
Authors: Mohd Sukri Hassan, Ahlam Inayatullah Badrul Munir, M. Husaini A. Rahman
Abstract:
Halal authentication and verification of supplement capsules are highly required, as the gelatine available on the market can be from halal or non-halal sources, and it is an obligation for Muslims to consume and use halal consumer goods. At present, real-time polymerase chain reaction (RT-PCR) is the most common technique used for the detection of porcine and bovine DNA in gelatine, due to the high sensitivity of the technique and the higher stability of DNA compared to protein. In this study, twenty samples of supplement capsules from different products with different halal logos were analyzed for porcine and bovine DNA using RT-PCR. Standard bovine and porcine gelatine from Eurofins at concentrations ranging from 10⁻¹ to 10⁻⁵ ng/µl were used to determine the linearity range, limit of detection and specificity of RT-PCR (SYBR Green method). RT-PCR detected porcine DNA (two samples), bovine DNA (four samples) and a mixture of porcine and bovine DNA (six samples). The samples were also tested using the FT-IR technique, where normalized peaks of the IR spectra were pre-processed using the Savitzky-Golay method before principal component analysis (PCA) was performed on the database. The PCA scores plot shows three clusters of samples: bovine, porcine and mixture (bovine and porcine). The RT-PCR and FT-IR with chemometrics techniques were found to give the same results for porcine gelatine samples, which can be used for halal authentication.
Keywords: halal, real-time PCR, gelatine, chemometrics
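The chemometrics step described above (Savitzky-Golay pre-processing followed by PCA and a scores plot) can be sketched as follows; the synthetic spectra and group labels are placeholders, not the gelatine measurements of the study.

```python
# Hypothetical sketch: Savitzky-Golay smoothing of normalized FT-IR spectra, then PCA.
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

wavenumbers = np.linspace(4000, 650, 1700)
spectra = np.random.rand(20, wavenumbers.size)          # 20 capsule samples (placeholder spectra)
labels = ["bovine"] * 8 + ["porcine"] * 6 + ["mixture"] * 6

smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)
scores = PCA(n_components=2).fit_transform(smoothed)

for group in sorted(set(labels)):
    idx = [i for i, l in enumerate(labels) if l == group]
    plt.scatter(scores[idx, 0], scores[idx, 1], label=group)
plt.xlabel("PC1"); plt.ylabel("PC2"); plt.legend(); plt.title("PCA scores plot")
plt.show()
```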
Procedia PDF Downloads 241
2953 Development of a Device for Detecting Fluids in the Esophagus
Authors: F. J. Puertas, M. Castro, A. Tebar, P. J. Fito, R. Gadea, J. M. Monzó, R. J. Colom
Abstract:
There is a great diversity of diseases, generally of a digestive nature, that affect the integrity of the walls of the esophagus. Among them, gastroesophageal reflux is a common disease in the general population, affecting the patient's quality of life; however, there are still unmet diagnostic and therapeutic needs. The consequences of untreated or asymptomatic acid reflux on the esophageal mucosa are not only pain, heartburn, and acid regurgitation but also an increased risk of esophageal cancer. Currently, the diagnostic methods used to detect problems in the esophageal tract are invasive and uncomfortable, as 24-hour impedance-pH monitoring forces the patient to endure discomfort for hours in order to reach a correct diagnosis. In this work, the development of a sensor able to measure at depth is proposed, allowing the detection of liquids circulating in the esophageal tract. The multisensor detection system is based on radiofrequency photospectrometry. At the experimental level, consumers representative of the population in terms of sex and age were used, placing the sensors between the trachea and the diaphragm and analyzing the measurements in vacuum, water, orange juice and saline media. The results obtained have allowed us to detect the appearance of different liquid media in the esophagus, distinguishing them on the basis of their ionic content.
Keywords: bioimpedance, dielectric spectroscopy, gastroesophageal reflux, GERD
Procedia PDF Downloads 101
2952 Postmortem Magnetic Resonance Imaging as an Objective Method for the Differential Diagnosis of a Stillborn and a Neonatal Death
Authors: Uliana N. Tumanova, Sergey M. Voevodin, Veronica A. Sinitsyna, Alexandr I. Shchegolev
Abstract:
An important part of forensic and autopsy research in perinatology is answering the question of live birth versus stillbirth. Postmortem magnetic resonance imaging (MRI) is an objective, non-invasive research method that allows data to be stored for a long time without exhuming the body to clarify the diagnosis. The purpose of this research is to study the ability of postmortem MRI to distinguish a stillborn from a newborn who breathed spontaneously and died during the first day after birth. MRI and morphological data from a study of the bodies of 23 stillborns, who died prenatally at a gestational age of 22-39 weeks (Group I), and the bodies of 16 newborns who died 2 to 24 hours after birth (Group II) were compared. Before the autopsy, postmortem MRI was performed on a Siemens Magnetom Verio 3T scanner with the body in the supine position. The control group for the MRI studies consisted of 7 live newborns without lung disease (Group III). On T2WI in the sagittal projection, the MR signal intensity (SI) was measured in the lung tissue (L) and the shoulder muscle (M). During the autopsy, a pulmonary swimming test was evaluated, and macro- and microscopic studies were performed. According to the postmortem MRI, the highest mean SI values of the lung (430 ± 27.99) and of the muscle (405.5 ± 38.62) on T2WI were detected in Group I and exceeded the corresponding values of Group II by a factor of 2.7. The lowest values were found in the control group: 77.9 ± 12.34 and 119.7 ± 6.3, respectively. In Group II, the lung SI was 1.6 times higher than the muscle SI, whereas in Group I and in the control group, the muscle SI was 2.1 and 1.8 times larger than that of the lung, respectively. On the basis of the clinical and morphological data, we derived a formula for a breathing index (BI) on postmortem MRI: BI = SI(L) x SI(M) / 100. The mean BI value in Group I was 1801.14 ± 241.6 (range 756 to 3744), significantly higher than the corresponding mean BI value in Group II of 455.89 ± 137.32 (range 305 to 638.4) (p < 0.05). In the control group, the mean BI value was 91.75 ± 13.3 (range 53 to 154). The BI was compared with the results of the pulmonary swimming tests and the microscopic examination of the lungs. The boundary value of BI for the differential diagnosis of stillbirth and newborn death was 700. Using postmortem MRI thus allows a stillborn to be differentiated from a newborn who died after breathing.
Keywords: lung, newborn, postmortem MRI, stillborn
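The breathing index rule reported above reduces to a short calculation, shown below with the group-mean signal intensities quoted in the abstract; the classification wording is a restatement of the reported 700 boundary, not an independently validated rule.

```python
# Sketch of the reported breathing index: BI = SI(L) x SI(M) / 100, boundary 700.
def breathing_index(si_lung: float, si_muscle: float) -> float:
    return si_lung * si_muscle / 100.0

def classify(bi: float, boundary: float = 700.0) -> str:
    return "stillborn (no spontaneous breathing)" if bi > boundary else "newborn that breathed"

for label, si_l, si_m in [("group I (stillborn)", 430.0, 405.5),
                          ("control (live newborn)", 77.9, 119.7)]:
    bi = breathing_index(si_l, si_m)
    print(f"{label}: BI = {bi:.1f} -> {classify(bi)}")
```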
Procedia PDF Downloads 128
2951 Enhancing Financial Security: Real-Time Anomaly Detection in Financial Transactions Using Machine Learning
Authors: Ali Kazemi
Abstract:
The digital evolution of financial services, while offering unprecedented convenience and accessibility, has also escalated vulnerability to fraudulent activities. In this study, we introduce a distinct approach to real-time anomaly detection in financial transactions, aiming to fortify the defenses of banking and financial institutions against such threats. Utilizing unsupervised machine learning algorithms, specifically autoencoders and isolation forests, our research focuses on identifying irregular patterns indicative of fraud within transactional data, thus enabling immediate action to prevent financial loss. The data used in this study included the monetary value of each transaction, a crucial feature because fraudulent transactions may have different amount distributions than legitimate ones; timestamps indicating when transactions occurred, since analyzing transactions' temporal patterns can reveal anomalies (e.g., unusual activity in the middle of the night); the sector or category of the merchant where the transaction occurred, such as retail, groceries, or online services, since specific categories may be more prone to fraud; and the type of payment used (e.g., credit, debit, online payment systems), since different payment methods carry different levels of fraud risk. This dataset, anonymized to ensure privacy, reflects a wide array of transactions typical of a global banking institution, ranging from small-scale retail purchases to large wire transfers, embodying the diverse nature of potentially fraudulent activities. By engineering features that capture the essence of transactions, including normalized amounts and encoded categorical variables, we tailor our data to enhance model sensitivity to anomalies. The autoencoder model leverages its reconstruction error mechanism to flag transactions that deviate significantly from the learned normal pattern, while the isolation forest identifies anomalies based on their susceptibility to isolation from the dataset's majority. Our experimental results, validated through techniques such as k-fold cross-validation, are evaluated using precision, recall, and the F1 score alongside the area under the receiver operating characteristic (ROC) curve. Our models achieved an F1 score of 0.85 and a ROC AUC of 0.93, indicating high accuracy in detecting fraudulent transactions without excessive false positives. This study contributes to the academic discourse on financial fraud detection and provides a practical framework for banking institutions seeking to implement real-time anomaly detection systems. By demonstrating the effectiveness of unsupervised learning techniques in a real-world context, our research offers a pathway to significantly reduce the incidence of financial fraud, thereby enhancing the security and trustworthiness of digital financial services.
Keywords: anomaly detection, financial fraud, machine learning, autoencoders, isolation forest, transactional data analysis
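The two unsupervised detectors described above can be sketched as follows; the feature columns, network size, and flagging thresholds are illustrative assumptions, and the autoencoder is approximated here with a small MLP trained to reproduce its input rather than the authors' exact architecture.

```python
# Hypothetical sketch: reconstruction-error autoencoder and isolation forest on transactions.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# columns: normalized amount, hour of day, encoded merchant category, encoded payment type
X_train = rng.normal(size=(5000, 4))                    # mostly legitimate transactions
X_new = np.vstack([rng.normal(size=(95, 4)),
                   rng.normal(loc=6.0, size=(5, 4))])   # 5 injected anomalies

scaler = StandardScaler().fit(X_train)
X_train_s, X_new_s = scaler.transform(X_train), scaler.transform(X_new)

# Autoencoder approximated with an MLP regressor trained to reproduce its input.
autoencoder = MLPRegressor(hidden_layer_sizes=(8, 2, 8), max_iter=500, random_state=0)
autoencoder.fit(X_train_s, X_train_s)
recon_error = np.mean((autoencoder.predict(X_new_s) - X_new_s) ** 2, axis=1)
ae_flags = recon_error > np.percentile(recon_error, 95)     # flag the top 5% reconstruction errors

iso = IsolationForest(contamination=0.05, random_state=0).fit(X_train_s)
if_flags = iso.predict(X_new_s) == -1

print("autoencoder flags:", np.flatnonzero(ae_flags))
print("isolation forest flags:", np.flatnonzero(if_flags))
```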
Procedia PDF Downloads 57
2950 Documenting the 15th Century Prints with RTI
Authors: Peter Fornaro, Lothar Schmitt
Abstract:
The Digital Humanities Lab and the Institute of Art History at the University of Basel are collaborating in the SNSF research project 'Digital Materiality'. Its goal is to develop and enhance existing methods for the digital reproduction of cultural heritage objects in order to support art historical research. One part of the project focuses on the visualization of a small eye-catching group of early prints that are noteworthy for their subtle reliefs and glossy surfaces. Additionally, this group of objects, known as 'paste prints', is characterized by its fragile state of preservation. Because of the brittle substances that were used for their production, most paste prints are heavily damaged and thus very hard to examine. These specific material properties make a photographic reproduction extremely difficult. To obtain better results we are working with Reflectance Transformation Imaging (RTI), a computational photographic method that is already used in archaeological and cultural heritage research. This technique allows documenting how three-dimensional surfaces respond to changing lighting situations. Our first results show that RTI can capture the material properties of paste prints and their current state of preservation more accurately than conventional photographs, although there are limitations with glossy surfaces because the mathematical models that are included in RTI are kept simple in order to keep the software robust and easy to use. To improve the method, we are currently developing tools for a more detailed analysis and simulation of the reflectance behavior. An enhanced analytical model for the representation and visualization of gloss will increase the significance of digital representations of cultural heritage objects. For collaborative efforts, we are working on a web-based viewer application for RTI images based on WebGL in order to make acquired data accessible to a broader international research community. At the ICDH Conference, we would like to present unpublished results of our work and discuss the implications of our concept for art history, computational photography and heritage science.
Keywords: art history, computational photography, paste prints, reflectance transformation imaging
Procedia PDF Downloads 276