1988 Role of Activated Partial Thromboplastin Time (APTT) to Assess the Need of Platelet Transfusion in Dengue
Authors: Kalyan Koganti
Abstract:
Background: In India, platelet transfusions are given to a large number of patients suffering from dengue for fear of bleeding, especially when platelet counts are low. Though many patients do not bleed when the platelet count falls below 20,000, certain patients bleed even when platelet counts are above 20,000, without any prior comorbid condition (such as gastrointestinal ulcer). This fear has led to a huge number of unnecessary platelet transfusions, which place a significant economic burden on low- and middle-income countries like India, and these transfusions sometimes end with transfusion-related adverse reactions. Objective: To identify the role of Activated Partial Thromboplastin Time (APTT), in comparison with thrombocytopenia, as an indicator of the real need for platelet transfusion. Method: A prospective study was conducted at a hospital in South India that included 176 admitted cases of dengue confirmed by immunochromatography. APTT was performed in all these patients along with platelet count. Cut-off values of > 60 seconds for APTT and < 20,000 for platelet count were used to assess bleeding manifestations. Results: Among the 176 patients, 56 had bleeding manifestations such as melena, hematuria, and bleeding gums. APTT > 60 seconds had a sensitivity and specificity of 93% and 90%, respectively, in identifying bleeding manifestations, whereas a platelet count of < 20,000 had a sensitivity and specificity of 64% and 73%, respectively. Conclusion: Elevated APTT can be considered an indicator of the need for platelet transfusion in dengue. As there is significant variation with respect to platelet count among patients who bleed, APTT can be considered to avoid unnecessary transfusions.
Keywords: activated partial thromboplastin time, dengue, platelet transfusion, thrombocytopenia
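The reported sensitivity and specificity follow directly from confusion-matrix counts. A minimal sketch; the true/false positive counts below are assumptions chosen to be consistent with the reported 93%/90% figures for APTT among 56 bleeders and 120 non-bleeders, not the study's raw data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts:
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Illustrative counts (assumed): 52 of 56 bleeders flagged by APTT,
# 108 of 120 non-bleeders correctly not flagged.
sens, spec = sensitivity_specificity(tp=52, fn=4, tn=108, fp=12)
print(round(sens, 2), round(spec, 2))  # 0.93 0.9
```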
1987 Electrochemical Sensor Based on Poly(Pyrogallol) for the Simultaneous Detection of Phenolic Compounds and Nitrite in Wastewater
Authors: Majid Farsadrooh, Najmeh Sabbaghi, Seyed Mohammad Mostashari, Abolhasan Moradi
Abstract:
Phenolic compounds are major environmental contaminants on account of their hazardous and toxic effects on human health. The preparation of sensitive and potent chemosensors to monitor emerging pollutants in water and effluent samples has received great consideration. A novel and versatile nanocomposite sensor based on poly(pyrogallol) is presented for the first time in this study, and its electrochemical behavior for the simultaneous detection of hydroquinone (HQ), catechol (CT), and resorcinol (RS) in the presence of nitrite is evaluated. The physicochemical characteristics of the fabricated nanocomposite were investigated by field emission scanning electron microscopy (FE-SEM), energy-dispersive X-ray spectroscopy (EDS), and Brunauer-Emmett-Teller (BET) analysis. The electrochemical response of the proposed sensor for the detection of HQ, CT, RS, and nitrite is studied using cyclic voltammetry (CV), chronoamperometry (CA), differential pulse voltammetry (DPV), and electrochemical impedance spectroscopy (EIS). The kinetic characterization of the prepared sensor showed that reactions at the electrode can be controlled by both adsorption and diffusion processes. Under optimized conditions, the new chemosensor provides wide linear ranges of 0.5-236.3, 0.8-236.3, 0.9-236.3, and 1.2-236.3 μM with low limits of detection of 21.1, 51.4, 98.9, and 110.8 nM (S/N = 3) for HQ, CT, RS, and nitrite, respectively. Remarkably, the electrochemical sensor has outstanding selectivity, repeatability, and stability, and was successfully employed for the detection of RS, CT, HQ, and nitrite in real water samples with recoveries of 96.2%-102.4%, 97.8%-102.6%, 98.0%-102.4%, and 98.4%-103.2% for RS, CT, HQ, and nitrite, respectively. These outcomes illustrate that poly(pyrogallol) is a promising candidate for the effective electrochemical detection of dihydroxybenzene isomers in the presence of nitrite.
Keywords: electrochemical sensor, poly(pyrogallol), phenolic compounds, simultaneous determination
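The quoted limits of detection follow the common S/N = 3 criterion, LOD = 3σ/m, where σ is the standard deviation of the blank signal and m the calibration slope. A minimal sketch; the σ and slope values below are illustrative assumptions, not data from the study:

```python
def limit_of_detection(sigma_blank, slope):
    """S/N = 3 criterion: LOD = 3 * sigma_blank / slope.
    sigma_blank is in signal units, slope in signal per concentration,
    so the result is in concentration units."""
    return 3 * sigma_blank / slope

# Illustrative numbers only: blank noise of 0.07 uA and a calibration
# slope of 10 uA/uM give an LOD of 0.021 uM (21 nM).
print(limit_of_detection(0.07, 10.0))
```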
1986 Microbial Contaminants in Drinking Water Collected from Different Regions of Kuwait
Authors: Abu Salim Mustafa
Abstract:
Water plays a major role in maintaining life on earth, but it can also serve as a matrix for pathogenic organisms, posing substantial health threats to humans. Although outbreaks of diseases attributable to drinking water may not be common in industrialized countries, they still occur and can lead to serious acute, chronic, or sometimes fatal health consequences. In this study, drinking water samples from different regions of Kuwait were analyzed for bacterial and viral contamination. Drinking tap water samples were collected from 15 different locations across the six governorates of Kuwait. All samples were analyzed by confocal microscopy for the presence of bacteria. The samples were cultured in vitro to detect cultivable organisms. DNA was isolated from the cultured organisms, and the bacteria were identified by sequencing the bacterial 16S rRNA genes, followed by BLAST analysis against the NCBI (USA) database. RNA was extracted from water samples and analyzed by real-time PCR for the detection of viruses with potential health risks, i.e., Astrovirus, Enterovirus, Norovirus, Rotavirus, and Hepatitis A. Confocal microscopy showed the presence of bacteria in some water samples. The 16S rRNA gene sequencing of culture-grown organisms, followed by BLAST analysis, identified several non-pathogenic bacterial species. However, one sample contained Acinetobacter baumannii, which often causes opportunistic infections in immunocompromised people. None of the studied viruses could be detected in the drinking water samples analyzed. The results indicate that the drinking water samples analyzed from various locations in Kuwait are relatively safe for drinking and do not contain many harmful pathogens.
Keywords: drinking water, microbial contaminant, 16S rDNA, Kuwait
1985 Contribution of PALB2 and BLM Mutations to Familial Breast Cancer Risk in BRCA1/2 Negative South African Breast Cancer Patients Detected Using High-Resolution Melting Analysis
Authors: N. C. van der Merwe, J. Oosthuizen, M. F. Makhetha, J. Adams, B. K. Dajee, S-R. Schneider
Abstract:
Women representing high-risk breast cancer families who tested negative for pathogenic mutations in BRCA1 and BRCA2 are four times more likely to develop breast cancer than women in the general population. Sequencing of genes involved in genomic stability and DNA repair has led to the identification of novel contributors to familial breast cancer risk, including BLM and PALB2. Bloom's syndrome is a rare autosomal recessive chromosomal instability disorder with a high incidence of various types of neoplasia; BLM mutations are associated with breast cancer in the heterozygous state. PALB2, on the other hand, binds to BRCA2, and together they partake actively in DNA damage repair. Archived DNA samples of 66 BRCA1/2-negative high-risk breast cancer patients were retrospectively selected based on the presence of an extensive family history of the disease (> 3 affected members per family). All coding regions and splice-site boundaries of both genes were screened using High-Resolution Melting Analysis. Samples exhibiting variation were sequenced bi-directionally by automated Sanger sequencing. The clinical significance of each variant was assessed using various in silico and splice-site prediction algorithms. Comprehensive screening identified a total of 11 BLM and 26 PALB2 variants. The variants detected ranged from global to rare and included three novel mutations. Three BLM and two PALB2 likely pathogenic mutations were identified that could account for the disease in these extensive breast cancer families in the absence of BRCA mutations (BLM c.11T > A, p.V4D; BLM c.2603C > T, p.P868L; BLM c.3961G > A, p.V1321I; PALB2 c.421C > T, p.Gln141Ter; PALB2 c.508A > T, p.Arg170Ter). Conclusion: The study confirmed the contribution of pathogenic mutations in BLM and PALB2 to the familial breast cancer burden in South Africa. It explained the presence of the disease in 7.5% of the BRCA1/2-negative families with an extensive family history of breast cancer. Segregation analysis will be performed to confirm the clinical impact of these mutations for each of these families. These results justify the inclusion of both genes in a comprehensive breast and ovarian cancer next-generation sequencing panel; they should be screened simultaneously with BRCA1 and BRCA2, as this might explain a significant percentage of familial breast and ovarian cancer in South Africa.
Keywords: Bloom Syndrome, familial breast cancer, PALB2, South Africa
1984 Knowledge, Attitude, and Practice among Medical Students Regarding Basic Life Support
Authors: Sumia Fatima, Tayyaba Idrees
Abstract:
Cardiac arrest and heart failure are important causes of mortality in developed and developing countries, and even a second spent without cardiopulmonary resuscitation (CPR) increases the risk of mortality. Young doctors are expected to partake in CPR from their first day, and if they are not taught basic life support (BLS) skills during their studies, they have next to no opportunity to learn them in clinical settings. The objectives were to determine the level of knowledge of Basic Life Support among medical students and to compare the degree of knowledge between 1st and 2nd year medical students of RMU (Rawalpindi Medical University) using self-structured questionnaires. A cross-sectional, qualitative primary study was conducted in March 2020 to analyse theoretical and practical knowledge of Basic Life Support among medical students of 1st and 2nd year MBBS. Self-structured questionnaires were distributed among 300 students, 150 from 1st year and 150 from 2nd year. Data were analysed using SPSS v22, and the chi-square test was employed. The results showed that only 13 (4%) students had received formal BLS training. 129 (42%) students had encountered accidents in real life but had not known how to react. The majority (189 students) responded that Basic Life Support should be made part of the medical college curriculum, and 194 participants (64%) had moderate knowledge of both the theoretical and practical aspects of BLS. 75-80% of students of both 1st and 2nd year had only moderate knowledge, which must be improved for them to be better healthcare providers in the future. It was also found that male students had more practical knowledge than females, but both had almost the same proficiency in theoretical knowledge. The study concluded that the level of knowledge of BLS among the students was not up to the mark, and there is a dire need to include BLS training in the medical colleges' curriculum.
Keywords: basic cardiac life support, cardiac arrest, awareness, medical students
1983 Enhanced Flight Dynamics Model to Simulate the Aircraft Response to Gust Encounters
Authors: Castells Pau, Poetsch Christophe
Abstract:
The effect of gust and turbulence encounters on aircraft is a wide field of study that allows different approaches, from high-fidelity multidisciplinary simulations to more simplified models adapted to industrial applications. The typical main goal is to predict the gust loads on the aircraft in order to ensure a safe design and achieve certification. Another widely studied topic is gust load reduction through an active control law. The impact of gusts on aircraft handling qualities is also of interest in the analysis of in-service events, so as to evaluate the aircraft response and the performance of the flight control laws. Traditionally, gust loads and handling qualities are addressed separately, with different models adapted to the specific needs of each discipline. In this paper, an assessment of the differences between both models is presented, and a strategy to better account for the physics of gust encounters in a typical flight dynamics model is proposed, based on the model used for gust loads analysis. The applied corrections aim to capture the gust unsteady aerodynamics and propagation, as well as the effect of dynamic flexibility at low frequencies. Results from the gust loads model at different flight conditions and measurements from real events are used for validation. A possible extension of steady aerodynamic nonlinearities to the low-frequency range is also addressed. The proposed corrections provide meaningful means to evaluate the performance and possible adjustments of the flight control laws.
Keywords: flight dynamics, gust loads, handling qualities, unsteady aerodynamics
1982 Enhanced Planar Pattern Tracking for an Outdoor Augmented Reality System
Authors: L. Yu, W. K. Li, S. K. Ong, A. Y. C. Nee
Abstract:
In this paper, a scalable augmented reality framework for handheld devices is presented. The framework is enabled by a server-client data communication structure, in which the search for tracking targets among a database of images is performed on the server side, while pixel-wise 3D tracking is performed on the client side, which, in this case, is a handheld mobile device. Image search on the server side adopts a residual-enhanced image descriptor representation that gives the framework its scalability. The tracking algorithm on the client side is based on a gravity-aligned feature descriptor, which takes advantage of a sensor-equipped mobile device, and an optimized intensity-based image alignment approach that ensures the accuracy of 3D tracking. Automatic content streaming is achieved using a key-frame selection algorithm, client working-phase monitoring, and standardized rules for content communication between the server and client. A recognition accuracy test performed on a standard dataset shows that the method adopted in the presented framework outperforms the Bag-of-Words (BoW) method used in some previous systems. Experimental tests conducted on a set of video sequences indicated the real-time performance of the tracking system, with a frame rate of 15-30 frames per second. The presented framework is shown to be functional in practical situations with a demonstration application on a campus walk-around.
Keywords: augmented reality framework, server-client model, vision-based tracking, image search
1981 Reflections of Nocturnal Librarian: Attaining a Work-Life Balance in a Mega-City of Lagos State Nigeria
Authors: Oluwole Durodolu
Abstract:
The rationale for this study is to explore the adaptive strategies that librarians adopt in performing night shifts in a mega-city like Lagos State. Maslach Burnout Theory would be used to measure the three dimensions of burnout (emotional exhaustion, depersonalisation, and personal accomplishment) to scrutinise job-related burnout syndrome allied with longstanding, unresolved stress. A qualitative methodology guided by a phenomenological research paradigm, an approach that focuses on the commonality of real-life experience in a particular group, would be used, with focus group discussion adopted as the method of data collection from library staff who are involved in night shifts. Participants for the focus group discussion would be selected using a convenience sampling technique, in which staff of the cataloguing unit would be included in the sample because of the representative characteristics of the unit. This would be done to enable readers to understand the phenomena more reasonably than from a remote perspective. The exploratory interviews, conducted as focus group discussions, would shed light on issues relating to security, housing, transportation, budgeting, energy supply, employee duties, time management, information access, and sustaining professional levels of service, and how all these variables affect the productivity of all 149 library staff and their work-life balance.
Keywords: nightshift, work-life balance, mega-city, academic library, Maslach Burnout Theory, Lagos State, University of Lagos
1980 Fast Approximate Bayesian Contextual Cold Start Learning (FAB-COST)
Authors: Jack R. McKenzie, Peter A. Appleby, Thomas House, Neil Walton
Abstract:
Cold-start is a notoriously difficult problem that can occur in recommendation systems, arising when there is insufficient information to draw inferences for users or items. To address this challenge, a contextual bandit algorithm, the Fast Approximate Bayesian Contextual Cold Start Learning algorithm (FAB-COST), is proposed, designed to provide improved accuracy compared to the traditionally used Laplace approximation in the logistic contextual bandit, while controlling both algorithmic complexity and computational cost. To this end, FAB-COST uses a combination of two moment-projection variational methods: Expectation Propagation (EP), which performs well at the cold start but becomes slow as the amount of data increases; and Assumed Density Filtering (ADF), which has slower growth of computational cost with data size but requires more data to obtain an acceptable level of accuracy. By switching from EP to ADF when the dataset becomes large, FAB-COST is able to exploit their complementary strengths. The empirical justification for FAB-COST is presented, and it is systematically compared to other approaches on simulated data. In a benchmark against the Laplace approximation on real data consisting of over 670,000 impressions from autotrader.co.uk, FAB-COST demonstrates at one point an increase of over 16% in user clicks. On the basis of these results, it is argued that FAB-COST is likely to be an attractive approach to cold-start recommendation systems in a variety of contexts.
Keywords: cold-start learning, expectation propagation, multi-armed bandits, Thompson sampling, variational inference
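The EP-to-ADF handover described above amounts to a size-based switching rule. A minimal sketch of that switching logic only; the threshold is an assumed illustrative value, and the actual FAB-COST posterior-update equations are not reproduced here:

```python
def choose_update_method(n_observations, switch_threshold=10_000):
    """Pick the posterior-approximation method by dataset size:
    EP is accurate at the cold start but slows as data grows;
    ADF scales better once enough data has accumulated."""
    return "EP" if n_observations < switch_threshold else "ADF"

print(choose_update_method(500))        # cold start -> EP
print(choose_update_method(2_000_000))  # large data -> ADF
```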
1979 Achieving Product Robustness through Variation Simulation: An Industrial Case Study
Authors: Narendra Akhadkar, Philippe Delcambre
Abstract:
In power protection and control products, assembly process variations arising from individual parts manufactured in single- or multi-cavity tooling are a major problem. The dimensional and geometrical variations of the individual parts, in the form of manufacturing tolerances and assembly tolerances, are sources of clearance in the kinematic joints, polarization effects in the joints, and tolerance stack-up. All these variations adversely affect product quality, functionality, cost, and time-to-market. Variation simulation analysis may be used in the early product design stage to predict such uncertainties. Usually, variations exist in both manufacturing processes and materials. In tolerance analysis, the effect of the dimensional and geometrical variations of the individual parts on the functional characteristics (conditions) of the final assembled product is studied. A functional characteristic of the product may be affected by a set of interrelated dimensions (functional parameters) that usually form a geometrical closure in a 3D chain. In power protection and control products, the prerequisite is that when a fault occurs in the electrical network, the product must respond quickly to break the circuit and clear the fault; the response time is usually in milliseconds. Any failure in clearing the fault may result in severe damage to the equipment or network, and human safety is at stake. In this article, we investigate two important functional characteristics associated with the robust performance of the product. It is demonstrated that the experimental data obtained at the Schneider Electric laboratory confirm the very good prediction capabilities of the variation simulation performed using CETOL (tolerance analysis software) in an industrial context. In particular, this study allows design engineers to better understand the critical parts of the product that need to be manufactured to good, capable tolerances. Conversely, some parts are not critical for the functional characteristics (conditions) of the product, and their tolerances may be relaxed, reducing manufacturing cost while ensuring robust performance. Capable tolerancing is one of the most important aspects of product and manufacturing process design. In the case of a miniature circuit breaker (MCB), the product's quality and robustness are mainly impacted by two aspects: (1) the allocation of design tolerances between the components of a mechanical assembly and (2) the manufacturing tolerances in the intermediate machining steps of component fabrication.
Keywords: geometrical variation, product robustness, tolerance analysis, variation simulation
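Tolerance stack-up along a 1D chain of dimensions is commonly bounded in two textbook ways: worst-case (linear sum) and statistical root-sum-square. A minimal sketch of those two bounds; the tolerance values are illustrative assumptions, not CETOL results from the study:

```python
import math

def worst_case_stack(tolerances):
    """Worst-case stack-up: individual tolerances add linearly."""
    return sum(tolerances)

def rss_stack(tolerances):
    """Statistical (root-sum-square) stack-up, assuming independent,
    normally distributed contributions."""
    return math.sqrt(sum(t * t for t in tolerances))

tols = [0.05, 0.03, 0.02]  # mm, illustrative values
print(round(worst_case_stack(tols), 4))  # 0.1
print(round(rss_stack(tols), 4))         # 0.0616
```

The RSS bound is tighter than the worst case, which is why statistical tolerancing can justify relaxing tolerances on non-critical parts.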
1978 Hand Gesture Interface for PC Control and SMS Notification Using MEMS Sensors
Authors: Keerthana E., Lohithya S., Harshavardhini K. S., Saranya G., Suganthi S.
Abstract:
In an epoch of expanding human-machine interaction, the development of innovative interfaces that bridge the gap between physical gestures and digital control has gained significant momentum. This study introduces a solution that leverages a combination of MEMS (Micro-Electro-Mechanical Systems) sensors, an Arduino Mega microcontroller, and a PC to create a hand gesture interface for PC control and SMS notification. The core of the system is an ADXL335 MEMS accelerometer integrated with an Arduino Mega, which communicates with a PC via a USB cable. The ADXL335 provides real-time acceleration data, which is processed by the Arduino to detect specific hand gestures. These gestures, such as left, right, up, down, or custom patterns, are interpreted by the Arduino, and corresponding actions are triggered. In the context of SMS notifications, when a gesture indicative of a new SMS is recognized, the Arduino relays this information to the PC through the serial connection. The PC application, designed to monitor the Arduino's serial port, displays these SMS notifications in the serial monitor. This study offers an engaging and interactive means of interfacing with a PC by translating hand gestures into meaningful actions, opening up opportunities for intuitive computer control. Furthermore, the integration of SMS notifications adds a practical dimension to the system, notifying users of incoming messages as they interact with their computers. The use of MEMS sensors, Arduino, and serial communication serves as a promising foundation for expanding the capabilities of gesture-based control systems.
Keywords: hand gestures, multiple cables, serial communication, SMS notification
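Mapping tilt readings from an ADXL335-style accelerometer to discrete gestures can be as simple as thresholding the two horizontal axes. A minimal sketch; the axis orientation, units (g), and the 0.5 g threshold are assumptions for illustration, not the study's calibration:

```python
def classify_gesture(ax, ay, threshold=0.5):
    """Map tilt readings (in g) on the x and y axes to a gesture label.
    Axis mapping and threshold are illustrative assumptions."""
    if ax > threshold:
        return "right"
    if ax < -threshold:
        return "left"
    if ay > threshold:
        return "up"
    if ay < -threshold:
        return "down"
    return "neutral"

print(classify_gesture(0.8, 0.0))   # right
print(classify_gesture(0.0, -0.7))  # down
```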
1977 The Effect of Power of Isolation Transformer on the Lamps in Airfield Ground Lighting Systems
Authors: Hossein Edrisi
Abstract:
The aim was to study the impact of the power rating of the isolation transformer on the lamps in airfield ground lighting (AGL) systems. A test was conducted at Persian Gulf International Airport. This airport, situated in the south of Iran, is one of the most cutting-edge airports and owns modern devices; it uses materials and auxiliary equipment made by the ADB Company of Belgium. AGL systems are responsible for providing visual guidance to aircraft and helicopters on the runways. In an AGL system, a great number of lamps are connected in series circuits, and each ring has its own constant current regulator (CCR), through which energy is provided to the lamps. Control of the lamps is crucial for the maintenance and operation of AGL systems. A Programmable Logic Controller (PLC), a cutting-edge technology, can connect the elements from the substations and ATC (tower). For this purpose, a test was performed under real airport conditions for all elements used in the airport, such as isolation transformers of different power capacities, at different consumed powers and lamp brightnesses. The data were analyzed with a lux meter and a multimeter. The results showed that an increase in transformer power caused a significant increase in brightness. According to Ohm's law and voltage division, without changing the characteristics of the light bulb, the voltage cannot be changed directly; instead, the transformer to which the lamp is connected must be changed. When the voltage is increased, the current through the bulb increases as well, because of Ohm's law, I = V/R: if V increases, so does I. The output voltage of the constant current regulator appears across the lamps and the transformers.
Keywords: AGL, CCR, lamps, transformer, Ohm's law
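The Ohm's-law argument can be made concrete with a short calculation: for a fixed lamp resistance, raising the voltage delivered by the isolation transformer raises both the current and the dissipated power, hence the brightness. A minimal sketch; the 30 V and 5 Ω values are illustrative assumptions, not measurements from the airport test:

```python
def lamp_current(voltage, resistance):
    """Ohm's law: I = V / R."""
    return voltage / resistance

def lamp_power(voltage, resistance):
    """Power dissipated in a resistive lamp: P = V**2 / R."""
    return voltage ** 2 / resistance

print(lamp_current(30.0, 5.0))  # 6.0 A
print(lamp_power(30.0, 5.0))    # 180.0 W
```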
1976 Innovative In-Service Training Approach to Strengthen Health Care Human Resources and Scale-Up Detection of Mycobacterium tuberculosis
Authors: Tsegahun Manyazewal, Francesco Marinucci, Getachew Belay, Abraham Tesfaye, Gonfa Ayana, Amaha Kebede, Yewondwossen Tadesse, Susan Lehman, Zelalem Temesgen
Abstract:
In-service health trainings in Sub-Saharan Africa are mostly content-centered and largely disconnected from real practice in the facility. This study evaluated an in-service training approach aimed at strengthening health care human resources. A combined web-based and face-to-face training was designed and piloted in Ethiopia around the diagnosis of tuberculosis. During the first part, which lasted 43 days, trainees accessed web-based material and read without leaving their work; the second part comprised a one-day hands-on evaluation. Trainees' competency was measured using multiple-choice questions, written assignments, exercises, and hands-on evaluation. Of 108 participants invited, 81 (75%) attended the course and 71 (88%) of them successfully completed it. Of those who completed, 73 (90%) scored a grade from A to C. The approach was effective in transferring knowledge and turning it into practical skills. In-service health training should transform from a passive one-time event into continuous behavioral change of participants and improvements in their actual work.
Keywords: Ethiopia, health care, Mycobacterium tuberculosis, training
1975 Tomato-Weed Classification by RetinaNet One-Step Neural Network
Authors: Dionisio Andujar, Juan lópez-Correa, Hugo Moreno, Angela Ri
Abstract:
The increased number of weeds in tomato crops greatly lowers yields. Weed identification using machine learning is important for site-specific control. The latest advances in computer vision are a powerful tool to face this problem. The analysis of RGB (Red, Green, Blue) images through artificial neural networks has developed rapidly in the past few years, providing new methods for weed classification. The development of algorithms for crop and weed species classification aims at a real-time classification system using object detection algorithms based on convolutional neural networks. The study site was located in commercial corn fields. The classification system has been tested, and the procedure can detect and classify weed seedlings in tomato fields. The input to the neural network was a set of 10,000 RGB images with a natural infestation of Cyperus rotundus L., Echinochloa crus-galli L., Setaria italica L., Portulaca oleracea L., and Solanum nigrum L. The validation process was done with a random selection of RGB images containing the aforementioned species. The mean average precision (mAP) was established as the metric for object detection. The results showed agreement higher than 95%. The system will provide the input for an online spraying system. Thus, this work plays an important role in site-specific weed management by reducing herbicide use in a single step.
Keywords: deep learning, object detection, CNN, tomato, weeds
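The mAP metric for object detection rests on intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal sketch of the generic IoU computation, not the paper's evaluation code; the (x1, y1, x2, y2) box format is an assumption:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2), the overlap measure underlying mAP."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7, about 0.143
```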
1974 Rapid, Label-Free, Direct Detection and Quantification of Escherichia coli Bacteria Using Nonlinear Acoustic Aptasensor
Authors: Shilpa Khobragade, Carlos Da Silva Granja, Niklas Sandström, Igor Efimov, Victor P. Ostanin, Wouter van der Wijngaart, David Klenerman, Sourav K. Ghosh
Abstract:
Rapid, label-free and direct detection of pathogenic bacteria is critical for the prevention of disease outbreaks. This paper for the first time probes the nonlinear acoustic response of a quartz crystal resonator (QCR) functionalized with specific DNA aptamers for direct detection and quantification of viable E. coli KCTC 2571 bacteria. DNA aptamers were immobilized, through biotin-streptavidin conjugation, onto the gold surface of the QCR to capture the target bacteria, and detection was accomplished by the shift in amplitude of the peak 3f signal (3 times the drive frequency) upon binding, when the resonator was driven near its fundamental resonance frequency. The developed nonlinear acoustic aptasensor system demonstrated better reliability than the conventional resonance frequency shift and energy dissipation monitoring that were recorded simultaneously. This sensing system could directly detect 10⁵ cells/mL of target bacteria within 30 min or less and had high specificity towards E. coli KCTC 2571 as compared to the same concentration of S. typhi bacteria. The aptasensor response was observed for bacterial suspensions ranging from 10⁵-10⁸ cells/mL. Conclusively, this nonlinear acoustic aptasensor is simple to use, gives real-time output, is cost-effective, and has the potential for rapid, specific, label-free direct detection of bacteria.
Keywords: acoustic, aptasensor, detection, nonlinear
1973 Development of an Auxetic Tissue Implant
Authors: Sukhwinder K. Bhullar, M. B. G. Jun
Abstract:
Developments in the biomedical industry have demanded biocompatible, high-performance materials to meet higher engineering specifications. The general requirements of such materials are to provide a combination of high stiffness and strength with significant weight savings, resistance to corrosion, chemical resistance, low maintenance, and reduced costs. Auxetic materials, which come under the category of smart materials, offer huge potential through measured enhancements in mechanical properties. Their unique deformation mechanism, providing cushioning on indentation, automatically adjusting their strength and thickness in response to forces, and having a memory that returns them to the neutral state once stresses dissipate, makes them good candidates in the biomedical industry. As simple extension and compression of tissues is of fundamental importance in biomechanics, this paper targets the elastic behaviour of an auxetic soft tissue implant and studies its development and characterization. This represents a real-life configuration, since soft tissues such as the meniscus in knee replacement, ligaments, and tendons are often taken as transversely isotropic. Further, a composition of alternating polydisperse blocks of soft and stiff segments, combined with excellent biocompatibility, makes polyurethanes one of the most promising synthetic biomaterials. Hence, an auxetic polyurethane foam was selected, and its functional characterization was performed and compared with conventional polyurethane foam.
Keywords: auxetic materials, deformation mechanism, enhanced mechanical properties, soft tissues
1972 Progress in Combining Image Captioning and Visual Question Answering Tasks
Authors: Prathiksha Kamath, Pratibha Jamkhandi, Prateek Ghanti, Priyanshu Gupta, M. Lakshmi Neelima
Abstract:
Combining image captioning and visual question answering (VQA) has emerged as a new and exciting research area. The image captioning task involves generating a textual description that summarizes the content of an image, while VQA aims to answer a natural-language question about the image. Both tasks span computer vision and natural language processing (NLP), requiring a deep understanding of the image content and the semantic relationships within it, along with the ability to generate a response in natural language. Both have grown remarkably with the rapid advancement of deep learning. In this paper, we present a comprehensive review of recent progress in combining the image captioning and VQA tasks. We first discuss each task individually, and then the various ways in which the two can be integrated. We also analyze the challenges associated with these tasks and ways to overcome them, and finally discuss the datasets and evaluation metrics used. The paper concludes with the need for captions that are generated in context and that can answer the questions most likely to be asked about an image, so as to aid the VQA task. Overall, this review highlights the significant progress made in combining image captioning and VQA, as well as the ongoing challenges and opportunities for further research in this exciting and rapidly evolving field, which has the potential to improve real-world applications such as autonomous vehicles, robotics, and image search.
Keywords: image captioning, visual question answering, deep learning, natural language processing
Procedia PDF Downloads 73
1971 Refined Edge Detection Network
Authors: Omar Elharrouss, Youssef Hmamouche, Assia Kamal Idrissi, Btissam El Khamlichi, Amal El Fallah-Seghrouchni
Abstract:
Edge detection is one of the most challenging tasks in computer vision, owing to the complexity of detecting edges or boundaries in real-world images that contain objects of different types and scales, such as trees and buildings, against varied backgrounds. It is also a key task for many computer vision applications. Using a set of backbones together with attention modules, deep-learning-based methods have improved edge detection compared with traditional methods such as Sobel and Canny. However, images of complex scenes still challenge these methods, and the edges detected by existing approaches suffer from unrefined results, with the output images containing many erroneous edges. To overcome this, we propose in this paper a refined edge detection network (RED-Net) based on the mechanism of residual learning. By maintaining the high resolution of edges during training, and conserving the resolution of the edge image through each network stage, we connect the pooling outputs at each stage with the output of the previous layer. In addition, after each layer we apply an affine batch-normalization layer as an erosion operation on homogeneous regions of the image. The proposed method is evaluated on the most challenging datasets, including BSDS500, NYUD, and Multicue. The obtained results outperform existing edge detection networks in terms of performance metrics and the quality of the output images.
Keywords: edge detection, convolutional neural networks, deep learning, scale-representation, backbone
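For reference, the traditional Sobel baseline mentioned above can be sketched in a few lines; the toy step-edge image is illustrative, not data from the paper:

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude via the classic 3x3 Sobel kernels,
    the traditional baseline that learned detectors are compared against."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h, w))
    padded = np.pad(img.astype(float), 1, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            gx = np.sum(patch * kx)   # horizontal gradient
            gy = np.sum(patch * ky)   # vertical gradient
            out[i, j] = np.hypot(gx, gy)
    return out

# Toy image with a vertical step edge starting at column 2:
img = np.zeros((5, 5))
img[:, 2:] = 1.0
edges = sobel_edges(img)
# `edges` peaks along the step and is zero in the flat regions
```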
Procedia PDF Downloads 102
1970 Review of Theories and Applications of Genetic Programing in Sediment Yield Modeling
Authors: Adesoji Tunbosun Jaiyeola, Josiah Adeyemo
Abstract:
Sediment yield can be defined as the total sediment load that leaves a drainage basin. Knowing the quantity of sediment present in a river at a given time can lead to better management of reservoir flood capacity and consequently help control over-bank flooding. Furthermore, as sediment accumulates in a reservoir, the reservoir gradually loses its ability to store water for the purposes for which it was built. Hydrological models that forecast the quantity of sediment entering a reservoir help planners and managers of water resources systems understand the system better, in terms of both its problems and the alternative ways to address them. Applying artificial intelligence models and techniques to such real-life situations has proven an effective approach to solving complex problems. This paper makes an extensive review of the literature on the theories and applications of evolutionary algorithms, especially genetic programming (GP). Successful applications of genetic programming as a soft computing technique are reviewed in sediment modelling and other branches of knowledge. Fundamental issues such as benchmarks, generalization ability, bloat, over-fitting, and other open questions relating to the working principles of GP, which need to be addressed by the GP community, are also highlighted. This review aims to give GP theoreticians, researchers, and the wider GP community clear research directions and a valuable guide, and to keep all stakeholders abreast of the issues that need attention during the next decade for the advancement of GP.
Keywords: benchmark, bloat, generalization, genetic programming, over-fitting, sediment yield
Procedia PDF Downloads 446
1969 A Robust Optimization of Chassis Durability/Comfort Compromise Using Chebyshev Polynomial Chaos Expansion Method
Authors: Hanwei Gao, Louis Jezequel, Eric Cabrol, Bernard Vitry
Abstract:
The chassis system is composed of complex elements that take up all the loads from the tire-ground contact area, and it therefore plays an important role in numerous specifications such as durability, comfort, and crash. During the development of new vehicle projects at Renault, durability validation is always the main focus, while work on comfort comes later in the project; design choices therefore sometimes have to be reconsidered because of the natural incompatibility between these two specifications. Robustness is also an important concern, as it relates to manufacturing costs as well as to performance after the ageing of components such as shock absorbers. In this paper, an approach is proposed that realizes a multi-objective optimization between chassis endurance and comfort while taking random factors into consideration. The adaptive-sparse polynomial chaos expansion (PCE) method with Chebyshev polynomial series is applied to predict the uncertainty intervals of a system's responses from its uncertain-but-bounded parameters. The approach can be divided into three steps. First, an initial design of experiments is carried out to build the response surfaces that statistically represent the black-box system. Second, over several iterations an optimum set is proposed and validated, forming a Pareto front; at the same time, the robustness of each response, serving as an additional objective, is calculated from the predefined parameter intervals and the response surfaces obtained in the first step. Finally, an inverse strategy determines the combination of parameter tolerances giving a maximally acceptable degradation of the responses in terms of manufacturing costs. A quarter-car model is tested as an example, applying road excitations from actual road measurements for both the endurance and comfort calculations. One indicator, based on Basquin's law, is defined to compare the global chassis durability of different parameter settings; another, related to comfort, is obtained from the vertical acceleration of the sprung mass. An optimum set with the best robustness is finally obtained, and reference tests confirm the good robustness prediction of the Chebyshev PCE method. This example demonstrates the effectiveness and reliability of the approach, in particular its ability to save computational cost for a complex system.
Keywords: chassis durability, Chebyshev polynomials, multi-objective optimization, polynomial chaos expansion, ride comfort, robust design
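The surrogate-modelling step, fitting a Chebyshev response surface to a black-box system and then sweeping it over a bounded parameter range to get a response uncertainty interval, can be sketched as follows. The response function, bounds, node count, and polynomial degree here are illustrative assumptions, not values from the study:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Black-box "response" standing in for a chassis simulation output
# (illustrative only; the real response comes from a vehicle model):
def response(x):
    return np.exp(-0.5 * x) * np.cos(3 * x)

lo, hi = -1.0, 1.0                                  # uncertain-but-bounded parameter
nodes = np.cos(np.pi * (np.arange(16) + 0.5) / 16)  # Chebyshev nodes on [-1, 1]
x = 0.5 * (hi - lo) * nodes + 0.5 * (hi + lo)       # mapped to [lo, hi]
coeffs = C.chebfit(x, response(x), deg=10)          # degree-10 Chebyshev surrogate

# Cheap uncertainty interval: sweep the surrogate over the bounded range
# instead of re-running the expensive simulation.
grid = np.linspace(lo, hi, 1001)
approx = C.chebval(grid, coeffs)
print(approx.min(), approx.max())  # response bounds predicted by the surrogate
```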
Procedia PDF Downloads 152
1968 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources
Authors: Mustafa Alhamdi
Abstract:
The industrial application of classifying gamma-ray and neutron events is investigated in this study using deep machine learning. Identification with convolutional and recursive neural networks has shown significant improvements in prediction accuracy across a variety of applications. The ability to identify the isotope type and activity from spectral information depends on the feature extraction method, followed by classification. Features extracted from the spectrum profiles capture patterns and relationships that represent the actual spectrum energy in a low-dimensional space, and increasing the separation between classes in feature space improves the achievable classification accuracy. Feature extraction by neural networks is nonlinear, involving a variety of transformations and mathematical optimizations, whereas principal component analysis relies on linear transformations to extract features and thereby improve classification accuracy. In this paper, the isotope spectrum information is preprocessed by computing its frequency components over time and using them as the training dataset. The Fourier transform used to extract the frequency components is optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal were simulated using Geant4, and the readout electronic noise was simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, improved the classification accuracy of the neural networks, and discriminating gamma and neutron events in a single-prediction approach achieved high accuracy. The findings show that classification accuracy can be improved by applying a spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes. Tuning the deep learning models by hyperparameter optimization enhanced the separation in the latent space and made it possible to extend the number of detectable isotopes in the training database; ensemble learning contributed significantly to the final prediction.
Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification
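The spectrogram preprocessing step, a windowed Fourier transform over time, can be sketched as follows. The Hann window, frame sizes, and the toy two-tone signal are illustrative assumptions, not the study's actual detector traces:

```python
import numpy as np

def spectrogram(signal, frame_len=64, hop=32):
    """Magnitude spectrogram: frequency content versus time, computed by a
    windowed short-time Fourier transform. The Hann window suppresses
    spectral leakage (the 'suitable windowing function' step above)."""
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)  # shape: (n_frames, frame_len // 2 + 1)

# Toy trace whose dominant frequency changes halfway, standing in for a
# simulated detector readout (illustrative only):
fs = 1024.0
t = np.arange(1024) / fs
sig = np.where(t < 0.5, np.sin(2 * np.pi * 64 * t), np.sin(2 * np.pi * 200 * t))
spec = spectrogram(sig)
# Each row of `spec` is one time slice; the rows form the training features.
```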
Procedia PDF Downloads 150
1967 A Mathematical Agent-Based Model to Examine Two Patterns of Language Change
Authors: Gareth Baxter
Abstract:
We use a mathematical model of language change to examine two recently observed patterns of change: one in which most speakers change gradually, following the mean of the community's change, and one in which most individuals use predominantly one variant or the other and change rapidly if they change at all. The model is based on Croft's Utterance Selection account of language change, which views language change as an evolutionary process in which different variants (different 'ways of saying the same thing') compete for usage in a population of speakers. Language change occurs when a new variant replaces an older one as the convention within a given population. The present model extends a previous, simpler model to include effects related to speaker ageing and inter-speaker variation in behaviour. The two patterns of individual change (one more centralized, the other more polarized) were recently observed in historical language changes, where it was further observed that slower changes were more associated with the centralized pattern, while quicker changes were more polarized. Our model suggests that the two patterns can be explained by different balances between speakers' preference for one variant over another and their degree of accommodation to (propensity to adapt towards) other speakers. The correlation with the rate of change emerges naturally in our model, because both the differential weighting of variants and the degree of accommodation affect the time a change takes to occur, while also determining its pattern. This work is part of an ongoing effort to examine phenomena in language change through mathematical models, which offer a way to evaluate qualitative explanations that cannot practically be tested (or cannot be tested at all) in a real-world, large-scale speech community.
Keywords: agent-based modeling, cultural evolution, language change, social behavior modeling, social influence
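A minimal toy simulation in the spirit of utterance selection (not the paper's actual model) shows how a differential weighting of variants combined with accommodation drives a community toward a new variant; all parameter values are assumptions for illustration:

```python
import random

def simulate(n_speakers=20, steps=4000, bias=1.2, accommodation=0.1, seed=0):
    """Toy variant-competition sketch: each speaker keeps a usage weight
    x in [0, 1] for the incoming variant. In each interaction the hearer
    shifts toward the speaker's perceived usage, with the new variant
    weighted by `bias` (> 1 favours it, i.e. differential weighting)."""
    random.seed(seed)
    x = [0.5] * n_speakers
    for _ in range(steps):
        s, h = random.sample(range(n_speakers), 2)
        perceived = min(1.0, bias * x[s])           # differential weighting
        x[h] += accommodation * (perceived - x[h])  # accommodation to speaker
    return x

final = simulate()
mean_usage = sum(final) / len(final)
# With bias > 1, the community converges on the new variant (mean -> 1);
# smaller bias or accommodation slows the change, as in the paper's account.
```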
Procedia PDF Downloads 235
1966 Spatial Object-Oriented Template Matching Algorithm Using Normalized Cross-Correlation Criterion for Tracking Aerial Image Scene
Authors: Jigg Pelayo, Ricardo Villar
Abstract:
Building on the development of aerial laser scanning in the Philippine geospatial industry, research on remote sensing and machine vision technology has become a trend. Object detection via template matching is one such application, characterized as fast and real-time. This paper provides a robust pattern matching algorithm based on the normalized cross-correlation (NCC) criterion function within object-based image analysis (OBIA), utilizing high-resolution aerial imagery and low-density LiDAR data. The height information from laser scanning provides an effective partitioning order, improving the hierarchical class-feature pattern and allowing unnecessary calculations to be skipped. Since detection is executed on an object-oriented platform, mathematical morphology and multi-level filter algorithms were established to counteract the influence of noise, small distortions, and the fluctuating image saturation that affect the feature recognition rate. Furthermore, the scheme is evaluated in different situations to characterize its performance and to inspect the computational complexity of the algorithms. Its effectiveness is demonstrated in areas of Misamis Oriental province, achieving an overall accuracy above 91%. The results also portray the potential and efficiency of the implemented algorithm under different lighting conditions.
Keywords: algorithm, LiDAR, object recognition, OBIA
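The NCC criterion function itself can be sketched as below; the toy image and template are illustrative, not the paper's aerial data:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation of an image patch with a template:
    1.0 for a perfect match, invariant to brightness and contrast shifts."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom else 0.0

def match(image, template):
    """Exhaustive NCC search; returns the best score and top-left position."""
    ih, iw = image.shape
    th, tw = template.shape
    best = (-2.0, (0, 0))
    for i in range(ih - th + 1):
        for j in range(iw - tw + 1):
            score = ncc(image[i:i + th, j:j + tw], template)
            best = max(best, (score, (i, j)))
    return best

# Toy scene: the template planted at (3, 5) with a brightness offset,
# which NCC ignores by design:
rng = np.random.default_rng(1)
template = rng.random((4, 4))
image = rng.random((10, 12))
image[3:7, 5:9] = template + 0.2   # same pattern, shifted intensity
score, pos = match(image, template)
```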
Procedia PDF Downloads 245
1965 Classification of Hyperspectral Image Using Mathematical Morphological Operator-Based Distance Metric
Authors: Geetika Barman, B. S. Daya Sagar
Abstract:
In this article, we propose a pixel-wise classification of hyperspectral images using mathematical-morphology-based distance metrics called "dilation distance" and "erosion distance". The method measures the spatial distance between the spectral features of a hyperspectral image across the bands. The key concept is that the dilation distance is the maximum distance a pixel can be moved without changing its classification, whereas the erosion distance is the maximum distance a pixel can be moved before changing its classification. The spectral signature of a hyperspectral image carries unique class information and a characteristic shape for each class. This article demonstrates how easily the dilation and erosion distances can measure spatial distance compared with other approaches. This property is used to calculate the spatial distance between hyperspectral feature vectors across the bands, and a dissimilarity matrix is then constructed from both measures extracted from the feature spaces. The measured distance metric distinguishes the spectral features of the various classes and separates each class precisely, as illustrated on both toy data and real datasets. Furthermore, we investigate the role of flat versus non-flat structuring elements in capturing the spatial features of each class in the hyperspectral image. For validation, we compare the proposed approach with other existing methods and demonstrate empirically that the morphological distance-metric classification provides competitive results and outperforms some of them.
Keywords: dilation distance, erosion distance, hyperspectral image classification, mathematical morphology
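One literal reading of a dilation-based distance (a minimal sketch, not the authors' implementation) counts how many morphological dilations of a class region are needed before it covers a query pixel:

```python
import numpy as np

def dilate(mask):
    """One binary dilation by a 3x3 (8-connected) structuring element."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]; out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]; out[:, :-1] |= mask[:, 1:]
    out[1:, 1:] |= mask[:-1, :-1]; out[:-1, :-1] |= mask[1:, 1:]
    out[1:, :-1] |= mask[:-1, 1:]; out[:-1, 1:] |= mask[1:, :-1]
    return out

def dilation_distance(mask, point):
    """Number of dilations of `mask` needed before it covers `point`:
    a discrete, morphology-based distance in the spirit of the paper's
    'dilation distance' (an illustrative reading, not the authors' code)."""
    d = 0
    while not mask[point]:
        mask = dilate(mask)
        d += 1
    return d

# Toy class region (a single seed pixel) and a query pixel:
mask = np.zeros((9, 9), dtype=bool)
mask[4, 4] = True
d = dilation_distance(mask, (4, 8))
# d == 4: each 8-connected dilation grows the region by one chessboard step
```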
Procedia PDF Downloads 87
1964 Development of Automatic Laser Scanning Measurement Instrument
Authors: Chien-Hung Liu, Yu-Fen Chen
Abstract:
This study used a triangulation laser probe and a three-axis mobile platform for surface measurement, programmed the system, and applied it to real-time statistical analysis of the measured data. This structure was used to design an integrated system: the triangulation laser probe performs non-contact measurement by scattering or reflection, the captured signals are transferred to the computer through RS-232, and RS-485 controls the three-axis platform for wide-range measurement. The data captured by the laser probe are assembled into a 3D surface. The study constructed an optical measurement application in a visual programming language: the signals are first transmitted to the computer through RS-232/RS-485, then stored and displayed in the graphic interface in real time. The program analyzes the incoming messages, produces appropriate graphs and data processing, provides users with a friendly graphic interface and monitoring of the data-processing state, and flags whether the current data are normal. The major functions of the measurement system developed in this study are thickness measurement, statistical process control (SPC), surface smoothness analysis, and trend-line calculation; a results report can be generated and printed promptly. The system successfully measured different heights and surfaces, performed on-line data analysis and processing effectively, and provides a man-machine interface for operation.
Keywords: laser probe, non-contact measurement, triangulation measurement principle, statistical process control, LabVIEW
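The triangulation measurement principle behind such a probe can be sketched with similar triangles; the focal length, baseline, and spot offset below are assumed example values, not this instrument's calibration:

```python
def triangulation_range(focal_mm, baseline_mm, offset_mm):
    """Simple laser-triangulation geometry (laser beam parallel to the
    camera axis): the imaged spot shifts by `offset_mm` on the sensor,
    and range follows from similar triangles: z = f * b / offset.
    All values are illustrative, not the instrument's calibration."""
    return focal_mm * baseline_mm / offset_mm

# A 16 mm lens, 50 mm probe baseline, spot displaced 2 mm on the sensor:
z = triangulation_range(focal_mm=16.0, baseline_mm=50.0, offset_mm=2.0)
print(z)  # 400.0 mm; nearer surfaces push the spot further across the sensor
```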
Procedia PDF Downloads 360
1963 Analysis of Accurate Direct-Estimation of the Maximum Power Point and Thermal Characteristics of High Concentration Photovoltaic Modules
Authors: Yan-Wen Wang, Chu-Yang Chou, Jen-Cheng Wang, Min-Sheng Liao, Hsuan-Hsiang Hsu, Cheng-Ying Chou, Chen-Kang Huang, Kun-Chang Kuo, Joe-Air Jiang
Abstract:
Performance-related parameters of high-concentration photovoltaic (HCPV) modules (e.g., current and voltage) are required when estimating the maximum power point (MPP) using numerical and approximation methods. The maximum power point on a photovoltaic module's characteristic curve shifts as temperature or solar radiation changes, and the special characteristics of HCPV modules make their output performance and MPP difficult to estimate. Based on p-n junction semiconductor theory, a new and simple method is presented in this study to evaluate the MPP of HCPV modules directly. The MPP can be determined from an irradiated I-V characteristic curve, because there is a nonlinear relationship between the temperature of a solar cell and solar radiation. Numerical simulations and field tests are conducted to examine the characteristics of HCPV modules during maximum-power tracking. The performance of the presented method is evaluated by examining how temperature and irradiation intensity affect the MPP characteristics of HCPV modules. The results show that the method allows HCPV modules to achieve their maximum power and perform power tracking under various operating conditions, with a 0.1% error between the estimated and the real maximum power point.
Keywords: energy performance, high concentrated photovoltaic, maximum power point, p-n junction semiconductor
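A minimal sketch of locating the MPP on an irradiated I-V curve, using the textbook single-diode p-n junction model; all device parameters are illustrative assumptions, not measured HCPV values:

```python
import numpy as np

def iv_current(v, i_ph=5.0, i_0=1e-9, n=1.5, t_k=298.15):
    """Single-diode p-n junction model: I = Iph - I0 * (exp(V / (n*kT/q)) - 1).
    Photocurrent, saturation current, and ideality factor are illustrative."""
    k, q = 1.380649e-23, 1.602176634e-19
    vt = n * k * t_k / q                      # thermal voltage scaled by n
    return i_ph - i_0 * (np.exp(v / vt) - 1.0)

# Sweep the I-V curve and take the point of maximum power P = V * I:
v = np.linspace(0.0, 0.8, 2001)
i = np.clip(iv_current(v), 0.0, None)
p = v * i
k_mpp = int(np.argmax(p))
v_mpp, p_mpp = float(v[k_mpp]), float(p[k_mpp])
# (v_mpp, p_mpp) is the maximum power point of this synthetic curve
```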
Procedia PDF Downloads 584
1962 Rapid and Efficient Removal of Lead from Water Using Chitosan/Magnetite Nanoparticles
Authors: Othman M. Hakami, Abdul Jabbar Al-Rajab
Abstract:
The occurrence of heavy metals in water resources has increased in recent years, albeit at low concentrations. Lead, Pb(II), is among the most important inorganic pollutants in ground and surface water, and removing this toxic metal efficiently from water is of public and scientific concern. In this study, we developed a rapid and efficient method for removing lead from water using chitosan/magnetite nanoparticles. A simple and effective process was used to prepare the chitosan/magnetite nanoparticles (CS/Mag NPs) with a high saturation magnetization value: the particles were strongly responsive to an external magnetic field, making separation from solution possible in less than 2 minutes with a permanent magnet, while the total Fe in solution remained below the ICP-OES detection limit (<0.19 mg L-1). The hydrodynamic particle size distribution increased from an average diameter of ~60 nm for the Fe3O4 NPs to ~75 nm after chitosan coating. The prepared NPs showed a high removal efficiency for Pb(II) uptake, with 90% of the Pb(II) removed during the first 5 minutes and equilibrium reached in less than 10 minutes. The maximum adsorption capacity for Pb(II), obtained at pH 6.0 and room temperature, was as high as 85.5 mg g-1 according to the Langmuir isotherm model. Desorption of the adsorbed Pb from the CS/Mag NPs was evaluated using deionized water at pH values from 1 to 7; this proved an effective eluent that did not destroy the NPs, which could subsequently be reused without any loss of activity in further adsorption tests. Overall, our results show the high efficiency of chitosan/magnetite nanoparticles for lead removal from water under controlled conditions; further studies should be carried out under real field conditions.
Keywords: chitosan, magnetite, water, treatment
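The Langmuir isotherm used to fit the adsorption data can be evaluated as below; q_max is the paper's fitted 85.5 mg/g capacity, while the affinity constant K is an assumed value for illustration:

```python
def langmuir_q(c_eq, q_max=85.5, k_l=0.5):
    """Langmuir isotherm: q = q_max * K * C / (1 + K * C), where q is the
    adsorbed amount (mg/g) at equilibrium concentration C. q_max is the
    paper's fitted monolayer capacity; K (L/mg) is an assumed value."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

# Adsorbed Pb(II) per gram of NPs at increasing equilibrium concentrations:
qs = [langmuir_q(c) for c in (1.0, 10.0, 100.0)]
# uptake rises monotonically and saturates toward the 85.5 mg/g capacity
```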
Procedia PDF Downloads 404
1961 Investigating Dynamic Transition Process of Issues Using Unstructured Text Analysis
Authors: Myungsu Lim, William Xiu Shun Wong, Yoonjin Hyun, Chen Liu, Seongi Choi, Dasom Kim, Namgyu Kim
Abstract:
The amount of real-time data generated through various mass media has been increasing rapidly. In this study, we performed topic analysis on unstructured text data distributed through news articles. As one of the most prevalent applications of topic analysis, issue tracking investigates changes in the social issues identified through topic analysis. Traditional issue tracking identifies the main topics of documents covering an entire period at once and then analyzes the occurrence of each topic period by period. This approach, however, has the limitation that it cannot discover the dynamic mutation process of complex social issues. The purpose of this study is to overcome that limitation: we first derive the core issues of each period and then uncover the dynamic mutation process linking them. We further analyze the mutation process from the perspective of issue categories, in order to identify patterns of issue flow, including their frequency and reliability. In other words, this study makes it possible to understand the components of complex issues by tracking their dynamic history. The methodology can facilitate a clearer understanding of complex social phenomena by providing their mutation history and related category information.
Keywords: data mining, issue tracking, text mining, topic analysis, topic detection, trend detection
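The per-period extraction of core issues can be sketched minimally as below; this is a frequency-count stand-in for the topic-analysis step (a real system would use a topic model such as LDA), and the two-period corpus is invented for illustration:

```python
from collections import Counter

STOPWORDS = {"the", "a", "of", "in", "to", "and", "is", "on"}

def core_issues(docs, top_n=2):
    """Top content words for one period's documents: a bare-bones stand-in
    for deriving that period's core issues via topic analysis."""
    words = Counter(
        w for d in docs for w in d.lower().split() if w not in STOPWORDS
    )
    return [w for w, _ in words.most_common(top_n)]

# Tiny corpus split into two periods: extracting issues per period
# (rather than one global topic set) exposes the issue transition.
period_1 = ["election debate coverage", "election polls and debate"]
period_2 = ["economy inflation fears", "inflation and economy policy"]
timeline = [core_issues(p) for p in (period_1, period_2)]
# timeline[0] centres on the election issue, timeline[1] on inflation
```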
Procedia PDF Downloads 408
1960 Effects of the In-Situ Upgrading Project in Afghanistan: A Case Study on the Formally and Informally Developed Areas in Kabul
Authors: Maisam Rafiee, Chikashi Deguchi, Akio Odake, Minoru Matsui, Takanori Sata
Abstract:
Cities in Afghanistan have urbanized rapidly; however, many parts of these cities have developed with no detailed land use plan or infrastructure. In other words, they have developed informally, without government leadership. In 2002, the new government started the In-situ Upgrading Project in Kabul to upgrade roads, the water supply network, and the surface water drainage system on the existing street layout, with the financial support of international agencies. This project is an appropriate emergency improvement for daily life, but not a fundamental improvement of living conditions and infrastructure, because the life expectancies of the improved facilities are as short as 10-15 years and residents cannot obtain land tenure in the unplanned areas. The Land Readjustment System (LRS) practised in Japan has the advantage of rearranging irregularly shaped land lots while developing infrastructure effectively. This study investigates the effects of the In-situ Upgrading Project on private investment, land prices, and residents' satisfaction in Kart-e-Char, where properties are registered, and in Afshar-e-Silo Lot 1, where properties are unregistered; the two areas are located 5 km and 7 km, respectively, from Kabul's CBD. The study discusses whether LRS should be applied to the unplanned area, based on questionnaire and interview responses from experts who worked on the In-situ Upgrading Project and have knowledge of LRS. The analysis reveals that, in Kart-e-Char, substantial private investment has gone into the construction of medium-rise (five- to nine-storey) buildings for commercial and residential purposes. Land values have also increased steadily since the project, and residents are generally satisfied with the road pavement, drainage systems, and water supply, but dissatisfied with the poor delivery of electricity and the lack of public facilities (e.g., parks and sports facilities). In Afshar-e-Silo Lot 1, basic infrastructure such as paved roads and surface water drainage has improved through the project. After the project, a few four- and five-storey residential buildings were built with very low levels of private investment, but no significant increase in land prices was evident. Residents are satisfied with the contribution ratio, the drainage system, and the small increase in land prices, but there is still no drinking water supply or tenure security; moreover, the paved roads are substandard, and public facilities such as parks, sports facilities, mosques, and schools are lacking. The questionnaire results and the interviews with the four engineers highlight the problems that remain to be solved in the unplanned areas if LRS is applied, namely the differences in land use, the types and conditions of the infrastructure still to be installed, and the time needed to build positive consensus among residents given the project's budget limitations.
Keywords: in-situ upgrading, Kabul city, land readjustment, land value, planned area, private investment, residents' satisfaction, unplanned area
Procedia PDF Downloads 204
1959 Highly Glazed Office Spaces: Simulated Visual Comfort vs Real User Experiences
Authors: Zahra Hamedani, Ebrahim Solgi, Henry Skates, Gillian Isoardi
Abstract:
Daylighting plays a pivotal role in promoting productivity and user satisfaction in office spaces. There is an ongoing trend towards designing office buildings with a high proportion of glazing, which correspondingly increases the risk of visual discomfort. More realistic lighting analysis is of high value at the early stages of building design, when necessary changes can be made at very low cost; a holistic approach incorporates subjective evaluation and user behaviour into computer simulation to provide a comprehensive lighting analysis. In this research, a detailed computer simulation model was built using Radiance and Daysim and then validated against measurements and user feedback. The case study building is the School of Science at Griffith University, Gold Coast, Queensland, which features highly glazed office spaces. In this paper, the visual comfort predicted by the model is compared with a preliminary survey of the building's users to evaluate how user behaviour, such as desk position, orientation selection, and movement prompted by daylight changes and other visual variations, informs perceptions of visual comfort. This work supports preliminary design analysis of visual comfort that incorporates the effects of gaze-shift patterns and views, with the goal of designing effective layouts for office spaces.
Keywords: lighting simulation, office buildings, user behaviour, validation, visual comfort
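One common way to quantify the visual-comfort side of such an analysis is a useful-daylight-band metric in the spirit of useful daylight illuminance (UDI); the 100-2000 lux thresholds are the usual convention and the hourly values below are illustrative, not figures from this study:

```python
def useful_daylight_fraction(illuminance_series, low=100.0, high=2000.0):
    """Fraction of occupied hours a sensor point stays inside a 'useful'
    illuminance band (the UDI idea): below `low` lux is too dark, above
    `high` lux risks glare and visual discomfort. Thresholds follow the
    common 100-2000 lux convention, not values from this study."""
    n_ok = sum(1 for lux in illuminance_series if low <= lux <= high)
    return n_ok / len(illuminance_series)

# Hourly illuminance (lux) simulated for one desk position (illustrative):
hours = [50, 150, 800, 1200, 2600, 3000, 900, 400]
frac = useful_daylight_fraction(hours)
print(frac)  # 0.625: 5 of 8 hours fall inside the comfortable band
```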
Procedia PDF Downloads 213