Search results for: edge detection algorithm
1301 Approaches to Inducing Obsessional Stress in Obsessive-Compulsive Disorder (OCD): An Empirical Study with Patients Undergoing Transcranial Magnetic Stimulation (TMS) Therapy
Authors: Lucia Liu, Matthew Koziol
Abstract:
Obsessive-compulsive disorder (OCD), a long-lasting anxiety disorder involving recurrent, intrusive thoughts, affects over 2 million adults in the United States. Transcranial magnetic stimulation (TMS) stands out as a noninvasive, cutting-edge therapy that has been shown to reduce symptoms in patients with treatment-resistant OCD. The Food and Drug Administration (FDA)-approved protocol pairs TMS sessions with individualized symptom provocation, aiming to improve the susceptibility of brain circuits to stimulation. However, limited standardization or guidance exists on how to conduct symptom provocation and which methods are most effective. This study aims to compare the effect of internal versus external techniques to induce obsessional stress in a clinical setting during TMS therapy. Two symptom provocation methods, (i) asking patients thought-provoking questions about their obsessions (internal) and (ii) requesting patients to perform obsession-related tasks (external), were employed in a crossover design with repeated measurements. Thirty-six treatments of NeuroStar TMS were administered to each of two patients over 8 weeks in an outpatient clinic. Patient One received 18 sessions of internal provocation followed by 18 sessions of external provocation, while Patient Two received 18 sessions of external provocation followed by 18 sessions of internal provocation. The primary outcome was the level of self-reported obsessional stress on a visual analog scale from 1 to 10. The secondary outcome was self-reported OCD severity, collected biweekly on a four-level Likert scale (1 to 4) of bad, fair, good, and excellent. Outcomes were compared and tested between provocation arms through repeated measures ANOVA, accounting for intra-patient correlations. Ages were 42 for Patient One (male, White) and 57 for Patient Two (male, White). Both patients had similar moderate symptoms at baseline, as determined through the Yale-Brown Obsessive Compulsive Scale (YBOCS). When comparing obsessional stress induced across the two arms of internal and external provocation methods, the mean (SD) was 6.03 (1.18) for internal and 4.01 (1.28) for external strategies (P=0.0019); ranges were 3 to 8 for internal and 2 to 8 for external strategies. Internal provocation yielded 5 (31.25%) bad, 6 (33.33%) fair, 3 (18.75%) good, and 2 (12.5%) excellent responses for OCD status, while external provocation yielded 5 (31.25%) bad, 9 (56.25%) fair, 1 (6.25%) good, and 1 (6.25%) excellent responses (P=0.58). Internal symptom provocation tactics had a significantly stronger impact on inducing obsessional stress and led to better, though non-significant, OCD status. This could be attributed to the fact that answering questions may prompt patients to reflect more on their lived experiences and struggles with OCD. In the future, clinical trials with larger sample sizes are warranted to validate this finding. Results support the increased integration of internal methods into structured provocation protocols, potentially reducing the time required for provocation and achieving greater treatment response to TMS.
Keywords: obsessive-compulsive disorder, transcranial magnetic stimulation, mental health, symptom provocation
Procedia PDF Downloads 57
1300 Radio-Frequency Technologies for Sensing and Imaging
Authors: Cam Nguyen
Abstract:
Rapid, accurate, and safe sensing and imaging of physical quantities or structures finds many applications and is of significant interest to society. Sensing and imaging using radio-frequency (RF) techniques, particularly, has gone through significant development and subsequently established itself as a unique territory in the sensing world. RF sensing and imaging has played a critical role in providing us many sensing and imaging abilities beyond our human capabilities, benefiting both civilian and military applications - for example, from sensing abnormal conditions underneath some structures’ surfaces to detection and classification of concealed items, hidden activities, and buried objects. We present the developments of several sensing and imaging systems implementing RF technologies like ultra-wide band (UWB), synthetic-pulse, and interferometry. These systems are fabricated completely using RF integrated circuits. The UWB impulse system operates over multiple pulse durations from 450 to 1170 ps with 5.5-GHz RF bandwidth. It performs well through tests of various samples, demonstrating its usefulness for subsurface sensing. The synthetic-pulse system operating from 0.6 to 5.6 GHz can assess accurately subsurface structures. The synthetic-pulse system operating from 29.72-37.7 GHz demonstrates abilities for various surface and near-surface sensing such as profile mapping, liquid-level monitoring, and anti-personnel mine locating. The interferometric system operating at 35.6 GHz demonstrates its multi-functional capability for measurement of displacements and slow velocities. These RF sensors are attractive and useful for various surface and subsurface sensing applications. This paper was made possible by NPRP grant # 6-241-2-102 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
Keywords: RF sensors, radars, surface sensing, subsurface sensing
Procedia PDF Downloads 316
1299 Prediction of Damage to Cutting Tools in an Earth Pressure Balance Tunnel Boring Machine EPB TBM: A Case Study L3 Guadalajara Metro Line (Mexico)
Authors: Silvia Arrate, Waldo Salud, Eloy París
Abstract:
The wear of cutting tools is one of the most decisive elements when planning tunneling works, programming maintenance stops, and keeping the optimum stock of spare parts as the excavation progresses. Being able to predict the behavior of cutting tools can give a very competitive advantage in terms of costs and excavation performance, optimized to the needs of the TBM itself. The rapid evolution of data science in recent years makes it possible to apply it when analyzing the key and most critical parameters related to the machinery, with the purpose of knowing how the cutting head is performing against the excavated ground. Taking Metro Line 3 of Guadalajara in Mexico as a case study, the feasibility of using Specific Energy versus data science applied to parameters of torque, penetration, and contact force, among others, is developed to predict the behavior and status of the cutting tools. The results obtained through both techniques are analyzed and verified as a function of the wear and the field situations observed during the excavation in order to determine their effectiveness in terms of predictive capacity. In conclusion, the possibilities and improvements offered by the application of digital tools and the programming of calculation algorithms for the analysis of wear of cutting head elements, compared to purely empirical methods, allow early detection of possible damage to cutting tools, which is reflected in optimized excavation performance and a significant improvement in costs and deadlines.
Keywords: cutting tools, data science, prediction, TBM, wear
Procedia PDF Downloads 49
1298 The Economic Burden of Breast Cancer on Women in Nigeria: Implication for Socio-Economic Development
Authors: Tolulope Allo, Mofoluwake P. Ajayi, Adenike E. Idowu, Emmanuel O. Amoo, Fadeke Esther Olu-Owolabi
Abstract:
Breast cancer, which was more prevalent in Europe and America in the past, is gradually being mirrored across the world today, with a greater economic burden on low- and middle-income countries (LMCs). Breast cancer is the most common cancer among women globally, and current studies have shown that a woman dies with the diagnosis of breast cancer every thirteen minutes. The economic cost of breast cancer is overwhelming, particularly for developing economies. While it causes billions of dollars in losses of national income, it pushes millions of people below the poverty line. This study examined the economic burden of breast cancer on Nigerian women, its impacts on their standard of living, and its effects on Nigeria’s socio-economic development. The study adopts a qualitative research approach using the in-depth interview technique to elicit valuable information from respondents with cancer experience from the Southern part of Nigeria. Respondents were women of reproductive age (15-49 years) who had experienced and survived cancer as well as those currently receiving treatment. Excerpts from the interviews revealed that the cost of treatment is one of the major factors contributing to the late presentation of breast cancer among women, as many of them could not afford to pay for their own treatment. The study also revealed that many women prefer to explore other options, such as herbal treatments and spiritual consultations, which are less expensive and more affordable. The study therefore concludes that breast cancer diagnosis and treatment should be subsidized by the government in order to facilitate easy access and affordability, thereby promoting early detection and reducing the economic burden of treatment on women.
Keywords: breast cancer, development, economic burden, women
Procedia PDF Downloads 358
1297 Hybrid Localization Schemes for Wireless Sensor Networks
Authors: Fatima Babar, Majid I. Khan, Malik Najmus Saqib, Muhammad Tahir
Abstract:
This article provides range-based improvements over a well-known single-hop range-free localization scheme, Approximate Point in Triangulation (APIT), by proposing an energy-efficient Barycentric-coordinate-based Point-In-Triangulation (PIT) test along with PIT-based trilateration. These improvements result in energy efficiency, reduced localization error, and improved localization coverage compared to APIT and its variants. Moreover, we propose to embed Received Signal Strength Indication (RSSI)-based distance estimation in DV-Hop, which is a multi-hop localization scheme. The proposed localization algorithm achieves energy efficiency and reduced localization error compared to DV-Hop and its available improvements. Furthermore, a hybrid multi-hop localization scheme is also proposed that utilizes the Barycentric-coordinate-based PIT test and both range-based (received signal strength indicator) and range-free (hop count) techniques for distance estimation. Our experimental results provide evidence that the proposed hybrid multi-hop localization scheme results in a two- to five-fold reduction in localization error compared to DV-Hop and its variants, at reduced energy requirements.
Keywords: localization, trilateration, triangulation, wireless sensor networks
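The abstract does not give the PIT or trilateration details, but a minimal sketch of the range-based part can illustrate the idea: converting RSSI readings to distances with a log-distance path-loss model and then estimating a node position by least-squares trilateration. All values (reference power, path-loss exponent, anchor positions, readings) are illustrative assumptions, not data from the study.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, p0_dbm=-40.0, n=2.5, d0=1.0):
    """Log-distance path-loss model: distance from an RSSI reading.
    p0_dbm is the RSSI at reference distance d0, n is the path-loss
    exponent (both assumed values for illustration)."""
    return d0 * 10 ** ((p0_dbm - rssi_dbm) / (10.0 * n))

def trilaterate(anchors, distances):
    """Least-squares trilateration from >= 3 anchor positions and
    estimated distances (linearised by subtracting the last anchor)."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x_n, d_n = anchors[-1], d[-1]
    A = 2.0 * (anchors[:-1] - x_n)
    b = (d_n**2 - d[:-1]**2
         + np.sum(anchors[:-1]**2, axis=1) - np.sum(x_n**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: three anchor nodes and RSSI readings from an unknown node
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
rssi = [-62.0, -58.0, -66.0]                     # dBm, illustrative
dists = [rssi_to_distance(r) for r in rssi]
print("Estimated position:", trilaterate(anchors, dists))
```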
Procedia PDF Downloads 469
1296 Metropolis-Hastings Sampling Approach for High Dimensional Testing Methods of Autonomous Vehicles
Authors: Nacer Eddine Chelbi, Ayet Bagane, Annie Saleh, Claude Sauvageau, Denis Gingras
Abstract:
As recently stated by the National Highway Traffic Safety Administration (NHTSA), to demonstrate the expected performance of a highly automated vehicle system, test approaches should include a combination of simulation, test track, and on-road testing. In this paper, we propose a new validation method for autonomous vehicles involving on-road tests (Field Operational Tests), test track (Test Matrix) and simulation (Worst Case Scenarios). We concentrate our discussion on the simulation aspects; in particular, we extend recent work based on Importance Sampling by using a Metropolis-Hastings algorithm (MHS) to sample collected data from the Safety Pilot Model Deployment (SPMD) in lane-change scenarios. Our proposed MH sampling method will be compared to the Importance Sampling method, which does not perform well in high-dimensional problems. The importance of this study is to obtain a sampler that could be applied to high-dimensional simulation problems in order to reduce and optimize the number of test scenarios that are necessary for validation and certification of autonomous vehicles.
Keywords: automated driving, autonomous emergency braking (AEB), autonomous vehicles, certification, evaluation, importance sampling, Metropolis-Hastings sampling, tests
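As a rough illustration of the sampling step described above, the sketch below implements a generic random-walk Metropolis-Hastings sampler over two lane-change parameters. The target density, parameter names, and tuning values are placeholders standing in for the SPMD-derived distributions, which the abstract does not specify.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples=10000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings sampler.
    log_target: unnormalised log-density of the scenario parameters."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_samples, x.size))
    log_p = log_target(x)
    accepted = 0
    for i in range(n_samples):
        proposal = x + rng.normal(scale=step, size=x.size)
        log_p_new = log_target(proposal)
        if np.log(rng.uniform()) < log_p_new - log_p:   # accept / reject
            x, log_p = proposal, log_p_new
            accepted += 1
        samples[i] = x
    return samples, accepted / n_samples

# Illustrative target: joint density of (relative speed, time gap) in a
# lane-change scenario, here a simple correlated Gaussian placeholder.
def log_target(x):
    mean = np.array([2.0, 1.5])
    cov_inv = np.linalg.inv(np.array([[1.0, 0.3], [0.3, 0.5]]))
    diff = x - mean
    return -0.5 * diff @ cov_inv @ diff

samples, acc_rate = metropolis_hastings(log_target, x0=[0.0, 0.0])
print("acceptance rate:", round(acc_rate, 2))
```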
Procedia PDF Downloads 289
1295 A Bayesian Parameter Identification Method for Thermorheological Complex Materials
Authors: Michael Anton Kraus, Miriam Schuster, Geralt Siebert, Jens Schneider
Abstract:
Polymers have increasingly gained interest as construction materials in civil engineering applications over the last years. Polymeric materials typically show time- and temperature-dependent material behavior, which is accounted for in the context of the theory of linear viscoelasticity. Within the context of this paper, the authors show that some polymeric interlayers for laminated glass cannot be considered thermorheologically simple, as they do not follow a simple TTSP; thus, a methodology for identifying the thermorheologically complex constitutive behavior is needed. ‘Dynamical-Mechanical-Thermal-Analysis’ (DMTA) in tensile and shear mode as well as ‘Differential Scanning Calorimetry’ (DSC) tests are carried out on the interlayer material ‘Ethylene-vinyl acetate’ (EVA). A novel Bayesian framework for the master curving process as well as the detection and parameter identification of the TTSPs along with their associated Prony series is derived and applied to the EVA material data. To the best of our knowledge, this is the first time an uncertainty quantification of the Prony series in a Bayesian context is shown. Within this paper, we could successfully apply the derived Bayesian methodology to the EVA material data to gather meaningful master curves and TTSPs. Uncertainties occurring in this process can be well quantified. We found that EVA needs two TTSPs with two associated Generalized Maxwell Models. As the methodology is kept general, the derived framework could also be applied to other thermorheologically complex polymers for parameter identification purposes.
Keywords: Bayesian parameter identification, generalized Maxwell model, linear viscoelasticity, thermorheological complex
Procedia PDF Downloads 263
1294 A New Multi-Target, Multi-Agent Search and Rescue Path Planning Approach
Authors: Jean Berger, Nassirou Lo, Martin Noel
Abstract:
Perfectly suited for natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling agent discrete actions over all possible moving directions. Problem modeling further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and lead to substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery, efficiently computing near-optimal solutions for practical-size problems, while giving a robust upper bound obtained from Lagrangean integrality constraint relaxation. Should a target eventually be positively detected during plan execution, a new problem instance would simply be reformulated from the current state, and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.
Keywords: search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization
Procedia PDF Downloads 372
1293 Analysis of Taxonomic Compositions, Metabolic Pathways and Antibiotic Resistance Genes in Fish Gut Microbiome by Shotgun Metagenomics
Authors: Anuj Tyagi, Balwinder Singh, Naveen Kumar B. T., Niraj K. Singh
Abstract:
Characterization of diverse microbial communities in a specific environment plays a crucial role in the better understanding of their functional relationship with the ecosystem. It is now well established that the gut microbiome of fish is not a simple replication of the microbiota of the surrounding local habitat, and extensive species, dietary, physiological and metabolic variations in fishes may have a significant impact on its composition. Moreover, overuse of antibiotics in human, veterinary and aquaculture medicine has led to rapid emergence and propagation of antibiotic resistance genes (ARGs) in the aquatic environment. Microbial communities harboring specific ARGs not only get a preferential edge during selective antibiotic exposure but also pose a significant risk of ARG transfer to other non-resistant bacteria within the confined environments. This phenomenon may lead to the emergence of habitat-specific microbial resistomes and the subsequent emergence of virulent antibiotic-resistant pathogens, with severe fish and consumer health consequences. In this study, the gut microbiota of freshwater carp (Labeo rohita) was investigated by shotgun metagenomics to understand its taxonomic composition and functional capabilities. Metagenomic DNA, extracted from the fish gut, was subjected to sequencing on Illumina NextSeq to generate paired-end (PE) 2 x 150 bp sequencing reads. After QC of the raw sequencing data by Trimmomatic, taxonomic analysis by the Kraken2 taxonomic sequence classification system revealed the presence of 36 phyla, 326 families and 985 genera in the fish gut microbiome. At the phylum level, Proteobacteria accounted for more than three-fourths of the total bacterial populations, followed by Actinobacteria (14%) and Cyanobacteria (3%). Commonly used probiotic bacteria (Bacillus, Lactobacillus, Streptococcus, and Lactococcus) were found to be far less prevalent in the fish gut. After sequencing data assembly by the MEGAHIT v1.1.2 assembler and the PROKKA automated analysis pipeline, pathway analysis revealed the presence of 1,608 MetaCyc pathways in the fish gut microbiome. Biosynthesis pathways were found to be the most dominant (51%), followed by degradation (39%), energy metabolism (4%) and fermentation (2%). Almost one-third (33%) of biosynthesis pathways were involved in the synthesis of secondary metabolites. Metabolic pathways for the biosynthesis of 35 antibiotic types were also present, and these accounted for 5% of overall metabolic pathways in the fish gut microbiome. Fifty-one different types of antibiotic resistance genes (ARGs) belonging to 15 antimicrobial resistance (AMR) gene families and conferring resistance against 24 antibiotic types were detected in the fish gut. More than 90% of ARGs in the fish gut microbiome were against beta-lactams (penicillins, cephalosporins, penems, and monobactams). Resistance against tetracycline, macrolides, fluoroquinolones, and phenicols ranged from 0.7% to 1.3%. Some of the ARGs for multi-drug resistance were also found to be located on sequences of plasmid origin. The presence of pathogenic bacteria and ARGs on plasmid sequences suggested a potential risk due to horizontal gene transfer in the confined gut environment.
Keywords: antibiotic resistance, fish gut, metabolic pathways, microbial diversity
Procedia PDF Downloads 144
1292 Path Planning for Orchard Robot Using Occupancy Grid Map in 2D Environment
Authors: Satyam Raikwar, Thomas Herlitzius, Jens Fehrmann
Abstract:
In recent years, the autonomous navigation of orchard and field robots has become an emerging technology in mobile robotics for agriculture. One of the core aspects of autonomous navigation builds upon path planning, which is still a crucial issue. Generally, for simple representation, the path planning for a mobile robot is performed in a two-dimensional space, which creates a path between the start and goal point. This paper presents an automatic path planning approach for robots used in orchards and vineyards using occupancy grid maps with field consideration. Orchards and vineyards are usually structured environments, and their topology is assumed to be constant over time; therefore, in this approach, an RGB image of a field is used as a working environment. These images undergo different image processing operations and are then discretized into two-dimensional grid matrices. Each grid cell of these matrices represents the occupancy of the space, whether it is free or occupied. The grid matrix represents the robot workspace for motion and path planning. After the grid matrix is described, a probabilistic roadmap (PRM) path-planning algorithm is used to create an obstacle-free path over these occupancy grids. The path created by this method was successfully verified in the test area. Furthermore, this approach is used in the navigation of the orchard robot.
Keywords: orchard robots, automatic path planning, occupancy grid, probabilistic roadmap
Procedia PDF Downloads 156
1291 A Data-Mining Model for Protection of FACTS-Based Transmission Line
Authors: Ashok Kalagura
Abstract:
This paper presents a data-mining model for fault-zone identification of flexible AC transmission systems (FACTS)-based transmission lines, including a thyristor-controlled series compensator (TCSC) and unified power-flow controller (UPFC), using ensemble decision trees. Given the randomness in the ensemble of decision trees stacked inside the random forests model, it provides an effective decision on fault-zone identification. Half-cycle post-fault current and voltage samples from the fault inception are used as an input vector against a target output of ‘1’ for a fault after the TCSC/UPFC and ‘0’ for a fault before the TCSC/UPFC for fault-zone identification. The algorithm is tested on simulated fault data with wide variations in operating parameters of the power system network, including a noisy environment, providing a reliability measure of 99% with faster response time (3/4th of a cycle from fault inception). The results of the presented approach using the RF model indicate reliable identification of the fault zone in FACTS-based transmission lines.
Keywords: distance relaying, fault-zone identification, random forests, RFs, support vector machine, SVM, thyristor-controlled series compensator, TCSC, unified power-flow controller, UPFC
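A minimal sketch of the classification step, using scikit-learn's RandomForestClassifier on placeholder data in place of the simulated fault records described above; the feature layout and labels (1 = fault beyond the TCSC/UPFC, 0 = fault before it) follow the abstract, but the data here are synthetic and purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data: each row holds half-cycle post-fault current and
# voltage samples (flattened per-phase waveforms); the label is 1 for a
# fault beyond the TCSC/UPFC and 0 for a fault before it.
rng = np.random.default_rng(42)
n_cases, n_features = 2000, 60
X = rng.normal(size=(n_cases, n_features))
y = rng.integers(0, 2, size=n_cases)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("fault-zone accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```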
Procedia PDF Downloads 423
1290 An Adaptive Distributed Incremental Association Rule Mining System
Authors: Adewale O. Ogunde, Olusegun Folorunso, Adesina S. Sodiya
Abstract:
Most existing Distributed Association Rule Mining (DARM) systems are still facing several challenges. One such challenge that has not received the attention of many researchers is the inability of existing systems to adapt to constantly changing databases and mining environments. In this work, an Adaptive Incremental Mining Algorithm (AIMA) is therefore proposed to address these problems. AIMA employed multiple mobile agents for the entire mining process. AIMA was designed to adapt to changes in the distributed databases by mining only the incremental database updates and using this to update the existing rules in order to improve the overall response time of the DARM system. In AIMA, global association rules were integrated incrementally from one data site to another through Results Integration Coordinating Agents. The mining agents in AIMA were made adaptive by defining mining goals with reasoning and behavioral capabilities and protocols that enabled them to either maintain or change their goals. AIMA employed the Java Agent Development Environment Extension for designing the internal agents’ architecture. Results from experiments conducted on real datasets showed that the adaptive system, AIMA, performed better than the non-adaptive systems, with lower communication costs and higher task completion rates.
Keywords: adaptivity, data mining, distributed association rule mining, incremental mining, mobile agents
Procedia PDF Downloads 393
1289 Design of a Portable Shielding System for a Newly Installed NaI(Tl) Detector
Authors: Mayesha Tahsin, A.S. Mollah
Abstract:
Recently, a 1.5 x 1.5 inch NaI(Tl) detector-based gamma-ray spectroscopy system has been installed in the laboratory of the Nuclear Science and Engineering Department of the Military Institute of Science and Technology for radioactivity detection purposes. The newly installed NaI(Tl) detector has a circular lead shield of 22 mm width. An important consideration of any gamma-ray spectroscopy is the minimization of natural background radiation not originating from the radioactive sample that is being measured. Natural background gamma-ray radiation comes from naturally occurring or man-made radionuclides in the environment or from cosmic sources. Moreover, the main problem with this system is that it is not suitable for measurements of radioactivity with a large sample container like a Petri dish or Marinelli beaker geometry. When any laboratory installs a new detector and/or new shield, it “must” first carry out quality and performance tests for the detector and shield. This paper describes a new portable shielding system with lead that can reduce the background radiation. The intensity of gamma radiation after passing the shielding will be calculated using the shielding equation I = I₀e^(−µx), where I₀ is the initial intensity of the gamma source, I is the intensity after passing through the shield, µ is the linear attenuation coefficient of the shielding material, and x is the thickness of the shielding material. The height and width of the shielding will be selected in order to accommodate the large sample container. The detector will be surrounded by a 4π-geometry, low-activity lead shield. An additional 1.5 mm thick shield of tin and a 1 mm thick shield of copper covering the inner part of the lead shielding will be added in order to remove the presence of characteristic X-rays from the lead shield.
Keywords: shield, NaI(Tl) detector, gamma radiation, intensity, linear attenuation coefficient
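The attenuation calculation described above is straightforward to script. The sketch below applies I = I₀e^(−µx) layer by layer for the lead, tin, and copper shields; the attenuation coefficients are rough literature-style values assumed for illustration at a single photon energy, not measured values for this particular shield.

```python
import math

def transmitted_intensity(i0, layers):
    """Exponential attenuation I = I0 * exp(-sum(mu_i * x_i)) through a
    stack of shield layers; layers is a list of (mu_per_cm, thickness_cm)."""
    exponent = sum(mu * x for mu, x in layers)
    return i0 * math.exp(-exponent)

# Illustrative linear attenuation coefficients around 662 keV (assumed
# values for the sketch, not measured data for this shield):
layers = [
    (1.23, 2.2),    # lead, 22 mm
    (0.58, 0.15),   # tin, 1.5 mm
    (0.66, 0.10),   # copper, 1 mm
]
i0 = 1000.0  # arbitrary incident count rate
i = transmitted_intensity(i0, layers)
print(f"transmitted fraction: {i / i0:.4f}")
```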
Procedia PDF Downloads 159
1288 CFD Analysis of an Aft Sweep Wing in Subsonic Flow and Making Analogy with Roskam Methods
Authors: Ehsan Sakhaei, Ali Taherabadi
Abstract:
In this study, an aft sweep wing with specific characteristic features was analyzed with the CFD method in Fluent software. In this analysis, the wing's aerodynamic coefficients were calculated at different rake angles, and the wing lift curve slope versus rake angle was obtained. The wing section was selected from the NACA 6-series airfoils. The sweep angle of the wing is 15 degrees, the aspect ratio is 8, and the taper ratio is 0.4. The design and modeling of this wing were done in CATIA software. The model was meshed in Gambit software, and its three-dimensional analysis was done in Fluent software. The CFD methods used here were based on a pressure-based algorithm. The SIMPLE technique was used for solving the Navier-Stokes equations, and the Spalart-Allmaras model was utilized to simulate the three-dimensional wing in air. The Roskam method is one of the most commonly used methods for determining aerodynamic parameters in the field of airplane design. In this study, besides CFD analysis, an advanced aircraft analysis was used for calculating aerodynamic coefficients using the Roskam method. The CFD results were compared with data acquired from the Roskam method, and the validity of the relations was evaluated. The results and comparison showed that, in the linear region of the lift curve, there is only a minor difference between the aerodynamic parameters acquired from CFD and the relations presented by Roskam.
Keywords: aft sweep wing, CFD method, Fluent, Roskam, Spalart-Allmaras model
Procedia PDF Downloads 504
1287 Code Embedding for Software Vulnerability Discovery Based on Semantic Information
Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson
Abstract:
Deep learning methods have been seeing an increasing application to the long-standing security research goal of automatic vulnerability detection for source code. Attention, however, must still be paid to the task of producing vector representations for source code (code embeddings) as input for these deep learning models. Graphical representations of code, most predominantly Abstract Syntax Trees and Code Property Graphs, have received some use in this task of late; however, for very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning this input to only vulnerability-relevant information; however, little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph at the expense of the information contained in the graph's nodes. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification model. It uses information from the nodes as well as the structure of the code graph in order to select features which are most indicative of the presence or absence of vulnerabilities. This model is implemented and experimentally tested using the SARD Juliet vulnerability test suite to determine its efficacy. It is able to improve on existing code graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.
Keywords: code representation, deep learning, source code semantics, vulnerability discovery
Procedia PDF Downloads 159
1286 SEM Detection of Folate Receptor in a Murine Breast Cancer Model Using Secondary Antibody-Conjugated, Gold-Coated Magnetite Nanoparticles
Authors: Yasser A. Ahmed, Juleen M Dickson, Evan S. Krystofiak, Julie A. Oliver
Abstract:
Cancer cells urgently need folate to support their rapid division. Folate receptors (FR) are over-expressed on a wide range of tumor cells, including breast cancer cells. FR are distributed over the entire surface of cancer cells, but are polarized to the apical surface of normal cells. Targeting of cancer cells using specific surface molecules such as folate receptors may be one of the strategies used to kill cancer cells without hurting the neighboring normal cells. The aim of the current study was to test a method of detecting FR by SEM in a murine breast cancer cell model (4T1 cells) using a secondary antibody conjugated to gold or gold-coated magnetite nanoparticles. 4T1 cells were suspended in RPMI medium with FR antibody and incubated with secondary antibody for fluorescence microscopy. The cells were cultured on 30 mm Thermanox coverslips for 18 hours, labeled with FR antibody, then incubated with secondary antibody conjugated to gold or gold-coated magnetite nanoparticles, and processed for scanning electron microscopy (SEM) analysis. The fluorescence microscopy study showed strong punctate FR expression on the 4T1 cell membrane. With SEM, the labeling with gold or gold-coated magnetite conjugates showed a similar pattern. Specific labeling occurred in nanoparticle clusters, which are clearly visualized in backscattered electron images. The 4T1 tumor cell model may be useful for the development of FR-targeted tumor therapy using gold-coated magnetite nanoparticles.
Keywords: cancer cell, nanoparticles, cell culture, SEM
Procedia PDF Downloads 735
1285 Evaluation of Firearm Injury Syndromic Surveillance in Utah
Authors: E. Bennion, A. Acharya, S. Barnes, D. Ferrell, S. Luckett-Cole, G. Mower, J. Nelson, Y. Nguyen
Abstract:
Objective: This study aimed to evaluate the validity of a firearm injury query in the Early Notification of Community-based Epidemics syndromic surveillance system. Syndromic surveillance data are used at the Utah Department of Health for early detection of and rapid response to unusually high rates of violence and injury, among other health outcomes. The query of interest was defined by the Centers for Disease Control and Prevention and used chief complaint and discharge diagnosis codes to capture initial emergency department encounters for firearm injury of all intents. Design: Two epidemiologists manually reviewed electronic health records of emergency department visits captured by the query from April-May 2020, compared results, and sent conflicting determinations to two arbiters. Results: Of the 85 unique records captured, 67 were deemed probable, 19 were ruled out, and two were undetermined, resulting in a positive predictive value of 75.3%. Common reasons for false positives included non-initial encounters and misleading keywords. Conclusion: Improving the validity of syndromic surveillance data would better inform outbreak response decisions made by state and local health departments. The firearm injury definition could be refined to exclude non-initial encounters by negating words such as “last month,” “last week,” and “aftercare”; and to exclude non-firearm injury by negating words such as “pellet gun,” “air gun,” “nail gun,” “bullet bike,” and “exit wound” when a firearm is not mentioned.
Keywords: evaluation, health information system, firearm injury, syndromic surveillance
Procedia PDF Downloads 166
1284 Fatigue Crack Growth Rate Measurement by Means of Classic Method and Acoustic Emission
Authors: V. Mentl, V. Koula, P. Mazal, J. Volák
Abstract:
Nowadays, acoustic emission is a widely recognized method of material damage investigation, mainly for the observation and evaluation of crack initiation and growth. This is highly important in structures, e.g., pressure vessels, large steam turbine rotors, etc., applied in both classic and nuclear power plants. Nevertheless, the acoustic emission signals must be correlated with the real crack progress to be able to evaluate the cracks and their growth by this non-destructive technique alone in real situations, and to reach reliable results when assessing the structures' safety and reliability and when evaluating the remaining lifetime. The main aim of this study was to propose a methodology for the evaluation of the early manifestations of fatigue cracks and their growth and thus to quantify the material damage by acoustic emission parameters. Specimens made of several steels used in the power-producing industry were subjected to fatigue loading in the low- and high-cycle regimes. This study presents results of the crack growth rate measurement obtained by the classic compliance change method and the acoustic emission signal analysis. The experiments were realized in cooperation between laboratories of Brno University of Technology and West Bohemia University in Pilsen within the solution of the project of the Czech Ministry of Industry and Commerce: "A diagnostic complex for the detection of pressure media and material defects in pressure components of nuclear and classic power plants" and the project “New Technologies for Mechanical Engineering”.
Keywords: fatigue, crack growth rate, acoustic emission, material damage
Procedia PDF Downloads 371
1283 Identifying the Faces of Colonialism: An Analysis of Gender Inequalities in Economic Participation in Pakistan through a Postcolonial Feminist Lens
Authors: Umbreen Salim, Anila Noor
Abstract:
This paper analyses the influences and faces of colonialism in women’s participation in economic activity in postcolonial Pakistan through a postcolonial feminist economic lens. It is an attempt to probe the shifts in gender inequalities that have existed across three stages in the Indo-Pak subcontinent: pre-colonial, colonial, and postcolonial times. It delves into the pre-colonial period because it is imperative to understand the situation and context before colonisation in order to assess the deviations associated with its onset. Hence, in order to trace gender inequalities, this paper first analyses the gender inequalities of the Mughal Era (1526-1757) that existed before British colonisation; then the gender inequalities that existed during British colonisation (1857-1947) and the associated dynamics and changes in women’s vulnerabilities to participate in the economy are examined. This is followed by the postcolonial (1947 onwards) scenario of discriminations and oppressions faced by women. As part of the research methodology, primary and secondary data analysis was done. Analysis of secondary data, including literary works and photographs, was carried out, followed by primary data collection using ethnographic approaches and participatory tools to understand the presence of coloniality and gender inequalities embedded in the social structure through participants’ real-life stories. The data is analysed using feminist postcolonial analysis. Intersectionality has been a key tool of analysis, as the paper delves into gender inequalities through the class and caste lens, briefly touching on religion. It is imperative to mention the significance of the study and, very importantly, the practical challenges, as historical analysis of the 18th and 19th centuries is involved. Most of the available work on history is produced by a) men and b) foreigners, mostly white authors. Since the historical analysis is mostly by men, the gender analysis presented misses many aspects of women’s issues, and since the authors have been mostly white Europeans, it carries, as Mohanty says, an ‘under Western eyes’ perspective. The edge of this paper, by contrast, is the authors’ deep attachment, belongingness as lived reality, and work with women in Pakistan as postcolonial subjects, which puts them in a better position to relate with the social reality and understand the phenomenon. The study brought some key results: gender inequalities existed before colonisation, when women were the hidden wheel of a stable economy and remained completely invisible. During British colonisation, the vulnerabilities of women only increased, and compared to men, their inferior status was further strengthened. Today, the postcolonial woman lives with the deep-rooted effects of coloniality, where she is divided by class and position within the class, and she has to face gender inequalities within the household and in the market for economic participation. Gender inequalities have existed in pre-colonial, colonial, and postcolonial times in Pakistan with varying dynamics, degrees and intensities for women, whereby social class, caste and religion have been key factors defining the extent of discrimination and oppression. Colonialism may have physically ended, but coloniality remains and has deep, broad and wide effects in increasing gender inequalities in women’s participation in the economy in Pakistan.
Keywords: colonialism, economic participation, gender inequalities, women
Procedia PDF Downloads 209
1282 Tectono-Stratigraphic Architecture, Depositional Systems and Salt Tectonics to Strike-Slip Faulting in Kribi-Campo-Cameroon Atlantic Margin with an Unsupervised Machine Learning Approach (West African Margin)
Authors: Joseph Bertrand Iboum Kissaaka, Charles Fonyuy Ngum Tchioben, Paul Gustave Fowe Kwetche, Jeannette Ngo Elogan Ntem, Joseph Binyet Njebakal, Ribert Yvan Makosso-Tchapi, François Mvondo Owono, Marie Joseph Ntamak-Nida
Abstract:
Located in the Gulf of Guinea, the Kribi-Campo sub-basin belongs to the Aptian salt basins along the West African Margin. In this paper, we investigated the tectono-stratigraphic architecture of the basin, focusing on the role of salt tectonics and strike-slip faults along the Kribi Fracture Zone, with implications for reservoir prediction. Using 2D seismic data and well data interpreted through sequence stratigraphy, with integrated seismic attribute analysis using Python programming and unsupervised machine learning, at least six second-order sequences, indicating three main stages of tectono-stratigraphic evolution, were determined: pre-salt syn-rift, post-salt rift climax and post-rift stages. The pre-salt syn-rift stage with the KTS1 tectonosequence (Barremian-Aptian) reveals a transform rifting along NE-SW transfer faults associated with N-S to NNE-SSW syn-rift longitudinal faults bounding a NW-SE half-graben filled with alluvial to lacustrine-fan delta deposits. The post-salt rift-climax stage (Lower to Upper Cretaceous) includes two second-order tectonosequences (KTS2 and KTS3) associated with the salt tectonics and Campo High uplift. During the rift-climax stage, the growth of salt diapirs developed syncline withdrawal basins filled by early forced regression, mid transgressive and late normal regressive systems tracts. The early rift climax underlines some fine-grained hangingwall fans or delta deposits and coarse-grained fans from the footwall of fault scarps. The post-rift stage (Paleogene to Neogene) contains at least three main tectonosequences (KTS4, KTS5 and KTS6-7). The first one developed some turbiditic lobe complexes considered as mass transport complexes and feeder channel-lobe complexes cutting the unstable shelf edge of the Campo High. The last two developed submarine channel complexes associated with lobes towards the southern part and braided delta to tidal channels towards the northern part of the Kribi-Campo sub-basin. The reservoir distribution in the Kribi-Campo sub-basin reveals some channels, fan-lobe reservoirs, and stacked channels reaching up to the polygonal fault systems.
Keywords: tectono-stratigraphic architecture, Kribi-Campo sub-basin, machine learning, pre-salt sequences, post-salt sequences
Procedia PDF Downloads 56
1281 Urban Land Use Type Analysis Based on Land Subsidence Areas Using X-Band Satellite Image of Jakarta Metropolitan City, Indonesia
Authors: Ratih Fitria Putri, Josaphat Tetuko Sri Sumantyo, Hiroaki Kuze
Abstract:
Jakarta Metropolitan City is located on the northwest coast of West Java province, with a geographical location between 106º33’00”-107º00’00”E longitude and 5º48’30”-6º24’00”S latitude. The Jakarta urban area has suffered from land subsidence in several land use types, such as trading, industry, and settlement areas. Land subsidence hazard is one of the consequences of urban development in Jakarta. This hazard is caused by intensive human activities in groundwater extraction and land use mismanagement. Geologically, the Jakarta urban area is mostly dominated by alluvium fan sediment. The objective of this research is to analyze Jakarta urban land use types in land subsidence zones. The process of producing safer land use and settlements in the land subsidence areas is very important. Spatial distributions of detected land subsidence are a necessary tool for land use management planning. For this purpose, the Differential Synthetic Aperture Radar Interferometry (DInSAR) method is used. DInSAR is complementary to ground-based methods such as leveling and global positioning system (GPS) measurements, yielding information over a wide coverage area even when the area is inaccessible. The data were fine-tuned by using X-band satellite image data from 2010 to 2013 and land use mapping data. Our analysis of land use types shows that land subsidence occurred in the northern part of Jakarta Metropolitan City, varying from 7.5 to 17.5 cm/year, mainly in industrial and settlement land use areas.
Keywords: land use analysis, land subsidence mapping, urban area, X-band satellite image
Procedia PDF Downloads 276
1280 City-Wide Simulation on the Effects of Optimal Appliance Scheduling in a Time-of-Use Residential Environment
Authors: Rudolph Carl Barrientos, Juwaln Diego Descallar, Rainer James Palmiano
Abstract:
Household Appliance Scheduling Systems (HASS) coupled with a Time-of-Use (TOU) pricing scheme, a form of Demand Side Management (DSM), are not widely utilized in the Philippines’ residential electricity sector. This paper’s goal is to encourage distribution utilities (DUs) to adopt HASS and TOU by analyzing the effect of household schedulers on the electricity price and load profile in a residential environment. To establish this, a city based on an implemented survey is generated using Monte Carlo Analysis (MCA). Then, a Binary Particle Swarm Optimization (BPSO) algorithm-based HASS is developed considering user satisfaction, electricity budget, appliance prioritization, energy storage systems, solar power, and electric vehicles. The simulations were assessed under varying levels of user compliance. Results showed that the average electricity cost, peak demand, and peak-to-average ratio (PAR) of the city load profile were all reduced. Therefore, the deployment of the HASS and TOU pricing scheme is beneficial for both stakeholders.
Keywords: appliance scheduling, DSM, TOU, BPSO, city-wide simulation, electric vehicle, appliance prioritization, energy storage system, solar power
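The BPSO-based scheduler is not specified in detail in the abstract, so the following is only a minimal sketch of how a binary PSO with a sigmoid transfer function might schedule one shiftable appliance against a TOU tariff; the tariff values, appliance rating, run-time constraint, and penalty weight are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative TOU tariff (24 hourly prices) and a shiftable appliance
# that must run for 3 hours at 1.2 kW; all values are assumptions.
tariff = np.array([5]*7 + [9]*4 + [7]*6 + [11]*4 + [5]*3, dtype=float)
required_hours, power_kw = 3, 1.2

def cost(bits):
    energy_cost = power_kw * np.sum(bits * tariff)
    penalty = 50.0 * abs(np.sum(bits) - required_hours)   # enforce run time
    return energy_cost + penalty

def bpso(n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    dim = tariff.size
    x = rng.integers(0, 2, size=(n_particles, dim)).astype(float)
    v = rng.normal(scale=0.1, size=(n_particles, dim))
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
        # Sigmoid transfer: velocity -> probability of the bit being 1
        x = (rng.random((n_particles, dim)) < 1/(1 + np.exp(-v))).astype(float)
        costs = np.array([cost(p) for p in x])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, cost(gbest)

schedule, best_cost = bpso()
print("on-hours:", np.flatnonzero(schedule), "cost:", round(best_cost, 2))
```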
Procedia PDF Downloads 99
1279 Assessment of Airtightness Through a Standardized Procedure in a Nearly-Zero Energy Demand House
Authors: Mar Cañada Soriano, Rafael Royo-Pastor, Carolina Aparicio-Fernández, Jose-Luis Vivancos
Abstract:
The lack of insulation, along with the existence of air leakages, has a meaningful impact on the energy performance of buildings. Both of them lead to increases in the energy demand through additional heating and/or cooling loads. Additionally, they cause thermal discomfort. In order to quantify these uncontrolled air currents, pressurization and depressurization tests can be performed. Among them, the Blower Door test is a standardized procedure to determine the airtightness of a space, which characterizes the rate of air leakage through the envelope surface by calculating an air flow rate indicator. In this sense, low-energy buildings complying with the Passive House design criteria are required to achieve high levels of airtightness. Due to the invisible nature of air leakages, additional tools are often considered to identify where the infiltrations take place. Among them, infrared thermography is a valuable technique for this purpose since it enables their detection. The aim of this study is to assess the airtightness of a typical Mediterranean dwelling house, located in the Valencian orchard (Spain) and restored under the Passive House standard, using the Blower Door test for this purpose. Moreover, the building energy performance modelling tools TRNSYS (TRaNsient System Simulation program) and TRNFlow (TRaNsient Flow) have been used to determine its energy performance, and the identification of infiltrations was carried out by means of infrared thermography. The low levels of infiltration obtained suggest that this house may comply with the Passive House standard.
Keywords: airtightness, blower door, trnflow, infrared thermography
Procedia PDF Downloads 123
1278 Laser Data Based Automatic Generation of Lane-Level Road Map for Intelligent Vehicles
Authors: Zehai Yu, Hui Zhu, Linglong Lin, Huawei Liang, Biao Yu, Weixin Huang
Abstract:
With the development of intelligent vehicle systems, a high-precision road map is increasingly needed in many aspects. Automatic lane line extraction and modeling are the most essential steps for the generation of a precise lane-level road map. In this paper, an automatic lane-level road map generation system is proposed. To extract the road markings on the ground, the multi-region Otsu thresholding method is applied, which calculates the intensity value of laser data that maximizes the variance between background and road markings. The extracted road marking points are then projected to the raster image and clustered using a two-stage clustering algorithm. Lane lines are subsequently recognized from these clusters by the shape features of their minimum bounding rectangle. To ensure the storage efficiency of the map, the lane lines are approximated to cubic polynomial curves using a Bayesian estimation approach. The proposed lane-level road map generation system has been tested on urban and expressway conditions in Hefei, China. The experimental results on the datasets show that our method can achieve excellent extraction and clustering effects, and the fitted lines can reach a high position accuracy with an error of less than 10 cm.
Keywords: curve fitting, lane-level road map, line recognition, multi-thresholding, two-stage clustering
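A simplified sketch of the extraction and fitting steps: per-region Otsu thresholding of an intensity raster, clustering of the marking points, and a cubic polynomial fit per cluster. Note that DBSCAN is used here in place of the paper's two-stage clustering and minimum-bounding-rectangle recognition, and the synthetic raster and tuning values are assumptions for illustration only.

```python
import numpy as np
from skimage.filters import threshold_otsu
from sklearn.cluster import DBSCAN

def extract_marking_points(intensity_image, n_regions=4):
    """Multi-region Otsu thresholding: split the intensity raster into
    vertical strips and threshold each strip separately, so that local
    contrast between road surface and markings is preserved."""
    h, w = intensity_image.shape
    mask = np.zeros_like(intensity_image, dtype=bool)
    for i in range(n_regions):
        strip = slice(i * w // n_regions, (i + 1) * w // n_regions)
        region = intensity_image[:, strip]
        if np.ptp(region) > 50:                       # skip low-contrast strips
            mask[:, strip] = region > threshold_otsu(region)
    return np.column_stack(np.nonzero(mask))          # (row, col) points

def fit_lane_lines(points, eps=3.0, min_samples=20):
    """Cluster marking points and fit a cubic polynomial to each cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    lines = {}
    for lbl in set(labels) - {-1}:                    # -1 = noise
        cluster = points[labels == lbl]
        # x = f(y): cubic fit along the driving direction (rows)
        lines[lbl] = np.polyfit(cluster[:, 0], cluster[:, 1], deg=3)
    return lines

# Illustrative use on a synthetic intensity raster
img = np.random.rand(200, 200) * 10
img[:, 95:100] += 150                                 # bright lane-like stripe
print(fit_lane_lines(extract_marking_points(img)))
```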
Procedia PDF Downloads 128
1277 Performance Evaluation of Dynamic Signal Control System for Mixed Traffic Conditions
Authors: Aneesh Babu, S. P. Anusha
Abstract:
A dynamic signal control system combines traditional traffic lights with an array of sensors to intelligently control vehicle and pedestrian traffic. The present study focuses on evaluating the performance of dynamic signal control systems for mixed traffic conditions. Data collected from four different approaches to a typical four-legged signalized intersection at Trivandrum city in the Kerala state of India are used for the study. The performance of three dynamic signal control methods, namely (i) the non-sequential method, (ii) Webster design for consecutive signal cycles using flow as input, and (iii) dynamic signal control using RFID delay as input, was evaluated. The evaluation of the dynamic signal control systems was carried out using a calibrated VISSIM microsimulation model. Python programming was used to integrate the dynamic signal control algorithm through the COM interface in VISSIM. The intersection delay obtained from the different dynamic signal control methods was compared with the delay obtained from fixed signal control. Based on the study results, it was observed that the intersection delay was reduced significantly by using dynamic signal control methods. The dynamic signal control method using delay from RFID sensors resulted in a higher percentage reduction in delay and hence is a suitable choice for implementation under mixed traffic conditions. The developed dynamic signal control strategies can be implemented in ITS applications under mixed traffic conditions.
Keywords: dynamic signal control, intersection delay, mixed traffic conditions, RFID sensors
Procedia PDF Downloads 107
1276 An Effective Decision-Making Strategy Based on Multi-Objective Optimization for Commercial Vehicles in Highway Scenarios
Authors: Weiming Hu, Xu Li, Xiaonan Li, Zhong Xu, Li Yuan, Xuan Dong
Abstract:
Maneuver decision-making plays a critical role in high-performance intelligent driving. This paper proposes a risk assessment-based decision-making network (RADMN) to address the problem of driving strategy for commercial vehicles. RADMN integrates two networks, aiming at identifying the risk degree of collision and rollover and providing decisions to ensure the effectiveness and reliability of the driving strategy. In the risk assessment module, the risk degrees of backward collision, forward collision, and rollover are quantified for hazard recognition. In the decision module, a deep reinforcement learning based on multi-objective optimization (DRL-MOO) algorithm is designed, which comprehensively considers the risk degree and motion states of each traffic participant. To evaluate the performance of the proposed framework, Prescan/Simulink joint simulation was conducted in highway scenarios. Experimental results validate the effectiveness and reliability of the proposed RADMN. The output driving strategy can guarantee safety and provide key technical support for the realization of autonomous driving of commercial vehicles.
Keywords: decision-making strategy, risk assessment, multi-objective optimization, commercial vehicle
Procedia PDF Downloads 134
1275 Arduino Pressure Sensor Cushion for Tracking and Improving Sitting Posture
Authors: Andrew Hwang
Abstract:
The average American worker sits for thirteen hours a day, often with poor posture and infrequent breaks, which can lead to health issues and back problems. The Smart Cushion was created to alert individuals of their poor postures, and may potentially alleviate back problems and correct poor posture. The Smart Cushion is a portable, rectangular, foam cushion, with five strategically placed pressure sensors, that utilizes an Arduino Uno circuit board and specifically designed software, allowing it to collect data from the five pressure sensors and store the data on an SD card. The data is then compiled into graphs and compared to controlled postures. Before volunteers sat on the cushion, their levels of back pain were recorded on a scale from 1-10. Data was recorded for an hour during sitting, and then a new, corrected posture was suggested. After using the suggested posture for an hour, the volunteers described their level of discomfort on a scale from 1-10. Different patterns of sitting postures were generated that were able to serve as early warnings of potential back problems. By using the Smart Cushion, the areas where different volunteers were applying the most pressure while sitting could be identified, and the sitting postures could be corrected. Further studies regarding the relationships between posture and specific regions of the body are necessary to better understand the origins of back pain; however, the Smart Cushion is sufficient for correcting sitting posture and preventing the development of additional back pain.
Keywords: Arduino Sketch Algorithm, biomedical technology, pressure sensors, Smart Cushion
Procedia PDF Downloads 134
1274 Parametric Influence and Optimization of Wire-EDM on Oil Hardened Non-Shrinking Steel
Authors: Nixon Kuruvila, H. V. Ravindra
Abstract:
Wire-cut Electro Discharge Machining (WEDM) is a special form of the conventional EDM process in which the electrode is a continuously moving conductive wire. The present study aims at determining the parametric influence and optimum process parameters of Wire-EDM using Taguchi's technique and a genetic algorithm. The variation of the performance parameters with machining parameters was mathematically modeled by the regression analysis method. The objective functions are Dimensional Accuracy (DA) and Material Removal Rate (MRR). Experiments were designed as per Taguchi's L16 Orthogonal Array (OA), wherein pulse-on duration, pulse-off duration, current, bed speed, and flushing rate were considered as the important input parameters. The matrix experiments were conducted on the material Oil Hardened Non-Shrinking Steel (OHNS) with a thickness of 40 mm. The results of the study reveal that, among the machining parameters, it is preferable to use a lower pulse-off duration to achieve good overall performance. Regarding MRR, OHNS is to be eroded with a medium pulse-off duration and a higher flush rate. Finally, a validation exercise was performed with the optimum levels of the process parameters. The results confirm the efficiency of the approach employed for optimization of process parameters in this study.
Keywords: dimensional accuracy (DA), regression analysis (RA), Taguchi method (TM), volumetric material removal rate (VMRR)
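The regression modeling step can be illustrated with a small sketch: a 16-run, five-factor, two-level design (a half-fraction standing in for the L16 orthogonal array) and a first-order least-squares fit of a response such as MRR against the coded factors. The response values are synthetic and purely illustrative; only the factor names follow the abstract.

```python
import itertools
import numpy as np

# 16-run, two-level design for 5 factors: full factorial on 4 factors,
# with the fifth column defined as the product of the other four (E = ABCD).
base = np.array(list(itertools.product([-1.0, 1.0], repeat=4)))
fifth = base.prod(axis=1, keepdims=True)
X_coded = np.hstack([base, fifth])            # 16 runs x 5 coded factors

# Hypothetical measured responses for each run (e.g. volumetric MRR)
rng = np.random.default_rng(7)
mrr = (4.0 + 0.8 * X_coded[:, 0] - 0.5 * X_coded[:, 1]
       + 0.6 * X_coded[:, 2] + rng.normal(scale=0.1, size=16))

# First-order regression model: MRR = b0 + sum(b_i * x_i)
A = np.column_stack([np.ones(16), X_coded])
coeffs, *_ = np.linalg.lstsq(A, mrr, rcond=None)
names = ["intercept", "pulse-on", "pulse-off", "current",
         "bed speed", "flushing rate"]
for name, b in zip(names, coeffs):
    print(f"{name:>13s}: {b:+.3f}")
```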
Procedia PDF Downloads 409
1273 16s rRNA Based Metagenomic Analysis of Palm Sap Samples From Bangladesh
Authors: Ágota Ábrahám, Md Nurul Islam, Karimane Zeghbib, Gábor Kemenesi, Sazeda Akter
Abstract:
Collecting palm sap as a food source is an everyday practice in some parts of the world. However, the consumption of palm juice has been associated with regular infections and epidemics in parts of Bangladesh. This is attributed to fruit-eating bats and other vertebrates or invertebrates native to the area, which contaminate the food with their body secretions during the collection process. The frequent intake of palm juice, whether as a processed food product or in its unprocessed form, is a common phenomenon in large areas. The range of pathogens capable of infecting humans as a result of this practice is not yet fully understood. Additionally, the high sugar content of the liquid makes it an ideal culture medium for certain bacteria, which can easily propagate and potentially harm consumers. Rapid diagnostics, especially in remote locations, could mitigate health risks associated with palm juice consumption. The primary objective of this research is the rapid genomic detection and risk assessment of bacteria that may cause infections in humans through the consumption of palm juice. Utilizing state-of-the-art third-generation Nanopore metagenomic sequencing technology based on 16S rRNA, we identified bacteria primarily involved in fermentation processes. The swift metagenomic analysis, coupled with the widespread availability and portability of Nanopore products (including real-time analysis options), proves advantageous for detecting harmful pathogens in food sources without relying on extensive industry resources and testing.
Keywords: raw date palm sap, NGS, metabarcoding, food safety
Procedia PDF Downloads 56
1272 Decision Tree Based Scheduling for Flexible Job Shops with Multiple Process Plans
Authors: H.-H. Doh, J.-M. Yu, Y.-J. Kwon, J.-H. Shin, H.-W. Kim, S.-H. Nam, D.-H. Lee
Abstract:
This paper suggests a decision tree based approach for flexible job shop scheduling with multiple process plans, i.e., each job can be processed through alternative operations, each of which can be processed on alternative machines. The main decision variables are: (a) selecting the operation/machine pair; and (b) sequencing the jobs assigned to each machine. As an extension of the priority scheduling approach that selects the best priority rule combination after many simulation runs, this study suggests a decision tree based approach in which a decision tree is used to select a priority rule combination adequate for a specific system state, and hence the burdens required for developing simulation models and carrying out simulation runs can be eliminated. The decision tree based scheduling approach consists of construction and scheduling modules. In the construction module, a decision tree is constructed using a four-stage algorithm, and in the scheduling module, a priority rule combination is selected using the decision tree. To show the performance of the decision tree based approach suggested in this study, a case study was done on a flexible job shop with reconfigurable manufacturing cells and a conventional job shop, and the results are reported by comparing it with individual priority rule combinations for the objectives of minimizing total flow time and total tardiness.
Keywords: flexible job shop scheduling, decision tree, priority rules, case study
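A minimal sketch of the scheduling-module idea, assuming a scikit-learn decision tree trained offline on (system state, best priority rule combination) pairs; the state features, rule-combination labels, and training data below are illustrative assumptions and do not reproduce the paper's four-stage construction algorithm.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Training data (illustrative): each row describes a shop state, e.g.
# [utilisation, mean queue length, due-date tightness, alt. routings ratio];
# the label is the priority-rule combination that performed best for that
# state in earlier simulation runs (e.g. "EDD+SPT", "FIFO+SPT", ...).
rng = np.random.default_rng(3)
states = rng.random((500, 4))
best_rule = np.where(states[:, 2] > 0.5,
                     np.where(states[:, 0] > 0.6, "EDD+SPT", "EDD+FIFO"),
                     np.where(states[:, 1] > 0.5, "SPT+FIFO", "FIFO+SPT"))

# Construction module (stand-in): fit the tree on the labelled states.
tree = DecisionTreeClassifier(max_depth=4).fit(states, best_rule)

# Scheduling module: given the current system state, look up the rule
# combination to apply for (a) operation/machine selection and (b) sequencing.
current_state = [[0.72, 0.4, 0.8, 0.3]]
print("selected rule combination:", tree.predict(current_state)[0])
```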
Procedia PDF Downloads 358