Search results for: age-sex accuracy index
137 Spatial Variation in Urbanization and Slum Development in India: Issues and Challenges in Urban Planning
Authors: Mala Mukherjee
Abstract:
Background: India is urbanising rapidly, and urbanisation is treated as one of the most crucial components of the country's economic growth. Though the pace of urbanisation (31.6 per cent in 2011) is slower than the average for Asia, the absolute number of people residing in cities and towns has increased substantially. Rapid urbanisation leads to urban poverty, which is most visibly represented in slums. Currently, India has four metropolises and 53 million-plus cities. All of them have significant slum populations, but the standard of living and the success of slum development programmes vary across regions. Objectives: The objectives of the paper are to show how urbanisation and slum development vary across space; to show the spatial variation in the standard of living in Indian slums; and to analyse how the implementation of slum development policies like JNNURM and Rajiv Awas Yojana varies across cities and brings different results in different regions, and what factors are responsible for such variation. Data Sources and Methodology: Census 2011 data on urban population and on slum households and amenities have been used to analyse the regional variation of urbanisation in the 53 million-plus cities of India. Special focus has been placed on the Kolkata Metropolitan Area. Statistical techniques like z-score and principal component analysis (PCA) have been employed to work out a Standard of Living Deprivation score for the slums of all 53 metropolises. ArcGIS software is used for making maps. Standard of living has been measured in terms of access to basic amenities, infrastructure and assets such as drinking water, sanitation, housing condition, bank accounts, and so on. Findings: 1. The first finding reveals that migration and urbanisation are very high in Greater Mumbai, Delhi, Bengaluru, Chennai, Hyderabad and Kolkata, but the slum population is high in Greater Mumbai (50% of the population live in slums), Meerut, Faridabad, Ludhiana, Nagpur, Kolkata, etc. 
Though the rate of urbanisation is high in the southern and western states, the percentage of slum population is high in the northern states (except Greater Mumbai). 2. Standard of living also varies widely. Slums of Greater Mumbai and north Indian cities score fairly high on the index, indicating that the standard of living is higher in those slums compared to the slums of eastern India (Dhanbad, Jamshedpur, Kolkata). Therefore, though Kolkata has a relatively smaller percentage of slum population compared to north and south Indian cities, the standard of living in Kolkata's slums is deplorable. 3. It is interesting to note that even within the Kolkata Metropolitan Area, slums located in the southern and eastern municipal towns like Rajpur-Sonarpur, Pujali, Diamond Harbour, Baduria and Dankuni have a lower standard of living compared to the slums located in the Hooghly industrial belt, such as Titagarh, Rishrah, Srerampore, etc. Slums of the Hooghly industrial belt are older than the slums located in the eastern and southern parts of the urban agglomeration. 4. Therefore, urban development and the emergence of slums should not be the only issues of urban governance; the standard of living should be the main focus. Slums located in the main cities like Delhi, Mumbai and Kolkata get more attention from urban planners, and similarly, older slums in a city receive greater political attention compared to the slums of smaller cities and the newly emerged slums of the peripheral areas.
Keywords: urbanisation, slum, spatial variation, India
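The z-score and PCA step mentioned above can be sketched as follows. This is a minimal numpy illustration of one common way to build a composite deprivation index (standardize each amenity indicator, then weight the indicators by the first principal component), not the authors' exact procedure; the indicator names and toy data are invented.

```python
import numpy as np

def deprivation_scores(X):
    """Composite deprivation score: z-score each indicator, then weight
    indicators by the first principal component of the correlation matrix.
    X has shape (n_slums, n_indicators); higher values = more deprived."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # z-scores per indicator
    corr = np.corrcoef(Z, rowvar=False)        # indicator correlation matrix
    eigvals, eigvecs = np.linalg.eigh(corr)    # eigenvalues in ascending order
    w = eigvecs[:, -1]                         # first principal component
    if w.sum() < 0:                            # fix the arbitrary sign
        w = -w
    return Z @ w

# toy data: rows = slums, columns = % households lacking
# safe water, sanitation, and durable housing (hypothetical)
X = np.array([[10., 20., 15.],
              [60., 70., 55.],
              [30., 40., 35.]])
scores = deprivation_scores(X)
print(scores)  # higher score = more deprived slum
```

Because every column measures a lack, the sign convention makes larger scores correspond to worse living conditions, so slums can be ranked directly.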
Procedia PDF Downloads 359
136 Dietary Exposure Assessment of Potentially Toxic Trace Elements in Fruits and Vegetables Grown in Akhtala, Armenia
Authors: Davit Pipoyan, Meline Beglaryan, Nicolò Merendino
Abstract:
The mining industry is one of the priority sectors of the Armenian economy. Along with contributing to socio-economic development, it brings numerous environmental problems, especially toxic element pollution, which largely influences the safety of agricultural products. In addition, the accumulation of toxic elements in agricultural products, mainly in the edible parts of plants, represents a direct pathway for their penetration into the human food chain. In Armenia, the share of plant-origin food in the overall diet is significantly high, so estimating the dietary intakes of toxic trace elements via consumption of selected fruits and vegetables is of great importance for assessing the underlying health risks. Therefore, the present study aimed to assess the dietary exposure to potentially toxic trace elements through the intake of locally grown fruits and vegetables in the Akhtala community (Armenia), where both the mining industry and the cultivation of fruits and vegetables are developed. Moreover, this investigation represents one of the very first attempts to estimate human dietary exposure to potentially toxic trace elements in the study area. Samples of some commonly grown fruits and vegetables (fig, cornel, raspberry, grape, apple, plum, maize, bean, potato, cucumber, onion, greens) were randomly collected from several home gardens located near mining areas in the Akhtala community. The concentrations of Cu, Mo, Ni, Cr, Pb, Zn, Hg, As and Cd in the samples were determined using an atomic absorption spectrophotometer (AAS). Precision and accuracy of the analyses were guaranteed by repeated analysis of samples against NIST Standard Reference Materials. For the diet study, an individual-based approach was used, and the consumption of the selected fruits and vegetables was investigated through a food frequency questionnaire (FFQ). 
Combining concentration data with consumption data, the estimated daily intakes (EDI) and cumulative daily intakes were assessed and compared with health-based guidance values (HBGVs). According to the determined concentrations of the studied trace elements in fruits and vegetables, some trace elements (Cu, Ni, Pb, Zn) exceeded the maximum allowable limits set by international organizations in the majority of samples. The others (Cr, Hg, As, Cd, Mo) either did not exceed these limits or have no established allowable limits. The obtained results indicated that only for Cu did the EDI values exceed the dietary reference intake (0.01 mg/kg bw/day), for some of the investigated fruits and vegetables, in the decreasing order potato > grape > bean > raspberry > fig > greens. In contrast, for the combined consumption of the selected fruits and vegetables, the estimated cumulative daily intakes exceeded the reference doses in the following sequence: Zn > Cu > Ni > Mo > Pb. It may be concluded that habitual and combined consumption of the above-mentioned fruits and vegetables can pose a health risk to the local population. Hence, further detailed studies are needed for an overall assessment of the potential health implications, taking into consideration the adverse effects posed by exposure to more than one toxic trace element.
Keywords: daily intake, dietary exposure, fruits, trace elements, vegetables
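The EDI comparison described above follows the standard formula (element concentration in the food, times daily consumption, divided by body weight). The sketch below is a minimal illustration with invented numbers, using only the 0.01 mg/kg bw/day Cu reference value quoted in the abstract.

```python
def estimated_daily_intake(conc_mg_per_kg, intake_kg_per_day, body_weight_kg):
    """EDI in mg/kg body weight/day: concentration of the element in the
    food times daily consumption, normalised by body weight."""
    return conc_mg_per_kg * intake_kg_per_day / body_weight_kg

# hypothetical numbers: Cu at 8 mg/kg in potato, 0.15 kg eaten per day, 70 kg adult
edi_cu = estimated_daily_intake(8.0, 0.15, 70.0)
RfD_Cu = 0.01  # mg/kg bw/day, the reference value quoted in the abstract
print(f"EDI = {edi_cu:.4f} mg/kg bw/day, exceeds reference: {edi_cu > RfD_Cu}")
```

A cumulative daily intake for combined consumption would simply sum the per-food EDI values for each element before comparing against the guidance value.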
Procedia PDF Downloads 299
135 The Effects of Circadian Rhythms Change in High Latitudes
Authors: Ekaterina Zvorykina
Abstract:
Nowadays, the Arctic and Antarctic regions are recognized as some of the most important strategic resources for global development. Nonetheless, living conditions in Arctic regions still demand certain improvements. Since the region is sparsely populated, one of the main points of interest is the health of the people who migrate to the Arctic for permanent and shift work. At Arctic and Antarctic latitudes, personnel face polar day and polar night conditions depending on the time of the year: they are deprived of natural sunlight in winter and have continuous daylight in summer. Firstly, the change in light intensity over the 24-hour period due to migration affects circadian rhythms. Moreover, controlled artificial light in winter is also an issue. Recent studies on night-shift medical professionals, who were exposed to permanent artificial light, have already demonstrated higher risks of cancer, depression and Alzheimer's disease. People exposed to frequent time-zone changes are also subject to higher risks of heart attack and cancer. Thus, our main goals are to understand how high-latitude work and living conditions can affect human health and how adverse effects can be prevented. In our study, we analyze the molecular and cellular factors which play an important role in circadian rhythm change and distinguish the main risk groups among people migrating to high latitudes. The best-studied index of circadian timing is melatonin, or its metabolite 6-sulfatoxymelatonin. Under low light intensity, melatonin synthesis is disturbed, and as a result the human organism requires more time for sleep, which is still disregarded when it comes to working-time organization. Lack of melatonin also causes a shortage of serotonin production, which leads to a higher depression risk. Melatonin is also known to inhibit oncogenes and increase the level of apoptosis in cells, key checks on tumor growth, as well as to regulate circadian clock genes (for example, Per2). 
Thus, people who work at high latitudes constitute a risk group for cancer and demand more attention. Clock genes, known to be among the main circadian clock regulators, decrease the sensitivity of the hypothalamus to estrogen and decrease glucose sensitivity, which leads to premature aging and oestrous cycle disruption. Permanent light exposure also leads to the accumulation of superoxide dismutase and to oxidative stress, one of the main factors in early dementia and Alzheimer's disease. We propose a new screening system adjusted for people migrating from middle to high latitudes, together with accommodation therapy. Screening is focused on melatonin and estrogen levels, sleep deprivation and neural disorders, depression level, cancer risks, and heart and vascular disorders. Accommodation therapy includes different types of artificial light exposure, additional melatonin, and neuroprotectors. Preventive procedures can increase the intensity of migration to high latitudes and, as a result, the prosperity of the Arctic region.
Keywords: circadian rhythm, high latitudes, melatonin, neuroprotectors
Procedia PDF Downloads 155
134 A Rapid and Greener Analysis Approach Based on Carbonfiber Column System and MS Detection for Urine Metabolomic Study After Oral Administration of Food Supplements
Authors: Zakia Fatima, Liu Lu, Donghao Li
Abstract:
The analysis of biological fluid metabolites holds significant importance in various areas, such as medical research, food science, and public health. Investigating the levels and distribution of nutrients and their metabolites in biological samples allows researchers and healthcare professionals to determine nutritional status, detect hypovitaminosis or hypervitaminosis, and monitor the effectiveness of interventions such as dietary supplementation. Moreover, analysis of nutrient metabolites provides insight into their metabolism, bioavailability, and physiological processes, aiding the clarification of their roles in health. Hence, the exploration of a distinct, efficient, eco-friendly, and simpler methodology is of great importance for evaluating the metabolic content of complex biological samples. In this work, a green and rapid analytical method based on an automated online two-dimensional microscale carbon fiber/activated carbon fiber fractionation system coupled with time-of-flight mass spectrometry (2DμCFs-TOF-MS) was used to evaluate metabolites in urine samples after oral administration of food supplements. The automated 2DμCFs instrument consists of a microcolumn system with bare carbon fibers and modified carbon fiber coatings. Bare and modified carbon fibers exhibit different surface characteristics and accordingly retain different compounds. Three kinds of mobile-phase solvents were used to elute compounds of varied chemical heterogeneity. The 2DμCFs separation system can effectively separate different compounds based on their polarity and solubility characteristics. No complicated sample preparation was required prior to analysis, which makes the strategy more eco-friendly, practical, and faster than traditional analysis methods. For optimum results, the mobile phase composition, flow rate, and sample diluent were optimized. 
Water-soluble vitamins, fat-soluble vitamins, and amino acids, as well as 22 vitamin metabolites and 11 metabolites related to vitamin metabolic pathways, were found in the urine samples. All water-soluble vitamins except vitamin B12 and vitamin B9 were detected. However, no fat-soluble vitamin was detected, and only one metabolite of vitamin A was found. Comparison with a blank urine sample showed a considerable difference in metabolite content; for example, the vitamin metabolites and three related metabolites were not detected in blank urine. The complete single-run screening was carried out in 5.5 minutes with minimal consumption of toxic organic solvent (0.5 mL). The analytical method was evaluated in terms of greenness, with an analytical greenness (AGREE) score of 0.72, and its practicality was investigated using the Blue Applicability Grade Index (BAGI) tool, obtaining a score of 77. The findings of this work illustrate that the 2DµCFs-TOF-MS approach could emerge as a fast, sustainable, practical, high-throughput, and promising analytical tool for the screening and accurate detection of various metabolites, pharmaceuticals, and ingredients in dietary supplements as well as biological fluids.
Keywords: metabolite analysis, sustainability, carbon fibers, urine
Procedia PDF Downloads 24
133 A Short Dermatoscopy Training Increases Diagnostic Performance in Medical Students
Authors: Magdalena Chrabąszcz, Teresa Wolniewicz, Cezary Maciejewski, Joanna Czuwara
Abstract:
BACKGROUND: Dermoscopy is a clinical tool known to improve the early detection of melanoma and other malignancies of the skin. Over the past few years, melanoma has grown into a disease of socio-economic importance due to its increasing incidence and persistently high mortality rates. Early diagnosis remains the best method to reduce melanoma- and non-melanoma skin cancer-related mortality and morbidity. Dermoscopy is a noninvasive technique that consists of viewing pigmented skin lesions through a hand-held lens. This simple procedure increases melanoma diagnostic accuracy by up to 35%. Dermoscopy is currently the standard for the clinical differential diagnosis of cutaneous melanoma and for qualifying lesions for excision biopsy. Like any clinical tool, training is required for its effective use. The introduction of small and handy dermoscopes contributed significantly to making dermatoscopy a useful first-level tool. Non-dermatologist physicians are well positioned for opportunistic melanoma detection; however, education in skin cancer examination is limited during medical school and traditionally lecture-based. AIM: The aim of this randomized study was to determine whether adding dermoscopy to the standard fourth-year medical curriculum improves the ability of medical students to distinguish between benign and malignant lesions, and to assess acceptability of and satisfaction with the intervention. METHODS: We performed a prospective study in two cohorts of fourth-year medical students at the Medical University of Warsaw. Groups taking the dermatology course were randomly assigned to cohort A, with limited access to dermatoscopy from their teacher only (one dermatoscope for 15 people), or cohort B, with full access to dermatoscopy during their clinical classes (one dermatoscope for 4 people, available constantly) plus a 15-minute dermoscopy tutorial. 
Students in both study arms took an image-based test of 10 lesions to assess their ability to differentiate benign from malignant lesions, and a post-intervention survey collecting minimal background information, attitudes about the skin cancer examination, and course satisfaction. RESULTS: Cohort B had higher scores than cohort A in the recognition of nonmelanocytic (P < 0.05) and melanocytic (P < 0.05) lesions. Medical students who could use the dermatoscope themselves also reported higher satisfaction after the dermatology course than the group with limited access to this diagnostic tool. Moreover, according to our results, they were more motivated to learn dermatoscopy and to use it in their future everyday clinical practice. LIMITATIONS: The number of participants was limited. Further study of the application in clinical practice is still needed. CONCLUSION: Although the use of the dermatoscope within dermatology as a specialty is widely accepted, sufficiently validated clinical tools for the examination of potentially malignant skin lesions are lacking in general practice. Introducing medical students to dermoscopy in the fourth-year curriculum of medical school may improve their ability to differentiate benign from malignant lesions. It can also encourage students to use dermatoscopy in their future practice, which can significantly improve the early recognition of malignant lesions and thus decrease melanoma mortality.
Keywords: dermatoscopy, early detection of melanoma, medical education, skin cancer
Procedia PDF Downloads 112
132 Application of Infrared Thermal Imaging, Eye Tracking and Behavioral Analysis for Deception Detection
Authors: Petra Hypšová, Martin Seitl
Abstract:
One of the challenges of forensic psychology is to detect deception during a face-to-face interview. In addition to the classical approaches of monitoring the utterance and its components, detection is also sought by observing behavioral and physiological changes that occur as a result of the increased emotional and cognitive load caused by the production of distorted information. Typical changes include facial temperature, eye movements and their fixation, pupil dilation, emotional micro-expressions, and heart rate and its variability. Expanding technological capabilities have opened the space to detect these psychophysiological changes and behavioral manifestations with non-contact technologies that do not interfere with face-to-face interaction. Non-contact deception detection methodology is still in development, and there is a lack of studies that combine multiple non-contact technologies to investigate their accuracy, as well as studies that show how different types of lies produced for different interviewers affect physiological and behavioral changes. The main objective of this study is to apply specific non-contact technologies to deception detection. The next objective is to investigate scenarios in which non-contact deception detection is possible. A series of psychophysiological experiments using infrared thermal imaging, eye tracking, and behavioral analysis with FaceReader 9.0 software was used to achieve these goals. In the laboratory experiment, 16 adults (12 women, 4 men) aged between 18 and 35 years (SD = 4.42) were instructed to produce alternating prepared and spontaneous truths and lies. The baseline of each proband was also measured, and its results were compared to the experimental conditions. Because the personality of the examiner (particularly gender and facial appearance) to whom the subject is lying can influence physiological and behavioral changes, the experiment included four different interviewers. 
The interviewer was represented by a photograph of a face that met the required parameters of gender and facial appearance (i.e., interviewer likability/antipathy), in order to follow standardized procedures. The subject provided all information to this simulated interviewer. During the follow-up analyses, facial temperature (main ROIs: forehead, cheeks, tip of the nose, chin, and corners of the eyes), heart rate, emotional expression, intensity and fixation of eye movements, and pupil dilation were observed. The results showed that the studied variables varied with respect to the production of prepared versus spontaneous truths and lies, as well as with the simulated interviewer. The results also supported the assumption of variability in physiological and behavioral values between the subject's resting state, the so-called baseline, and the production of prepared and spontaneous truths and lies. The series of psychophysiological experiments provided evidence of variability in the areas of interest when truths and lies were produced for different interviewers. The combination of technologies used also enabled a comprehensive assessment of the physiological and behavioral changes associated with false and true statements. The study presented here opens the space for further research in the field of lie detection with non-contact technologies.
Keywords: emotional expression decoding, eye-tracking, functional infrared thermal imaging, non-contact deception detection, psychophysiological experiment
Procedia PDF Downloads 98
131 Influence of the Local External Pressure on Measured Parameters of Cutaneous Microcirculation
Authors: Irina Mizeva, Elena Potapova, Viktor Dremin, Mikhail Mezentsev, Valeri Shupletsov
Abstract:
Local tissue perfusion is regulated by the microvascular tone, which is under the control of a number of physiological mechanisms. Laser Doppler flowmetry (LDF) together with wavelet analysis is the most commonly used technique for studying the regulatory mechanisms of cutaneous microcirculation. External factors such as temperature and the local pressure of the probe on the skin influence the blood flow characteristics and are used as physiological tests to evaluate microvascular regulatory mechanisms. Local probe pressure influences the microcirculation parameters measured by optical methods: diffuse reflectance spectroscopy, fluorescence spectroscopy, and LDF. Therefore, further study of probe pressure effects can be useful for improving the reliability of optical measurements. During pressure tests, usually only the variation of the mean perfusion measured by LDF is estimated. Additional information concerning the physiological mechanisms of the vascular tone regulation system in response to local pressure can be obtained using spectral analysis of LDF samples. The aim of the present work was to develop a protocol and a data-processing algorithm appropriate for studying the physiological response to a local pressure test. Involving 6 subjects (20±2 years) and performing 5 measurements for every subject, we estimated the inter-subject and inter-group variability of the response of both the averaged and the oscillating parts of the LDF signal to external surface pressure. The final purpose of the work was to find features which can further be used in wider clinical studies. The cutaneous perfusion measurements were carried out with a LAKK-02 device (SPE LAZMA Ltd., Russia); the skin loading was applied by an originally designed device which distributes the pressure around the LDF probe. The probe was installed on the dorsal side of the distal phalanx of the index finger. 
We collected measurements continuously for one hour and varied the loading from 0 to 180 mmHg stepwise, with a step duration of 10 minutes. We then post-processed the samples using the wavelet transform and traced the energy of oscillations in five frequency bands over time. Weak loading leads to pressure-induced vasodilation, so one should take into account that the perfusion measured under pressure will be overestimated. On the other hand, we revealed a decrease in endothelium-associated fluctuations. Further loading (88 mmHg) induces amplification of pulsations in all frequency bands. We assume that such loading leads to a higher number of closed capillaries, a higher input of arterioles into the LDF signal and, as a consequence, more vivid oscillations, which are mainly formed in arterioles. External pressure higher than 144 mmHg leads to a decrease of the oscillating components; after removal of the loading, a very rapid restoration of tissue perfusion takes place. In this work, we have demonstrated that local skin loading influences the microcirculation parameters measured by optical techniques; this should be taken into account while developing portable electronic devices. The proposed protocol of local loading allows one to evaluate pressure-induced vasodilation as well as to trace the dynamics of blood flow oscillations. This study was supported by the Russian Science Foundation under project N 18-15-00201.
Keywords: blood microcirculation, laser Doppler flowmetry, pressure-induced vasodilation, wavelet analysis
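The band-energy tracing described above can be illustrated with a simplified sketch. Here the energy in the five frequency bands conventionally used for LDF signals (endothelial through cardiac, with the breakpoints commonly cited in the microcirculation literature) is computed from an FFT power spectrum as a stand-in for the paper's wavelet analysis, on a synthetic perfusion-like signal; the signal and all amplitudes are invented.

```python
import numpy as np

# Frequency bands (Hz) conventionally used for LDF oscillations
BANDS = {"endothelial": (0.0095, 0.02), "neurogenic": (0.02, 0.06),
         "myogenic": (0.06, 0.15), "respiratory": (0.15, 0.4),
         "cardiac": (0.4, 1.6)}

def band_energies(signal, fs):
    """Energy of perfusion oscillations in each band, summed from the
    FFT power spectrum (a simplified stand-in for wavelet band energy)."""
    sig = np.asarray(signal) - np.mean(signal)          # remove DC level
    power = np.abs(np.fft.rfft(sig)) ** 2 / len(sig)    # one-sided spectrum
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    return {name: power[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# synthetic LDF-like record: myogenic (0.1 Hz) + weaker cardiac (1 Hz) component
fs, duration = 10.0, 600.0                              # 10 Hz, 10 minutes
t = np.arange(0, duration, 1 / fs)
sig = 1.0 * np.sin(2 * np.pi * 0.1 * t) + 0.5 * np.sin(2 * np.pi * 1.0 * t)
e = band_energies(sig, fs)
print(max(e, key=e.get))  # the myogenic band dominates here
```

In the study itself, a wavelet transform is used instead of the FFT, which yields band energies as functions of time and lets the response be traced across the 10-minute loading steps.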
Procedia PDF Downloads 150
130 Sentinel-2 Based Burn Area Severity Assessment Tool in Google Earth Engine
Authors: D. Madhushanka, Y. Liu, H. C. Fernando
Abstract:
Fires are one of the foremost factors of land surface disturbance in diverse ecosystems, causing soil erosion, land-cover changes and atmospheric effects that affect people's lives and properties. Generally, the severity of a fire is calculated from the Normalized Burn Ratio (NBR) index. This is usually performed manually by comparing pre-fire and post-fire images: the dNBR is calculated as the bitemporal difference of the NBR of the preprocessed satellite images. The burnt area is then classified as either unburnt (dNBR < 0.1) or burnt (dNBR >= 0.1). Furthermore, Wildfire Severity Assessment (WSA) classifies burnt and unburnt areas using the classification levels proposed by the USGS and comprises seven classes. This procedure generates a burn severity report for an area chosen manually by the user. This study was carried out with the objective of producing an automated tool for the above-mentioned process, namely the World Wildfire Severity Assessment Tool (WWSAT). It is implemented in Google Earth Engine (GEE), a free cloud-computing platform for satellite data processing, with several data catalogs at different resolutions (notably Landsat, Sentinel-2, and MODIS) and planetary-scale analysis capabilities. Sentinel-2 MSI was chosen to perform burnt area severity mapping with a medium-spatial-resolution sensor. This tool uses machine learning classification techniques to identify burnt areas using NBR and to classify their severity over the user-selected extent and period automatically. Cloud coverage is one of the biggest concerns in fire severity mapping. In WWSAT, based on GEE, we present a fully automatic workflow to aggregate cloud-free Sentinel-2 images for both pre-fire and post-fire image compositing. The parallel processing capabilities and preloaded geospatial datasets of GEE facilitated the production of this tool. The tool provides a Graphical User Interface (GUI) to make it user-friendly. 
The advantage of this tool is its ability to obtain burn area severity over large extents and extended temporal periods. Two case studies were carried out to demonstrate its performance. The Blue Mountains National Park forest, affected by the Australian fire season between 2019 and 2020, is used to describe the workflow of WWSAT. More than 7809 km2 of burnt area was detected at this site using Sentinel-2 data, giving an error below 6.5% when compared with the area measured in the field. Furthermore, 86.77% of the detected area was recognized as fully burnt out, comprising high severity (17.29%), moderate-high severity (19.63%), moderate-low severity (22.35%), and low severity (27.51%). The Arapaho and Roosevelt National Forests, Colorado, USA, affected by the Cameron Peak fire in 2020, were chosen for the second case study. It was found that around 983 km2 had burnt out, comprising high severity (2.73%), moderate-high severity (1.57%), moderate-low severity (1.18%), and low severity (5.45%). These spots can also be detected through the visual inspection made possible by the cloud-free images generated by WWSAT. This tool is cost-effective in calculating the burnt area, since the satellite images are free and the cost of field surveys is avoided.
Keywords: burnt area, burnt severity, fires, google earth engine (GEE), sentinel-2
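The NBR/dNBR computation the tool automates can be sketched in a few lines. The NBR contrasts near-infrared and shortwave-infrared reflectance; the severity breakpoints below are the commonly cited USGS dNBR thresholds for the burnt classes (the full seven-class WSA scheme also includes regrowth classes), and the reflectance values are invented for illustration.

```python
def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectances
    (for Sentinel-2, typically bands B8A and B12)."""
    return (nir - swir) / (nir + swir)

def burn_severity(dnbr):
    """Severity class from dNBR using commonly cited USGS breakpoints."""
    if dnbr < 0.1:
        return "unburnt"
    if dnbr < 0.27:
        return "low severity"
    if dnbr < 0.44:
        return "moderate-low severity"
    if dnbr < 0.66:
        return "moderate-high severity"
    return "high severity"

# invented reflectances: healthy vegetation before, charred surface after
pre = nbr(0.60, 0.20)    # high NIR, low SWIR -> high NBR
post = nbr(0.30, 0.45)   # NIR drops, SWIR rises after the burn
dnbr = pre - post        # bitemporal difference
print(round(dnbr, 3), burn_severity(dnbr))
```

In the tool itself this arithmetic runs per pixel on cloud-free pre-fire and post-fire composites inside GEE, so the classification covers the whole user-selected extent in one pass.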
Procedia PDF Downloads 233
129 Surviral: An Agent-Based Simulation Framework for Sars-Cov-2 Outcome Prediction
Authors: Sabrina Neururer, Marco Schweitzer, Werner Hackl, Bernhard Tilg, Patrick Raudaschl, Andreas Huber, Bernhard Pfeifer
Abstract:
History and the current outbreak of COVID-19 have shown the deadly potential of infectious diseases. However, infectious diseases also have a serious impact on areas other than health and healthcare, such as the economy or social life, and these areas are strongly codependent. Therefore, disease control measures, such as social distancing, quarantines, curfews, or lockdowns, have to be adopted in a very considerate manner. Infectious disease modeling can support policy- and decision-makers with adequate information regarding the dynamics of the pandemic and therefore assist in planning and enforcing appropriate measures that will prevent the healthcare system from collapsing. In this work, an agent-based simulation package named “Surviral” for simulating infectious diseases is presented, with a special focus on SARS-CoV-2. The presented simulation package was used in Austria to model the SARS-CoV-2 outbreak from the beginning of 2020. Agent-based modeling is a relatively recent modeling approach. Since our world is getting more and more complex, the complexity of the underlying systems is also increasing, and the development of tools and frameworks together with increasing computational power advances the application of agent-based models. For parametrizing the presented model, different data sources, such as known infections, wastewater virus load, blood donor antibodies, circulating virus variants and the used hospitalization capacity, as well as the availability of medical materials like ventilators, were integrated with a database system and used. The simulation results of the model were used for predicting the dynamics and possible outcomes and were used by the health authorities to decide on the measures to be taken in order to control the pandemic situation. The Surviral package was implemented in the programming language Java, and the analytics were performed with RStudio. 
During the first run in March 2020, the simulation showed that without measures other than individual personal behavior and appropriate medication, the death toll would have been about 27 million people worldwide within the first year. The model predicted the hospitalization rates (standard and intensive care) for Tyrol and South Tyrol with an average error of about 1.5%. The predictions were calculated as 10-day forecasts, and the state government and the hospitals were provided with these 10-day models to support their decision-making. This ensured that standard care was maintained for as long as possible without restrictions. Furthermore, various measures were estimated and thereafter enforced: among other things, communities were quarantined based on the calculations, while, in accordance with the calculations, the curfews for the entire population were reduced. With this framework, which is used in the national crisis team of the Austrian province of Tyrol, a very accurate model could be created at the federal state level as well as at the district and municipal level, providing decision-makers with a solid information basis. The framework can be transferred to various infectious diseases and thus can be used as a basis for future monitoring.
Keywords: modelling, simulation, agent-based, SARS-CoV-2, COVID-19
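To make the agent-based idea concrete, here is a deliberately minimal agent-based SIR sketch of the kind of dynamics such a framework simulates: each infectious agent meets a few random agents per day and may infect the susceptible ones. The real Surviral package is far richer (hospitalization, variants, regional structure, measures), and every parameter below is invented for illustration.

```python
import random

def simulate(n_agents=1000, n_days=120, n_seeds=5, p_transmit=0.05,
             contacts_per_day=10, infectious_days=5, seed=1):
    """Toy agent-based SIR model with random daily contacts."""
    random.seed(seed)
    S, I, R = 0, 1, 2
    state = [S] * n_agents
    days_left = [0] * n_agents
    for k in range(n_seeds):                    # initially infectious agents
        state[k], days_left[k] = I, infectious_days
    history = []
    for _ in range(n_days):
        infectious = [i for i, s in enumerate(state) if s == I]
        for i in infectious:
            for _ in range(contacts_per_day):   # random daily contacts
                j = random.randrange(n_agents)
                if state[j] == S and random.random() < p_transmit:
                    state[j], days_left[j] = I, infectious_days
            days_left[i] -= 1
            if days_left[i] == 0:               # end of infectious period
                state[i] = R
        history.append(state.count(I))          # daily infectious count
    return history, state.count(R)

history, recovered = simulate()
print("peak infectious:", max(history), "recovered by the end:", recovered)
```

Because agents are individuals rather than aggregate compartments, interventions such as quarantining a subset of agents or reducing contacts_per_day can be expressed directly, which is what makes the approach attractive for scenario planning.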
Procedia PDF Downloads 173
128 A Novel PWM/PFM Controller for PSR Fly-Back Converter Using a New Peak Sensing Technique
Authors: Sanguk Nam, Van Ha Nguyen, Hanjung Song
Abstract:
For low-power applications such as adapters for portable devices and USB chargers, the primary-side regulation (PSR) fly-back converter is widely used in lieu of the conventional fly-back converter with an opto-coupler because of its simpler structure and lower cost. In the literature, there have been studies focusing on the design of the PSR circuit; however, the conventional sensing method in the PSR circuit, based on an RC delay, has a lower accuracy compared to the conventional fly-back converter using an opto-coupler. In this paper, we propose a novel PWM/PFM controller using a new sensing technique for the PSR fly-back converter which can regulate the output voltage accurately. The conventional PSR circuit senses the output voltage information from the auxiliary winding to regulate the duty cycle of the clock that controls the output voltage. The sensing signal waveform has two transient points, at which the voltage equals Vout+VD and Vout, respectively. In order to sense the output voltage, the PSR circuit must detect the time at which the output diode current equals zero. In the conventional PSR fly-back converter, the sensing signal at this time has a non-sharp negative slope, which may cause difficulty in detecting the output voltage information, since a delay of the sensing signal or switching clock may exist, leading to unstable operation of the PSR fly-back converter. In this paper, instead of detecting the output voltage on a non-sharp negative slope, a sharp positive slope is used to sense the proper output voltage information. The proposed PSR circuit consists of a saw-tooth generator, a summing circuit, a sample-and-hold circuit and a peak detector. There is also a start-up circuit which protects the chip from high surge current when the converter is turned on. Additionally, to reduce the standby power loss, a second mode operating at a low frequency is designed beside the main mode at high frequency. 
In general, the operation of the proposed PSR circuit can be summarized as follows: at the time the output information is sensed from the auxiliary winding, a saw-tooth signal is generated by the saw-tooth generator. Both of these signals are then summed using a summing circuit. After this process, the slope of the peak of the sensing signal at the time the diode current reaches zero becomes positive and sharp, which makes the peak easy to detect. The output of the summing circuit is then fed into a peak detector and the sample-and-hold circuit; hence, the output voltage can be properly sensed. In this way, we can sense more accurate output voltage information and extend the margin even when the circuit is delayed or noise is present, using only a simple circuit structure compared with conventional circuits, while the performance is sufficiently enhanced. Circuit verification was carried out using a 0.35 μm 700 V Magnachip process. The simulation result of the sensing signal shows a maximum error of 5 mV under various load and line conditions, which means the operation of the converter is stable. Compared to the conventional circuit, we achieved a very small error using only analog circuits. In summary, a PWM/PFM controller using a simple and effective sensing method for the PSR fly-back converter has been presented. The circuit structure is simple compared with conventional designs, and the simulation results confirmed the idea of the design. Keywords: primary side regulation, PSR, sensing technique, peak detector, PWM/PFM control, fly-back converter
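The summing trick described in the abstract can be illustrated with a minimal numerical sketch. All waveform values below are invented for illustration (they are not the authors' circuit values); the key assumption is that the saw-tooth slope is larger than the pre-knee sensing slope but smaller in magnitude than the post-knee fall, so the sum peaks exactly at the knee:

```python
import numpy as np

# Hypothetical sketch of the proposed sensing idea (waveform shapes and
# numbers are illustrative, not the authors' actual circuit values).
t = np.linspace(0.0, 1.0, 1001)   # normalized time within one switching cycle
t_knee = 0.6                      # instant at which the diode current reaches zero

# Auxiliary-winding sensing signal: roughly flat near Vout + VD, then a
# gentle (non-sharp) negative slope after the knee - hard to detect directly.
v_out, v_d = 5.0, 0.7
sense = np.where(t < t_knee, v_out + v_d, v_out + v_d - 2.0 * (t - t_knee))

# Saw-tooth ramp: slope (assumed 1.0) is above the flat pre-knee slope (0)
# and below the magnitude of the post-knee fall (2.0).
saw = 1.0 * t

# Summing both signals turns the knee into a sharp positive peak:
# slope +1 before the knee, slope -1 after it.
summed = sense + saw

i_peak = int(np.argmax(summed))
t_detected = t[i_peak]
print(f"knee detected at t = {t_detected:.3f} (true knee at {t_knee})")
```

Note that the raw sensing signal alone has its maximum on the flat plateau, not at the knee, which is why direct peak detection fails without the added ramp.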
Procedia PDF Downloads 337127 Use of Artificial Intelligence and Two Object-Oriented Approaches (k-NN and SVM) for the Detection and Characterization of Wetlands in the Centre-Val de Loire Region, France
Authors: Bensaid A., Mostephaoui T., Nedjai R.
Abstract:
Nowadays, wetlands are the subject of contradictory debates opposing scientific, political and administrative meanings. Indeed, given their multiple services (drinking water, irrigation, hydrological regulation, mineral, plant and animal resources...), wetlands concentrate many socio-economic and biodiversity issues. In some regions, they can cover vast areas (>100 thousand ha) of the landscape, such as the Camargue area in the south of France, inside the Rhone delta. The high biological productivity of wetlands, the strong natural selection pressures and the diversity of aquatic environments have produced many species of plants and animals that are found nowhere else. These environments are tremendous carbon sinks and biodiversity reserves; depending on their age, composition and surrounding environmental conditions, wetlands play an important role in global climate projections. Covering more than 3% of the earth's surface, wetlands have experienced, since the beginning of the 1990s, a tremendous revival of interest, which has resulted in the multiplication of inventories, scientific studies and management experiments. The geographical and physical characteristics of the wetlands of the central region conceal a large number of natural habitats that harbour a great biological diversity. These wetlands are still influenced by human activities, especially agriculture, which affects their layout and functioning. In this perspective, decision-makers need to delimit spatial objects (natural habitats) in a certain way to be able to take action. Wetlands are no exception to this rule, even if delimiting them seems a difficult exercise, since their main characteristic is often to occupy the transition between aquatic and terrestrial environments.
However, it is possible to map wetlands with databases derived from the interpretation of photos and satellite images, such as the European database Corine Land Cover, which allows quantifying and characterizing the characteristic wetland types of each place. Scientific studies have shown limitations when using high spatial resolution images (SPOT, Landsat, ASTER) for the identification and characterization of small wetlands (about 1 hectare), as these wetlands generally represent spatially complex features. The use of very high spatial resolution images (<3 m) is therefore necessary to map both small and large areas. Moreover, with the recent evolution of artificial intelligence (AI), deep learning methods for satellite image processing have shown much better performance compared to traditional processing based only on pixel structures. Our research work is based on spectral and textural analysis of VHR images (SPOT and IRC orthoimages) using two object-oriented approaches, the nearest neighbour approach (k-NN) and the Support Vector Machine approach (SVM). The k-NN approach gave good results for the delineation of wetlands (wet marshes and moors, ponds, artificial wetlands, water body edges, mountain wetlands, river edges and brackish marshes) with a kappa index higher than 85%. Keywords: land development, GIS, sand dunes, segmentation, remote sensing
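A minimal numpy sketch of the kind of k-NN object classification and kappa-index evaluation described above, using synthetic two-feature "spectral/textural" objects (the features, cluster positions and counts are invented; they stand in for the paper's real image objects):

```python
import numpy as np

# Synthetic stand-in for object-oriented wetland classification.
rng = np.random.default_rng(0)
wet = rng.normal([0.2, 0.6], 0.08, size=(200, 2))   # pseudo spectral/textural features
dry = rng.normal([0.5, 0.2], 0.08, size=(200, 2))
X = np.vstack([wet, dry])
y = np.array([1] * 200 + [0] * 200)                 # 1 = wetland, 0 = other

idx = rng.permutation(len(y))
train, test = idx[:280], idx[280:]

def knn_predict(Xtr, ytr, Xte, k=5):
    """Classify each test object by majority vote of its k nearest neighbours."""
    d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return (ytr[nearest].mean(axis=1) > 0.5).astype(int)

def cohen_kappa(y_true, y_pred):
    """Kappa index: observed agreement corrected for chance agreement."""
    po = np.mean(y_true == y_pred)
    pe = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in (0, 1))
    return (po - pe) / (1 - pe)

y_hat = knn_predict(X[train], y[train], X[test])
print(f"kappa = {cohen_kappa(y[test], y_hat):.2f}")
```

With well-separated synthetic clusters the kappa comes out close to 1; on real, mixed wetland objects the >85% figure reported above is a strong result.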
Procedia PDF Downloads 69126 Performance of CALPUFF Dispersion Model for Investigation the Dispersion of the Pollutants Emitted from an Industrial Complex, Daura Refinery, to an Urban Area in Baghdad
Authors: Ramiz M. Shubbar, Dong In Lee, Hatem A. Gzar, Arthur S. Rood
Abstract:
Air pollution is one of the biggest environmental problems in Baghdad, Iraq. The Daura refinery, located near the center of Baghdad, represents the largest industrial area and emits enormous amounts of pollutants; studying the gaseous pollutants and particulate matter is therefore very important for the environment and for the health of the workers in the refinery and the people living in the areas around it. Some earlier studies investigated the area, but they relied on the basic Gaussian equation implemented in simple computer programs; that work was useful and important at the time, but during the last two decades new large production units were added to the Daura refinery, such as PU_3 (Power unit_3 (Boilers 11 & 12)), CDU_1 (Crude Distillation Unit_70000 barrel_1), and CDU_2 (Crude Distillation Unit_70000 barrel_2). Therefore, it is necessary to use a new advanced model to study air pollution in the region for recent years and to calculate the monthly emission rates of pollutants from the actual amounts of fuel consumed in each production unit; this may lead to accurate pollutant concentration values and a better description of dispersion and transport in the study area. In this study, to the best of the authors' knowledge, the CALPUFF model was used and examined for the first time in Iraq. CALPUFF, an advanced non-steady-state meteorological and air quality modeling system, was applied to investigate the concentrations of SO2, NO2, CO, and PM1-10μm in areas adjacent to the Daura refinery in the center of Baghdad, Iraq. The CALPUFF modeling system includes three main components: CALMET (a diagnostic 3-dimensional meteorological model), CALPUFF (an air quality dispersion model), and CALPOST (a post-processing package), together with an extensive set of preprocessing programs produced to interface the model to standard routinely available meteorological and geophysical datasets.
The targets of this work are to model and simulate the four pollutants (SO2, NO2, CO, and PM1-10μm) emitted from the Daura refinery within one year. Emission rates of these pollutants were calculated for twelve units comprising thirty plants and 35 stacks, using the monthly average fuel consumption of these production units. The performance of the CALPUFF model was assessed to determine whether it is appropriate and yields predictions of good accuracy compared with the available pollutant observations. The CALPUFF model was investigated at three stability classes (stable, neutral, and unstable) to indicate the dispersion of the pollutants under different meteorological conditions. The simulations showed different dispersion patterns of these pollutants in the region depending on the stability conditions and the environment of the study area; monthly and annual averages of pollutants were used to view the dispersion of pollutants in contour maps. High pollutant values were noticed in this area; therefore this study recommends further investigation and analysis of the pollutants, reducing the emission rates by using modern techniques and natural gas, increasing the stack heights of the units, and increasing the exit gas velocity from the stacks. Keywords: CALPUFF, Daura refinery, Iraq, pollutants
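CALPUFF itself is a non-steady-state puff model, but the basic relationship it refines — emission rate, wind and dispersion coefficients mapping to a ground-level concentration — can be sketched with the simpler steady-state Gaussian plume formula. The stack parameters and the linear sigma-growth coefficients below are invented for illustration:

```python
import math

# Simplified steady-state Gaussian plume sketch (not CALPUFF's puff algorithm;
# only illustrates how emission rate and dispersion coefficients translate
# into a ground-level concentration).
def ground_level_conc(Q, u, x, y, H, a=0.08, b=0.06):
    """Q: emission rate (g/s), u: wind speed (m/s), x: downwind and
    y: crosswind distance (m), H: effective stack height (m).
    a, b: assumed linear sigma-growth coefficients (neutral-like class)."""
    sigma_y, sigma_z = a * x, b * x
    return (Q / (2 * math.pi * u * sigma_y * sigma_z)
            * math.exp(-y**2 / (2 * sigma_y**2))
            * 2 * math.exp(-H**2 / (2 * sigma_z**2)))   # factor 2: ground reflection

# SO2 from a hypothetical refinery stack: 50 g/s, 4 m/s wind, 40 m stack.
for x in (500, 1000, 2000, 5000):
    c = ground_level_conc(Q=50.0, u=4.0, x=x, y=0.0, H=40.0)
    print(f"x = {x:5d} m : C = {c * 1e6:8.1f} ug/m3")
```

The concentration field is symmetric in crosswind distance and, beyond the near-source maximum, decays downwind; CALPUFF replaces these constant-wind assumptions with time-varying 3-D meteorology from CALMET.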
Procedia PDF Downloads 196125 Optimization of Metal Pile Foundations for Solar Power Stations Using Cone Penetration Test Data
Authors: Adrian Priceputu, Elena Mihaela Stan
Abstract:
Our research addresses a critical challenge in renewable energy: improving efficiency and reducing the costs associated with the installation of ground-mounted photovoltaic (PV) panels. The most commonly used foundation solution is metal piles - with various sections adapted to soil conditions and the structural model of the panels. However, direct foundation systems are also sometimes used, especially in brownfield sites. Although metal micropiles are generally the first design option, understanding and predicting their bearing capacity, particularly under varied soil conditions, remains an open research topic. CPT Method and Current Challenges: Metal piles are favored for PV panel foundations due to their adaptability, but existing design methods rely heavily on costly and time-consuming in situ tests. The Cone Penetration Test (CPT) offers a more efficient alternative by providing valuable data on soil strength, stratification, and other key characteristics with reduced resources. During the test, a cone-shaped probe is pushed into the ground at a constant rate. Sensors within the probe measure the resistance of the soil to penetration, divided into cone penetration resistance and shaft friction resistance. Despite some existing CPT-based design approaches for metal piles, these methods are often cumbersome and difficult to apply. They vary significantly due to soil type and foundation method, and traditional approaches like the LCPC method involve complex calculations and extensive empirical data. The method was developed by testing 197 piles on a wide range of ground conditions, but the tested piles were very different from the ones used for PV pile foundations, making the method less accurate and practical for steel micropiles. Project Objectives and Methodology: Our research aims to develop a calculation method for metal micropile foundations using CPT data, simplifying the complex relationships involved. 
The goal is to estimate the pullout bearing capacity of piles without additional laboratory tests, streamlining the design process. To achieve this, a case study site was selected that will serve for the development of an 80 ha solar power station. Four testing locations were chosen, spread throughout the site. At each location, two types of steel profiles (H160 and C100) were embedded into the ground at various depths (1.5 m and 2.0 m). The piles were tested for pullout capacity under natural and inundated soil conditions. CPT tests conducted nearby served as calibration points. The results were used to develop a preliminary equation for estimating pullout capacity. Future Work: The next phase involves validating and refining the proposed equation on additional sites by comparing CPT-based forecasts with in situ pullout tests. This validation will enhance the accuracy and reliability of the method, potentially transforming the foundation design process for PV panels. Keywords: cone penetration test, foundation optimization, solar power stations, steel pile foundations
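The calibration step described above can be sketched as a simple fit of measured pullout capacity against a CPT-derived shaft term. The correlation form (capacity proportional to average sleeve friction times shaft area), the profile perimeter and all test values below are invented assumptions, not the authors' data or their final equation:

```python
import numpy as np

# Hedged sketch of a CPT-based pullout correlation:
#   pullout ~ alpha * (avg sleeve friction fs) * (pile perimeter) * (embedment depth)
# with alpha fitted to in situ pullout tests. All numbers are fictitious.
fs = np.array([45.0, 60.0, 80.0, 110.0])                # kPa, avg sleeve friction per location
perimeter = 0.55                                        # m, e.g. an H160-type profile (assumed)
depth = np.array([1.5, 2.0, 1.5, 2.0])                  # m, embedment depths tested
pullout_measured = np.array([14.0, 25.0, 25.5, 47.0])   # kN, fictitious pullout test results

shaft_term = fs * perimeter * depth                     # kN, fs times shaft area
# Least-squares fit through the origin: alpha = sum(x*y) / sum(x*x)
alpha = float(shaft_term @ pullout_measured / (shaft_term @ shaft_term))
print(f"fitted adhesion factor alpha = {alpha:.2f}")
print("predicted pullout (kN):", np.round(alpha * shaft_term, 1))
```

In practice the fitted factor would differ between natural and inundated conditions and between the H160 and C100 profiles, which is exactly what the multi-location test program is designed to resolve.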
Procedia PDF Downloads 53124 Degradation of Diclofenac in Water Using FeO-Based Catalytic Ozonation in a Modified Flotation Cell
Authors: Miguel A. Figueroa, José A. Lara-Ramos, Miguel A. Mueses
Abstract:
Pharmaceutical residues are a class of emerging contaminants of anthropogenic origin that are present in a myriad of waters with which human beings interact daily and are starting to affect the ecosystem directly. Conventional waste-water treatment systems are not capable of degrading these pharmaceutical effluents because their designs cannot handle the intermediate products and biological effects occurring during treatment. That is why it is necessary to hybridize conventional waste-water systems with non-conventional processes. In the specific case of an ozonation process, its efficiency depends strongly on a good dispersion of ozone, long interaction times between the gas and liquid phases, and the size of the ozone bubbles formed throughout the reaction system. To improve these parameters, the use of a modified flotation cell has recently been proposed as a reactive system; flotation cells are used at an industrial level to keep particles in suspension and to spread gas bubbles through the reactor volume at a high rate. The objective of the present work is the development of a mathematical model that can closely predict the kinetic rates of the reactions taking place in the flotation cell at an experimental scale, by identifying proper reaction mechanisms that take into account the modified chemical and hydrodynamic factors in the FeO-catalyzed ozonation of diclofenac aqueous solutions in a flotation cell. The methodology comprises three steps: first, an experimental phase in which a modified flotation cell reactor is used to analyze the effects of ozone concentration and catalyst loading on the degradation of diclofenac aqueous solutions. The performance is evaluated through an ozone utilization index, which relates the amount of ozone supplied to the system per milligram of degraded pollutant.
Next is a theoretical phase in which the reaction mechanisms taking place during the experiments are identified and proposed, detailing the multiple direct and indirect reactions the system goes through. Finally, a kinetic model is obtained that mathematically represents the reaction mechanisms, with adjustable parameters that can be fitted to the experimental results to give the model a proper physical meaning. The expected result is a robust reaction rate law that can simulate the improved mineralization of diclofenac in water using the modified flotation cell reactor. By means of this methodology, the following results were obtained: a robust reaction pathway mechanism showcasing the intermediates, free radicals and products of the reaction; optimal values of the reaction rate constants, which yielded Hatta numbers lower than 3 for the modeled system; degradation percentages of 100%; and a TOC (total organic carbon) removal of 69.9%, requiring only an optimal FeO catalyst loading of 0.3 g/L. These results showed that a flotation cell can be used as a reactor in ozonation, catalytic ozonation and photocatalytic ozonation processes, since it produces high reaction rate constants and reduces mass transfer limitations (Ha < 3) by producing microbubbles and maintaining a good catalyst distribution. Keywords: advanced oxidation technologies, iron oxide, emergent contaminants, AOTs intensification
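The parameter-fitting step can be illustrated with a minimal pseudo-first-order sketch: fit an apparent rate constant from a decay curve, then compute the ozone utilization index defined above. The concentration series and the ozone dose are invented for illustration, not the study's measurements:

```python
import numpy as np

# Illustrative sketch (fictitious data): pseudo-first-order fit of diclofenac
# decay, C(t) = C0 * exp(-k t), plus the ozone utilization index described
# above (mg O3 supplied per mg of pollutant degraded).
t = np.array([0.0, 5.0, 10.0, 20.0, 30.0])       # min
C = np.array([30.0, 18.2, 11.1, 4.1, 1.5])       # mg/L diclofenac (invented)

# Linearize: ln(C0/C) = k t  ->  least-squares slope through the origin
y = np.log(C[0] / C)
k = float(t @ y / (t @ t))                       # 1/min
print(f"apparent rate constant k = {k:.3f} 1/min")

ozone_supplied = 120.0                           # mg O3 fed per litre treated (assumed)
degraded = C[0] - C[-1]                          # mg/L of pollutant degraded
index = ozone_supplied / degraded
print(f"ozone utilization index = {index:.1f} mg O3 per mg removed")
```

A lower index means the reactor uses its ozone more efficiently, which is the figure of merit the modified flotation cell is meant to improve.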
Procedia PDF Downloads 111123 Role of Toll Like Receptor-2 in Female Genital Tuberculosis Disease Infection and Its Severity
Authors: Swati Gautam, Salman Akhtar, S. P. Jaiswar, Amita Jain
Abstract:
Background: FGTB is now a major global health problem, mostly in developing countries including India. In humans, Mycobacterium tuberculosis (M.tb) is the causative agent of the infection. A high index of suspicion is required for early diagnosis due to the asymptomatic presentation of FGTB disease. Toll-Like Receptor-2 (TLR-2) is one of the receptors on macrophages that mediate the host's immune response to M.tb. The expression of TLR-2 on macrophages is important in determining the fate of the innate immune response to M.tb. TLR-2 plays a dual role: on the one hand, its high expression on macrophages worsens the outcome of infection; on the other hand, it keeps M.tb in its dormant stage and avoids activation of M.tb from the latent phase. Single Nucleotide Polymorphisms (SNPs) of the TLR-2 gene play an important role in susceptibility to TB among different populations and, subsequently, in the development of infertility. Methodology: This case-control study was done in the Department of Obstetrics and Gynaecology and the Department of Microbiology at King George's Medical University, U.P., Lucknow, India. A total of 300 subjects (150 cases and 150 controls) were enrolled in the study, after fulfilling the given inclusion and exclusion criteria. Inclusion criteria: age 20-35 years, menstrual irregularities, positive on Acid-Fast Bacilli (AFB), TB-PCR, or (LJ/MGIT) culture in Endometrial Aspiration (EA). Exclusion criteria: active Koch's disease, on ATT, PCOS, endometriosis or fibroids, positive for Gonococcus or Chlamydia. Blood samples were collected in EDTA tubes from cases and healthy control women (HCW), and genomic DNA extraction was carried out by the salting-out method. Genotyping of TLR-2 genetic variants (Arg753Gln and Arg677Trp) was performed using the amplification refractory mutation system (ARMS) PCR technique. PCR products were analyzed by electrophoresis on 1.2% agarose gel and visualized by gel-doc.
Statistical analysis of the data was performed using SPSS 16.3 software, computing odds ratios (OR) with 95% CI. Linkage Disequilibrium (LD) analysis was done with the SNPStats online software. Results: For the TLR-2 (Arg753Gln) polymorphism, a significant risk of FGTB was observed with the GG homozygous mutant genotype (OR=13, CI=0.71-237.7, p=0.05) and the AG heterozygous mutant genotype (OR=13.7, CI=0.76-248.06, p=0.03); however, the G allele individually (OR=1.09, CI=0.78-1.52, p=0.67) was not associated with FGTB. For the TLR-2 (Arg677Trp) polymorphism, a significant association with FGTB was observed for the TT homozygous mutant genotype (OR=0.020, CI=0.001-0.341, p < 0.001), the CT heterozygous mutant genotype (OR=0.53, CI=0.33-0.86, p=0.014) and the T allele (OR=0.463, CI=0.32-0.66, p < 0.001). The TT mutant genotype was found only in FGTB cases, while the frequency of the CT heterozygote was higher in the control group than in the FGTB group; the CT genotype thus acted as a protective mutation against FGTB susceptibility. In the haplotype analysis of the TLR-2 genetic variants, four possible combinations (G-T, A-C, G-C, and A-T) were obtained. The frequency of the A-C haplotype was significantly higher in FGTB cases (0.32); the control group did not show the A-C haplotype, which was found only in FGTB cases. Conclusion: The study showed a significant association of both TLR-2 genetic variants with FGTB disease. Moreover, the presence of the specific associated genotypes/alleles suggests a link to disease severity and supports a clinical approach aimed at preventing extensive damage by the disease; it may also be helpful for early detection of the disease. Keywords: ARMS, EDTA, FGTB, TLR
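The odds-ratio-with-CI computation behind results such as "OR=13, CI=0.71-237.7" can be sketched from a 2x2 genotype table using Woolf's logit method. The counts below are invented for illustration; they are not the study's actual genotype frequencies:

```python
import math

# Sketch of an odds-ratio calculation with a 95% CI (Woolf's method).
# The 2x2 counts are fictitious, chosen only to demonstrate the arithmetic.
def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a = cases with the genotype, b = controls with it,
    c = cases without it, d = controls without it."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(a=10, b=20, c=5, d=40)
print(f"OR = {or_:.1f}, 95% CI {lo:.2f}-{hi:.2f}")
```

Note how small cell counts inflate the standard error of ln(OR), producing the very wide intervals seen in the abstract (e.g. 0.71-237.7) even when the point estimate is large.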
Procedia PDF Downloads 303122 Single Pass Design of Genetic Circuits Using Absolute Binding Free Energy Measurements and Dimensionless Analysis
Authors: Iman Farasat, Howard M. Salis
Abstract:
Engineered genetic circuits reprogram cellular behavior to act as living computers, with applications in detecting cancer, creating self-controlling artificial tissues, and dynamically regulating metabolic pathways. Phenomenological models are often used to simulate and design genetic circuit behavior towards a desired outcome. While such models assume that each circuit component's function is modular and independent, even small changes in a circuit (e.g. a new promoter, a change in transcription factor expression level, or even a new medium) can have significant effects on the circuit's function. Here, we use statistical thermodynamics to account for the several factors that control transcriptional regulation in bacteria, and experimentally demonstrate the model's accuracy across 825 measurements in several genetic contexts and hosts. We then employ our first-principles model to design, experimentally construct, and characterize a family of signal-amplifying genetic circuits (genetic OpAmps) that expand the dynamic range of cell sensors. To develop these models, we needed a new approach to measuring the in vivo binding free energies of transcription factors (TFs), a key ingredient of statistical thermodynamic models of gene regulation. We developed a new high-throughput assay to measure RNA polymerase and TF binding free energies, requiring the construction and characterization of only a few constructs followed by data analysis (Figure 1A). We experimentally verified the assay on 6 TetR-homolog repressors and a CRISPR/dCas9 guide RNA. We found that our binding free energy measurements quantitatively explain why changing TF expression levels alters circuit function. Altogether, by combining these measurements with our biophysical model of translation (the RBS Calculator) as well as other measurements (Figure 1B), our model can account for changes in TF binding sites, TF expression levels, circuit copy number, host genome size, and host growth rate (Figure 1C).
Model predictions correctly accounted for how these 8 factors control a promoter's transcription rate (Figure 1D). Using the model, we developed a design framework for engineering multi-promoter genetic circuits that greatly reduces the number of degrees of freedom (8 factors per promoter) to a single dimensionless unit. We propose the Ptashne (Pt) number to encapsulate the 8 co-dependent factors that control transcriptional regulation into a single number. Therefore, a single number controls a promoter's output rather than these 8 co-dependent factors, and designing a genetic circuit with N promoters requires specification of only N Pt numbers. We demonstrate how to design genetic circuits in Pt number space by constructing and characterizing 15 2-repressor OpAmp circuits that act as signal amplifiers when within an optimal Pt region. We experimentally show that OpAmp circuits using different TFs and TF expression levels will only amplify the dynamic range of input signals when their corresponding Pt numbers are within the optimal region. Thus, the use of the Pt number greatly simplifies genetic circuit design, which is particularly important as circuits employ more TFs to perform increasingly complex functions. Keywords: transcription factor, synthetic biology, genetic circuit, biophysical model, binding energy measurement
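The general flavor of a statistical thermodynamic promoter model — Boltzmann-weighted competition between RNA polymerase and a repressor for a promoter, against a background of non-specific genomic sites — can be sketched in a few lines. This is the textbook two-state form, not the authors' fitted model or their Pt-number formula; all copy numbers and binding energies below are illustrative assumptions:

```python
import math

# Toy statistical-thermodynamic sketch of repressor-regulated transcription.
# Weights and copy numbers are illustrative, not the paper's parameters.
def relative_transcription(n_tf, dG_tf_kT, n_rnap=1000, dG_rnap_kT=-5.0,
                           n_ns=4.6e6):
    """Probability that RNAP occupies the promoter, normalized to the
    repressor-free case. n_ns ~ number of non-specific genomic sites;
    binding energies are in units of kT."""
    w_rnap = (n_rnap / n_ns) * math.exp(-dG_rnap_kT)   # RNAP Boltzmann weight
    w_tf = (n_tf / n_ns) * math.exp(-dG_tf_kT)         # repressor weight
    p = w_rnap / (1 + w_rnap + w_tf)   # repressor competes for the promoter
    p0 = w_rnap / (1 + w_rnap)         # no repressor present
    return p / p0

for n_tf in (0, 10, 100, 1000):
    print(f"TF copies = {n_tf:5d} : relative output = "
          f"{relative_transcription(n_tf, dG_tf_kT=-12.0):.3f}")
```

Because the TF copy number and its binding free energy enter only through the single product w_tf, this toy form already hints at why co-dependent factors can be collapsed into one dimensionless quantity, as the Pt number does more generally.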
Procedia PDF Downloads 472121 Additional Opportunities of Forensic Medical Identification of Dead Bodies of Unkown Persons
Authors: Saule Mussabekova
Abstract:
A number of chemical elements widely present in nature are seldom found in people, and vice versa. This is a peculiarity of the accumulation of elements in the body and of their selective use regardless of widely varying parameters of the external environment. Microelemental identification of human hair, particularly from dead bodies, is a new step in the development of modern forensic medicine, which needs reliable criteria for identifying a person. Given the long-standing technogenic pressure in large industrial cities and the region-specific, multi-factor toxic effects of many industrial enterprises, it is important to assess the relevance and role of human hair research in evaluating the degree of deposition of specific pollutants. Hair is a highly sensitive biological indicator that allows assessment of the ecological situation and regionalization of large territories by geochemical methods. Besides, monitoring the concentrations of chemical elements in the regions of Kazakhstan provides an opportunity to use these data in the forensic medical identification of dead bodies of unknown persons. Methods based on determining the chemical composition of hair, with subsequent computer processing, allowed the data obtained to be compared with average values for sex and age, and revealed causally significant deviations. This makes it possible to preliminarily infer the region of residence of a person, concentrating police search efforts for people who are unaccounted for. It also allows targeted legal actions for further identification by creating a more optimal and strictly individual scheme of personal identity. Hair is a highly suitable material for forensic research, as it can be stored long-term without time limitations or special equipment.
Besides, the quantitative analysis of microelements correlates well with the level of environmental pollution, reflects occupational diseases and, with pinpoint accuracy, helps not only to diagnose the region of a person's temporary residence but also to establish the regions of his migration. Peculiarities of the elemental composition of human hair have been established for persons residing in particular territories of Kazakhstan, regardless of age and sex. Data on the average content of 29 chemical elements in the hair of the population in different regions of Kazakhstan have been systematized. For each region, coefficients of concentration of the studied elements in hair, relative to the overall average values, have been calculated. Groups of regions with a specific spectrum of elements have been identified; these elements accumulate in hair in quantities exceeding the average indexes. Our results showed significant differences in the concentrations of chemical elements among the studied groups and indicated that the population of Kazakhstan is exposed to different toxic substances, depending on the atmospheric emissions of the industrial enterprises dominating in each region. The performed research showed that the elemental composition of the hair of people residing in different regions of Kazakhstan reflects the technogenic spectrum of elements. Keywords: analysis of elemental composition of hair, forensic medical research of hair, identification of unknown dead bodies, microelements
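The region-profiling arithmetic described above reduces to a concentration coefficient per element: the regional mean hair content divided by the overall mean, flagging elements that exceed a threshold. The elements, values and threshold below are invented for illustration, not the study's 29-element dataset:

```python
import numpy as np

# Sketch of region profiling: concentration coefficient
#   Kc = (regional mean content in hair) / (overall mean content),
# flagging elements with Kc above a chosen threshold. Values are fictitious.
elements = ["Pb", "Zn", "Cu", "As"]
overall_mean = np.array([2.0, 180.0, 12.0, 0.10])       # ug/g in hair (invented)
regional_mean = {
    "mining region":   np.array([6.2, 210.0, 13.0, 0.31]),
    "agrarian region": np.array([1.8, 175.0, 11.5, 0.09]),
}

for region, means in regional_mean.items():
    kc = means / overall_mean
    flagged = [el for el, c in zip(elements, kc) if c > 2.0]
    print(f"{region}: Kc = {np.round(kc, 2)}, elements above 2x: {flagged}")
```

A body whose hair shows the flagged spectrum of one region but not another would, on this logic, be preliminarily assigned to the matching region before targeted identification work begins.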
Procedia PDF Downloads 141120 A Review on Cyberchondria Based on Bibliometric Analysis
Authors: Xiaoqing Peng, Aijing Luo, Yang Chen
Abstract:
Background: Cyberchondria, an "emerging risk" of the information era, is a new abnormal pattern characterized by excessive or repeated online searches for health-related information and escalating health anxiety, which endangers people's physical and mental health and poses a huge threat to public health. Objective: To explore and discuss the research status, hotspots and trends of Cyberchondria. Methods: Based on a total of 77 articles regarding "Cyberchondria" extracted from the Web of Science from inception to October 2019, the literature trends, countries, institutions and hotspots are analyzed by bibliometric analysis, and the concept definition of Cyberchondria, measurement instruments, relevant factors, and treatment and intervention are discussed as well. Results: Since "Cyberchondria" was first put forward in 2001, the last two decades have witnessed a noticeable increase in the amount of literature; during 2014-2019 the number of publications quadrupled dramatically, to 62 compared with only 15 before 2014, which shows that Cyberchondria has become a new theme and hot topic in recent years. The United States was the most active contributor with the largest number of publications (23), followed by England (11) and Australia (11), while the leading institutions were Baylor University (7) and the University of Sydney (7), followed by Florida State University (4) and the University of Manchester (4). The WoS categories "Psychiatry/Psychology" and "Computer/Information Science" were the areas of greatest influence. The concept definition of Cyberchondria is not completely unified internationally, but it is generally considered an abnormal behavioral pattern and emotional state and has been invoked to refer to the anxiety-amplifying effects of online health-related searches.
The first and most frequently cited scale for measuring the severity of Cyberchondria, the Cyberchondria Severity Scale (CSS), was developed in 2014; it conceptualized Cyberchondria as a multidimensional construct consisting of compulsion, distress, excessiveness, reassurance, and mistrust of medical professionals, the last of which was later shown to be unnecessary for the construct. Since then, Brazilian, German, Turkish, Polish and Chinese versions have been developed, improved and culturally adjusted, while the CSS was shortened to a simplified version (CSS-12) in 2019; all of these warrant further verification. The hotspots of Cyberchondria research mainly focus on relevant factors such as intolerance of uncertainty, anxiety sensitivity, obsessive-compulsive disorder, internet addiction, abnormal illness behavior, the Whiteley index, and problematic internet use, trying to clarify the roles played by "associated factors" and "anxiety-amplifying factors" in the development of Cyberchondria, in order to better understand the aetiological links and pathways in the relationships between hypochondriasis, health anxiety and online health-related searches. Although the treatment and intervention of Cyberchondria are still at the initial stage of exploration, there have been meaningful attempts to seek effective strategies from different angles, such as online psychological treatment, network technology management, health information literacy improvement and public health services. Conclusion: Research on Cyberchondria is in its infancy but deserves more attention. A conceptual consensus on Cyberchondria, a refined assessment tool, prospective studies conducted in various populations, and targeted treatments will be the main research directions in the near future. Keywords: cyberchondria, hypochondriasis, health anxiety, online health-related searches
Procedia PDF Downloads 122119 An Aptasensor Based on Magnetic Relaxation Switch and Controlled Magnetic Separation for the Sensitive Detection of Pseudomonas aeruginosa
Authors: Fei Jia, Xingjian Bai, Xiaowei Zhang, Wenjie Yan, Ruitong Dai, Xingmin Li, Jozef Kokini
Abstract:
Pseudomonas aeruginosa is a Gram-negative, aerobic, opportunistic human pathogen that is present in soil, water, and food. This microbe has been recognized as a representative food-borne spoilage bacterium that can lead to many types of infections. Considering the casualties and property loss caused by P. aeruginosa, the development of a rapid and reliable technique for its detection is crucial. The whole-cell aptasensor, an emerging biosensor using an aptamer as a capture probe that binds to the whole cell, has attracted much attention for food-borne pathogen detection due to its convenience and high sensitivity. Here, a low-field magnetic resonance imaging (LF-MRI) aptasensor for the rapid detection of P. aeruginosa was developed. The basic detection principle of the magnetic relaxation switch (MRSw) nanosensor lies in the 'T₂-shortening' effect of magnetic nanoparticles in NMR measurements. Briefly, the transverse relaxation time (T₂) of neighboring water protons is shortened when magnetic nanoparticles cluster due to cross-linking upon the recognition and binding of biological targets, or simply when the concentration of the magnetic nanoparticles increases. Such shortening is related both to the state change (aggregation or dissociation) and to the concentration change of the magnetic nanoparticles, and can be detected using NMR relaxometry or MRI scanners. In this work, two sizes of magnetic nanoparticles, 10 nm (MN₁₀) and 400 nm (MN₄₀₀) in diameter, were first separately immobilized with an anti-P. aeruginosa aptamer through 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC)/N-hydroxysuccinimide (NHS) chemistry, to capture and enrich P. aeruginosa cells. When incubated with the target, a 'sandwich' (MN₁₀-bacteria-MN₄₀₀) complex is formed, driven by the binding of MN₄₀₀ to P. aeruginosa through aptamer recognition, as well as the conjugate aggregation of MN₁₀ on the surface of P. aeruginosa.
Due to the different magnetic performance of the MN₁₀ and MN₄₀₀ in the magnetic field, caused by their different saturation magnetization, the MN₁₀-bacteria-MN₄₀₀ complex, as well as the unreacted MN₄₀₀ in the solution, can be quickly removed by magnetic separation; as a result, only unreacted MN₁₀ remain in the solution. The remaining MN₁₀, which are superparamagnetic and stable in a low magnetic field, serve as the signal readout for T₂ measurement. Under optimum conditions, the LF-MRI platform provides both image analysis and quantitative detection of P. aeruginosa, with a detection limit as low as 100 cfu/mL. The feasibility and specificity of the aptasensor are demonstrated by detecting real food samples and validated using plate counting methods. Requiring only two steps and less than 2 hours, this robust aptasensor can detect P. aeruginosa over a wide linear range from 3.1×10² cfu/mL to 3.1×10⁷ cfu/mL, which is superior to the conventional plate counting method and other molecular biology assays. Moreover, the aptasensor has the potential to detect other bacteria or toxins by switching to suitable aptamers. Considering its excellent accuracy, feasibility, and practicality, the whole-cell aptasensor provides a promising platform for quick, direct and accurate determination of food-borne pathogens at the cell level. Keywords: magnetic resonance imaging, meat spoilage, P. aeruginosa, transverse relaxation time
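The ‘T₂-shortening’ principle underlying the MRSw readout can be stated compactly. The following is a sketch in standard NMR notation (the symbols are conventional, not taken from the abstract): the transverse signal decays monoexponentially, and the observed relaxation rate grows with the nanoparticle concentration through an effective transverse relaxivity, which increases upon aggregation:

```latex
% Monoexponential transverse decay of the NMR signal
S(t) = S_0 \, e^{-t/T_2}
% Observed relaxation rate: water baseline plus a nanoparticle term;
% clustering raises the effective relaxivity r_2, so T_2 is shortened
\frac{1}{T_2} = \frac{1}{T_{2,\mathrm{water}}} + r_2 \, C_{\mathrm{NP}}
```

In the assay described above, magnetic separation removes the sandwich complexes, so the measured T₂ of the remaining MN₁₀ suspension tracks how much target was captured.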
Procedia PDF Downloads 151
118 Imaging Spectrum of Central Nervous System Tuberculosis on Magnetic Resonance Imaging: Correlation with Clinical and Microbiological Results
Authors: Vasundhara Arora, Anupam Jhobta, Suresh Thakur, Sanjiv Sharma
Abstract:
Aims and Objectives: Intracranial tuberculosis (TB) is one of the most devastating manifestations of TB and a challenging public health issue of considerable importance and magnitude worldwide. This study elaborates on the imaging spectrum of neurotuberculosis on magnetic resonance imaging (MRI) in 29 clinically suspected cases from a tertiary care hospital. Materials and Methods: A prospective hospital-based evaluation of the MR imaging features of neurotuberculosis in 29 clinically suspected cases was carried out in the Department of Radio-diagnosis, Indira Gandhi Medical Hospital, from July 2017 to August 2018. MR images were obtained on a 1.5 T Magnetom Avanto machine and were analyzed to identify any abnormal meningeal enhancement or parenchymal lesions. Microbiological and biochemical CSF analysis was performed in radiologically suspected cases, and the results were compared with the imaging data. Clinical follow-up of the patients started on anti-tuberculous treatment was done to evaluate the response to treatment and clinical outcome. Results: Patients ranged in age from 1 year to 73 years, with a mean age at presentation of 11.5 years. No significant difference in the distribution of cerebral tuberculosis was noted between the two genders. Imaging findings of neurotuberculosis were varied and nonspecific, ranging from leptomeningeal enhancement and cerebritis to space-occupying lesions such as tuberculomas and tubercular abscesses. Complications in the form of hydrocephalus (n=7) and infarcts (n=9) were noted in some of these patients. All 29 patients showed radiological suspicion of CNS tuberculosis: meningitis alone was observed in 11 cases, tuberculomas alone in 4 cases, and meningitis with parenchymal tuberculomas in 11 cases. Tubercular abscess and cerebritis were observed in one case each, and tuberculous arachnoiditis was noted in one patient.
GeneXpert positivity was obtained in 11 of the 29 radiologically suspected patients; none of the patients showed culture positivity. The meningeal form of the disease alone showed the highest GeneXpert positivity rate (n=5), followed by the combination of meningeal and parenchymal forms of the disease (n=4). The parenchymal manifestation of the disease alone showed the lowest positivity rate (n=3) with GeneXpert testing. All 29 patients were started on anti-tubercular treatment based on radiological suspicion of the disease, with clinical improvement observed in 27 treated patients. Conclusions: In our study, a higher incidence of neurotuberculosis was noted in the paediatric population, with predominance of the meningeal form of the disease. GeneXpert positivity was low due to the paucibacillary nature of cerebrospinal fluid (CSF), with even lower positivity of CSF samples in the parenchymal form of the disease. MRI showed high accuracy in detecting CNS lesions in neurotuberculosis. Hence, it can be concluded that MRI plays a crucial role in diagnosis because of its inherent sensitivity and specificity and is an indispensable imaging modality. It caters to the need for early diagnosis, owing to the poor sensitivity of microbiological tests, more so in the parenchymal manifestation of the disease. Keywords: neurotuberculosis, tubercular abscess, tuberculoma, tuberculous meningitis
Procedia PDF Downloads 169
117 Implementation of Cord Blood-Derived Stem Cells in the Regeneration of Two Experimental Models: Carbon Tetrachloride and S. mansoni Induced Liver Fibrosis
Authors: Manal M. Kame, Zeinab A. Demerdash, Hanan G. El-Baz, Salwa M. Hassan, Faten M. Salah, Wafaa Mansour, Olfat Hammam
Abstract:
Cord blood (CB) derived unrestricted somatic stem cells (USSCs), with their multipotentiality, hold great promise in liver regeneration. This work aims to evaluate the therapeutic potential of USSCs in two experimental models of chronic liver injury, induced either by S. mansoni infection in BALB/c mice or by CCl4 injection in hamsters. Isolation, propagation, and characterization of USSCs from CB samples were performed. USSCs were induced to differentiate into osteoblasts, adipocytes and hepatocyte-like cells. Cells of the third passage were transplanted in two models of liver fibrosis: (1) CCl4 hamster model: twenty hamsters were induced to liver fibrosis by repeated i.p. injection of 100 μl CCl4/hamster for 8 weeks. This model comprised 10 hamsters with liver fibrosis treated with i.h. injection of 3×10⁶ USSCs (USSCs transplanted group), 10 hamsters with liver fibrosis (pathological control group), and 10 hamsters with healthy livers (normal control group). (2) Chronic murine S. mansoni model: twenty mice were induced to liver fibrosis with S. mansoni cercariae (60 cercariae/mouse) using the tail immersion method and left for 12 weeks. In this model, 10 mice with liver fibrosis were transplanted with i.v. injection of 1×10⁶ USSCs (USSCs transplanted group); the other two groups were designed as in the hamster model. Animals were sacrificed 12 weeks after USSCs transplantation, and their liver sections were examined for human hepatocyte-like cells by immunohistochemistry staining. Moreover, liver sections were examined for fibrosis level, and fibrotic indices were calculated. Sera of the sacrificed animals were tested for liver functions. CB USSCs, with fibroblast-like morphology, expressed high levels of CD44, CD90, CD73 and CD105 and were negative for CD34, CD45, and HLA-DR. USSCs showed high expression of transcripts for Oct4 and Sox2 and were differentiated in vitro into osteoblasts and adipocytes.
In both animal models, in vitro induced hepatocyte-like cells were confirmed by cytoplasmic expression of glycogen, alpha-fetoprotein, and cytokeratin 18. Livers of the USSCs transplanted group showed engraftment with human hepatocyte-like cells, as proved by cytoplasmic expression of human alpha-fetoprotein, cytokeratin 18, and OV6. In addition, livers of this group showed less fibrosis than those of the pathological control group. Liver functions, in the form of serum AST and ALT levels and serum total bilirubin, were significantly lower in the USSCs transplanted group than in the pathological control group (p < 0.001). Moreover, the fibrotic index was significantly lower (p < 0.001) in the USSCs transplanted group than in the pathological control group. In addition, liver sections of the mice injected i.v. with 1×10⁶ USSCs, stained with either H&E or Sirius red, showed diminished granuloma size and a relative decrease in hepatic fibrosis. Our experimental liver fibrosis models transplanted with CB-USSCs showed liver engraftment with human hepatocyte-like cells as well as signs of liver regeneration in the form of improvement in liver function assays and fibrosis level. These data suggest that human CB-derived USSCs are multipotent stem cells with great potential in regenerative medicine and strengthen the concept of cellular therapy for the treatment of liver fibrosis. Keywords: cord blood, liver fibrosis, stem cells, transplantation
Procedia PDF Downloads 308
116 A Proposed Treatment Protocol for the Management of Pars Interarticularis Pathology in Children and Adolescents
Authors: Paul Licina, Emma M. Johnston, David Lisle, Mark Young, Chris Brady
Abstract:
Background: Lumbar pars pathology is a common cause of pain in the growing spine. It can be seen in young athletes participating in at-risk sports and can affect sporting performance and long-term health due to its resistance to traditional management. There is currently a lack of consensus on the classification and treatment of pars injuries. Previous systems used CT to stage pars defects but could not assess early stress reactions. A modified classification is proposed that considers findings on MRI, significantly improving early treatment guidance. The treatment protocol is designed for patients aged 5 to 19 years. Method: Clinical screening identifies patients with a low, medium, or high index of suspicion for lumbar pars injury using patient age, sport participation and pain characteristics. MRI of the at-risk cohort enables augmentation of the existing CT-based classification while avoiding ionising radiation. Patients are classified into five categories based on MRI findings. A type 0 lesion (stress reaction) is present when CT is normal and MRI shows high signal change (HSC) in the pars/pedicle on T2 images. A type 1 lesion represents the ‘early defect’ CT classification. The group previously referred to as a 'progressive stage' defect on CT can be split into 2A and 2B categories: 2As have HSC on MRI, whereas 2Bs do not. This distinction is important with regard to healing potential. Type 3 lesions are terminal stage defects on CT, characterised by pseudarthrosis; MRI shows no HSC. Results: Stress reactions (type 0) and acute fractures (types 1 and 2A) can heal and are treated in a custom-made hard brace for 12 weeks. The brace is initially worn 23 hours per day. At three weeks, patients commence basic core rehabilitation. At six weeks, in the absence of pain, the brace is removed for sleeping, and exercises are progressed to positions of daily living. Patients with continued pain remain braced 23 hours per day without exercise progression until they become symptom-free.
At nine weeks, patients commence supervised exercises out of the brace for 30 minutes each day. This allows them to re-learn muscular control without the rigid support of the brace. At 12 weeks, bracing ceases and MRI is repeated. For patients with near or complete resolution of bony oedema and healing of any cortical defect, rehabilitation focuses on strength and conditioning and sport-specific exercise for the full return to activity. The length of this final stage is approximately nine weeks but depends on factors such as development and level of sports participation. If significant HSC remains on MRI, CT is considered to definitively assess cortical defect healing. For these patients, return to high-risk sports is delayed for up to three months. Chronic defects (types 2B and 3) cannot heal and are not braced; rehabilitation follows traditional protocols. Conclusion: Appropriate clinical screening and imaging with MRI can identify pars pathology early. In those with potential for healing, we propose hard bracing and appropriate rehabilitation as part of a multidisciplinary management protocol. The validity of this protocol will be tested in future studies. Keywords: adolescents, MRI classification, pars interarticularis, treatment protocol
Procedia PDF Downloads 152
115 Artificial Intelligence for Traffic Signal Control and Data Collection
Authors: Reggie Chandra
Abstract:
Traffic accidents and traffic signal optimization are correlated. However, 70-90% of the traffic signals across the USA are not synchronized. The reason behind that is insufficient resources to create and implement timing plans. In this work, we will discuss the use of a breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and collect 24/7/365 accurate traffic data using a vehicle detection system. We will discuss recent advances in Artificial Intelligence technology, how AI works in vehicle, pedestrian, and bike data collection and in creating timing plans, and the best workflow for doing so. Apart from that, this paper will showcase how Artificial Intelligence makes signal timing affordable. We will introduce a technology that uses Convolutional Neural Networks (CNN) and deep learning algorithms to detect, collect data, develop timing plans and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes in the visual cortex. A neural net is modeled after the human brain. It consists of millions of densely connected processing nodes. It is a form of machine learning where the neural net learns to recognize vehicles through training, which is called deep learning. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns, generate an unlimited number of timing plans, and thus improve vehicle flow. Convolutional Neural Networks not only outperform other detection algorithms but also, in cases such as classifying objects into fine-grained categories, outperform humans. Safety is of primary importance to traffic professionals, but they don't have the studies or data to support their decisions. Currently, one-third of transportation agencies do not collect pedestrian and bike data.
We will discuss how the use of Artificial Intelligence for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it provides traffic engineers with tools that allow them to unleash their potential, instead of dealing with constant complaints, a snapshot of limited handpicked data, and multiple systems requiring additional work for adaptation. The methodologies used and proposed in the research include a camera model identification method based on deep Convolutional Neural Networks. The proposed application was evaluated on our data sets, acquired across a variety of daily real-world road conditions, and compared with the performance of the commonly used methods, which require collecting data by counting, evaluating and adapting it, running it through well-established algorithms, and then deploying it to the field. This work explores themes such as how technologies powered by Artificial Intelligence can benefit your community and how to translate the complex and often overwhelming benefits into a language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that Artificial Intelligence brings to traffic signal control and data collection are unsurpassed. Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal
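The abstract above rests on the convolution operation at the heart of a CNN: a small learned filter is slid across the image, and strong responses mark visual features such as the edges of a vehicle. A minimal sketch of that core operation in NumPy (a toy image and hand-picked edge kernel, not the paper's trained network):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the core operation of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Elementwise multiply the kernel against one image patch and sum
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A vertical-edge kernel responds strongly where intensity jumps
# left-to-right -- e.g., the boundary of a vehicle against dark road.
image = np.array([
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
], dtype=float)
edge_kernel = np.array([[-1, 1],
                        [-1, 1]], dtype=float)
response = np.maximum(conv2d(image, edge_kernel), 0)  # ReLU activation
```

In a real detector, many such kernels are learned from labeled training data rather than hand-written, and stacked layers of them build up from edges to whole-vehicle features.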
Procedia PDF Downloads 168
114 A Bibliometric Analysis of Ukrainian Research Articles on SARS-COV-2 (COVID-19) in Compliance with the Standards of Current Research Information Systems
Authors: Sabina Auhunas
Abstract:
These days in Ukraine, Open Science is developing rapidly for the benefit of scientists of all branches, providing an opportunity to take a closer look at studies by foreign scientists, as well as to deliver their own scientific data to national and international journals. However, when it comes to the generalization of data on the scientific activities of Ukrainian scientists, these data are often integrated into E-systems that operate on inconsistent and barely related information sources. In order to resolve these issues, developed countries productively use E-systems designed to store and manage research data, such as Current Research Information Systems, which enable combining uncompiled data obtained from different sources. An algorithm for selecting SARS-CoV-2 research articles was designed, by means of which we collected the set of papers published by Ukrainian scientists and uploaded by August 1, 2020. The resulting metadata (document type, open access status, citation count, h-index, most cited documents, international research funding, author counts, the bibliographic relationship of journals) were taken from the Scopus and Web of Science databases. The study also considered information on COVID-19/SARS-CoV-2-related documents published from December 2019 to September 2020, taken directly from documents published by authors with a territorial affiliation to Ukraine. These databases make it possible to obtain the information necessary for bibliometric analysis, including details such as copyright that may not be available in other databases (e.g., Science Direct). Search criteria and results for each online database were considered according to the WHO classification of the virus and the disease caused by this virus and are represented in Table 1. First, we identified 89 research papers, which provided the final data set after consolidation and removal of duplicates; however, only 56 papers were used for the analysis.
The total number of documents retrieved from the WoS database was 21,641 (48 of them affiliated to Ukraine), and from the Scopus database 32,478 (41 of them affiliated to Ukraine). In the publication activity of Ukrainian scientists, the following areas prevailed: Education, educational research (9 documents, 20.58%); Social Sciences, interdisciplinary (6 documents, 11.76%); and Economics (4 documents, 8.82%). The highest publication activity by institution type was reported for the Ministry of Education and Science of Ukraine (36% of the published scientific papers, or 7 documents), followed by Danylo Halytsky Lviv National Medical University (5 documents, 15%) and the P. L. Shupyk National Medical Academy of Postgraduate Education (4 documents, 12%). Research activities by Ukrainian scientists were funded mainly by five entities: the Belgian Development Cooperation, the National Institutes of Health (NIH, U.S.), the United States Department of Health & Human Services, a grant from the Whitney and Betty MacMillan Center for International and Area Studies at Yale, and a grant from the Yale Women Faculty Forum. Based on the results of the analysis, we obtained a set of published articles and preprints to be assessed on a variety of features in upcoming studies, including citation count, most cited documents, the bibliographic relationship of journals, and reference linking. Further research on the development of the national scientific E-database continues using brand new analytical methods. Keywords: content analysis, COVID-19, scientometrics, text mining
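The consolidation step described above (merging Scopus and WoS records, then removing duplicates) can be sketched in a few lines. This is an illustrative sketch, not the authors' actual algorithm; matching on a normalized DOI or title is one common heuristic for cross-database deduplication:

```python
def consolidate(records):
    """Merge records from multiple databases, dropping duplicates.
    A record counts as a duplicate if it shares a DOI (case-insensitive)
    or a whitespace-normalized title with one already kept."""
    seen_dois, seen_titles, merged = set(), set(), []
    for rec in records:
        doi = (rec.get("doi") or "").lower()
        title = " ".join(rec.get("title", "").lower().split())
        if (doi and doi in seen_dois) or (title and title in seen_titles):
            continue  # already have this paper from the other database
        if doi:
            seen_dois.add(doi)
        if title:
            seen_titles.add(title)
        merged.append(rec)
    return merged

# Hypothetical records: the first Scopus entry duplicates the WoS entry.
wos = [{"doi": "10.1/abc", "title": "COVID-19 in Ukraine"}]
scopus = [{"doi": "10.1/ABC", "title": "COVID-19 in Ukraine"},
          {"doi": "10.1/xyz", "title": "Another Study"}]
final = consolidate(wos + scopus)
```

In practice, fuzzy title matching and author comparison are often added, since DOIs can be missing or formatted differently across databases.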
Procedia PDF Downloads 112
113 Flexible Ethylene-Propylene Copolymer Nanofibers Decorated with Ag Nanoparticles as Effective 3D Surface-Enhanced Raman Scattering Substrates
Authors: Yi Li, Rui Lu, Lianjun Wang
Abstract:
With the rapid development of the chemical industry, the consumption of volatile organic compounds (VOCs) has increased extensively. In the process of VOC production and application, plenty of them have been released into the environment. As a result, this has led to pollution problems not only in soil and groundwater but also harm to human beings. Thus, it is important to develop a sensitive and cost-effective analytical method for trace VOC detection in the environment. Surface-enhanced Raman spectroscopy (SERS), one of the most sensitive optical analytical techniques, with rapid response, pinpoint accuracy and noninvasive detection, has been widely used for ultratrace analysis. Based on plasmon resonance on the nanoscale metallic surface, SERS technology can detect even single molecules due to abundant nanogaps (i.e., 'hot spots') on the nanosubstrate. In this work, self-supported flexible silver nitrate (AgNO3)/ethylene-propylene copolymer (EPM) hybrid nanofibers were fabricated by electrospinning. After an in-situ chemical reduction using ice-cold sodium borohydride as the reducing agent, numerous silver nanoparticles were formed on the nanofiber surface. By adjusting the reduction time and AgNO3 content, the morphology and dimension of the silver nanoparticles could be controlled. According to the principles of solid-phase extraction, hydrophobic substances are more likely to partition into the hydrophobic EPM membrane in an aqueous environment, while water and other polar components are excluded from the analytes. Through this enrichment by the EPM fibers, the number of hydrophobic molecules located on the 'hot spots' generated by the criss-crossed nanofibers is greatly increased, which further enhances SERS signal intensity. The as-prepared Ag/EPM hybrid nanofibers were first employed to detect a common SERS probe molecule (p-aminothiophenol), with a detection limit down to 10⁻¹² M, demonstrating excellent SERS performance.
To further study the application of the fabricated substrate for monitoring hydrophobic substances in water, several typical VOCs, such as benzene, toluene and p-xylene, were selected as model compounds. The results showed that the characteristic peaks of these target analytes in the mixed aqueous solution could be distinguished even at a concentration of 10⁻⁶ M after a multi-peak Gaussian fitting process, including C-H bending (850 cm⁻¹) and C-C ring stretching (1581 cm⁻¹, 1600 cm⁻¹) of benzene; C-H bending (844 cm⁻¹, 1151 cm⁻¹), C-C ring stretching (1001 cm⁻¹) and CH3 bending vibration (1377 cm⁻¹) of toluene; and C-H bending (829 cm⁻¹) and C-C stretching (1614 cm⁻¹) of p-xylene. The SERS substrate has remarkable advantages, combining the enrichment capacity of EPM with the Raman enhancement of the Ag nanoparticles. Meanwhile, the huge specific surface area resulting from electrospinning is beneficial for increasing the number of adsorption sites and promotes 'hot spot' formation. In summary, this work shows powerful potential for rapid, on-site and accurate detection of trace VOCs using a portable Raman spectrometer. Keywords: electrospinning, ethylene-propylene copolymer, silver nanoparticles, SERS, VOCs
Procedia PDF Downloads 159
112 The Impact of ChatGPT on the Healthcare Domain: Perspectives from Healthcare Majors
Authors: Su Yen Chen
Abstract:
ChatGPT has shown both strengths and limitations in clinical, educational, and research settings, raising important concerns about accuracy, transparency, and ethical use. Despite an improved understanding of user acceptance and satisfaction, there is still a gap in how general AI perceptions translate into practical applications within healthcare. This study focuses on examining the perceptions of ChatGPT's impact among 266 healthcare majors in Taiwan, exploring its implications for their career development, as well as its utility in clinical practice, medical education, and research. By employing a structured survey with precisely defined subscales, this research aims to probe the breadth of ChatGPT's applications within healthcare, assessing both the perceived benefits and the challenges it presents. Additionally, to further enhance the comprehensiveness of our methodology, we have incorporated qualitative data collection methods, which provide complementary insights to the quantitative findings. The findings from the survey reveal that perceptions and usage of ChatGPT among healthcare majors vary significantly, influenced by factors such as its perceived utility, risk, novelty, and trustworthiness. Graduate students and those who perceive ChatGPT as more beneficial and less risky are particularly inclined to use it more frequently. This increased usage is closely linked to significant impacts on personal career development. Furthermore, ChatGPT's perceived usefulness and novelty contribute to its broader impact within the healthcare domain, suggesting that both innovation and practical utility are key drivers of acceptance and perceived effectiveness in professional healthcare settings. Trust emerges as an important factor, especially in clinical settings where the stakes are high. The trust that healthcare professionals place in ChatGPT significantly affects its integration into clinical practice and influences outcomes in medical education and research. 
The reliability and practical value of ChatGPT are thus critical for its successful adoption in these areas. However, an interesting paradox arises with regard to ease of use. While making ChatGPT more user-friendly is generally seen as beneficial, it also raises concerns among users who have lower levels of trust and perceive higher risks associated with its use. This complex interplay between ease of use and safety concerns necessitates a careful balance, highlighting the need for robust security measures and clear, transparent communication about how AI systems work and their limitations. The study suggests several strategic approaches to enhance the adoption and integration of AI in healthcare. These include targeted training programs for healthcare professionals to increase familiarity with AI technologies, reduce perceived risks, and build trust. Ensuring transparency and conducting rigorous testing are also vital to foster trust and reliability. Moreover, comprehensive policy frameworks are needed to guide the implementation of AI technologies, ensuring high standards of patient safety, privacy, and ethical use. These measures are crucial for fostering broader acceptance of AI in healthcare, as the study contributes to enriching the discourse on AI's role by detailing how various factors affect its adoption and impact. Keywords: ChatGPT, healthcare, survey study, IT adoption, behaviour, application, concerns
Procedia PDF Downloads 27
111 Generating Individualized Wildfire Risk Assessments Utilizing Multispectral Imagery and Geospatial Artificial Intelligence
Authors: Gus Calderon, Richard McCreight, Tammy Schwartz
Abstract:
Forensic analysis of community wildfire destruction in California has shown that reducing or removing flammable vegetation in proximity to buildings and structures is one of the most important wildfire defenses available to homeowners. State laws specify the requirements for homeowners to create and maintain defensible space around all structures. Unfortunately, this decades-long effort had limited success due to noncompliance and minimal enforcement. As a result, vulnerable communities continue to experience escalating human and economic costs along the wildland-urban interface (WUI). Quantifying vegetative fuels at both the community and parcel scale requires detailed imaging from an aircraft with remote sensing technology to reduce uncertainty. FireWatch has been delivering high spatial resolution (5” ground sample distance) wildfire hazard maps annually to the community of Rancho Santa Fe, CA, since 2019. FireWatch uses a multispectral imaging system mounted onboard an aircraft to create georeferenced orthomosaics and spectral vegetation index maps. Using proprietary algorithms, the vegetation type, condition, and proximity to structures are determined for 1,851 properties in the community. Secondary data processing combines object-based classification of vegetative fuels, assisted by machine learning, to prioritize mitigation strategies within the community. The remote sensing data for the 10 sq. mi. community is divided into parcels and sent to all homeowners in the form of defensible space maps and reports. Follow-up aerial surveys are performed annually using repeat station imaging of fixed GPS locations to address changes in defensible space, vegetation fuel cover, and condition over time. These maps and reports have increased wildfire awareness and mitigation efforts from 40% to over 85% among homeowners in Rancho Santa Fe. 
To assist homeowners fighting increasing insurance premiums and non-renewals, FireWatch has partnered with Black Swan Analytics, LLC, to leverage the multispectral imagery and increase homeowners’ understanding of wildfire risk drivers. For this study, a subsample of 100 parcels was selected to gain a comprehensive understanding of wildfire risk and the elements that can be mitigated. Geospatial data from FireWatch’s defensible space maps were combined with Black Swan’s patented approach using 39 other risk characteristics into a 4score Report. The 4score Report helps property owners understand risk sources and potential mitigation opportunities by assessing four categories of risk: fuel sources, ignition sources, susceptibility to loss, and hazards to fire protection efforts (FISH). This study has shown that susceptibility to loss is the category on which residents and property owners must focus their efforts. The 4score Report also provides a tool to measure the impact of homeowner actions on risk levels over time. Resiliency is the only solution to breaking the cycle of community wildfire destruction, and it starts with high-quality data and education. Keywords: defensible space, geospatial data, multispectral imaging, Rancho Santa Fe, susceptibility to loss, wildfire risk
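The spectral vegetation index maps mentioned above are typically derived per pixel from multispectral band ratios. As an illustrative sketch (NDVI is a common such index; the abstract does not specify which index FireWatch uses, and the band values below are invented toy data):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from multispectral bands.
    Values near +1 indicate dense green vegetation; near 0, bare ground."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero

# Toy 2x2 scene: column 0 holds healthy-vegetation pixels,
# column 1 holds bare-soil pixels.
nir_band = np.array([[0.50, 0.30], [0.45, 0.28]])
red_band = np.array([[0.08, 0.25], [0.10, 0.27]])
index_map = ndvi(nir_band, red_band)
# Crude flag for vegetated (fuel-bearing) pixels near structures
flammable = index_map > 0.3
```

A parcel-scale workflow would then intersect such a flag raster with parcel boundaries and structure footprints to score defensible-space compliance.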
Procedia PDF Downloads 106
110 A Rare Case of Dissection of Cervical Portion of Internal Carotid Artery, Diagnosed Postpartum
Authors: Bidisha Chatterjee, Sonal Grover, Rekha Gurung
Abstract:
Postpartum dissection of the internal carotid artery is a relatively rare condition and is considered an underlying aetiology in 5% to 25% of strokes under the age of 30 to 45 years. However, 86% of these cases recover completely and 14% have mild focal neurological symptoms. Prognosis is generally good with early intervention. The quoted risk of a repeat carotid artery dissection in subsequent pregnancies is less than 2%. A 36-year-old Caucasian primipara presented on postnatal day one after forceps delivery with tachycardia. In the intrapartum period she had a history of prolonged rupture of membranes, developed intrapartum sepsis, and was treated with antibiotics. Postpartum ECG showed septal-inferior T wave inversion and a troponin level of 19. An echocardiogram subsequently ruled out postpartum cardiomyopathy. Repeat ECG showed improvement of the previous changes, and in the absence of symptoms no intervention was warranted. On day 4 post-delivery, she developed a droopy right eyelid, pain around the right eye and itching in the right ear. On examination, she had right-sided ptosis and unequal pupils (right miotic pupil). Cranial nerve examination, reflexes, sensory examination and muscle power were normal. Apart from migraine, there was no medical or family history of note. In view of the Horner’s syndrome on the right, she had a CT angiogram and subsequently MR/MRA and was diagnosed with dissection of the cervical portion of the right internal carotid artery. She was discharged on a course of aspirin 75 mg. By the 6-week postnatal follow-up, the patient had recovered significantly, with occasional episodes of unequal pupils and tingling of the right toes that resolved spontaneously. Cervical artery dissection, including VAD and carotid artery dissection, are rare complications of pregnancy, with an estimated annual incidence of 2.6-3 per 100,000 pregnancy hospitalizations.
Aetiology remains unclear, though trauma from straining during labour, underlying arterial disease, and preeclampsia have been implicated. The hypercoagulable state of pregnancy and the puerperium may also be an important factor. 60–90% of cases present with severe headache and neck pain, which generally precede neurological signs such as ipsilateral Horner's syndrome, retroorbital pain, tinnitus, and cranial nerve palsy. Although the condition is rare, delayed diagnosis and management can lead to severe and permanent neurological deficits. When the index of suspicion is strong, patients should undergo MRI or MRA of the head and neck. Antithrombotic and antiplatelet therapy forms the mainstay of treatment, with selected cases requiring endovascular stenting. Long-term prognosis is favourable, with either complete resolution or minimal deficit if treatment is prompt. Patients should be counselled about the recurrence risk and the possibility of stroke in a future pregnancy. Carotid artery dissection is rare and treatable but needs early diagnosis and treatment. Postpartum headache and neck pain with neurological symptoms should prompt urgent imaging followed by antithrombotic and/or antiplatelet therapy. Most cases resolve completely or with minimal sequelae.
Keywords: postpartum, dissection of internal carotid artery, magnetic resonance angiogram, magnetic resonance imaging, antiplatelet, antithrombotic
109 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing
Authors: Ahmed Elaksher, Islam Omar
Abstract:
Accurate geometric modeling algorithms are imperative in Earth and planetary satellite and aerial image processing, particularly for the high-resolution images used for topographic mapping. Most of these satellites carry push-broom sensors: optical scanners equipped with linear arrays of CCDs. Such sensors have been deployed on most Earth observation satellites (EOSs). In addition, the Lunar Reconnaissance Orbiter Camera (LROC) is equipped with two push-broom Narrow Angle Cameras (NACs) that provide 0.5-meter-scale panchromatic images over a 5 km swath of the Moon. The HiRISE camera carried by the Mars Reconnaissance Orbiter (MRO) and the HRSC carried by Mars Express (MEX) are examples of push-broom sensors that image the surface of Mars. Sensor models developed in photogrammetry relate image-space coordinates in two or more images to the 3D coordinates of ground features. Rigorous sensor models use the actual interior and exterior orientation parameters of the camera, unlike approximate models. In this research, we develop a generic push-broom sensor model to process imagery acquired through linear array cameras and investigate its performance, advantages, and disadvantages in generating topographic models of the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and software packages with the developed model. We start by defining an image reference coordinate system to unify the image coordinates from all three arrays. The transformation from an image coordinate system to the reference coordinate system involves a translation and three rotations. For any point within the linear array, its image reference coordinates, the coordinates of the array's exposure center in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that these three points must lie on the same line.
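The collinearity condition described above can be sketched numerically. The sketch below projects a ground point into the image plane of one scan line; the rotation order (Rz·Ry·Rx), focal length, and sign conventions are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Build R = Rz(kappa) @ Ry(phi) @ Rx(omega); angles in radians."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def collinearity(ground_pt, exposure_center, angles, f):
    """Project a ground point through the collinearity condition.

    Returns (x, y) image-reference coordinates for the scan line whose
    exposure center and rotation angles are given; for a point actually
    imaged by this line, the along-track coordinate x collapses to ~0.
    """
    R = rotation_matrix(*angles)
    # Vector from exposure center to ground point, in the camera frame.
    d = R.T @ (np.asarray(ground_pt, float) - np.asarray(exposure_center, float))
    x = -f * d[0] / d[2]
    y = -f * d[1] / d[2]
    return x, y
```

For a nadir-looking line at 1000 m altitude with a 100 mm focal length, a ground point offset 10 m across track maps to y = 1 mm in the image plane, with x ≈ 0 as expected for a point on the exposed line.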
The rotation angles for each CCD array at epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by a polynomial interpolation function in time (t), where t is the time of a given epoch measured from a given orbit position. Depending on the type of observation, coordinates and parameters may be treated as known or unknown in different situations. The unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation, and raw images from the Planetary Data System (PDS). The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. We found that the accuracy of our model is comparable to that of commercial and open-source software, the computational efficiency of the developed model is high, the model can be used in different environments with various sensors, and the implementation process is much less cost- and effort-consuming.
Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition
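The per-line exterior orientation interpolation described above can be illustrated with NumPy; the sampled ephemeris values, the single parameter shown (X of the exposure station), and the quadratic degree are hypothetical stand-ins, since the paper does not fix them.

```python
import numpy as np

# Hypothetical sampled ephemeris: epochs (t) and one exterior orientation
# parameter, here the X coordinate of the exposure station. In a real
# pipeline these samples would come from the PDS kernels for the orbit.
t_samples = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
x_samples = 7000.0 + 3.0 * t_samples + 0.2 * t_samples**2  # synthetic arc

# Fit a low-order polynomial in t per parameter, then evaluate it at the
# exposure epoch of any individual image line.
coeffs = np.polyfit(t_samples, x_samples, deg=2)
x_at_line = np.polyval(coeffs, 0.75)  # exposure-station X at t = 0.75
```

In the bundle adjustment, the polynomial coefficients (rather than per-line parameters) become the unknowns, which keeps the number of parameters tractable for images with thousands of scan lines.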
108 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection
Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy
Abstract:
Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform by automatically extracting the features needed to detect facial expressions and emotions. However, deep networks require large training datasets to learn features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by several parallel modules between the input and output of the network, each focusing on extracting a different type of coarse feature with fine-grained detail, breaking the symmetry of the produced information. In doing so, we capture long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We extend this work by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax reaches the gold labels too soon, which drives the model toward overfitting, because it cannot determine adequately discriminant feature vectors for some variant class labels. We reduce the risk of overfitting by using a dynamic rather than static input tensor shape in the SoftMax layer, together with a specified soft margin. The margin acts as a controller of how hard the model must work to push dissimilar embedding vectors apart. The objective of the proposed categorical loss is to compact same-class labels and separate different-class labels in the normalized log domain. We apply a penalty to predictions with high divergence from the ground-truth labels: we shorten correct feature vectors and enlarge false prediction tensors, assigning more weight to classes that lie close to one another (the "hard labels to learn").
By doing this, we constrain the model to generate more discriminative feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on resolving the weak convergence of the Adam optimizer on non-convex problems. Our optimizer works by an alternative gradient-update procedure with an exponentially weighted moving average function for faster convergence, and it exploits a weight decay method that drastically reduces the learning rate near the optima in order to reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013, a 16% improvement over the first rank after 10 years; 90.73% on RAF-DB; and 100% k-fold average accuracy on the CK+ dataset. The network thus provides performance on par with networks that require much larger training datasets.
Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks
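The optimizer described, an exponentially weighted moving-average update combined with weight decay, resembles a decoupled-weight-decay (AdamW-style) step. The sketch below is a generic version under that assumption, not the authors' exact algorithm; all hyperparameter values are illustrative.

```python
import numpy as np

def adamw_step(w, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999,
               eps=1e-8, weight_decay=1e-2):
    """One AdamW-style step: EMAs of the gradient and its square drive the
    update, while the decoupled decay term shrinks the weights directly,
    independent of the gradient magnitude."""
    m = b1 * m + (1 - b1) * grad        # EMA of gradients (momentum)
    v = b2 * v + (1 - b2) * grad**2     # EMA of squared gradients
    m_hat = m / (1 - b1**t)             # bias correction, t = 1, 2, ...
    v_hat = v / (1 - b2**t)
    w = w - lr * (m_hat / (np.sqrt(v_hat) + eps) + weight_decay * w)
    return w, m, v

# Toy usage: minimize f(w) = w**2 from w = 1.
w, m, v = 1.0, 0.0, 0.0
for t in range(1, 201):
    w, m, v = adamw_step(w, 2.0 * w, m, v, t)
```

Decoupling the decay from the adaptive gradient term is what lets the effective step shrink smoothly near an optimum instead of being rescaled by the second-moment estimate.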