Search results for: producer's classification accuracy
119 Structural and Functional Correlates of Reaction Time Variability in a Large Sample of Healthy Adolescents and Adolescents with ADHD Symptoms
Authors: Laura O’Halloran, Zhipeng Cao, Clare M. Kelly, Hugh Garavan, Robert Whelan
Abstract:
Reaction time (RT) variability on cognitive tasks provides an index of the efficiency of executive control processes (e.g. attention and inhibitory control) and is considered to be a hallmark of clinical disorders, such as attention-deficit/hyperactivity disorder (ADHD). Increased RT variability is associated with structural and functional brain differences in children and adults with various clinical disorders, as well as poorer task performance accuracy. Furthermore, the strength of functional connectivity across various brain networks, such as the negative relationship between the task-negative default mode network and task-positive attentional networks, has been found to reflect differences in RT variability. Although RT variability may provide an index of attentional efficiency, as well as being a useful indicator of neurological impairment, the brain substrates associated with RT variability remain relatively poorly defined, particularly in a healthy sample. Method: Firstly, we used the intra-individual coefficient of variation (ICV) as an index of RT variability from “Go” responses on the Stop Signal Task. We then examined the functional and structural neural correlates of ICV in a large sample of 14-year-old healthy adolescents (n=1719). Of these, a subset had elevated symptoms of ADHD (n=80) and was compared to a matched non-symptomatic control group (n=80). Brain activity during successful and unsuccessful inhibitions, as well as gray matter volume, was related to ICV. A mediation analysis was conducted to examine whether specific brain regions mediated the relationship between ADHD symptoms and ICV. Lastly, we examined functional connectivity across various brain networks and quantified both positive and negative correlations during “Go” responses on the Stop Signal Task. Results: The brain data revealed that higher ICV was associated with increased structural and functional brain activation in the precentral gyrus in the whole sample and in adolescents with ADHD symptoms. Lower ICV was associated with lower activation in the anterior cingulate cortex (ACC) and medial frontal gyrus in the whole sample and in the control group. Furthermore, our results indicated that activation in the precentral gyrus (Brodmann Area 4) mediated the relationship between ADHD symptoms and behavioural ICV. Conclusion: This is the first study to investigate the functional and structural correlates of ICV collectively in a large adolescent sample. Our findings demonstrate a concurrent increase in brain structure and function within task-active prefrontal networks as a function of increased RT variability. Furthermore, structural and functional activation patterns in the ACC and medial frontal gyrus play a role in optimizing top-down control in order to maintain task performance. Our results also evidenced clear differences in brain morphometry between adolescents with symptoms of ADHD but without a clinical diagnosis and typically developing controls. Our findings shed light on specific functional and structural brain regions that are implicated in ICV and yield insights into effective cognitive control in healthy individuals and in clinical groups.
Keywords: ADHD, fMRI, reaction-time variability, default mode, functional connectivity
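As a point of reference for the ICV metric used above, the following minimal sketch computes the SD/mean of Go-trial reaction times per participant; the column names and values are illustrative assumptions, not data from the study.

```python
import pandas as pd

# Hypothetical Go-trial reaction times (ms) for two participants.
rt = pd.DataFrame({
    "subject": [1, 1, 1, 1, 2, 2, 2, 2],
    "go_rt_ms": [412, 388, 455, 430, 390, 520, 310, 480],
})

# ICV = standard deviation of Go reaction times divided by their mean,
# computed separately for each participant.
icv = rt.groupby("subject")["go_rt_ms"].agg(lambda x: x.std(ddof=1) / x.mean())
print(icv)
```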
Procedia PDF Downloads 255
118 The Derivation of a Four-Strain Optimized Mohr's Circle for Use in Experimental Reinforced Concrete Research
Authors: Edvard P. G. Bruun
Abstract:
One of the best ways of improving our understanding of reinforced concrete is through large-scale experimental testing. The gathered information is critical in making inferences about structural mechanics and deriving the mathematical models that are the basis for finite element analysis programs and design codes. An effective way of measuring the strains across a region of a specimen is by using a system of surface mounted Linear Variable Differential Transformers (LVDTs). While a single LVDT can only measure the linear strain in one direction, by combining several measurements at known angles a Mohr’s circle of strain can be derived for the whole region under investigation. This paper presents a method that can be used by researchers, which improves the accuracy and removes experimental bias in the calculation of the Mohr’s circle, using four rather than three independent strain measurements. Obtaining high quality strain data is essential, since knowing the angular deviation (shear strain) and the angle of principal strain in the region are important properties in characterizing the governing structural mechanics. For example, the Modified Compression Field Theory (MCFT) developed at the University of Toronto, is a rotating crack model that requires knowing the direction of the principal stress and strain, and then calculates the average secant stiffness in this direction. But since LVDTs can only measure average strains across a plane (i.e., between discrete points), localized cracking and spalling that typically occur in reinforced concrete, can lead to unrealistic results. To build in redundancy and improve the quality of the data gathered, the typical experimental setup for a large-scale shell specimen has four independent directions (X, Y, H, and V) that are instrumented. The question now becomes, which three should be used? The most common approach is to simply discard one of the measurements. The problem is that this can produce drastically different answers, depending on the three strain values that are chosen. To overcome this experimental bias, and to avoid simply discarding valuable data, a more rigorous approach would be to somehow make use of all four measurements. This paper presents the derivation of a method to draw what is effectively a Mohr’s circle of 'best-fit', which optimizes the circle by using all four independent strain values. The four-strain optimized Mohr’s circle approach has been utilized to process data from recent large-scale shell tests at the University of Toronto (Ruggiero, Proestos, and Bruun), where analysis of the test data has shown that the traditional three-strain method can lead to widely different results. This paper presents the derivation of the method and shows its application in the context of two reinforced concrete shells tested in pure torsion. In general, the constitutive models and relationships that characterize reinforced concrete are only as good as the experimental data that is gathered – ensuring that a rigorous and unbiased approach exists for calculating the Mohr’s circle of strain during an experiment, is of utmost importance to the structural research community.Keywords: reinforced concrete, shell tests, Mohr’s circle, experimental research
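To illustrate the idea of a best-fit circle from four readings, the sketch below solves the strain-transformation equation in a least-squares sense; the gauge angles (0°, 90°, ±45° for X, Y, H, V) and the strain values are assumptions for illustration, and the exact optimization derived by the author may differ.

```python
import numpy as np

# Measured normal strains (microstrain) and the angles (deg) of the four
# instrumented directions relative to the x-axis -- illustrative values only.
angles_deg = np.array([0.0, 90.0, 45.0, -45.0])      # X, Y, H, V (assumed)
eps_meas   = np.array([850.0, -120.0, 600.0, 90.0])

# Strain transformation: eps(theta) = a + b*cos(2*theta) + c*sin(2*theta),
# with a = (ex+ey)/2, b = (ex-ey)/2, c = gxy/2. Four equations, three
# unknowns -> linear least squares uses all measurements at once.
th = np.radians(angles_deg)
A = np.column_stack([np.ones_like(th), np.cos(2 * th), np.sin(2 * th)])
(a, b, c), *_ = np.linalg.lstsq(A, eps_meas, rcond=None)

center = a                                   # centre of the best-fit Mohr's circle
radius = np.hypot(b, c)                      # radius of the circle
eps_1, eps_2 = center + radius, center - radius   # principal strains
theta_p = 0.5 * np.degrees(np.arctan2(c, b))      # principal strain direction

print(f"principal strains: {eps_1:.1f}, {eps_2:.1f} microstrain")
print(f"principal angle:   {theta_p:.1f} deg from x-axis")
```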
Procedia PDF Downloads 235
117 Enhancing Financial Security: Real-Time Anomaly Detection in Financial Transactions Using Machine Learning
Authors: Ali Kazemi
Abstract:
The digital evolution of financial services, while offering unprecedented convenience and accessibility, has also escalated the vulnerabilities to fraudulent activities. In this study, we introduce a distinct approach to real-time anomaly detection in financial transactions, aiming to fortify the defenses of banking and financial institutions against such threats. Utilizing unsupervised machine learning algorithms, specifically autoencoders and isolation forests, our research focuses on identifying irregular patterns indicative of fraud within transactional data, thus enabling immediate action to prevent financial loss. The data used in this study included the monetary value of each transaction, a crucial feature since fraudulent transactions may follow different amount distributions than legitimate ones; timestamps indicating when transactions occurred, since analyzing transactions' temporal patterns can reveal anomalies (e.g., unusual activity in the middle of the night); the sector or category of the merchant where the transaction occurred (retail, groceries, online services, etc.), since specific categories may be more prone to fraud; and the type of payment used (e.g., credit, debit, online payment systems), since different payment methods have varying risk levels associated with fraud. This dataset, anonymized to ensure privacy, reflects a wide array of transactions typical of a global banking institution, ranging from small-scale retail purchases to large wire transfers, embodying the diverse nature of potentially fraudulent activities. By engineering features that capture the essence of transactions, including normalized amounts and encoded categorical variables, we tailor our data to enhance model sensitivity to anomalies. The autoencoder model leverages its reconstruction error mechanism to flag transactions that deviate significantly from the learned normal pattern, while the isolation forest identifies anomalies based on their susceptibility to isolation from the dataset's majority. Our experimental results, validated through techniques such as k-fold cross-validation, are evaluated using precision, recall, and the F1 score alongside the area under the receiver operating characteristic (ROC) curve. Our models achieved an F1 score of 0.85 and a ROC AUC of 0.93, indicating high accuracy in detecting fraudulent transactions without excessive false positives. This study contributes to the academic discourse on financial fraud detection and provides a practical framework for banking institutions seeking to implement real-time anomaly detection systems. By demonstrating the effectiveness of unsupervised learning techniques in a real-world context, our research offers a pathway to significantly reduce the incidence of financial fraud, thereby enhancing the security and trustworthiness of digital financial services.
Keywords: anomaly detection, financial fraud, machine learning, autoencoders, isolation forest, transactional data analysis
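A compact sketch of the isolation-forest half of the pipeline described above, using scikit-learn; the feature names, contamination rate, and synthetic data are assumptions rather than the study's actual dataset or settings, and the autoencoder branch would be trained analogously and thresholded on reconstruction error.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical engineered features: normalized amount, hour of day,
# and simple encodings of merchant category / payment type.
X = pd.DataFrame({
    "amount_norm": rng.lognormal(0, 1, 5000),
    "hour_of_day": rng.integers(0, 24, 5000),
    "cat_online":  rng.integers(0, 2, 5000),
    "pay_credit":  rng.integers(0, 2, 5000),
})

X_scaled = StandardScaler().fit_transform(X)

# An isolation forest flags points that are easy to isolate as anomalies.
iso = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
labels = iso.fit_predict(X_scaled)        # -1 = anomaly, 1 = normal
scores = -iso.score_samples(X_scaled)     # higher = more anomalous

flagged = X[labels == -1]
print(f"{len(flagged)} transactions flagged for review")
```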
Procedia PDF Downloads 57
116 Performance Parameters of an Abbreviated Breast MRI Protocol
Authors: Andy Ho
Abstract:
Breast cancer is a common cancer in Australia. Early diagnosis is crucial for improving patient outcomes, as later-stage detection correlates with poorer prognoses. While multiparametric MRI offers superior sensitivity in detecting invasive and high-grade breast cancers compared to conventional mammography, its extended scan duration and high costs limit widespread application. As a result, full protocol MRI screening is typically reserved for patients at elevated risk. Recent advancements in imaging technology have facilitated the development of Abbreviated MRI protocols, which dramatically reduce scan times (<10 minutes compared to >30 minutes for full protocol). The potential for Abbreviated MRI to offer a more time- and cost-efficient alternative has implications for improving patient accessibility, reducing appointment durations, and enhancing compliance—especially relevant for individuals requiring regular annual screening over several decades. The purpose of this study is to assess the diagnostic efficacy of Abbreviated MRI for breast cancer screening among high-risk patients at the Royal Prince Alfred Hospital (RPA). This study aims to determine the sensitivity, specificity, and inter-reader variability of Abbreviated MRI protocols when interpreted by subspecialty-trained Breast Radiologists. A systematic review of the RPA’s electronic Picture Archive and Communication System identified high-risk patients, defined by Australian ‘Medicare Benefits Schedule’ criteria, who underwent Breast MRI from 2021 to 2022. Eligible participants included asymptomatic patients under 50 years old and referred by the High-Risk Clinic due to a high-risk genetic profile or relevant familial history. The MRIs were anonymized, randomized, and interpreted by four Breast Radiologists, each independently completing standardized proforma evaluations. Radiological findings were compared against histopathology as the gold standard or follow-up imaging if biopsies were unavailable. Statistical metrics, including sensitivity, specificity, and inter-reader variability, were assessed. The Fleiss-Kappa analysis demonstrated a fair inter-reader agreement (kappa = 0.25; 95% CI: 0.19–0.32; p < 0.0001). The sensitivity for detecting malignancies was 0.72, with a specificity of 0.92. For benign lesions, sensitivity and specificity were 0.844 and 0.73, respectively. These findings underline the potential of Abbreviated MRI as a reliable screening tool for malignancies with significant specificity, though reduced sensitivity highlights the importance of robust radiologist training and consistent evaluation standards. Abbreviated MRI protocols exhibit promise as a viable screening option for high-risk patients, combining reduced scan times and acceptable diagnostic accuracy. Further work to refine interpretation practices and optimize training is essential to maximize the protocol’s utility in routine clinical screening and facilitate broader accessibility.Keywords: abbreviated, breast, cancer, MRI
Procedia PDF Downloads 6
115 Density Determination of Liquid Niobium by Means of Ohmic Pulse-Heating for Critical Point Estimation
Authors: Matthias Leitner, Gernot Pottlacher
Abstract:
Experimental determination of critical point data like critical temperature, critical pressure, critical volume and critical compressibility of high-melting metals such as niobium is very rare due to the outstanding experimental difficulties in reaching the necessary extreme temperature and pressure regimes. Experimental techniques to achieve such extreme conditions could be diamond anvil devices, two-stage gas guns or metal samples hit by explosively accelerated flyers. Electrical pulse-heating under increased pressures would be another choice. This technique heats thin wire samples of 0.5 mm diameter and 40 mm length from room temperature to melting and then further to the end of the stable phase, the spinodal line, within several microseconds. When crossing the spinodal line, the sample explodes and reaches the gaseous phase. In our laboratory, pulse-heating experiments can be performed under variation of the ambient pressure from 1 to 5000 bar and allow a direct determination of critical point data for low-melting, but not for high-melting metals. However, the critical point can also be estimated by extrapolating the liquid phase density according to theoretical models. A reasonable prerequisite for the extrapolation is the existence of data that cover as much as possible of the liquid phase and at the same time exhibit small uncertainties. Ohmic pulse-heating was therefore applied to determine thermal volume expansion, and from that the density of niobium over the entire liquid phase. As a first step, experiments under ambient pressure were performed. The second step will be to perform experiments under high-pressure conditions. During the heating process, shadow images of the expanding sample wire were captured at a frame rate of 4 × 10⁵ fps to monitor the radial expansion as a function of time. Simultaneously, the sample radiance was measured with a pyrometer operating at a mean effective wavelength of 652 nm. To increase the accuracy of temperature deduction, spectral emittance in the liquid phase is also taken into account. Due to the high heating rates of about 2 × 10⁸ K/s, longitudinal expansion of the wire is inhibited, which implies an increased radial expansion. As a consequence, measuring the temperature-dependent radial expansion is sufficient to deduce density as a function of temperature. This is accomplished by evaluating the full widths at half maximum of the cup-shaped intensity profiles that are calculated from each shadow image of the expanding wire. Relating these diameters to the diameter obtained before the start of pulse-heating, the temperature-dependent volume expansion is calculated. With the help of the known room-temperature density, volume expansion is then converted into density data. The so-obtained liquid density behavior is compared to existing literature data and provides another independent source of experimental data. In this work, the newly determined off-critical liquid phase density was in a second step utilized as input data for the estimation of niobium's critical point. The approach used heuristically takes into account the crossover from mean field to Ising behavior, as well as the non-linearity of the phase diagram's diameter.
Keywords: critical point data, density, liquid metals, niobium, ohmic pulse-heating, volume expansion
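Since longitudinal expansion is inhibited, the density conversion described above reduces to scaling the room-temperature density by the squared diameter ratio; the sketch below illustrates that step with assumed diameter values rather than measured data.

```python
import numpy as np

rho_0 = 8570.0          # room-temperature density of niobium, kg/m^3
d_0 = 0.5e-3            # wire diameter before pulse heating, m

# Hypothetical full-width-at-half-maximum diameters from shadow images
# at a few temperatures (values are illustrative only).
T = np.array([2750.0, 3500.0, 4500.0, 5500.0])          # K
d = np.array([0.515e-3, 0.530e-3, 0.555e-3, 0.585e-3])  # m

# With longitudinal expansion inhibited, volume scales with d^2, so the
# density follows directly from the squared diameter ratio.
rho = rho_0 * (d_0 / d) ** 2
for Ti, ri in zip(T, rho):
    print(f"T = {Ti:6.0f} K   rho = {ri:7.1f} kg/m^3")
```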
Procedia PDF Downloads 217
114 Association between Physical Inactivity and Sedentary Behaviours with Risk of Hypertension among Sedentary Occupation Workers: A Cross-Sectional Study
Authors: Hanan Badr, Fahad Manee, Rao Shashidhar, Omar Bayoumy
Abstract:
Introduction: Hypertension is the major risk factor for cardiovascular diseases and stroke and a leading cause of disability-adjusted life years and mortality worldwide. Adopting an unhealthy lifestyle is thought to be associated with developing hypertension regardless of predisposing genetic factors. This study aimed to examine the association between recreational physical activity (RPA) and sedentary behaviors with the risk of hypertension among ministry employees, where there is no role for occupational physical activity (PA), and to scrutinize participants' time spent in RPA and sedentary behaviors on working and weekend days. Methods: A cross-sectional study was conducted among 2562 randomly selected employees working at ten randomly selected ministries in Kuwait. To obtain a representative sample, the proportional allocation technique was used to define the number of participants in each ministry. A self-administered questionnaire was used to collect data about participants' socio-demographic characteristics, health status, and their 24-hour time use during a regular working day and a weekend day. The time use covered a list of 20 different activities practiced by a person daily. The New Zealand Physical Activity Questionnaire-Short Form (NZPAQ-SF) was used to assess the level of RPA. The scale generates three categories according to the number of hours spent in RPA per week: relatively inactive, relatively active, and highly active. Gender-matched trained nurses performed anthropometric measurements (weight and height) and measured blood pressure (two readings) using an automatic blood pressure monitor (95% accuracy level compared to a calibrated mercury sphygmomanometer). Results: Participants' mean age was 35.3±8.4 years, with almost equal gender distribution. About 13% of the participants were smokers, and 75% were overweight. Almost 10% reported doctor-diagnosed hypertension. Among those who did not, the mean systolic blood pressure was 119.9±14.2 and the mean diastolic blood pressure was 80.9±7.3. Moreover, 73.9% of participants were relatively physically inactive and 18% were highly active. Mean systolic and diastolic blood pressure showed a significant inverse association with the level of RPA (mean blood pressure measures were 123.3/82.8 among the relatively inactive, 119.7/80.4 among the relatively active, and 116.6/79.6 among the highly active). Furthermore, RPA occupied 1.6% and 1.8% of working and weekend days, respectively, while sedentary behaviors (watching TV, using electronics for social media or entertainment, etc.) occupied 11.2% and 13.1%, respectively. Sedentary behaviors were significantly associated with high levels of systolic and diastolic blood pressure. Binary logistic regression revealed that physical inactivity (OR=3.13, 95% CI: 2.25-4.35) and sedentary behaviors (OR=2.25, CI: 1.45-3.17) were independent risk factors for high systolic and diastolic blood pressure after adjustment for other covariates. Conclusions: Physical inactivity and a sedentary lifestyle were associated with a high risk of hypertension. Further research examining the independent role of RPA in improving blood pressure levels, as well as the cultural and occupational barriers to practicing RPA, is recommended. Policies promoting PA in the workplace should be enacted, as they might help decrease the risk of hypertension among sedentary occupation workers.
Keywords: physical activity, sedentary behaviors, hypertension, workplace
Procedia PDF Downloads 177
113 Contribution to the Understanding of the Hydrodynamic Behaviour of Aquifers of the Taoudéni Sedimentary Basin (South-eastern Part, Burkina Faso)
Authors: Kutangila Malundama Succes, Koita Mahamadou
Abstract:
In the context of climate change and demographic pressure, groundwater has emerged as an essential and strategic resource whose sustainability relies on good management. The accuracy and relevance of decisions made in managing these resources depend on the availability and quality of scientific information they must rely on. It is, therefore, more urgent to improve the state of knowledge on groundwater to ensure sustainable management. This study is conducted for the particular case of the aquifers of the transboundary sedimentary basin of Taoudéni in its Burkinabe part. Indeed, Burkina Faso (and the Sahel region in general), marked by low rainfall, has experienced episodes of severe drought, which have justified the use of groundwater as the primary source of water supply. This study aims to improve knowledge of the hydrogeology of this area to achieve sustainable management of transboundary groundwater resources. The methodological approach first described lithological units regarding the extension and succession of different layers. Secondly, the hydrodynamic behavior of these units was studied through the analysis of spatio-temporal variations of piezometric. The data consists of 692 static level measurement points and 8 observation wells located in the usual manner in the area and capturing five of the identified geological formations. Monthly piezometric level chronicles are available for each observation and cover the period from 1989 to 2020. The temporal analysis of piezometric, carried out in comparison with rainfall chronicles, revealed a general upward trend in piezometric levels throughout the basin. The reaction of the groundwater generally occurs with a delay of 1 to 2 months relative to the flow of the rainy season. Indeed, the peaks of the piezometric level generally occur between September and October in reaction to the rainfall peaks between July and August. Low groundwater levels are observed between May and July. This relatively slow reaction of the aquifer is observed in all wells. The influence of the geological nature through the structure and hydrodynamic properties of the layers was deduced. The spatial analysis reveals that piezometric contours vary between 166 and 633 m with a trend indicating flow that generally goes from southwest to northeast, with the feeding areas located towards the southwest and northwest. There is a quasi-concordance between the hydrogeological basins and the overlying hydrological basins, as well as a bimodal flow with a component following the topography and another significant component deeper, controlled by the regional gradient SW-NE. This latter component may present flows directed from the high reliefs towards the sources of Nasso. In the source area (Kou basin), the maximum average stock variation, calculated by the Water Table Fluctuation (WTF) method, varies between 35 and 48.70 mm per year for 2012-2014.Keywords: hydrodynamic behaviour, taoudeni basin, piezometry, water table fluctuation
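For reference, the Water Table Fluctuation calculation cited at the end of the abstract amounts to multiplying the specific yield by the head rise; the sketch below uses assumed monthly heads and an assumed specific yield purely for illustration.

```python
import numpy as np

# Hypothetical monthly piezometric heads (m) over one hydrological year
# and an assumed specific yield for the aquifer material.
heads_m = np.array([512.1, 511.9, 511.8, 511.7, 511.8, 512.0,
                    512.4, 512.9, 513.2, 513.0, 512.7, 512.4])
specific_yield = 0.05   # assumed, dimensionless

# Water Table Fluctuation method: storage change = Sy * head rise, with the
# rise taken here from the annual low to the annual high.
delta_h = heads_m.max() - heads_m.min()
delta_storage_mm = specific_yield * delta_h * 1000.0  # convert m to mm

print(f"annual head rise: {delta_h:.2f} m")
print(f"stock variation (WTF): {delta_storage_mm:.1f} mm")
```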
Procedia PDF Downloads 63
112 Investigations on the Application of Avalanche Simulations: A Survey Conducted among Avalanche Experts
Authors: Korbinian Schmidtner, Rudolf Sailer, Perry Bartelt, Wolfgang Fellin, Jan-Thomas Fischer, Matthias Granig
Abstract:
This study focuses on the evaluation of snow avalanche simulations, based on a survey that has been carried out among avalanche experts. In the last decades, the application of avalanche simulation tools has gained recognition within the realm of hazard management. Traditionally, avalanche runout models were used to predict extreme avalanche runout and prepare avalanche maps. This has changed rather dramatically with the application of numerical models. For safety regulations such as road safety simulation tools are now being coupled with real-time meteorological measurements to predict frequent avalanche hazard. That places new demands on model accuracy and requires the simulation of physical processes that previously could be ignored. These simulation tools are based on a deterministic description of the avalanche movement allowing to predict certain quantities (e.g. pressure, velocities, flow heights, runout lengths etc.) of the avalanche flow. Because of the highly variable regimes of the flowing snow, no uniform rheological law describing the motion of an avalanche is known. Therefore, analogies to fluid dynamical laws of other materials are stated. To transfer these constitutional laws to snow flows, certain assumptions and adjustments have to be imposed. Besides these limitations, there exist high uncertainties regarding the initial and boundary conditions. Further challenges arise when implementing the underlying flow model equations into an algorithm executable by a computer. This implementation is constrained by the choice of adequate numerical methods and their computational feasibility. Hence, the model development is compelled to introduce further simplifications and the related uncertainties. In the light of these issues many questions arise on avalanche simulations, on their assets and drawbacks, on potentials for improvements as well as their application in practice. To address these questions a survey among experts in the field of avalanche science (e.g. researchers, practitioners, engineers) from various countries has been conducted. In the questionnaire, special attention is drawn on the expert’s opinion regarding the influence of certain variables on the simulation result, their uncertainty and the reliability of the results. Furthermore, it was tested to which degree a simulation result influences the decision making for a hazard assessment. A discrepancy could be found between a large uncertainty of the simulation input parameters as compared to a relatively high reliability of the results. This contradiction can be explained taking into account how the experts employ the simulations. The credibility of the simulations is the result of a rather thoroughly simulation study, where different assumptions are tested, comparing the results of different flow models along with the use of supplemental data such as chronicles, field observation, silent witnesses i.a. which are regarded as essential for the hazard assessment and for sanctioning simulation results. As the importance of avalanche simulations grows within the hazard management along with their further development studies focusing on the modeling fashion could contribute to a better understanding how knowledge of the avalanche process can be gained by running simulations.Keywords: expert interview, hazard management, modeling, simulation, snow avalanche
Procedia PDF Downloads 324
111 Thermodynamics of Aqueous Solutions of Organic Molecule and Electrolyte: Use Cloud Point to Obtain Better Estimates of Thermodynamic Parameters
Authors: Jyoti Sahu, Vinay A. Juvekar
Abstract:
Electrolytes are often used to bring about salting-in and salting-out of organic molecules and polymers (e.g. polyethylene glycols/proteins) from aqueous solutions. For quantification of these phenomena, a thermodynamic model which can accurately predict the activity coefficient of the electrolyte as a function of temperature is needed. The thermodynamic models available in the literature contain a large number of empirical parameters. These parameters are estimated using the lower/upper critical solution temperatures of the electrolyte/organic molecule solution at different temperatures. Since the number of parameters is large, inaccuracies can creep in during their estimation, which can affect the reliability of prediction beyond the range in which these parameters are estimated. The cloud point of a solution is related to its free energy through temperature and composition derivatives. Hence, cloud point measurements can be used for accurate estimation of the temperature and composition dependence of the parameters in the model for free energy. Hence, if we use a two-pronged procedure in which we first use the cloud point of the solution to estimate some of the parameters of the thermodynamic model and determine the rest using osmotic coefficient data, we gain on two counts. First, since fewer parameters are estimated in each of the two steps, we achieve higher accuracy of estimation. The second and more important gain is that the resulting model parameters are more sensitive to temperature. This is crucial when we wish to use the model outside the temperature window within which the parameter estimation is sought. The focus of the present work is to prove this proposition. We have used electrolyte (NaCl/Na2CO3)-water-organic molecule (isopropanol/ethanol) as the model system. The model of Robinson-Stokes-Glueckauf is modified by incorporating temperature-dependent Flory-Huggins interaction parameters. The Helmholtz free energy expression contains, in addition to electrostatic and translational entropic contributions, three Flory-Huggins pairwise interaction contributions, viz. the water-polymer, water-salt, and polymer-salt interactions (w-water, p-polymer, s-salt). These parameters depend on both temperature and concentration. The concentration dependence is expressed in the form of a quadratic expression involving the volume fractions of the interacting species, and the temperature dependence in a corresponding parametric form. To obtain the temperature-dependent interaction parameters for the organic molecule-water and electrolyte-water systems, the critical solution temperature of electrolyte-water-organic molecule solutions is measured using a cloud point measuring apparatus. The temperature- and composition-dependent interaction parameters for electrolyte-water-organic molecule are thus estimated through measurement of the cloud point of the solution. The model is used to estimate the critical solution temperature (CST) of the electrolyte-water-organic molecule solution. We have experimentally determined the critical solution temperature of different compositions of the electrolyte-water-organic molecule solution and compared the results with the estimates based on our model. The two sets of values show good agreement. On the other hand, when only osmotic coefficients are used for estimation of the free energy model, the CSTs predicted using the resulting model show poor agreement with the experiments. Thus, the importance of the CST data in the estimation of parameters of the thermodynamic model is confirmed through this work.
Keywords: concentrated electrolytes, Debye-Hückel theory, interaction parameters, Robinson-Stokes-Glueckauf model, Flory-Huggins model, critical solution temperature
Procedia PDF Downloads 391
110 Landslide Hazard Assessment Using Physically Based Mathematical Models in Agricultural Terraces at Douro Valley in North of Portugal
Authors: C. Bateira, J. Fernandes, A. Costa
Abstract:
The Douro Demarcated Region (DDR) is a Port wine production region. In the NE of Portugal, the strong incision of the Douro valley has developed very steep slopes, organized into agricultural terraces, which have experienced an intense and deep transformation in order to implement the mechanization of the work. The old terrace system, based on vertical stone wall support structures, was replaced by terraces with earth embankments, which have experienced widespread instability. This terrace instability has important economic and financial consequences for the agricultural enterprises. This paper presents and develops cartographic tools to assess embankment instability and identify the areas prone to instability. The priority of this evaluation is the use of physically based mathematical models and the development of a validation process based on an inventory of past embankment instability. We used the shallow landslide stability model (SHALSTAB), based on physical parameters such as cohesion (c'), friction angle (φ), hydraulic conductivity, soil depth, soil specific weight (ϱ), slope angle (α), and contributing areas computed by the Multiple Flow Direction (MFD) method. A terraced area can only be analysed by these models if we have very detailed information representative of the terrain morphology. The slope angle and the contributing areas depend on that. We can achieve that purpose using digital elevation models (DEM) of very high resolution (pixels with a 40 cm side), resulting from a set of photographs taken in a flight at 100 m height with a pixel resolution of 12 cm. The slope angle results from this DEM. On the other hand, the MFD contributing area models the internal flow and is an important element to define the spatial variation of the soil saturation. That internal flow is based on the DEM. This is supported by the statement that the interflow, although not coincident with the superficial flow, has an important similarity to it. Electrical resistivity monitoring values were related to the MFD contributing areas built from a DEM of 1 m resolution and revealed a consistent correlation. That analysis, performed on the area, showed a good correlation with R² of 0.72 and 0.76 at 1.5 m and 2 m depth, respectively. Considering that, a DEM with 1 m resolution was the base to model the real internal flow. Thus, we assumed that the contributing area of 1 m resolution modelled by MFD is representative of the internal flow of the area. In order to solve this problem, we used a set of generalized DEMs to build the contributing areas used in SHALSTAB. Those DEMs, with several resolutions (1 m and 5 m), were built from a set of photographs with 50 cm resolution taken in a flight at 5 km height. Using this combination of maps, we modelled several final maps of terrace instability and performed a validation process with the contingency matrix. The best final instability map combines the slope map from a DEM of 40 cm resolution and an MFD map from a DEM of 1 m resolution, with a True Positive Rate (TPR) of 0.97, a False Positive Rate (FPR) of 0.47, Accuracy (ACC) of 0.53, Precision (PVC) of 0.0004, and a TPR/FPR ratio of 2.06.
Keywords: agricultural terraces, cartography, landslides, SHALSTAB, vineyards
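A simplified sketch of the kind of per-cell calculation underlying SHALSTAB, combining the infinite-slope stability criterion with a steady-state topographic wetness term; the parameter values are assumptions for illustration, and the exact parameterization in SHALSTAB may differ in detail.

```python
import numpy as np

g, rho_w = 9.81, 1000.0            # gravity (m/s^2), water density (kg/m^3)

# Illustrative per-cell values (assumed, not the Douro study parameters).
slope = np.radians(np.array([25.0, 32.0, 38.0]))   # slope angle
a_over_b = np.array([40.0, 120.0, 300.0])          # contributing area / contour width (m)
z = 1.5                                            # soil depth (m)
c = 2000.0                                         # effective cohesion (Pa)
phi = np.radians(33.0)                             # friction angle
rho_s = 1800.0                                     # bulk soil density (kg/m^3)
T = 25.0                                           # transmissivity (m^2/day)

# Relative wetness h/z at limit equilibrium (infinite-slope model, FS = 1).
w_crit = (rho_s / rho_w) * (1.0 - np.tan(slope) / np.tan(phi)) \
         + c / (rho_w * g * z * np.cos(slope) ** 2 * np.tan(phi))

# SHALSTAB-style critical steady-state recharge (m/day): cells that need
# less recharge to reach w_crit are more prone to instability.
q_crit = T * np.sin(slope) / a_over_b * np.clip(w_crit, 0.0, 1.0)
print(np.round(q_crit * 1000.0, 2), "mm/day")
```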
Procedia PDF Downloads 176
109 Optimizing Stormwater Sampling Design for Estimation of Pollutant Loads
Authors: Raja Umer Sajjad, Chang Hee Lee
Abstract:
Stormwater runoff is the leading contributor to pollution of receiving waters. In response, an efficient stormwater monitoring program is required to quantify and eventually reduce stormwater pollution. The overall goals of stormwater monitoring programs primarily include the identification of high-risk dischargers and the development of total maximum daily loads (TMDLs). The challenge in developing a better monitoring program is to reduce the variability in flux estimates due to sampling errors; the success of a monitoring program mainly depends on the accuracy of the estimates. Apart from sampling errors, manpower and budgetary constraints also influence the quality of the estimates. This study attempted to develop an optimum stormwater monitoring design considering both cost and the quality of the estimated pollutant flux. Three years of stormwater monitoring data (2012–2014) from a mixed land use site located within the Geumhak watershed, South Korea, were evaluated. The regional climate is humid, and precipitation is usually well distributed throughout the year. The investigation of a large number of water quality parameters is time-consuming and resource-intensive. In order to identify a suite of easy-to-measure parameters to act as surrogates, Principal Component Analysis (PCA) was applied. Means, standard deviations, coefficients of variation (CV), and other simple statistics were computed using the multivariate statistical analysis software SPSS 22.0. The implications of sampling time for monitoring results, the number of samples required during a storm event, and the impact of seasonal first flush were also identified. Based on the observations derived from the PCA biplot and the correlation matrix, total suspended solids (TSS) was identified as a potential surrogate for turbidity, total phosphorus, and heavy metals like lead, chromium, and copper, whereas chemical oxygen demand (COD) was identified as a surrogate for organic matter. The CVs of the different monitored water quality parameters were found to be high (ranging from 3.8 to 15.5). This suggests that the use of a grab sampling design to estimate the mass emission rates in the study area can lead to errors due to large variability. The TSS discharge load calculation error was found to be only 2% between two different sample size approaches, i.e., 17 samples per storm event and 6 equally distributed samples per storm event. Both seasonal first flush and event first flush phenomena were observed for most water quality parameters in the study area. Samples taken at the initial stage of a storm event generally overestimate the mass emissions; however, it was found that collecting a grab sample after the initial hour of a storm event more closely approximates the mean concentration of the event. It was concluded that site- and regional-climate-specific interventions can be made to optimize the stormwater monitoring program in order to make it more effective and economical.
Keywords: first flush, pollutant load, stormwater monitoring, surrogate parameters
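A brief sketch of how PCA loadings can be inspected to select surrogate parameters such as TSS and COD; the parameter set and synthetic data are illustrative assumptions, and the study's actual SPSS workflow may differ.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 120  # hypothetical storm-event samples

# Illustrative water-quality matrix (mg/L); correlated columns mimic the
# behaviour reported in the abstract (TSS tracking turbidity, TP, metals).
tss = rng.lognormal(3, 0.6, n)
data = pd.DataFrame({
    "TSS": tss,
    "Turbidity": tss * 0.8 + rng.normal(0, 5, n),
    "TP": tss * 0.01 + rng.normal(0, 0.2, n),
    "Pb": tss * 0.001 + rng.normal(0, 0.02, n),
    "COD": rng.lognormal(3.5, 0.5, n),
})

Z = StandardScaler().fit_transform(data)
pca = PCA(n_components=2).fit(Z)

# Loadings: parameters clustering on the same component are candidates for
# representation by a single, easy-to-measure surrogate.
loadings = pd.DataFrame(pca.components_.T, index=data.columns,
                        columns=["PC1", "PC2"])
print(loadings.round(2))
print("explained variance:", np.round(pca.explained_variance_ratio_, 2))
```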
Procedia PDF Downloads 239
108 High Speed Motion Tracking with Magnetometer in Nonuniform Magnetic Field
Authors: Jeronimo Cox, Tomonari Furukawa
Abstract:
Magnetometers have become more popular in inertial measurement units (IMUs) for their ability to correct estimations using the earth's magnetic field. Accelerometer- and gyroscope-based packages fail with dead-reckoning errors accumulated over time. Localization in robotic applications with magnetometer-inclusive IMUs has become popular as a way to track the odometry of slower-speed robots. With high-speed motions, the accumulated error increases over smaller periods of time, making them difficult to track with an IMU. Tracking a high-speed motion is especially difficult with limited observability. Visual obstruction of motion leaves motion-tracking cameras unusable. When motions are too dynamic for estimation techniques reliant on the observability of the gravity vector, the use of magnetometers is further justified. As available magnetometer calibration methods are limited by the assumption that background magnetic fields are uniform, estimation in nonuniform magnetic fields is problematic. Hard iron distortion is a distortion of the magnetic field by other objects that produce magnetic fields. This kind of distortion is often observed as an offset of the center of the data points from the origin when a magnetometer is rotated. The magnitude of hard iron distortion is dependent on proximity to distortion sources. Soft iron distortion is more related to the scaling of the axes of magnetometer sensors. Hard iron distortion is the greater contributor to the error of attitude estimation with magnetometers. Indoor environments or spaces inside ferrite-based structures, such as building reinforcements or a vehicle, often cause distortions that vary with proximity. As positions correlate to areas of distortion, methods of magnetometer localization include producing spatial maps of the magnetic field and collecting distortion signatures to better aid location tracking. The goal of this paper is to compare magnetometer methods that do not need pre-produced magnetic field maps, since mapping the magnetic field in some spaces can be costly and inefficient. Dynamic measurement fusion is used to track the motion of a multi-link system. Conventional calibration by collecting rotation data at a static point, real-time estimation of calibration parameters at each time step, and the use of two magnetometers for determining local hard iron distortion are compared to confirm the robustness and accuracy of each technique. With opposite-facing magnetometers, hard iron distortion can be accounted for regardless of position, rather than assuming that hard iron distortion is constant regardless of positional change. The motion measured is a repeatable planar motion of a two-link system connected by revolute joints. The links are translated on a moving base to impart rotation to the links. The joints are equipped with absolute encoders, and the motion is recorded with cameras to enable ground-truth comparison with each of the magnetometer methods. While the two-magnetometer method accounts for local hard iron distortion, the method fails where the magnetic field direction in space is inconsistent.
Keywords: motion tracking, sensor fusion, magnetometer, state estimation
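For the conventional static-rotation calibration mentioned above, one common way to estimate the hard iron offset is a least-squares sphere fit to the rotated magnetometer samples; the sketch below performs such a fit on synthetic data and is not the authors' specific implementation.

```python
import numpy as np

def hard_iron_offset(mag):
    """Least-squares sphere fit: returns (hard-iron offset, field magnitude).

    Each row of `mag` is one magnetometer sample [mx, my, mz] collected while
    rotating the sensor at a static point (the conventional calibration).
    """
    # |m - c|^2 = r^2  ->  2 m.c + (r^2 - |c|^2) = |m|^2, linear in (c, k).
    A = np.column_stack([2.0 * mag, np.ones(len(mag))])
    b = np.sum(mag ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = sol[:3]                               # hard-iron offset
    radius = np.sqrt(sol[3] + centre @ centre)     # local field magnitude
    return centre, radius

# Synthetic rotation data: true field 50 uT, offset (12, -7, 20) uT, noise.
rng = np.random.default_rng(2)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
samples = 50.0 * dirs + np.array([12.0, -7.0, 20.0]) + rng.normal(0, 0.3, (500, 3))

offset, field = hard_iron_offset(samples)
print("estimated hard-iron offset:", np.round(offset, 2))
print("estimated field magnitude: ", round(field, 2))
```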
Procedia PDF Downloads 84
107 Cobb Angle Measurement from Coronal X-Rays Using Artificial Neural Networks
Authors: Andrew N. Saylor, James R. Peters
Abstract:
Scoliosis is a complex 3D deformity of the thoracic and lumbar spine, clinically diagnosed by measurement of a Cobb angle of 10 degrees or more on a coronal X-ray. The Cobb angle is the angle made by the lines drawn along the proximal and distal endplates of the respective proximal and distal vertebrae comprising the curve. Traditionally, Cobb angles are measured manually using either a marker, straight edge, and protractor or image measurement software. The task of measuring the Cobb angle can also be represented by a function taking the spine geometry rendered using X-ray imaging as input and returning the approximate angle. Although the form of such a function may be unknown, it can be approximated using artificial neural networks (ANNs). The performance of ANNs is affected by many factors, including the choice of activation function and network architecture; however, the effects of these parameters on the accuracy of scoliotic deformity measurements are poorly understood. Therefore, the objective of this study was to systematically investigate the effect of ANN architecture and activation function on Cobb angle measurement from the coronal X-rays of scoliotic subjects. The data set for this study consisted of 609 coronal chest X-rays of scoliotic subjects divided into 481 training images and 128 test images. These data, which included labeled Cobb angle measurements, were obtained from the SpineWeb online database. In order to normalize the input data, each image was resized using bi-linear interpolation to a size of 500 × 187 pixels, and the pixel intensities were scaled to be between 0 and 1. A fully connected (dense) ANN with a fixed cost function (mean squared error), batch size (10), and learning rate (0.01) was developed using Python Version 3.7.3 and TensorFlow 1.13.1. The activation functions (sigmoid, hyperbolic tangent [tanh], or rectified linear units [ReLU]), number of hidden layers (1, 3, 5, or 10), and number of neurons per layer (10, 100, or 1000) were varied systematically to generate a total of 36 network conditions. Stochastic gradient descent with early stopping was used to train each network. Three trials were run per condition, and the final mean squared errors and mean absolute errors were averaged to quantify the network response for each condition. The network that performed the best used ReLU neurons, three hidden layers, and 100 neurons per layer. The average mean squared error of this network was 222.28 ± 30 degrees², and the average mean absolute error was 11.96 ± 0.64 degrees. It is also notable that while most of the networks performed similarly, the networks using ReLU neurons, 10 hidden layers, and 1000 neurons per layer, and those using tanh neurons, one hidden layer, and 10 neurons per layer performed markedly worse, with average mean squared errors greater than 400 degrees² and average mean absolute errors greater than 16 degrees. From the results of this study, it can be seen that the choice of ANN architecture and activation function has a clear impact on Cobb angle inference from coronal X-rays of scoliotic subjects.
Keywords: scoliosis, artificial neural networks, cobb angle, medical imaging
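A sketch of the best-performing configuration reported above (three hidden ReLU layers of 100 neurons, mean squared error, SGD with early stopping), written against the modern Keras API rather than TensorFlow 1.13.1; the placeholder arrays stand in for the resized, normalized SpineWeb images and their labeled Cobb angles.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# X: flattened 500x187 coronal X-rays scaled to [0, 1]; y: Cobb angles (deg).
# Replace these random placeholders with the real image and label arrays.
X_train = np.random.rand(481, 500 * 187).astype("float32")
y_train = np.random.uniform(10, 60, 481).astype("float32")

model = keras.Sequential([
    layers.Input(shape=(500 * 187,)),
    layers.Dense(100, activation="relu"),
    layers.Dense(100, activation="relu"),
    layers.Dense(100, activation="relu"),
    layers.Dense(1),                       # regression output: Cobb angle
])

model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01),
              loss="mse", metrics=["mae"])

early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                           restore_best_weights=True)
model.fit(X_train, y_train, batch_size=10, epochs=200,
          validation_split=0.2, callbacks=[early_stop], verbose=0)
```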
Procedia PDF Downloads 129
106 Vehicle Timing Motion Detection Based on Multi-Dimensional Dynamic Detection Network
Authors: Jia Li, Xing Wei, Yuchen Hong, Yang Lu
Abstract:
Detecting vehicle behavior has always been a focus of intelligent transportation, but with the explosive growth in the number of vehicles and the complexity of the road environment, the vehicle behavior videos captured by traditional surveillance can no longer satisfy the study of vehicle behavior. The traditional method of manually labeling vehicle behavior is too time-consuming and labor-intensive, while existing object detection and tracking algorithms have poor practicability and a low behavior localization detection rate. This paper proposes a vehicle behavior detection algorithm based on a dual-stream convolution network and a multi-dimensional video dynamic detection network. In the videos, straight-line driving defaults to background behavior, while changing lanes, turning, and turning around are set as target behaviors. The purpose of this model is to automatically mark the target behaviors of the vehicle in untrimmed videos. First, the target behavior proposals in the long video are extracted through the dual-stream convolution network. The model uses a dual-stream convolutional network to generate a one-dimensional action score waveform and then extracts segments with scores above a given threshold M as preliminary vehicle behavior proposals. Second, the preliminary proposals are pruned and identified using the multi-dimensional video dynamic detection network. Drawing on hierarchical reinforcement learning, the multi-dimensional network includes a Timer module and a Spacer module, where the Timer module mines temporal information in the video stream and the Spacer module extracts spatial information in the video frame. The Timer and Spacer modules are implemented with Long Short-Term Memory (LSTM) units and start from an all-zero hidden state. The Timer module uses the Transformer mechanism to extract timing information from the video stream and extracts features by linear mapping and other methods. Finally, the model fuses the temporal and spatial information and obtains the location and category of the behavior through the softmax layer. This paper uses recall and precision to measure the performance of the model. Extensive experiments show that, on the dataset of this paper, the proposed model has obvious advantages compared with existing state-of-the-art behavior detection algorithms. When the Time Intersection over Union (TIoU) threshold is 0.5, the mean average precision (mAP) reaches 36.3% (the mAP of the baselines is 21.5%). In summary, this paper proposes a vehicle behavior detection model based on a multi-dimensional dynamic detection network. This paper introduces spatial and temporal information to extract vehicle behaviors in long videos. Experiments show that the proposed algorithm is advanced and accurate in vehicle timing behavior detection. In the future, the focus will be on simultaneously detecting the timing behavior of multiple vehicles in complex traffic scenes (such as a busy street) while ensuring accuracy.
Keywords: vehicle behavior detection, convolutional neural network, long short-term memory, deep learning
Procedia PDF Downloads 129
105 Phonological Processing and Its Role in Pseudo-Word Decoding in Children Learning to Read Kannada Language between 5.6 to 8.6 Years
Authors: Vangmayee. V. Subban, Somashekara H. S, Shwetha Prabhu, Jayashree S. Bhat
Abstract:
Introduction and Need: Phonological processing is critical in learning to read alphabetical and non-alphabetical languages. However, its role in learning to read Kannada an alphasyllabary is equivocal. The literature has focused on the developmental role of phonological awareness on reading. To the best of authors knowledge, the role of phonological memory and phonological naming has not been addressed in alphasyllabary Kannada language. Therefore, there is a need to evaluate the comprehensive role of the phonological processing skills in Kannada on word decoding skills during the early years of schooling. Aim and Objectives: The present study aimed to explore the phonological processing abilities and their role in learning to decode pseudowords in children learning to read the Kannada language during initial years of formal schooling between 5.6 to 8.6 years. Method: In this cross sectional study, 60 typically developing Kannada speaking children, 20 each from Grade I, Grade II, and Grade III between the age range of 5.6 to 6.6 years, 6.7 to 7.6 years and 7.7 to 8.6 years respectively were selected from Kannada medium schools. Phonological processing abilities were assessed using an assessment tool specifically developed to address the objectives of the present research. The assessment tool was content validated by subject experts and had good inter and intra-subject reliability. Phonological awareness was assessed at syllable level using syllable segmentation, blending, and syllable stripping at initial, medial and final position. Phonological memory was assessed using pseudoword repetition task and phonological naming was assessed using rapid automatized naming of objects. Both phonological awareneness and phonological memory measures were scored for the accuracy of the response, whereas Rapid Automatized Naming (RAN) was scored for total naming speed. Results: The mean scores comparison using one-way ANOVA revealed a significant difference (p ≤ 0.05) between the groups on all the measures of phonological awareness, pseudoword repetition, rapid automatized naming, and pseudoword reading. Subsequent post-hoc grade wise comparison using Bonferroni test revealed significant differences (p ≤ 0.05) between each of the grades for all the tasks except (p ≥ 0.05) for syllable blending, syllable stripping, and pseudoword repetition between Grade II and Grade III. The Pearson correlations revealed a highly significant positive correlation (p=0.000) between all the variables except phonological naming which had significant negative correlations. However, the correlation co-efficient was higher for phonological awareness measures compared to others. Hence, phonological awareness was chosen a first independent variable to enter in the hierarchical regression equation followed by rapid automatized naming and finally, pseudoword repetition. The regression analysis revealed syllable awareness as a single most significant predictor of pseudoword reading by explaining the unique variance of 74% and there was no significant change in R² when RAN and pseudoword repetition were added subsequently to the regression equation. Conclusion: Present study concluded that syllable awareness matures completely by Grade II, whereas the phonological memory and phonological naming continue to develop beyond Grade III. 
Amongst phonological processing skills, phonological awareness, especially syllable awareness, is more crucial for word decoding than phonological memory and naming during the initial years of schooling.
Keywords: phonological awareness, phonological memory, phonological naming, phonological processing, pseudo-word decoding
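A short sketch of the hierarchical regression step described above, entering syllable awareness first and then RAN and pseudoword repetition and comparing R² across steps; the variable names and synthetic data frame are hypothetical stand-ins for the study's measures.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 60  # hypothetical children; replace with the real per-child measures
df = pd.DataFrame({
    "syllable_awareness": rng.normal(20, 4, n),
    "ran_speed": rng.normal(45, 8, n),
    "pseudoword_repetition": rng.normal(15, 3, n),
})
df["pseudoword_reading"] = (0.8 * df["syllable_awareness"]
                            - 0.05 * df["ran_speed"] + rng.normal(0, 2, n))

m1 = smf.ols("pseudoword_reading ~ syllable_awareness", data=df).fit()
m2 = smf.ols("pseudoword_reading ~ syllable_awareness + ran_speed", data=df).fit()
m3 = smf.ols("pseudoword_reading ~ syllable_awareness + ran_speed"
             " + pseudoword_repetition", data=df).fit()

# The change in R-squared at each step is the unique variance explained by
# the predictor entered at that step.
for name, model in [("Step 1", m1), ("Step 2", m2), ("Step 3", m3)]:
    print(name, "R2 =", round(model.rsquared, 3))
```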
Procedia PDF Downloads 173
104 A Geographic Information System Mapping Method for Creating Improved Satellite Solar Radiation Dataset Over Qatar
Authors: Sachin Jain, Daniel Perez-Astudillo, Dunia A. Bachour, Antonio P. Sanfilippo
Abstract:
The future of solar energy in Qatar is evolving steadily. Hence, high-quality spatial solar radiation data is of the utmost importance for any planning and commissioning of solar technology. Generally, two types of solar radiation data are available: satellite data and ground observations. Satellite solar radiation data is developed with physical and statistical models. Ground data is collected by solar radiation measurement stations. The ground data is of high quality; however, it is limited to distributed point locations, with high costs of installation and maintenance for the ground stations. On the other hand, satellite solar radiation data is continuous and available throughout geographical locations, but it is relatively less accurate than ground data. To utilize the advantages of both, a product has been developed here which provides spatial continuity and higher accuracy than either data source alone. The popular satellite database, the National Solar Radiation Database (NSRDB; PSM V3 model, spatial resolution: 4 km), is chosen here for merging with ground-measured solar radiation in Qatar. The spatial distribution of ground solar radiation measurement stations is comprehensive in Qatar, with a network of 13 ground stations. The monthly average of the daily total Global Horizontal Irradiation (GHI) component from ground and satellite data is used for error analysis. Normalized root mean square error (NRMSE) values of 3.31%, 6.53%, and 6.63% were observed for October, November, and December 2019, respectively, when comparing in-situ and NSRDB data. The method is based on the Empirical Bayesian Kriging Regression Prediction model available in ArcGIS, ESRI. The workflow of the algorithm is based on the combination of regression and kriging methods. A regression model (OLS, ordinary least squares) is fitted between the ground and NSRDB data points. A semi-variogram model is fitted to the experimental semi-variogram obtained from the residuals. The kriged residuals obtained after fitting the semi-variogram model were added to the predicted values obtained from the regression model to obtain the final predicted values. The NRMSE values obtained after merging are 1.84%, 1.28%, and 1.81% for October, November, and December 2019, respectively. One more explanatory variable, namely the ground elevation, has been incorporated in the regression and kriging methods to reduce the error and to provide higher spatial resolution (30 m). The final GHI maps have been created after merging, and NRMSE values of 1.24%, 1.28%, and 1.28% have been observed for October, November, and December 2019, respectively. The proposed merging method has proven to be highly accurate. An additional method is also proposed here to generate calibrated maps using the regression and kriging model, and then to use the calibrated model to generate solar radiation maps from the explanatory variables only when not enough historical ground data is available for long-term analysis. The NRMSE values obtained after the comparison of the calibrated maps with ground data are 5.60% and 5.31% for November and December 2019, respectively.
Keywords: global horizontal irradiation, GIS, empirical bayesian kriging regression prediction, NSRDB
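A condensed sketch of the regression-plus-residual-kriging idea described above, here using ordinary kriging from the pykrige package as a stand-in for ArcGIS Empirical Bayesian Kriging Regression Prediction; coordinates, values, and the variogram model are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(4)

# Hypothetical monthly-mean GHI at 13 ground stations (kWh/m2/day),
# with co-located NSRDB values and station elevations (m).
x = rng.uniform(50.8, 51.6, 13)      # longitude
y = rng.uniform(24.5, 26.2, 13)      # latitude
nsrdb = rng.uniform(4.5, 5.5, 13)
elev = rng.uniform(0, 100, 13)
ground = nsrdb + 0.05 * (elev - 50) / 100 + rng.normal(0, 0.05, 13)

# Step 1: regression of ground GHI on NSRDB GHI and elevation.
X = np.column_stack([nsrdb, elev])
reg = LinearRegression().fit(X, ground)
residuals = ground - reg.predict(X)

# Step 2: krige the regression residuals onto the target grid; the merged
# map is the regression prediction (evaluated on gridded NSRDB/elevation
# layers) plus the kriged residual surface.
ok = OrdinaryKriging(x, y, residuals, variogram_model="spherical")
gx, gy = np.linspace(50.8, 51.6, 50), np.linspace(24.5, 26.2, 50)
res_grid, res_var = ok.execute("grid", gx, gy)
print("kriged residual range:", float(res_grid.min()), float(res_grid.max()))
```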
Procedia PDF Downloads 88
103 The Effects of Aging on Visuomotor Behaviors in Reaching
Authors: Mengjiao Fan, Thomson W. L. Wong
Abstract:
It is unavoidable that older adults may have to deal with aging-related motor problems. Aging is highly likely to affect motor learning and control as well. For example, older adults may suffer from poor motor function and quality of life due to age-related eye changes. These adverse changes in vision result in impairment of movement automaticity. Reaching is a fundamental component of various complex movements and is therefore a useful context in which to explore changes and adaptation in visuomotor behaviors. The current study aims to explore how aging affects visuomotor behaviors by comparing motor performance and gaze behaviors between two age groups (i.e., young and older adults). Visuomotor behaviors in reaching under conditions providing or blocking online visual feedback (simulated visual deficiency) were investigated in 60 healthy young adults (mean age = 24.49 years, SD = 2.12) and 37 older adults (mean age = 70.07 years, SD = 2.37) with normal or corrected-to-normal vision. Participants in each group were randomly allocated into two subgroups. Subgroup 1 was provided with online visual feedback of the hand-controlled mouse cursor. However, in subgroup 2, visual feedback was blocked to simulate visual deficiency. The experimental task required participants to complete 20 reaching movements to a target by controlling the mouse cursor on the computer screen. In all 20 trials, the start position was located at the center of the screen, and the target appeared at a position randomly selected by the tailor-made computer program. Primary outcomes of motor performance and gaze behavior data were recorded by the EyeLink II (SR Research, Canada). The results suggested that aging affects the performance of reaching tasks significantly in both visual feedback conditions. In both age groups, blocking online visual feedback of the cursor in reaching resulted in longer hand movement time (p < .001), longer reaching distance away from the target center (p < .001), and poorer reaching motor accuracy (p < .001). Concerning gaze behaviors, blocking online visual feedback increased the first fixation duration in young adults (p < .001) but decreased it in older adults (p < .001). Besides, under the condition of providing online visual feedback of the cursor, older adults showed a longer fixation dwell time on the target throughout reaching than the young adults (p < .001), although the effect was not significant under the blocked online visual feedback condition (p = .215). Therefore, the results suggested that different levels of visual feedback during movement execution can affect gaze behaviors differently in older and young adults. Differential effects of aging on visuomotor behaviors appear under the two visual feedback patterns (i.e., blocking or providing online visual feedback of the hand-controlled cursor in reaching). Several specific gaze behaviors among the older adults were found, which imply that blocking visual feedback may act as a stimulus that induces extra perceptual load during movement execution, and that age-related visual degeneration might further deteriorate the situation. This provides insight for the future development of potential rehabilitative training methods (e.g., well-designed errorless training) to enhance visuomotor adaptation in the aging population, improving movement automaticity by facilitating compensation for visual degeneration.
Keywords: aging effect, movement automaticity, reaching, visuomotor behaviors, visual degeneration
Procedia PDF Downloads 311
102 Numerical Model of Crude Glycerol Autothermal Reforming to Hydrogen-Rich Syngas
Authors: A. Odoom, A. Salama, H. Ibrahim
Abstract:
Hydrogen is a clean source of energy for power production and transportation. The source of hydrogen considered in this research is crude glycerol from biodiesel production. Glycerol, also called glycerine, is a by-product of biodiesel production by transesterification of vegetable oils with methanol, and it is a more reliable and environmentally friendly source of hydrogen than fossil fuels. A typical crude glycerol comprises glycerol, water, organic and inorganic salts, soap, methanol and small amounts of glycerides. Crude glycerol has limited industrial application due to its low purity; thus, using crude glycerol for hydrogen production can significantly enhance the sustainability of biodiesel production. Reforming is the main approach for hydrogen production, chiefly Steam Reforming (SR), Autothermal Reforming (ATR) and Partial Oxidation Reforming (POR). SR produces high hydrogen conversion and yield but is highly endothermic, whereas POR is exothermic; on the downside, POR yields less hydrogen and produces a large number of side reactions. ATR, which is a fusion of partial oxidation reforming and steam reforming, is thermally neutral because the net reactor heat duty is zero; it gives a relatively high hydrogen yield and selectivity and also limits coke formation. The complex chemical processes that take place during the production phases make it relatively difficult to construct a reliable and robust numerical model. A numerical model is a tool to mimic reality and provide insight into the influence of the parameters. In this work, we introduce a finite volume numerical study of an in-house lab-scale ATR experiment. Previous numerical studies of this process have used either Comsol or nodal finite difference analysis. Since Comsol is a commercial package that is not readily available everywhere, and since the lab-scale experiment can be considered well mixed in the radial direction so that one spatial dimension suffices to capture the essential features of ATR, in this work we develop our own numerical approach using MATLAB. A continuum fixed bed reactor is modelled in MATLAB with both pseudo-homogeneous and heterogeneous models. The drawback of the nodal finite difference formulation is that it is not locally conservative, which means that materials and momenta can be generated inside the domain as an artifact of the discretization. The control volume method, on the other hand, is locally conservative and suits very well problems where materials are generated and consumed inside the domain. In this work, the species mass balances, Darcy’s equation and the energy equations are solved using an operator splitting technique: diffusion-like terms are discretized implicitly while advection-like terms are discretized explicitly, and an upwind scheme is adopted for the advection terms to ensure accuracy and positivity. Comparisons with the experimental data show very good agreement, which builds confidence in our modeling approach. The models obtained were validated and optimized for better results. Keywords: autothermal reforming, crude glycerol, hydrogen, numerical model
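As a rough illustration of the operator-splitting discretization described in the abstract (implicit treatment of diffusion-like terms, explicit upwind treatment of advection-like terms), the following minimal Python sketch advances a single 1D species balance on a finite-volume grid. The grid size, velocity and diffusivity are illustrative assumptions, not the authors' MATLAB reactor model.

```python
# Minimal 1D finite-volume sketch of operator splitting:
# diffusion handled implicitly, advection handled explicitly with an upwind scheme.
# Grid, velocity and diffusivity values are illustrative assumptions.
import numpy as np

def step_advection_upwind(c, u, dx, dt):
    """Explicit first-order upwind update for u > 0 (conservative flux differences)."""
    flux = u * c                          # upwind interface flux for u > 0
    c_new = c.copy()
    c_new[1:] -= dt / dx * (flux[1:] - flux[:-1])
    return c_new

def step_diffusion_implicit(c, D, dx, dt):
    """Implicit (backward Euler) diffusion update via a banded linear solve."""
    n = c.size
    r = D * dt / dx**2
    A = (np.diag((1 + 2 * r) * np.ones(n))
         + np.diag(-r * np.ones(n - 1), 1)
         + np.diag(-r * np.ones(n - 1), -1))
    A[0, :], A[-1, :] = 0, 0
    A[0, 0], A[-1, -1] = 1.0, 1.0         # simple Dirichlet ends (assumption)
    return np.linalg.solve(A, c)

# Illustrative run: a concentration pulse advected and diffused along the bed.
nx, L = 100, 1.0
dx, dt = L / nx, 1e-3
u, D = 0.5, 1e-3                          # assumed superficial velocity and diffusivity
c = np.exp(-((np.linspace(0, L, nx) - 0.2) / 0.05) ** 2)
for _ in range(500):
    c = step_diffusion_implicit(step_advection_upwind(c, u, dx, dt), D, dx, dt)
```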
Procedia PDF Downloads 140
101 Leveraging Advanced Technologies and Data to Eliminate Abandoned, Lost, or Otherwise Discarded Fishing Gear and Derelict Fishing Gear
Authors: Grant Bifolchi
Abstract:
As global environmental problems continue to have highly adverse effects, finding long-term, sustainable solutions to combat ecological distress is of paramount and growing concern. Ghost Gear—also known as abandoned, lost or otherwise discarded fishing gear (ALDFG) and derelict fishing gear (DFG)—represents one of the greatest threats to the world’s oceans, posing a significant hazard to human health, livelihoods, and global food security. In fact, according to the UN Food and Agriculture Organization (FAO), abandoned, lost and discarded fishing gear represents approximately 10% of marine debris by volume. Around the world, many governments, governmental and non-profit organizations are doing their best to manage the reporting and retrieval of nets, lines, ropes, traps, floats and more from their respective bodies of water. However, these organizations’ limited ability to effectively manage the files and documents describing the environmental problem further complicates matters. In Ghost Gear monitoring and management, organizations face additional complexities. Whether it’s data ingest, industry regulations and standards, garnering actionable insights into the location, security, and management of data, or the application of enforcement due to disparate data—all of these factors place massive strains on organizations struggling to save the planet from the dangers of Ghost Gear. In this 90-minute educational session, globally recognized Ghost Gear technology expert Grant Bifolchi CET, BBA, BCom, will provide real-world insight into how governments currently manage Ghost Gear and the technology that can accelerate success in combatting ALDFG and DFG. In this session, attendees will learn how to: • Identify specific technologies to solve the ingest and management of Ghost Gear data categories, including type, geo-location, size, ownership, regional assignment, collection and disposal. • Provide enhanced access to authorities, fisheries, independent fishing vessels, individuals, etc., while securely controlling confidential and privileged data to globally recognized standards. • Create and maintain processing accuracy to effectively track ALDFG/DFG reporting progress—including acknowledging receipt of the report and sharing it with all pertinent stakeholders to ensure approvals are secured. • Enable and utilize Business Intelligence (BI) and Analytics to store and analyze data to optimize organizational performance, maintain anytime-visibility of report status, user accountability, scheduling, and management, and foster governmental transparency. • Maintain Compliance Reporting through highly defined, detailed and automated reports—enabling all stakeholders to share critical insights with internal colleagues, regulatory agencies, and national and international partners. Keywords: ghost gear, ALDFG, DFG, abandoned, lost or otherwise discarded fishing gear, data, technology
Procedia PDF Downloads 94
100 Non-Invasive Characterization of the Mechanical Properties of Arterial Walls
Authors: Bruno Ramaël, Gwenaël Page, Catherine Knopf-Lenoir, Olivier Baledent, Anne-Virginie Salsac
Abstract:
No routine technique currently exists for clinicians to measure the mechanical properties of vascular walls non-invasively. Most of the data available in the literature come from traction or dilatation tests conducted ex vivo on native blood vessels. The objective of the study is to develop a non-invasive characterization technique based on Magnetic Resonance Imaging (MRI) measurements of the deformation of vascular walls under pulsating blood flow conditions. The goal is to determine the mechanical properties of the vessels by inverse analysis, coupling imaging measurements and numerical simulations of the fluid-structure interactions. The hyperelastic properties are identified using Solidworks and Ansys Workbench (ANSYS Inc.) by solving an optimization problem. The vessel of interest targeted in the study is the common carotid artery. In vivo MRI measurements of the vessel anatomy and inlet velocity profiles were acquired along the facial vascular network in a cohort of 30 healthy volunteers: the time-evolution of the blood vessel contours, and thus of the cross-section surface area, was measured with 3D phase-contrast MRI angiography sequences, and the blood flow velocity was measured using a 2D CINE phase-contrast MRI (PC-MRI) method. Reference arterial pressure waveforms were simultaneously measured in the brachial artery using a sphygmomanometer. The three-dimensional (3D) geometry of the arterial network was reconstructed by first creating an STL file from the raw MRI data using the open source imaging software ITK-SNAP. The resulting geometry was then transformed with Solidworks into volumes that are compatible with the Ansys software. Tetrahedral meshes of the wall and fluid domains were built using the ANSYS Meshing software, with near-wall mesh refinement in the fluid domain to improve the accuracy of the fluid flow calculations. Ansys Structural was used for the numerical simulation of the vessel deformation and Ansys CFX for the simulation of the blood flow. The fluid-structure interaction simulations showed that the systolic and diastolic blood pressures of the common carotid artery could be taken as reference pressures to identify the mechanical properties of the different arteries of the network. The coefficients of the hyperelastic law were identified using the Ansys design model for the common carotid artery. Under large deformations, a stiffness of 800 kPa is measured, which is of the same order of magnitude as the Young's modulus of collagen fibers. Areas of maximum deformation were highlighted near bifurcations. This study is a first step towards patient-specific characterization of the mechanical properties of the facial vessels. The method is currently being applied to patients suffering from facial vascular malformations and to patients scheduled for facial reconstruction. Information on the blood flow velocity as well as on the vessel anatomy and deformability will be key to improving surgical planning in the case of such vascular pathologies. Keywords: identification, mechanical properties, arterial walls, MRI measurements, numerical simulations
Procedia PDF Downloads 317
99 A Data-Driven Compartmental Model for Dengue Forecasting and Covariate Inference
Authors: Yichao Liu, Peter Fransson, Julian Heidecke, Jonas Wallin, Joacim Rockloev
Abstract:
Dengue, a mosquito-borne viral disease, poses a significant public health challenge in endemic tropical and subtropical countries, including Sri Lanka. To reveal insights into the complexity of the dynamics of this disease and to study its drivers, a comprehensive model capable of both robust forecasting and insightful inference of drivers, while capturing the co-circulation of several virus strains, is essential. However, existing studies mostly focus on only one aspect at a time and do not integrate or carry insights across these siloed approaches. While mechanistic models are developed to capture immunity dynamics, they are often oversimplified and lack integration of all the diverse drivers of disease transmission. On the other hand, purely data-driven methods lack the constraints imposed by immuno-epidemiological processes, making them prone to overfitting and inference bias. This research presents a hybrid model that combines machine learning techniques with mechanistic modelling to overcome the limitations of existing approaches. Leveraging eight years of newly reported dengue case data, along with socioeconomic factors such as human mobility, weekly climate data from 2011 to 2018, genetic data detecting the introduction and presence of new strains, and estimates of seropositivity for different districts in Sri Lanka, we derive a data-driven vector (SEI) to human (SEIR) model across 16 regions in Sri Lanka at the weekly time scale. Ablation studies were conducted to determine the lag effects of the time-varying climate factors, allowing delays of up to 12 weeks. The model demonstrates superior predictive performance over a pure machine learning approach when considering lead times of 5 and 10 weeks on data withheld from model fitting. It further reveals several interesting, interpretable findings about the drivers, while adjusting for the dynamics and influences of immunity and the introduction of a new strain. The study uncovers strong influences of socioeconomic variables: population density, mobility, household income and rural vs. urban population. It reveals substantial sensitivity to the diurnal temperature range and precipitation, while mean temperature and humidity appear less important in the study location. Additionally, the model indicated sensitivity to the vegetation index, both its maximum and average values. Predictions on testing data reveal high model accuracy. Overall, this study advances the knowledge of dengue transmission in Sri Lanka and demonstrates the importance of hybrid modelling techniques that use biologically informed model structures with flexible, data-driven estimates of model parameters. The findings show the potential of this approach both for inference of drivers in situations of complex disease dynamics and for robust forecasting. Keywords: compartmental model, climate, dengue, machine learning, social-economic
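A minimal sketch of the vector (SEI) to human (SEIR) compartmental structure named in the abstract, written as ordinary differential equations and integrated with SciPy. All rates and population sizes are illustrative assumptions, not the study's fitted, climate-driven parameters.

```python
# Sketch of a vector (SEI) - human (SEIR) dengue model; parameters are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

def sei_seir(t, y, beta_hv, beta_vh, sigma_h, gamma_h, sigma_v, mu_v):
    Sh, Eh, Ih, Rh, Sv, Ev, Iv = y
    Nh, Nv = Sh + Eh + Ih + Rh, Sv + Ev + Iv
    new_h = beta_hv * Sh * Iv / Nh          # mosquito-to-human infections
    new_v = beta_vh * Sv * Ih / Nh          # human-to-mosquito infections
    dSh = -new_h
    dEh = new_h - sigma_h * Eh
    dIh = sigma_h * Eh - gamma_h * Ih
    dRh = gamma_h * Ih
    dSv = mu_v * Nv - new_v - mu_v * Sv     # constant vector population (assumption)
    dEv = new_v - (sigma_v + mu_v) * Ev
    dIv = sigma_v * Ev - mu_v * Iv
    return [dSh, dEh, dIh, dRh, dSv, dEv, dIv]

y0 = [1e5 - 10, 0, 10, 0, 2e5, 0, 0]        # assumed initial human/vector populations
params = (0.3, 0.3, 1 / 5.5, 1 / 7.0, 1 / 10.0, 1 / 14.0)  # assumed daily rates
sol = solve_ivp(sei_seir, (0, 365), y0, args=params, dense_output=True)
```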
Procedia PDF Downloads 84
98 ADAM10 as a Potential Blood Biomarker of Cognitive Frailty
Authors: Izabela P. Vatanabe, Rafaela Peron, Patricia Manzine, Marcia R. Cominetti
Abstract:
Introduction: Considering the increase in life expectancy of the world population, there is an emerging concern in health services to provide better care to the elderly through health promotion, prevention and treatment. Frailty syndrome is prevalent in elderly people worldwide, and this complex and heterogeneous clinical syndrome consists of the presence of physical frailty associated with cognitive dysfunction, though in the absence of dementia. It can be characterized by exhaustion, unintentional weight loss, decreased walking speed, weakness and a low level of physical activity; in addition, each of these symptoms may be a predictor of adverse outcomes such as hospitalization, falls, functional decline, institutionalization, and death. Cognitive frailty is a recent concept in the literature, defined as the presence of physical frailty associated with mild cognitive impairment (MCI), however in the absence of dementia. This new concept has been considered a subtype of frailty which, along with the aging process and its interaction with physical frailty, accelerates functional decline and can result in poor quality of life for the elderly. MCI represents a risk factor for Alzheimer's disease (AD) in view of the high conversion rate to this disease. Comorbidities and physical frailty are frequently found in AD patients and are closely related to the heterogeneity and clinical manifestations of the disease. Our group has previously reported decreased platelet ADAM10 levels in AD patients compared to cognitively healthy subjects matched by sex, age and education. Objective: Based on these previous results, this study aims to evaluate whether platelet ADAM10 levels could act as a biomarker of cognitive frailty. Methods: The study was approved by the Ethics Committee of the Federal University of São Carlos (UFSCar) and conducted in the municipality of São Carlos, where the university is headquartered. Biological samples from subjects were collected, analyzed and then stored in a biorepository. Platelet ADAM10 levels were analyzed by the western blotting technique in subjects with MCI and compared to subjects without cognitive impairment, both with and without frailty. Statistical tests of association, regression and diagnostic accuracy were performed. Results: The results showed that the ADAM10/β-actin ratio is decreased in elderly individuals with cognitive frailty compared to non-frail and cognitively healthy controls. Previous studies performed by this research group, mentioned above, demonstrated that this reduction is even greater in AD patients. Therefore, the ADAM10/β-actin ratio appears to be a potential biomarker for cognitive frailty. The results bring important contributions towards an accurate diagnosis of cognitive frailty from the perspective of ADAM10 as a biomarker for this condition. However, further experiments with a larger number of subjects are being conducted and will help to clarify the role of ADAM10 as a biomarker of cognitive frailty and contribute to the implementation of tools for the diagnosis of cognitive frailty. Such tools can be used in public policies for the diagnosis of cognitive frailty in the elderly, resulting in more adequate planning for health teams and a better quality of life for the elderly. Keywords: ADAM10, biomarkers, cognitive frailty, elderly
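A small sketch of how the diagnostic accuracy of a decreased ADAM10/β-actin ratio might be summarised with an ROC curve, as one of the statistical "tests of diagnostic accuracy" mentioned above. The group sizes and values below are synthetic placeholders, not the study's measurements.

```python
# Sketch of a diagnostic-accuracy (ROC/AUC) analysis for a platelet ADAM10/beta-actin
# ratio; all values are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# 0 = cognitively healthy, non-frail control; 1 = cognitive frailty (assumed labels)
labels = np.r_[np.zeros(40), np.ones(40)]
# assume a lower ADAM10/beta-actin ratio in cognitive frailty, as reported above
ratio = np.r_[rng.normal(1.0, 0.2, 40), rng.normal(0.7, 0.2, 40)]

# Use the negative ratio as the score so that higher scores indicate frailty.
auc = roc_auc_score(labels, -ratio)
fpr, tpr, thresholds = roc_curve(labels, -ratio)
print(f"AUC = {auc:.2f}")
```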
Procedia PDF Downloads 234
97 Investigation of Software Integration for Simulations of Buoyancy-Driven Heat Transfer in a Vehicle Underhood during Thermal Soak
Authors: R. Yuan, S. Sivasankaran, N. Dutta, K. Ebrahimi
Abstract:
This paper investigates the software capability and computer-aided engineering (CAE) method for modelling the transient heat transfer process occurring in the vehicle underhood region during the vehicle thermal soak phase. The heat retained from the soak period is beneficial to the subsequent cold start, with reduced friction loss, for the second 14°C worldwide harmonized light-duty vehicle test procedure (WLTP) cycle, and therefore provides benefits for both CO₂ emission reduction and fuel economy. When the vehicle undergoes the soak stage, the airflow and the associated convective heat transfer around and inside the engine bay are driven by the buoyancy effect. This effect, along with thermal radiation and conduction, is the key factor in the thermal simulation of the engine bay for obtaining accurate fluid and metal temperature cool-down trajectories and predicting the temperatures at the end of the soak period. Method development was investigated in this study on a light-duty passenger vehicle using a coupled aerodynamic-heat transfer transient modelling method for the full vehicle over 9 hours of thermal soak. The 3D underhood flow dynamics were solved as an inherently transient problem by the Lattice-Boltzmann Method (LBM) using the PowerFlow software. This was further coupled with heat transfer modelling using the PowerTHERM software provided by Exa Corporation. The particle-based LBM was capable of accurately handling extremely complicated transient flow behavior on complex surface geometries. The detailed thermal modelling, including heat conduction, radiation, and buoyancy-driven heat convection, was solved in an integrated manner by PowerTHERM. The 9-hour cool-down period was simulated and compared with the vehicle testing data for the key fluid (coolant, oil) and metal temperatures. The developed CAE method was able to predict the cool-down behaviour of the key fluids and components in agreement with the experimental data and also visualised the air leakage paths and thermal retention around the engine bay. The cool-down trajectories of the key components obtained for the 9-hour thermal soak period provide vital information and a basis for the further development of reduced-order modelling studies in future work. This allows a fast-running model to be developed and further embedded within the holistic study of vehicle energy modelling and thermal management. It was also found that the buoyancy effect plays an important part in the first stage of the 9-hour soak, and the flow development during this stage is vital for accurately predicting the heat transfer coefficients used in the heat retention modelling. The developed method demonstrates software integration for simulating buoyancy-driven heat transfer in a vehicle underhood region during thermal soak with satisfactory accuracy and efficient computing time. The CAE method developed will allow the design of engine encapsulations to be integrated for improving fuel consumption and reducing CO₂ emissions in a timely and robust manner, aiding the development of low-carbon transport technologies. Keywords: ATCT/WLTC driving cycle, buoyancy-driven heat transfer, CAE method, heat retention, underhood modeling, vehicle thermal soak
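As a pointer toward the reduced-order modelling the authors plan as future work, the sketch below shows a lumped-capacitance cool-down trajectory of the kind the full LBM/thermal simulation provides. The thermal masses and effective heat-loss coefficients are assumptions, not values calibrated to the vehicle test data.

```python
# Reduced-order (lumped-capacitance) sketch of a 9-hour soak cool-down trajectory.
# Thermal masses (m*c) and effective heat-loss coefficients (h*A) are assumptions.
import numpy as np

def cool_down(T0, T_amb, hA, mc, hours=9.0, dt_s=60.0):
    """Explicit integration of m*c*dT/dt = -h*A*(T - T_amb)."""
    n = int(hours * 3600 / dt_s)
    T = np.empty(n + 1)
    T[0] = T0
    for k in range(n):
        T[k + 1] = T[k] - dt_s * hA / mc * (T[k] - T_amb)
    return T

coolant = cool_down(T0=90.0, T_amb=14.0, hA=12.0, mc=4.2e4)   # assumed coolant loop
oil = cool_down(T0=95.0, T_amb=14.0, hA=6.0, mc=9.0e4)        # assumed oil and block
```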
Procedia PDF Downloads 152
96 Understanding Awareness, Agency and Autonomy of Mothers and Potential of Digital Technology in Expanding Maternal Health Information Access: A Survey of Mothers in Urban India
Authors: Sumiti Saharan, Pallav Patankar, Lily W. Lee
Abstract:
Understanding the health-seeking behaviors and attitudes of women towards maternal health in the context of gender roles and family dynamics is tremendously crucial for designing effective and impactful interventions aimed at improving maternal and child health outcomes. Further, as the digital world becomes more accessible and affordable, it is imperative to scope the potential of digital technology in enabling access to maternal health information in different socio-economic groups (SEGs). In the summer of 2017, we conducted a study with 500 women across different SEGs in urban India who were pregnant or had had a delivery in the last year. The study was undertaken to assess their maternal health information seeking behavior with a particular focus on probing their use of digital technology for health-related information. The study also measured women's decision-making autonomy in the context of maternal health, awareness of their rights to quality and respectful maternal healthcare, and agency to voice their rights. We probed the impact of key variables including education, age, and socioeconomic status on all outcome variables. In terms of health-seeking behaviors, we found that women heavily relied on medical professionals and/or their mothers and mothers-in-law for all maternal health advice. Digital adoption was found to be high across all SEGs, with around 70% of women from all populations using the internet several times a week. On the other hand, use of the internet for both accessing maternal health information and choosing maternity hospitals were both significantly dependent on SEG. The key reasons reported for not using the internet for health purposes were lack of awareness and lack of trust on content accuracy. Decisions around health practices and type of delivery were found to be jointly made by women and other family members. Almost all women reported their husbands to play a key role in all maternal health decisions and for decisions with a clear financial implication like choice of hospital for delivery, husbands were reported to be the sole decision maker by a majority of women. The agency of women was also found to be low in interactions with maternal healthcare providers with a third of respondents not comfortable with voicing their opinions and preferences to their doctors. Interestingly, we find that this relatively low agency was prominent in both lower middle class and middle-class SEGs. Recognition of the sociocultural determinants of behavior is the first step in developing actionable strategies for improving maternal health outcomes. Our study quantifies the agency and autonomy of women in urban India and the variables that impact them. Our findings emphasize the value of gender normative approaches that factor in the key role husbands play in guiding maternal health decisions. They also highlight the power of digital approaches for catalyzing access to maternal health information. These insights into the attitude and behaviors of mothers in context of their sociocultural environments—and their relationship with digital technology—can help pave the way towards designing effective, scalable maternal and child health programs in developing nations like India.Keywords: access to healthcare information, behavior, digital health, maternal health
Procedia PDF Downloads 136
95 Organizational Resilience in the Perspective of Supply Chain Risk Management: A Scholarly Network Analysis
Authors: William Ho, Agus Wicaksana
Abstract:
Anecdotal evidence in the last decade shows that the occurrence of disruptive events and uncertainties in the supply chain is increasing. The coupling of these events with the nature of an increasingly complex and interdependent business environment leads to devastating impacts that quickly propagate within and across organizations. For example, the recent COVID-19 pandemic increased the global supply chain disruption frequency by at least 20% in 2020 and is projected to have a cumulative cost of $13.8 trillion by 2024. This crisis draws attention to organizational resilience as a means to weather business uncertainty. However, the concept has been criticized for being vague and lacking a consistent definition, thus reducing its significance for practice and research. This study is intended to solve that issue by providing a comprehensive review of the conceptualization, measurement, and antecedents of operational resilience that have been discussed in the supply chain risk management (SCRM) literature. We performed a hybrid Scholarly Network Analysis, combining citation-based and text-based approaches, on 252 articles published from 2000 to 2021 in top-tier journals selected on three parameters: AJG and ABS ranking, the UT Dallas and FT50 lists, and editorial board review. Specifically, we employed a Bibliographic Coupling Analysis in the research cluster formation stage and a Co-words Analysis in the research cluster interpretation and analysis stage. Our analysis reveals three major research clusters of resilience research in the SCRM literature, namely (1) supply chain network design and optimization, (2) organizational capabilities, and (3) digital technologies. We portray the research process in the last two decades in terms of the exemplar studies, problems studied, commonly used approaches and theories, and solutions provided in each cluster. We then provide a conceptual framework on the conceptualization and antecedents of resilience based on studies in these clusters and highlight potential areas that need to be studied further. Finally, we leverage the concept of abnormal operating performance to propose a new measurement strategy for resilience. This measurement overcomes the limitation of most current measurements, which are event-dependent and focus on the resistance or recovery stage without capturing the growth stage. In conclusion, this study provides a robust literature review through a scholarly network analysis that increases the completeness and accuracy of research cluster identification and analysis for understanding the conceptualization, antecedents, and measurement of resilience. It also enables us to perform a comprehensive review of resilience research in the SCRM literature by including research articles published during the pandemic and connecting this development with the plethora of articles published in the last two decades. From the managerial perspective, this study provides practitioners with clarity on the conceptualization and critical success factors of firm resilience from the SCRM perspective. Keywords: supply chain risk management, organizational resilience, scholarly network analysis, systematic literature review
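A minimal sketch of the bibliographic coupling step used in the research cluster formation stage: two articles are coupled in proportion to the references they share. The toy reference lists are placeholders standing in for the 252-article corpus.

```python
# Sketch of bibliographic coupling: edge weight = number of shared references.
# The toy reference lists are placeholders, not the actual corpus.
from itertools import combinations

references = {
    "article_A": {"ref1", "ref2", "ref3", "ref4"},
    "article_B": {"ref2", "ref3", "ref9"},
    "article_C": {"ref7", "ref8"},
}

coupling = {}
for (a, refs_a), (b, refs_b) in combinations(references.items(), 2):
    shared = len(refs_a & refs_b)
    if shared:                      # keep only coupled pairs for the cluster graph
        coupling[(a, b)] = shared

print(coupling)                     # {('article_A', 'article_B'): 2}
```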
Procedia PDF Downloads 73
94 Adaptive Power Control of the City Bus Integrated Photovoltaic System
Authors: Piotr Kacejko, Mariusz Duk, Miroslaw Wendeker
Abstract:
This paper presents an adaptive controller to track the maximum power point of photovoltaic (PV) modules under fast irradiation changes on a city-bus roof. Photovoltaic systems have become a prominent option as an additional energy source for vehicles. The Municipal Transport Company (MPK) in Lublin has installed photovoltaic panels on its buses' roofs. The solar panels turn solar energy into electric energy and are used to power the buses' electric equipment. This decreases the load on the buses' alternators, leading to lower fuel consumption and bringing both economic and ecological profits. A DC–DC boost converter is selected as the power conditioning unit to coordinate the operating point of the system. In addition to the conversion efficiency of a photovoltaic panel, the maximum power point tracking (MPPT) method also plays a major role in harvesting the most energy from the sun. The MPPT unit on a moving vehicle must keep tracking accuracy high in order to compensate for the rapid irradiation changes caused by the dynamic motion of the vehicle. Maximum power point tracking controllers should be used to increase the efficiency and power output of solar panels under changing environmental factors. There are several different control algorithms in the literature developed for maximum power point tracking. However, the energy performance of MPPT algorithms has not been clarified for vehicle applications, where environmental factors change rapidly. In this study, an adaptive MPPT algorithm is examined under real ambient conditions. PV modules are mounted on a moving city bus designed to test solar systems on a moving vehicle, and some problems of a PV system associated with a moving vehicle are addressed. The proposed algorithm uses a scanning technique to determine the maximum power delivering capacity of the panel at a given operating condition and controls the PV panel accordingly. The aim of the control algorithm was to match the impedance of the PV modules by controlling the duty cycle of the internal switch, regardless of changes in the parameters of the object of control and its outer environment. The presented algorithm was capable of reaching this aim. The structure of the adaptive controller was simplified on purpose; since even such a simple controller, armed only with an ability to learn, reaches the aim of control, a more complex algorithm structure can only improve the result. The presented adaptive control system is a general solution and can be used for other types of PV systems of both high and low power. Experimental results obtained from a comparison of algorithms in a motion loop are presented and discussed. Experimental results are presented for fast changes in irradiation and partial shading conditions. The results obtained clearly show that the proposed method is simple to implement, with minimum tracking time and high tracking efficiency, demonstrating the superiority of the proposed approach. This work has been financed by the Polish National Centre for Research and Development, PBS, under Grant Agreement No. PBS 2/A6/16/2013. Keywords: adaptive control, photovoltaic energy, city bus electric load, DC-DC converter
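A simplified sketch of the scanning MPPT idea described above: sweep the boost-converter duty cycle, record the panel power at each point, and operate at the duty cycle that delivered the most power. The toy pv_power() curve and its parameters are assumptions standing in for the real panel and converter measurements, not the authors' adaptive controller.

```python
# Sketch of a scanning MPPT: sweep the duty cycle, keep the maximum-power point.
# The pv_power() model and all numbers are stand-in assumptions.
import numpy as np

def pv_power(duty, irradiance):
    """Toy PV power curve vs. duty cycle for a given irradiance (assumption)."""
    d_opt = 0.35 + 0.2 * (1.0 - irradiance)        # optimum shifts with irradiance
    return max(0.0, 180.0 * irradiance * (1.0 - ((duty - d_opt) / 0.3) ** 2))

def scan_mppt(irradiance, d_min=0.05, d_max=0.95, steps=45):
    """Scan the duty-cycle range and return the maximum-power operating point."""
    duties = np.linspace(d_min, d_max, steps)
    powers = [pv_power(d, irradiance) for d in duties]
    best = int(np.argmax(powers))
    return duties[best], powers[best]

for g in (1.0, 0.6, 0.3):                          # rapid irradiance changes on a moving bus
    d, p = scan_mppt(g)
    print(f"irradiance={g:.1f}  duty={d:.2f}  P={p:.1f} W")
```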
Procedia PDF Downloads 210
93 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations
Authors: Zhao Gao, Eran Edirisinghe
Abstract:
The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task for most criminal investigations. The criminal investigation system employs specifically trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. Within the advancement of Deep Learning, Recurrent Neural Networks (RNN) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GAN) have also proven to be very effective in image generation. In this study, a trained GAN conditioned on textual features, such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map corresponding features into text generated from verbal descriptions. With this, it becomes possible to generate many reasonably accurate alternatives which the witness can use to identify a suspect. This reduces subjectivity in decision making both by the eyewitness and the artist, while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus increasing response times and mitigating potentially malicious human intervention. With the publicly available 'CelebFaces Attributes Dataset' (CelebA), supplemented with verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture in order to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images. Rather than the grid search method, a metaheuristic search based on genetic algorithms is applied to evolve the network with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute force approach. Beyond the 'CelebA' training database, further novel test cases are supplied to the network for evaluation. Witness reports detailing criminals from Interpol or other law enforcement agencies are sampled on the network. Using the descriptions provided, samples are generated and compared with the ground truth images of a criminal in order to calculate the similarities. Two factors are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). High scores on these metrics would demonstrate the accuracy of the approach and support the case that it can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the proportion of criminal cases that can ultimately be resolved using eyewitness information gathering. Keywords: RNN, GAN, NLP, facial composition, criminal investigation
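A short sketch of the two evaluation metrics named above (SSIM and PSNR), computed here with scikit-image on placeholder arrays standing in for a ground-truth photograph and a generated image.

```python
# Sketch of the SSIM and PSNR evaluation step; random arrays stand in for real images.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
ground_truth = rng.random((128, 128))                              # placeholder photo
generated = np.clip(ground_truth + rng.normal(0, 0.05, (128, 128)), 0, 1)

psnr = peak_signal_noise_ratio(ground_truth, generated, data_range=1.0)
ssim = structural_similarity(ground_truth, generated, data_range=1.0)
print(f"PSNR = {psnr:.1f} dB, SSIM = {ssim:.3f}")
```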
Procedia PDF Downloads 159
92 Non-Invasive Evaluation of Patients After Percutaneous Coronary Revascularization. The Role of Cardiac Imaging
Authors: Abdou Elhendy
Abstract:
Numerous studies have shown the efficacy of percutaneous coronary intervention (PCI) and coronary stenting in improving left ventricular function and relieving exertional angina. Furthermore, PCI remains the main line of therapy in acute myocardial infarction. Improvements in procedural techniques and new devices have resulted in an increased number of PCIs in those with difficult and extensive lesions, multivessel disease as well as total occlusion. Immediate and late outcomes may be compromised by acute thrombosis or the development of fibro-intimal hyperplasia. In addition, progression of coronary artery disease proximal or distal to the stent, as well as in non-stented arteries, is not uncommon. As a result, complications can occur, such as acute myocardial infarction, worsened heart failure or recurrence of angina. In-stent restenosis can occur without symptoms or with atypical complaints, rendering the clinical diagnosis difficult. Routine invasive angiography is not appropriate as a follow-up tool due to the associated risk and cost and the limited functional assessment. Exercise and pharmacologic stress testing are increasingly used to evaluate myocardial function, perfusion and the adequacy of revascularization. Information obtained by these techniques provides important clues regarding the presence and severity of compromise in myocardial blood flow. Stress echocardiography can be performed in conjunction with exercise or dobutamine infusion. The diagnostic accuracy has been moderate, but the results provide excellent prognostic stratification. Adding myocardial contrast agents can improve imaging quality and allows assessment of both function and perfusion. Stress radionuclide myocardial perfusion imaging is an alternative for evaluating these patients. The extent and severity of wall motion and perfusion abnormalities observed during exercise or pharmacologic stress are predictors of survival and risk of cardiac events. According to current guidelines, stress echocardiography and radionuclide imaging are considered to have appropriate indications among patients after PCI who have cardiac symptoms and those who underwent incomplete revascularization. Stress testing is not recommended in asymptomatic patients, particularly early after revascularization. Coronary CT angiography is increasingly used and provides high sensitivity for the diagnosis of coronary artery stenosis. Average sensitivity and specificity for the diagnosis of in-stent stenosis in pooled data are 79% and 81%, respectively. Limitations include blooming artifacts and low feasibility in patients with small stents or thick struts. Anatomical and functional cardiac imaging modalities are a cornerstone of the assessment of patients after PCI and provide salient diagnostic and prognostic information. Current imaging techniques can serve as gatekeepers for coronary angiography, thus limiting the risk of invasive procedures to those who are likely to benefit from subsequent revascularization. The determination of which modality to apply requires careful identification of the merits and limitations of each technique as well as the unique characteristics of each individual patient. Keywords: coronary artery disease, stress testing, cardiac imaging, restenosis
Procedia PDF Downloads 167
91 Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model
Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson
Abstract:
The subfield of poverty and welfare estimation that applies machine learning tools and methods to satellite imagery is a nascent but rapidly growing one. This is in part driven by the sustainable development goals, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be accurately and rapidly estimated at different spatial scales and resolutions. Conventional tools of household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor is their implementation sufficiently swift to gain an accurate insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially the deep learning subtype, such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and thus have seen limited downstream applications, as humans generally are apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a DL model using different resolutions of satellite imagery to estimate the welfare levels of demographic and health survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth data. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural. The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6m per pixel at zoom level 18, while that of the machine learning model was sourced from the comparatively lower resolution Sentinel-2 10m per pixel data for the same cluster locations. Rank correlation coefficients of between 0.31 and 0.32 achieved by the human readers were much lower than those attained by the machine learning model (0.69-0.79). This superhuman performance by the model is even more significant given that it was trained on the relatively lower 10-meter resolution satellite data, while the human readers estimated welfare levels from the higher 0.6m spatial resolution data, from which key markers of poverty and slums – roofing and road quality – are discernible. It is important to note, however, that the human readers did not receive any training before rating, and had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall of limited transparency and explainability. The findings have significant implications for attaining the objective of the current frontier of deep learning models in this domain of scholarship: eXplainable Artificial Intelligence pursued through a collaborative rather than a comparative framework. Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania
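A brief sketch of the rank-correlation comparison described above, using Spearman's rho to relate human reader ratings and model estimates to the survey wealth quintiles. The arrays are synthetic placeholders, not the DHS cluster data.

```python
# Sketch of the Spearman rank-correlation comparison; arrays are synthetic placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
survey_quintile = rng.integers(1, 6, size=200)                    # ground-truth quintiles
reader_rating = survey_quintile + rng.normal(0, 2.0, 200)         # noisier human ratings
model_estimate = survey_quintile + rng.normal(0, 0.8, 200)        # tighter model estimates

rho_reader, _ = spearmanr(reader_rating, survey_quintile)
rho_model, _ = spearmanr(model_estimate, survey_quintile)
print(f"reader rho = {rho_reader:.2f}, model rho = {rho_model:.2f}")
```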
Procedia PDF Downloads 102
90 Spin Rate Decaying Law of Projectile with Hemispherical Head in Exterior Trajectory
Authors: Quan Wen, Tianxiao Chang, Shaolu Shi, Yushi Wang, Guangyu Wang
Abstract:
As part of the working environment of the fuze, the spin rate decay law of a projectile in the exterior trajectory is of great value in the design of rotation-count fixed-distance fuzes. It is also significant for devices used in simulation tests of the fuze exterior ballistic environment, for the flight stability and dispersion accuracy of gun projectiles, and for the opening and scattering design of submunitions and illuminating cartridges. Besides, the self-destroying mechanism of the fuze in small-caliber projectiles often works by utilizing the attenuation of centrifugal force. In the theory of projectile aerodynamics and fuze design, there are many formulas describing the change of projectile angular velocity in exterior ballistics, such as the Roggla formula, the exponential function formula, and the power function formula. However, these formulas are mostly semi-empirical due to the poor test conditions and insufficient test data at the time they were derived, and they can hardly meet the design requirements of modern fuzes because they are not accurate enough and have a narrow range of application. In order to provide more accurate ballistic environment parameters for the design of a hemispherical-head projectile fuze, the projectile’s spin rate decay law in the exterior trajectory under the effect of air resistance was studied. In the analysis, the projectile shape was simplified into a hemispherical head, a cylindrical part, a rotating band, and an anti-truncated conical tail. The main assumptions are as follows: a) the shape and mass are symmetrical about the longitudinal axis; b) there is a smooth transition between the ball head and the cylindrical part; c) the air flow on the outer surface is treated as flat-plate flow with the same area as the expanded outer surface of the projectile, and the boundary layer is turbulent; d) the polar damping moment attributed to the wrench hole and rifling marks on the projectile is not considered; e) the groove of the rifling on the rotating band is uniform, smooth and regular. The contributions of the four parts to the aerodynamic moment acting on the projectile's rotation were obtained from aerodynamic theory. The surface friction stress of the projectile, the polar damping moment formed by the head of the projectile, and the surface friction moments formed by the cylindrical part, the rotating band, and the anti-truncated conical tail were obtained by mathematical derivation. After that, the mathematical model of spin rate attenuation was established. Over the whole trajectory at the maximum range angle (38°), the absolute error between the polar damping torque coefficient obtained by simulation and the coefficient calculated by the mathematical model established in this paper is no more than 7%; therefore, the credibility of the mathematical model was verified. The mathematical model can be described as a first-order nonlinear differential equation, which has no analytical solution. The solution can only be obtained numerically by coupling the model with the projectile's equations of motion in exterior ballistics. Keywords: ammunition engineering, fuze technology, spin rate, numerical simulation
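An illustrative sketch of solving a first-order spin decay equation numerically, as the abstract indicates must be done. The damping law used here (a roll-damping moment proportional to spin rate and flight speed) and all coefficients are assumptions, not the model derived in the paper.

```python
# Sketch of numerically integrating a first-order spin decay equation.
# Damping law, flight-speed history, and coefficients are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

rho, S, d, Ix = 1.225, 5.0e-3, 0.08, 4.0e-4   # assumed air density, area, calibre, polar inertia
C_lp = -0.02                                   # assumed (negative) spin-damping coefficient

def velocity(t):
    """Assumed flight-speed history along the trajectory (m/s)."""
    return 600.0 * np.exp(-t / 40.0) + 150.0

def spin_decay(t, omega):
    V = velocity(t)
    # I_x * d(omega)/dt = 0.5 * rho * V^2 * S * d * C_lp * (omega * d / (2 V))
    return 0.5 * rho * V * S * d**2 * C_lp * omega / (2.0 * Ix)

sol = solve_ivp(spin_decay, (0.0, 60.0), [2.0e3 * 2 * np.pi], max_step=0.1)
print(f"spin after 60 s: {sol.y[0, -1] / (2 * np.pi):.0f} rev/s")
```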
Procedia PDF Downloads 143