Search results for: predicting
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1077

267 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour

Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling

Abstract:

Digital Twin (DT) technology is a new technology that appeared in the early 21st century. The DT is defined as the digital representation of living and non-living physical assets. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept to detect abnormal changes in structural behaviour in real time using advanced modelling techniques, deep learning algorithms, and data acquisition systems, taking model uncertainties into consideration. Finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique for the construction of a low-dimensional space to speed up the analysis during the online stage. The RB model was validated against experimental test results for the establishment of a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were used to identify the location of damage once it appeared during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that using the RB model, constructed offline, speeds up the FE analysis during the online stage. The constructed RB model showed higher accuracy in predicting the damage severity, while deep learning algorithms were found to be useful for estimating the location of damage with small severity.
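The offline/online split described above can be sketched on a toy linear system: a full-order model K u = f is projected onto a small basis offline, so that only a tiny system needs solving online. The matrices and the two-vector basis below are illustrative assumptions, not the paper's truss model.

```python
# Toy offline/online reduced basis (RB) split: Galerkin-project a full-order
# system K u = f onto a small basis V offline, then solve the cheap reduced
# system online and lift the result back to full space.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def solve2(A, b):
    """Solve a 2x2 system by Cramer's rule (enough for the reduced model)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

# Full-order "stiffness" matrix (4 DOF) and load vector -- assumed toy data
K = [[4.0, -1.0, 0.0, 0.0],
     [-1.0, 4.0, -1.0, 0.0],
     [0.0, -1.0, 4.0, -1.0],
     [0.0, 0.0, -1.0, 4.0]]
f = [1.0, 0.0, 0.0, 1.0]

# Offline: a 2-vector reduced basis (in practice built from FE snapshots)
V = [[1.0, 1.0], [1.0, 0.5], [1.0, -0.5], [1.0, -1.0]]

# Galerkin projection: K_r = V^T K V, f_r = V^T f
Vt = transpose(V)
K_r = matmul(Vt, matmul(K, V))
f_r = [sum(Vt[i][j] * f[j] for j in range(4)) for i in range(2)]

# Online: solve the tiny 2x2 reduced system, then lift back to full space
a = solve2(K_r, f_r)
u_rb = [sum(V[i][j] * a[j] for j in range(2)) for i in range(4)]
```

The online cost depends only on the reduced dimension (here 2), which is what makes real-time damage-severity estimation feasible.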

Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model

Procedia PDF Downloads 74
266 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and will cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (e.g., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to reduce the inaccuracies, weaknesses, and biases of any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model predicts more accurately than any of the individual models, and it reliably forecasts season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy.
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
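The averaging step the abstract describes can be sketched as follows. The three "models" here are stand-in functions returning fixed probabilities, not the study's trained classifiers; the input features and threshold are likewise illustrative assumptions.

```python
# Minimal sketch of combining the three top-performing models by averaging
# their predicted probabilities of poor air quality, then thresholding.

def model_logistic(x):      # placeholder for a trained logistic regression
    return 0.70

def model_forest(x):        # placeholder for a trained random forest
    return 0.80

def model_network(x):       # placeholder for a trained neural network
    return 0.60

def ensemble_predict(x, threshold=0.5):
    """Average the three models' probabilities and apply a decision
    threshold; averaging damps any one model's individual bias."""
    probs = [model_logistic(x), model_forest(x), model_network(x)]
    p = sum(probs) / len(probs)
    return p, ("poor" if p >= threshold else "good")

p, label = ensemble_predict({"season": "summer", "weekend": False})
```

With the placeholder outputs above, the ensemble probability is 0.7 and the forecast is "poor" air quality.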

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 94
265 Rate of Force Development, Net Impulse and Modified Reactive Strength as Predictors of Volleyball Spike Jump Height among Young Elite Players

Authors: Javad Sarvestan, Zdenek Svoboda

Abstract:

Force-time (F-T) curve characteristics are globally referenced as the main indicators of athletic jump performance. Nevertheless, to the best of the authors’ knowledge, no investigation has deeply studied the relationship between F-T curve variables and real-game jump performance among elite volleyball players. To this end, this study was designed to investigate the association between F-T curve variables, including movement timings, force, velocity, power, rate of force development (RFD), modified reactive strength index (RSImod), and net impulse, and spike jump height under real-game circumstances. Twelve young elite volleyball players performed 3 countermovement jumps (CMJ) and 3 spike jumps under real-game circumstances with 1-minute rest intervals to prevent fatigue. The Shapiro-Wilk test confirmed the normality of the data distribution, and Pearson’s product-moment correlation test showed a significant correlation between CMJ height and peak RFD (r=0.85), average RFD (r=0.81), RSImod (r=0.88), and concentric net impulse (r=0.98), as well as a significant correlation between spike jump height and peak RFD (r=0.73), average RFD (r=0.80), RSImod (r=0.62), and concentric net impulse (r=0.71). Multiple regression analysis also showed that these factors contribute strongly to the prediction of CMJ (98%) and spike jump (77%) heights. The outcomes of this study confirm that RFD, concentric net impulse, and RSImod values can precisely monitor and track volleyball attackers’ explosive strength, the efficiency of the muscular stretch-shortening cycle, and ultimate spike jump height. To this effect, volleyball coaches and trainers are advised to focus in depth on their athletes’ progression, or on the impacts of strength training, by observing and tracking F-T curve variables such as RFD, net impulse, and RSImod.
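The F-T curve variables named above can be computed from a sampled vertical ground reaction force trace. The sketch below uses a synthetic linear force ramp and assumed body mass and sampling rate, not measured data; jump height follows from the impulse-momentum theorem, and RSImod would then be height divided by contraction time.

```python
# Illustrative computation of peak RFD, concentric net impulse, and jump
# height from a sampled force-time curve (synthetic data).

G = 9.81            # gravitational acceleration, m/s^2
mass = 70.0         # athlete body mass, kg (assumed)
dt = 0.01           # sampling interval, s (100 Hz, assumed)

# Synthetic vertical ground reaction force samples (N): a linear ramp
force = [mass * G + 200.0 * t for t in range(0, 30)]

# Peak RFD: maximum slope of the force-time curve (N/s)
peak_rfd = max((force[i + 1] - force[i]) / dt for i in range(len(force) - 1))

# Net impulse: trapezoidal integral of (F - body weight) over time (N*s)
net = [f - mass * G for f in force]
impulse = sum((net[i] + net[i + 1]) / 2.0 * dt for i in range(len(net) - 1))

# Take-off velocity and jump height via the impulse-momentum theorem
v_takeoff = impulse / mass
height = v_takeoff ** 2 / (2.0 * G)
```

In practice the trace would come from a force platform and the propulsive phase would be segmented first; the synthetic ramp only demonstrates the arithmetic.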

Keywords: net impulse, reactive strength index, rate of force development, stretch-shortening cycle

Procedia PDF Downloads 116
264 Demographic Profile, Risk Factors and In-hospital Outcomes of Acute Coronary Syndrome (ACS) in Young Population, in Pakistan-Single Center Real World Experience

Authors: Asma Qudrat, Abid Ullah, Rafi Ullah, Ali Raza, Shah Zeb, Syed Ali Shan Ul-Haq, Shahkar Ahmed Shah, Attiya Hameed Khan, Saad Zaheer, Umama Qasim, Kiran Jamal, Zahoor khan

Abstract:

Objectives: Coronary artery disease (CAD) is a major public health issue associated with high mortality and morbidity rates worldwide. Young patients with ACS have unique characteristics, with different demographic profiles and risk factors. Precise diagnosis and early risk stratification are important in guiding treatment and predicting the prognosis of young patients with ACS. The aim was to evaluate the associated demographics, risk factors, and outcome profiles of ACS in young patients. Methods: The research followed a retrospective, single-centre design; patients diagnosed with a first event of ACS at a young age (>18 and <40 years) were included. Data collection included demographic profiles, risk factors, and in-hospital outcomes of young ACS patients. The patients’ data were retrieved through the Electronic Medical Records (EMR) of Peshawar Institute of Cardiology (PIC), and all characteristics were assessed. Results: In this study, 77% of patients were male and 23% were female. The assessed risk factors showed significant associations with CAD (P < 0.01). The most common presentation in young ACS patients was STEMI (45%). The angiographic pattern showed single vessel disease (SVD) in 49%, double vessel disease (DVD) in 17%, and triple vessel disease (TVD) in 10%; the left anterior descending (LAD) artery (54%) was the most commonly involved vessel. Conclusion: It is concluded that males predominated among young ACS patients, SVD was the most common coronary angiographic finding, and the assessed risk factors were significantly associated with CAD.

Keywords: coronary artery disease, Non-ST elevation myocardial infarction, ST elevation myocardial infarction, unstable angina, acute coronary syndrome

Procedia PDF Downloads 132
263 Sustainable Happiness of Thai People: Monitoring the Thai Happiness Index

Authors: Kalayanee Senasu

Abstract:

This research investigates the influences of different factors on the happiness of Thai people, including both general and sustainability-related factors. Additionally, this study monitors Thai people’s happiness via the Thai Happiness Index developed in 2017. Besides reflecting the happiness level of Thai people, this index also identifies related important issues. The data comprised both related secondary data and primary survey data collected through interviewer-administered questionnaires. The research data came from stratified multi-stage sampling by region, province, district, and enumeration area, with simple random sampling within each enumeration area. The data cover 20 provinces, including Bangkok and 4-5 provinces in each of the North, Northeast, Central, and South regions. There were 4,960 usable respondents who were at least 15 years old. Statistical analyses included both descriptive and inferential statistics, including hierarchical regression and one-way ANOVA. The Alkire and Foster method was adopted to develop and calculate the Thai Happiness Index. The results reveal that the quality of the household economy plays the most important role in predicting happiness. The results also indicate that quality of family, quality of health, and effectiveness of public administration at the provincial level have positive effects on happiness at about similar levels. For the socio-economic factors, the results reveal that age, education level, and household revenue have significant effects on happiness. For the computed Thai Happiness Index (THaI), the 2018 value is 0.556. When people are divided into four groups depending upon their degree of happiness, a total of 21.1% of the population are happy: 6.0% deeply happy and 15.1% extensively happy. A total of 78.9% of the population are not yet happy: 31.8% narrowly happy and 47.1% unhappy.
The happy population has a THaI value of 0.789, much higher than the 0.494 of the not-yet-happy population. Overall, Thai people’s happiness is higher than in 2017, when the index was 0.506.
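The Alkire and Foster counting method mentioned above can be sketched with a tiny example. The method is usually stated for deprivations; here it is applied, as in the study, to happiness attainments: a person counts as happy when the share of domains in which they meet the threshold reaches a cutoff k, and the index is incidence times intensity among the happy. The domains, data matrix, and cutoff below are illustrative assumptions, not the study's values.

```python
# Minimal Alkire-Foster-style adjusted index with equal domain weights.

# Each row is one respondent: 1 = meets the happiness threshold in that
# domain, 0 = does not (4 hypothetical domains)
people = [
    [1, 1, 1, 0],   # meets threshold in 3 of 4 domains
    [1, 0, 0, 0],   # 1 of 4
    [1, 1, 1, 1],   # 4 of 4
    [0, 0, 1, 0],   # 1 of 4
]
k = 0.5  # cutoff: counted as happy if at least half the domains are met

def alkire_foster(matrix, cutoff):
    shares = [sum(row) / len(row) for row in matrix]
    happy = [s for s in shares if s >= cutoff]
    H = len(happy) / len(matrix)                    # incidence (headcount)
    A = sum(happy) / len(happy) if happy else 0.0   # intensity among happy
    return H * A                                    # adjusted index

index = alkire_foster(people, k)
```

Here two of four respondents pass the cutoff (H = 0.5) with average attainment 0.875 among them, giving an adjusted index of 0.4375; the study's THaI of 0.556 is the same kind of quantity computed over its own domains and weights.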

Keywords: happiness, quality of life, sustainability, Thai Happiness Index

Procedia PDF Downloads 140
262 Evaluation of Flexural Cracking Width of Steel Fibre Reinforced Concrete Beams

Authors: Touhami Tahenni

Abstract:

Excessively wide cracks are harmful to the serviceability of reinforced concrete (RC) beams and may lead to durability problems in the longer term. They also reduce the rigidity of RC sections, rendering the tensile concrete structurally ineffective. To reduce the negative effects of cracks, steel fibres are added to concrete mixes in the same manner as aggregates. In the present work, steel fibre reinforced concrete (SFRC) beams, made of normal strength and high strength concretes, were tested in four-point bending using a digital image correlation technique. The beams had different volume fractions of fibres and different aspect ratios (fibre length/fibre diameter). Flexural cracking widths were evaluated using GOM Aramis software. The experimental crack widths were compared with theoretical values predicted by the technical document of RILEM TC 162-TDF. The model proposed in this document seems to be the only one that considers the efficiency of steel fibres in restraining crack widths. However, the RILEM model takes into account only the aspect ratio of the steel fibres to predict the crack width of SFRC beams. Several studies have reported that the contribution of steel fibres to the limitation of flexural cracking widths depends on three essential parameters, namely the volume fraction, the orientation, and the aspect ratio of the fibres. Referring to the literature on the flexural cracking behaviour of SFRC beams and the experimental observations of the present work, a correction of the RILEM model introducing these parameters into the formula is proposed. The crack widths predicted by the new empirical model were compared with the experimental results and assessed against other test data on SFRC beams taken from the literature. The modified RILEM model gives better results and is found to be more satisfactory in predicting the crack widths of fibre reinforced concrete.
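One plausible shape for the kind of correction described above is a reduction factor on the base crack width that grows with fibre volume fraction, aspect ratio, and an orientation factor. The functional form and every coefficient below are hypothetical illustrations, not the authors' calibrated model or the RILEM formula.

```python
# Hypothetical fibre-efficiency correction to a base flexural crack width.
# Form and coefficients are assumptions for illustration only.

def corrected_crack_width(w_base, vf, aspect_ratio, orientation=0.5, k=2.0):
    """Return a reduced flexural crack width for SFRC (mm).

    w_base       -- crack width predicted without fibre efficiency (mm)
    vf           -- fibre volume fraction (e.g. 0.01 for 1%)
    aspect_ratio -- fibre length / fibre diameter
    orientation  -- fibre orientation factor in [0, 1] (assumed)
    k            -- empirical efficiency coefficient (assumed)
    """
    reduction = k * vf * aspect_ratio * orientation
    reduction = min(reduction, 0.9)   # cap so the width stays positive
    return w_base * (1.0 - reduction)

w = corrected_crack_width(w_base=0.30, vf=0.01, aspect_ratio=65.0)
```

The point of the sketch is only that all three fibre parameters enter the prediction, which is the gap in the aspect-ratio-only RILEM model that the abstract identifies.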

Keywords: steel fibres, reinforced concrete, flexural cracking, tensile strength, crack width

Procedia PDF Downloads 62
261 Prediction of the Dark Matter Distribution and Fraction in Individual Galaxies Based Solely on Their Rotation Curves

Authors: Ramzi Suleiman

Abstract:

Recently, the author proposed an observationally-based relativity theory termed information relativity theory (IRT). The theory is simple and is based only on basic principles, with no prior axioms and no free parameters. For the case of a body of mass in uniform rectilinear motion relative to an observer, the theory transformations uncovered a matter-dark matter duality, which prescribes that the sum of the densities of the body's baryonic matter and dark matter, as measured by the observer, is equal to the body's matter density at rest. It was shown that the theory transformations were successful in predicting several important phenomena in small particle physics, quantum physics, and cosmology. This paper extends the theory transformations to the cases of rotating disks and spheres. The resulting transformations for a rotating disk are utilized to derive predictions of the radial distributions of matter and dark matter densities in rotationally supported galaxies based solely on their observed rotation curves. It is also shown that for galaxies with flattening curves, good approximations of the radial distributions of matter and dark matter and of the dark matter fraction could be obtained from one measurable scale radius. Test of the model on five galaxies, chosen randomly from the SPARC database, yielded impressive predictions. The rotation curves of all the investigated galaxies emerged as accurate traces of the predicted radial density distributions of their dark matter. This striking result raises an intriguing physical explanation of gravity in galaxies, according to which it is the proximal drag of the stars and gas in the galaxy by its rotating dark matter web. We conclude by alluding briefly to the application of the proposed model to stellar systems and black holes. 
This study also hints at the potential of the discovered matter-dark matter duality in fixing the standard model of elementary particles in a natural manner without the need for hypothesizing about supersymmetric particles.
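For context, the standard Newtonian starting point for any rotation-curve analysis is that the mass enclosed within radius r follows directly from the observed rotation speed. This is textbook dynamics, shown here to make the "flat curve implies growing enclosed mass" argument concrete; it is not the IRT transformation the abstract proposes.

```python
# Enclosed mass from a rotation curve: M(r) = v(r)^2 * r / G for a
# circular orbit (Newtonian dynamics).

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def enclosed_mass(v_kms, r_kpc):
    """Enclosed mass (kg) from rotation speed (km/s) and radius (kpc)."""
    v = v_kms * 1.0e3        # km/s -> m/s
    r = r_kpc * 3.0857e19    # kpc -> m
    return v * v * r / G

# A flat rotation curve (constant v with increasing r) implies M(r)
# growing linearly with r -- the classic signature attributed to dark matter.
m1 = enclosed_mass(200.0, 10.0)
m2 = enclosed_mass(200.0, 20.0)
```

Doubling the radius at constant speed doubles the inferred enclosed mass, far in excess of the visible baryonic matter in such galaxies.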

Keywords: dark matter, galaxies rotation curves, SPARC, rotating disk

Procedia PDF Downloads 49
260 TLR4 Gene Polymorphism and Biochemical Markers as a Tool to Identify Risk of Osteoporosis in Women from Karachi

Authors: Rozeena Baig, R. Rehana Rehman, Rifat Ahmed

Abstract:

Background: Osteoporosis, characterized by low bone mineral density, poses a global health concern. It is a multifactorial disorder marked by low bone mass that elevates the risk of fractures of the lumbar spine, femoral neck, hip, vertebrae, and distal forearm, particularly in postmenopausal women, whose bone loss is influenced by various pathophysiological factors. Objectives: The aim is to investigate the association of a serum cytokine, a bone turnover marker, bone mineral density, and TLR4 gene polymorphism in pre- and post-menopausal women, and to determine whether any of these can be a potential predictor of osteoporosis in postmenopausal women. Material and methods: The study participants consisted of Group A (n=91), healthy pre-menopausal women, and Group B (n=102), healthy postmenopausal women with a ≥5-year history of menopause. ELISA was performed for the cytokine (TNF-α) and the bone turnover marker (carboxy-telopeptides, CTX). Bone mineral density (BMD) was measured through a dual X-ray absorptiometry (DEXA) scan. Toll-like receptor 4 (TLR4) gene polymorphisms (A896G; Asp299Gly) and (C1196T; Thr399Ile) were investigated by PCR and Sanger sequencing. Results: Statistical analysis revealed a positive correlation of age and BMI with T-scores in the premenopausal group, whereas the post-menopausal group showed a significant negative correlation between age and T-score at the hip (r = -0.352**), spine (r = -0.306**), and femoral neck (r = -0.344**), and a significant negative correlation of BMI with TNF-α (r = -0.316**). No associations or significant differences were observed for TLR4 genotype and allele frequencies among the studied groups. However, the two SNPs were significantly associated with each other. Conclusions: This study concludes that BMI, BMD, and TNF-α are potential predictors of osteoporosis in post-menopausal women.
However, CTX and TLR4 gene polymorphism did not emerge as potential predictors of bone loss in this study and apparently cannot help in predicting bone loss in post-menopausal women.
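The correlations reported above are Pearson product-moment coefficients. A minimal sketch of that computation (e.g. age versus hip T-score) is shown below; the data points are made up for illustration, not the study's measurements.

```python
# Pearson product-moment correlation from first principles.

import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

age = [55.0, 60.0, 65.0, 70.0, 75.0]
t_score = [-0.5, -1.0, -1.4, -2.1, -2.4]   # hypothetical hip T-scores
r = pearson_r(age, t_score)                # strongly negative, as reported
```

A strongly negative r here mirrors the reported pattern: T-scores fall (bone density worsens) as age increases in the post-menopausal group.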

Keywords: osteoporosis, post-menopausal, pre-menopausal women, genetic mutation, TLR4 gene polymorphism

Procedia PDF Downloads 13
259 Psychological Factors Predicting Social Distance during the COVID-19 Pandemic: An Empirical Investigation

Authors: Calogero Lo Destro

Abstract:

Numerous nations around the world are facing exceptional challenges in employing measures to stop the spread of COVID-19. Following the recommendations of the World Health Organization, a series of preventive measures have been adopted. However, individuals must comply with these rules and recommendations for the measures to be effective. While COVID-19 was at its peak, it seemed of crucial importance to analyze which psychosocial factors contribute to the acceptance of such preventive behavior, thus favoring the management of the COVID-19 worldwide health crisis. In particular, identifying the obstacles to, and facilitators of, adherence to social distancing has been considered crucial in containing the spread of the virus. Since the virus was first detected in China, Asian people could be considered a relevant outgroup targeted for exclusion. We also hypothesized that social distance could be influenced by characteristics of the target, such as smiling or coughing. A total of 260 participants took part in this research on a voluntary basis. They completed a survey designed to explore a series of COVID-19 measures (such as exposure to the virus and fear of infection). We also assessed participants’ state and trait anxiety. The dependent variable was social distance, based on a measure of seating distance designed ad hoc for the present work. Our hypothesis that participants would report greater distance in response to Asian people was not confirmed. On the other hand, significantly lower distance was reported in response to smiling compared to coughing targets. Adopting a regression analysis model, we found that participants' social distance, in response to both coughing and smiling targets, was predicted by fear of infection and by the perception that COVID-19 could become a pandemic. Social distance in response to the coughing target was also significantly and positively predicted by age and state anxiety.
In summary, the present work sought to identify a set of psychological variables that may be predictive of social distancing.

Keywords: COVID-19, social distancing, health, preventive behaviors, risk of infection

Procedia PDF Downloads 100
258 Calibration and Validation of ArcSWAT Model for Estimation of Surface Runoff and Sediment Yield from Dhangaon Watershed

Authors: M. P. Tripathi, Priti Tiwari

Abstract:

The Soil and Water Assessment Tool (SWAT) is a distributed-parameter, continuous-time model and was tested on a daily and fortnightly basis for a small agricultural watershed (Dhangaon) of Chhattisgarh state in India. The SWAT model has recently been interfaced with ArcGIS and is called ArcSWAT. The watershed and sub-watershed boundaries, drainage networks, slope, and texture maps were generated in the ArcGIS environment of ArcSWAT. A supervised classification method was used for land use/cover classification from satellite imagery of the years 2009 and 2012. Manning's roughness coefficient 'n' for overland and channel flow and the Fraction of Field Capacity (FFC) were calibrated for the monsoon seasons of 2009 and 2010. The model was validated on a daily basis for the years 2011 and 2012 using the observed daily rainfall and temperature data. Calibration and validation results revealed that the model predicted the daily surface runoff and sediment yield satisfactorily. Sensitivity analysis showed that the annual sediment yield was inversely proportional to the overland and channel 'n' values, whereas annual runoff and sediment yields were directly proportional to the FFC. The model was also calibrated and validated for fortnightly runoff and sediment yield for the years 2009-10 and 2011-12, respectively. Simulated values of fortnightly runoff and sediment yield for the calibration and validation years compared well with their observed counterparts. The calibration and validation results revealed that the ArcSWAT model can be used for identification of critical sub-watersheds and for developing management scenarios for the Dhangaon watershed. Further, the model should be tested for simulating surface runoff and sediment yield using generated rainfall and temperature data before applying it to develop management scenarios for the critical or priority sub-watersheds.
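The abstract reports "satisfactory" prediction without naming a statistic; a conventional choice for judging this kind of calibration/validation in hydrologic modelling is the Nash-Sutcliffe efficiency (NSE), shown below as one possible check. A value of 1 means a perfect match, and 0 means the model is no better than the mean of the observations. The runoff values are invented for illustration.

```python
# Nash-Sutcliffe efficiency: a standard goodness-of-fit measure comparing
# simulated and observed hydrologic time series.

def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

obs = [12.0, 30.0, 8.0, 22.0, 15.0]   # observed daily runoff (mm), invented
sim = [10.0, 28.0, 9.0, 24.0, 14.0]   # simulated daily runoff (mm), invented
nse = nash_sutcliffe(obs, sim)
```

For the invented series above the NSE is close to 1, the kind of result that would be reported as a satisfactory daily-scale fit.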

Keywords: watershed, hydrologic and water quality, ArcSWAT model, remote sensing, GIS, runoff and sediment yield

Procedia PDF Downloads 350
257 Simulation of Antimicrobial Resistance Gene Fate in Narrow Grass Hedges

Authors: Marzieh Khedmati, Shannon L. Bartelt-Hunt

Abstract:

Vegetative filter strips (VFS) are used for controlling the volume of runoff and decreasing contaminant concentrations in runoff before it enters water bodies. Many studies have investigated the role of VFS in sediment and nutrient removal, but little is known about their efficiency in removing emerging contaminants such as antimicrobial resistance genes (ARGs). The Vegetative Filter Strip Modeling System (VFSMOD) was used to simulate the efficiency of VFS in this regard. Several studies have demonstrated the ability of VFSMOD to predict reductions in runoff volume and sediment concentration moving through the filters. The objectives of this study were to calibrate VFSMOD with experimental data and assess the efficiency of the model in simulating the filter's behavior in removing ARGs (ermB) and tylosin. The experimental data were obtained from a prior study conducted at the University of Nebraska-Lincoln (UNL) Rogers Memorial Farm. Three treatment factors were tested in the experiments: manure amendment, narrow grass hedges, and rainfall events. The sediment delivery ratio (SDR) was defined as the filter efficiency measure, and the corresponding experimental and model values were compared to each other. The model generally agreed with the experimental results, and as a result, it was used for predicting filter efficiencies when runoff data are not available. Narrow grass hedges (NGH) were shown to be effective in reducing tylosin and ARG concentrations. The simulation showed that the filter efficiency in removing ARGs differs for different soil types and filter lengths. There is an optimum length for the filter strip that produces minimum runoff volume: based on the model results, lengthening the filter by up to 1 meter leads to higher efficiency, but lengthening it beyond that decreases the efficiency. VFSMOD, which has been shown to estimate VFS trapping efficiency well, gave corroborating results for ARG removal.
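The efficiency measure used in the study, the sediment delivery ratio (SDR), is simply the mass leaving the filter divided by the mass entering it; trapping efficiency is its complement. The masses below are illustrative, not the experiment's measurements.

```python
# Sediment delivery ratio (SDR) and its complement, trapping efficiency.

def sediment_delivery_ratio(mass_in_kg, mass_out_kg):
    """Fraction of incoming sediment that passes through the filter."""
    return mass_out_kg / mass_in_kg

def trapping_efficiency(mass_in_kg, mass_out_kg):
    """Fraction of incoming sediment retained by the filter."""
    return 1.0 - sediment_delivery_ratio(mass_in_kg, mass_out_kg)

sdr = sediment_delivery_ratio(50.0, 12.5)   # 25% of the sediment gets through
eff = trapping_efficiency(50.0, 12.5)       # the filter traps the other 75%
```

The same ratio can be formed for a contaminant load (e.g. ARG copies or tylosin mass) to express removal efficiency for emerging contaminants.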

Keywords: antimicrobial resistance genes, emerging contaminants, narrow grass hedges, vegetative filter strips, vegetative filter strip modeling system

Procedia PDF Downloads 106
256 Sorghum Grains Grading for Food, Feed, and Fuel Using NIR Spectroscopy

Authors: Irsa Ejaz, Siyang He, Wei Li, Naiyue Hu, Chaochen Tang, Songbo Li, Meng Li, Boubacar Diallo, Guanghui Xie, Kang Yu

Abstract:

Background: Near-infrared spectroscopy (NIR) is a non-destructive, fast, and low-cost method to measure the grain quality of different cereals. Previously reported NIR model calibrations using whole-grain spectra had moderate accuracy, and it remains unclear whether improved predictions are achievable using spectra of whole grains compared with spectra collected from flour samples. Moreover, the feasibility of determining the critical biochemicals relevant to classification for food, feed, and fuel products has not been adequately investigated. Objectives: To evaluate the feasibility of using NIRS and the influence of four sample types (whole grains, flours, hulled grain flours, and hull-less grain flours) on the prediction of chemical components, in order to improve grain sorting efficiency for human food, animal feed, and biofuel. Methods: NIR was applied in this study to determine eight biochemicals in four types of sorghum samples: hulled grain flours, hull-less grain flours, whole grains, and grain flours. A total of 20 sorghum hybrids were selected from two locations in China. Using the NIR spectra together with wet-chemistry measurements of the biochemicals, partial least squares regression (PLSR) was used to construct the prediction models. Results: The results showed that sorghum grain morphology and sample format affected the prediction of biochemicals. Using NIR data of grain flours generally improved the predictions compared with using NIR data of whole grains. Nevertheless, using the spectra of whole grains enabled comparable predictions, which is recommended when a non-destructive and rapid analysis is required. Compared with the hulled grain flours, hull-less grain flours allowed improved predictions for tannin, cellulose, and hemicellulose using NIR data.
Conclusion: The established PLSR models could enable food, feed, and fuel producers to efficiently evaluate large numbers of samples by predicting the required biochemical components in sorghum grains without destruction.

Keywords: FT-NIR, sorghum grains, biochemical composition, food, feed, fuel, PLSR

Procedia PDF Downloads 40
255 Distraction from Pain: An fMRI Study on the Role of Age-Related Changes in Executive Functions

Authors: Katharina M. Rischer, Angelika Dierolf, Ana M. Gonzalez-Roldan, Pedro Montoya, Fernand Anton, Marian van der Meulen

Abstract:

Even though age has been associated with increased and prolonged episodes of pain, little is known about potential age-related changes in the 'top-down' modulation of pain, such as cognitive distraction from pain. The analgesic effects of distraction result from competition for attentional resources in the prefrontal cortex (PFC), a region that is also involved in executive functions. Given that the PFC shows pronounced age-related atrophy, distraction may be less effective in reducing pain in older compared to younger adults. The aim of this study was to investigate the influence of aging on task-related analgesia and the underlying neural mechanisms, with a focus on the role of executive functions in distraction from pain. In a first session, 64 participants (32 young adults: 26.69 ± 4.14 years; 32 older adults: 68.28 ± 7.00 years) completed a battery of neuropsychological tests. In a second session, participants underwent a pain distraction paradigm while fMRI images were acquired. In this paradigm, participants completed a low-load (0-back) and a high-load (2-back) condition of a working memory task while receiving either warm or painful thermal stimuli to the lower arm. To control for age-related differences in pain sensitivity and perceived task difficulty, stimulus intensity and task speed were individually calibrated. Results indicate that both age groups showed significantly reduced activity in a network of regions involved in pain processing when completing the high-load distraction task; however, young adults showed a larger neural distraction effect in parts of the insula and the thalamus. Moreover, better executive functions, in particular inhibitory control abilities, were associated with larger behavioral and neural distraction effects. These findings demonstrate that top-down control of pain is affected in older age, which could explain the higher vulnerability of older adults to developing chronic pain. 
Moreover, our findings suggest that the assessment of executive functions may be a useful tool for predicting the efficacy of cognitive pain modulation strategies in older adults.

Keywords: executive functions, cognitive pain modulation, fMRI, PFC

Procedia PDF Downloads 114
254 Phenology and Size in the Social Sweat Bee, Halictus ligatus, in an Urban Environment

Authors: Rachel A. Brant, Grace E. Kenny, Paige A. Muñiz, Gerardo R. Camilo

Abstract:

The social sweat bee, Halictus ligatus, has been documented to alter its phenology in response to changes in the temporal dynamics of resources. Furthermore, H. ligatus exhibits polyethism in natural environments as a consequence of variation in resources. Yet we do not know if or how H. ligatus responds to these variations in urban environments. As urban environments become more widespread and the human population is expected to reach nine billion by 2050, it is crucial to determine how resources are allocated by bees in cities. We hypothesize that in urban regions, where floral availability varies with human activity, H. ligatus will exhibit polyethism in order to match the extremely localized spatial variability of resources. We predict that in an urban setting, where resources vary both spatially and temporally, the phenology of H. ligatus will shift in response to these fluctuations. This study was conducted in Saint Louis, Missouri, at fifteen sites varying in size and management type (community garden, urban farm, prairie restoration). Bees were collected by hand netting from 2013 to 2016. Results suggest that the largest individuals, mostly gynes, occurred in lower-income neighborhood community gardens in May and August. We used a model averaging procedure, based on information-theoretic methods, to determine the best model for predicting bee size. Our results suggest that month and locality within the city are the best predictors of bee size. Halictus ligatus complied with the predictions of polyethism from 2013 to 2015. However, in 2016 there was an almost complete absence of the smallest worker castes, a significant deviation from what is expected under polyethism. This could be attributed to shifts in planting decisions, shifts in plant-pollinator matches, or local climatic conditions. Further research is needed to determine whether this divergence from polyethism is a new strategy for the social sweat bee as the climate continues to change or a response to human-dominated landscapes.

Keywords: polyethism, urban environment, phenology, social sweat bee

Procedia PDF Downloads 196
253 Systematic and Meta-Analysis of Navigation in Oral and Maxillofacial Trauma and Impact of Machine Learning and AI in Management

Authors: Shohreh Ghasemi

Abstract:

Introduction: Managing oral and maxillofacial trauma is a multifaceted challenge, as it can have life-threatening consequences and significant functional and aesthetic impacts. Navigation techniques have been introduced to improve surgical precision to meet this challenge. Machine learning algorithms have also been developed to support clinical decision-making in treating oral and maxillofacial trauma. Given these advances, this systematic review and meta-analysis aims to assess the efficacy of navigation techniques in treating oral and maxillofacial trauma and to explore the impact of machine learning on their management. Methods: A detailed and comprehensive analysis of studies published between January 2010 and September 2021 was conducted. This included a thorough search of the Web of Science, Embase, and PubMed databases to identify studies evaluating the efficacy of navigation techniques and the impact of machine learning in managing oral and maxillofacial trauma. Studies that did not meet the established inclusion criteria were excluded. In addition, the overall quality of the included studies was evaluated using the Cochrane risk-of-bias tool and the Newcastle-Ottawa scale. Results: A total of 12 studies, including 869 patients with oral and maxillofacial trauma, met the inclusion criteria. The analysis revealed that navigation techniques effectively improve surgical accuracy and minimize the risk of complications. Additionally, machine learning algorithms have proven effective in predicting treatment outcomes and identifying patients at high risk of complications. Conclusion: The introduction of navigation technology has great potential to improve surgical precision in oral and maxillofacial trauma treatment. Furthermore, developing machine learning algorithms offers opportunities to improve clinical decision-making and patient outcomes. Still, further studies are necessary to corroborate these results and establish the optimal use of these technologies in managing oral and maxillofacial trauma.

Keywords: trauma, machine learning, navigation, maxillofacial, management

Procedia PDF Downloads 40
252 Contextual SenSe Model: Word Sense Disambiguation using Sense and Sense Value of Context Surrounding the Target

Authors: Vishal Raj, Noorhan Abbas

Abstract:

Ambiguity in NLP (natural language processing) refers to the ability of a word, phrase, sentence, or text to have multiple meanings. This results in various kinds of ambiguities, such as lexical, syntactic, semantic, anaphoric, and referential ambiguities. This study focuses mainly on solving the issue of lexical ambiguity. Word Sense Disambiguation (WSD) is an NLP technique that aims to resolve lexical ambiguity by determining the correct meaning of a word within a given context. Most WSD solutions rely on words for training and testing, but we have used the lemma and Part of Speech (POS) tokens of words: the lemma adds generality, and the POS adds properties of the word to the token. We have designed a novel method to create an affinity matrix that captures the affinity between any pair of lemma_POS tokens (a token where the lemma and POS of a word are joined by an underscore) in a given training set. Additionally, we have devised an algorithm to create sense clusters of tokens using the affinity matrix under a hierarchy of the POS of the lemma. Furthermore, three different mechanisms are devised to predict the sense of a target word using the affinity/similarity values. Each contextual token contributes to the sense of the target word with some value, and whichever sense receives the highest value becomes the sense of the target word. Since contextual tokens play a key role in creating sense clusters and predicting the sense of the target word, the model is named the Contextual SenSe Model (CSM). CSM exhibits noteworthy simplicity and clarity of explanation, in contrast to contemporary deep learning models, which are intricate, time-intensive, and hard to explain. CSM is trained on the SemCor training data and evaluated on the SemEval test dataset. The results indicate that, despite the simplicity of the method, it achieves promising results when compared to the Most Frequent Sense (MFS) baseline.
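The core voting idea can be shown with a toy sketch: context lemma_POS tokens contribute an affinity value to each candidate sense, and the highest-scoring sense wins. The tiny training set, the sense labels, and the raw co-occurrence affinity are illustrative assumptions; the paper's actual affinity matrix and sense clustering are richer.

```python
# Toy sketch of the Contextual SenSe Model voting mechanism.
from collections import defaultdict

# Tiny "training set": (context lemma_POS tokens, sense of the target 'bank_NOUN')
training = [
    (["river_NOUN", "water_NOUN", "flow_VERB"], "bank%shore"),
    (["deposit_VERB", "money_NOUN", "account_NOUN"], "bank%finance"),
    (["fish_NOUN", "river_NOUN"], "bank%shore"),
    (["loan_NOUN", "money_NOUN"], "bank%finance"),
]

# Affinity between a context token and a sense = co-occurrence count
affinity = defaultdict(float)
for context, sense in training:
    for tok in context:
        affinity[(tok, sense)] += 1.0

def predict_sense(context, senses):
    # Each context token contributes its affinity to every candidate sense;
    # the sense with the highest total becomes the sense of the target word.
    scores = {s: sum(affinity[(tok, s)] for tok in context) for s in senses}
    return max(scores, key=scores.get)

senses = ["bank%shore", "bank%finance"]
print(predict_sense(["water_NOUN", "river_NOUN"], senses))  # shore reading wins
```

A real implementation would normalize the counts into similarity values and organize the tokens into the POS-conditioned sense clusters the abstract describes.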

Keywords: word sense disambiguation (wsd), contextual sense model (csm), most frequent sense (mfs), part of speech (pos), natural language processing (nlp), oov (out of vocabulary), lemma_pos (a token where lemma and pos of word are joined by underscore), information retrieval (ir), machine translation (mt)

Procedia PDF Downloads 71
251 Application of Data Driven Based Models as Early Warning Tools of High Stream Flow Events and Floods

Authors: Mohammed Seyam, Faridah Othman, Ahmed El-Shafie

Abstract:

The early warning of high stream flow events (HSF) and floods is an important aspect of the management of surface water and river systems. This process can be performed using either process-based models or data-driven models such as artificial intelligence (AI) techniques. The main goal of this study is to develop an efficient AI-based model for predicting the real-time hourly stream flow (Q) and to apply it as an early warning tool for HSF and floods in the downstream area of the Selangor River basin, taken here as a paradigm of humid tropical rivers in Southeast Asia. The performance of the AI-based models has been improved through the integration of lag time (Lt) estimation in the modelling process. A total of 8753 patterns of hourly stream flow, water level, and rainfall (RF) records, representing a one-year period (2011), were utilized in the modelling process. Six hydrological scenarios were arranged through hypothetical cases of input variables to investigate how changes in RF intensity at upstream stations can lead to the formation of floods. The initial stream flow was changed for each scenario in order to cover a wide range of hydrological situations. The performance evaluation of the developed AI-based model shows that a high correlation coefficient (R) between the observed and predicted Q is achieved. The AI-based model has been successfully employed in early warning through the advance detection of hydrological conditions that could lead to the formation of floods and HSF, represented by three levels of severity (i.e., alert, warning, and danger). Based on the scenario results, reaching the danger level in the downstream area required high RF intensity in at least two upstream areas. These applications show that AI-based models are beneficial tools for local authorities in flood control and awareness.
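The lag-time idea above can be sketched as follows: predict hourly flow from upstream rainfall shifted back by the estimated travel lag, plus the previous flow value. The data are synthetic, and the 3-hour lag, the storage-response coefficients, and the network size are illustrative assumptions, not values from the study.

```python
# Sketch of lag-time (Lt) integration in a data-driven stream flow model.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
hours = 800
rain = rng.gamma(shape=0.5, scale=2.0, size=hours)   # upstream hourly rainfall
lag = 3                                              # assumed travel time, hours
q = np.zeros(hours)
for t in range(1, hours):                            # simple linear storage response
    q[t] = 0.8 * q[t - 1] + 0.5 * (rain[t - lag] if t >= lag else 0.0)

# Features: rainfall shifted by the lag, and the previous flow; target: current flow
X = np.column_stack([np.roll(rain, lag), np.roll(q, 1)])[lag:]
y = q[lag:]
split = int(0.8 * len(y))
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(X[:split], y[:split])
r2 = model.score(X[split:], y[split:])
print(f"test R^2 = {r2:.3f}")
```

The warning levels (alert, warning, danger) would then be thresholds applied to the predicted Q, which this sketch omits.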

Keywords: floods, stream flow, hydrological modelling, hydrology, artificial intelligence

Procedia PDF Downloads 221
250 A Comparative Study of Specific Assessment Criteria Related to Commercial Vehicle Drivers

Authors: Nur Syahidatul Idany Abdul Ghani, Rahizar Ramli, Jamilah Mohamad, Ahmad Saifizul, Mohamed Rehan Karim

Abstract:

Increasing fatalities in road accidents in Malaysia over the last 10 years are alarming. According to the Malaysian Institute of Road Safety Research (MIROS) study 'Predicting Malaysian Road Fatalities for Year 2020', road fatalities in Malaysia are predicted to reach 8,780 in 2015 and 10,716 in 2020, and about 30 percent of fatalities are caused by accidents involving commercial vehicles. The government, related agencies, and NGOs have continuously and persistently worked to reduce these numbers through enforcement, public education, driver training, road safety campaigns, advertisements, etc. However, the trend of casualties does not show an encouraging pattern; instead, it is steadily growing. Thus, this comparative study reviews the literature on the methods of measurement used to evaluate commercial drivers' competency. In several studies, driving competency has been assessed with different instruments, based on the licensing procedures and requirements of each country's regulations. The assessment criteria established for commercial drivers generally focus on driving tasks and assessments (e.g., theory test, medical test, and road assessment) rather than on driving competency or physical tests. Realizing the importance of specific assessment tests for driver competency, this comparative study reviews the most discussed literature on competency assessment methods, covering (1) judgement and reaction, (2) driver skill, and (3) experience and fatigue. The concluding analysis of this paper is a comparative table of methodologies for assessing drivers' competency. The review also provides an overview of existing assessment tests and identifies potential subject matter for further studies, with the aim of increasing awareness among drivers, passengers, and the authorities of the importance of competent drivers for improving the safety of commercial vehicles.

Keywords: commercial vehicles, driver’s competency, specific assessment

Procedia PDF Downloads 414
249 Geospatial Analysis for Predicting Sinkhole Susceptibility in Greene County, Missouri

Authors: Shishay Kidanu, Abdullah Alhaj

Abstract:

Sinkholes in the karst terrain of Greene County, Missouri, pose significant geohazards, imposing challenges on construction and infrastructure development, with potential threats to lives and property. To address these issues, understanding the influencing factors and modeling sinkhole susceptibility is crucial for effective mitigation through strategic changes in land use planning and practices. This study utilizes geographic information system (GIS) software to collect and process diverse data, including topographic, geologic, hydrogeologic, and anthropogenic information. Nine key sinkhole-influencing factors, ranging from slope characteristics to proximity to geological structures, were analyzed. The frequency ratio method establishes relationships between the attribute classes of these factors and sinkhole events, deriving class weights that indicate their relative importance. Weighted integration of these factors is accomplished using the Analytic Hierarchy Process (AHP) and the weighted linear combination (WLC) method in a GIS environment, resulting in a comprehensive sinkhole susceptibility index (SSI) model for the study area. Employing the Jenks natural breaks classification method, the SSI values are categorized into five distinct sinkhole susceptibility zones: very low, low, moderate, high, and very high. Validation of the model, conducted through the area under the curve (AUC) and sinkhole density index (SDI) methods, demonstrates a robust correlation with sinkhole inventory data. The prediction rate curve yields an AUC value of 74%, indicating 74% validation accuracy. The SDI result further supports the success of the sinkhole susceptibility model. This model offers reliable predictions of the future distribution of sinkholes, providing valuable insights for planners and engineers in the formulation of development plans and land-use strategies. 
Its application extends to enhancing preparedness and minimizing the impact of sinkhole-related geohazards on both infrastructure and the community.
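The WLC step described above reduces to a per-cell weighted sum: each factor raster is reclassified to its frequency-ratio score, multiplied by its AHP weight, and summed into the susceptibility index. The two factors, their weights, the class scores, and the break thresholds below are all illustrative assumptions (the study uses nine factors and Jenks-derived breaks).

```python
# Sketch of the weighted linear combination (WLC) of frequency-ratio rasters.
import numpy as np

# Two reclassified factor rasters (cell values = frequency-ratio class scores)
slope_fr = np.array([[1.2, 0.4],
                     [2.1, 0.9]])
fault_distance_fr = np.array([[0.8, 1.5],
                              [0.3, 1.1]])

# AHP-derived factor weights (must sum to 1)
weights = {"slope": 0.6, "fault": 0.4}

# Sinkhole susceptibility index: per-cell weighted sum of factor scores
ssi = weights["slope"] * slope_fr + weights["fault"] * fault_distance_fr
print(ssi)

# Classify into zones; real thresholds would come from Jenks natural breaks
zones = np.digitize(ssi, bins=[0.7, 1.0, 1.3, 1.6])  # 0 = very low ... 4 = very high
print(zones)
```

With real data, each raster would be a full grid read from the GIS, but the arithmetic is identical.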

Keywords: sinkhole, GIS, analytical hierarchy process, frequency ratio, susceptibility, Missouri

Procedia PDF Downloads 44
248 Perceptual Image Coding by Exploiting Internal Generative Mechanism

Authors: Kuo-Cheng Liu

Abstract:

In perceptual image coding, the objective is to shape the coding distortion such that its amplitude does not exceed the error visibility threshold, or to remove perceptually redundant signals from the image. Although much research addresses color image coding, perceptual quantizers developed for luminance signals are often applied directly to chrominance signals, which makes such color image compression methods inefficient. In this paper, the internal generative mechanism (IGM) is integrated into the design of a color image compression method. An IGM working model based on structure-based spatial masking is used to assess subjective distortion visibility thresholds that are more consistent with human visual perception. An estimation method for the structure-based distortion visibility thresholds of the color components is further presented in a locally adaptive way to design the quantization process in a wavelet color image compression scheme. Since the lowest-subband coefficient matrix of an image in the wavelet domain preserves the local properties of the image in the spatial domain, the error visibility threshold inherent in each coefficient of the lowest subband for each color component is estimated using the proposed spatial error visibility threshold assessment. The threshold inherent in each coefficient of the other subbands for each color component is then estimated in a locally adaptive fashion based on distortion energy allocation. Because the error visibility thresholds are estimated from predicted and reconstructed signals of the color image, the coding scheme incorporating the locally adaptive perceptual color quantizer does not require side information. Experimental results show that the entropies of the three color components obtained with the proposed IGM-based color image compression scheme are lower than those obtained with an existing color image compression method at perceptually lossless visual quality.
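The threshold-driven quantization idea can be illustrated with a minimal sketch: quantize each coefficient with a step of twice its visibility threshold, so the reconstruction error never exceeds the threshold. The coefficients and thresholds below are made-up numbers, and this omits the wavelet transform and the locally adaptive threshold estimation the paper actually uses.

```python
# Minimal sketch of visibility-threshold-bounded quantization.
import numpy as np

coeffs = np.array([10.3, -4.2, 0.8, 25.1])  # hypothetical subband coefficients
jnd = np.array([2.0, 1.0, 1.0, 4.0]) / 2.0  # per-coefficient visibility thresholds
step = 2 * jnd                              # max quantization error = step/2 <= jnd

q = np.round(coeffs / step)                 # quantizer indices (what gets coded)
rec = q * step                              # decoder reconstruction

# Distortion stays below the visibility threshold for every coefficient
assert np.all(np.abs(rec - coeffs) <= jnd + 1e-12)
print(rec)
```

Larger thresholds permit larger steps and hence lower entropy, which is the mechanism behind the entropy reduction reported in the abstract.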

Keywords: internal generative mechanism, structure-based spatial masking, visibility threshold, wavelet domain

Procedia PDF Downloads 218
247 A Study on the Effect of Different Climate Conditions on Time of Balance of Bleeding and Evaporation in Plastic Shrinkage Cracking of Concrete Pavements

Authors: Hasan Ziari, Hassan Fazaeli, Seyed Javad Vaziri Kang Olyaei, Asma Sadat Dabiri

Abstract:

Cracks in concrete pavements provide a path for the ingress of corrosive substances, acids, oils, and water into the pavement and reduce its long-term durability and level of service. One of the causes of early cracks in concrete pavements is plastic shrinkage. This shrinkage occurs due to the formation of negative capillary pressures after the bleeding and evaporation rates reach equilibrium at the pavement surface. The cracks form if the tensile stresses caused by the restrained shrinkage exceed the tensile strength of the concrete. Different climate conditions change the rate of evaporation and thus the balance time of bleeding and evaporation, which changes the severity of cracking in the concrete. The present study examined the relationship between the balance time of bleeding and evaporation and the cracking area in concrete slabs using the standard method ASTM C1579 under 27 different environmental conditions, with continuous video recording and digital image analysis. The results showed that as the evaporation rate increased and the balance time decreased, the crack severity increased significantly, so that reducing the balance time from its maximum to its minimum value increased the cracking area more than fourfold. It was also observed that the cracking area versus balance time curve can be interpreted in three sections. Examination of these three parts showed that the combination of climate conditions has a significant effect on increasing or decreasing these two variables: a single critical factor alone cannot produce the critical conditions for plastic cracking, and when a severe climate factor (in terms of surface evaporation rate) is combined with two mild factors, a considerable reduction in balance time and a sharp increase in cracking severity can be prevented. The results of this study showed that balance time can be an essential factor in controlling and predicting plastic shrinkage cracking in concrete pavements, and it is necessary to control this factor when constructing concrete pavements in different climate conditions.

Keywords: bleeding and cracking severity, concrete pavements, climate conditions, plastic shrinkage

Procedia PDF Downloads 122
246 Prediction of Ionic Liquid Densities Using a Corresponding State Correlation

Authors: Khashayar Nasrifar

Abstract:

Ionic liquids (ILs) exhibit particular properties, exemplified by extremely low vapor pressure and high thermal stability. The properties of ILs can be tailored by proper selection of cations and anions. As such, ILs are appealing as potential solvents to substitute for traditional solvents with high vapor pressure. One of the IL properties required in chemical and process design is density. In developing corresponding-state liquid density correlations, the scaling hypothesis is often used. The hypothesis expresses the temperature dependence of saturated liquid densities near the vapor-liquid critical point as a function of reduced temperature. Extending this temperature dependence, several successful correlations were developed to accurately correlate the densities of normal liquids from the triple point to the critical point. With appropriate mixing rules, these liquid density correlations extend to liquid mixtures as well. ILs are not molecular liquids, nor are they classified among normal liquids, and they are often used far from equilibrium conditions. Nevertheless, in calculating the properties of ILs, corresponding-state correlations are useful when no experimental data are available. With well-known generalized saturated liquid density correlations, the accuracy in predicting the density of ILs is not good: an average error of 4-5% should be expected. In this work, a data bank was compiled, and a simplified, concise corresponding-state saturated liquid density correlation is proposed by phenomenologically modifying the reduced temperature using the temperature dependence of an interaction parameter of the Soave-Redlich-Kwong equation of state. This modification improves the temperature dependence of the developed correlation. Parametrization was then performed to optimize the three global parameters of the correlation. The correlation was applied to the ILs in our data bank with satisfactory predictions: at 0.1 MPa, it achieved an average uncertainty of around 2% with no adjustable parameters, requiring only the critical temperature, critical volume, and acentric factor. Methods to extend the predictions to higher pressures (up to 200 MPa) were also devised. Compared to other methods, this correlation was found to be more accurate. This work also presents the chronological order of development of such correlations for ILs, with their pros and cons.
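To make the corresponding-state idea concrete, here is a classical generalized saturated liquid density correlation of the family the abstract builds on: the Rackett equation with the Yamada-Gunn estimate Z_RA = 0.29056 - 0.08775ω. It needs exactly the inputs the abstract lists (critical constants and the acentric factor). This is not the paper's IL-specific correlation, and the property values below (approximate, for n-hexane) are for illustration only.

```python
# Rackett equation for saturated liquid molar volume, a generalized
# corresponding-state correlation requiring only Tc, Pc, and omega.
R = 8.314  # universal gas constant, J/(mol K)

def rackett_molar_volume(T, Tc, Pc, omega):
    """Saturated liquid molar volume (m^3/mol) via the Rackett equation."""
    Tr = T / Tc
    z_ra = 0.29056 - 0.08775 * omega  # Yamada-Gunn correlation for Z_RA
    return (R * Tc / Pc) * z_ra ** (1.0 + (1.0 - Tr) ** (2.0 / 7.0))

# Approximate constants for n-hexane: Tc = 507.6 K, Pc = 3.025 MPa, omega = 0.301
v = rackett_molar_volume(298.15, 507.6, 3.025e6, 0.301)
rho = 0.08618 / v  # molar mass (kg/mol) over molar volume -> density (kg/m^3)
print(f"predicted density ~ {rho:.0f} kg/m^3")  # experimental value is ~655 kg/m^3
```

The paper's proposal replaces the simple (1 - Tr) dependence with a reduced temperature modified through an SRK-type temperature function, which is what restores accuracy for ILs.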

Keywords: correlation, corresponding state principle, ionic liquid, density

Procedia PDF Downloads 104
245 Comparison of Different Reanalysis Products for Predicting Extreme Precipitation in the Southern Coast of the Caspian Sea

Authors: Parvin Ghafarian, Mohammadreza Mohammadpur Panchah, Mehri Fallahi

Abstract:

Synoptic patterns from the surface up to the tropopause are very important for forecasting weather and atmospheric conditions, and many tools exist to prepare and analyze these maps. Reanalysis data, the outputs of numerical weather prediction models, satellite images, meteorological radar, and weather station data are used in forecasting centers worldwide to predict the weather. Forecasting extreme precipitation on the southern coast of the Caspian Sea (CS) is particularly challenging due to the complex topography and the different climate types found in these areas. In this research, we used two reanalysis datasets, the ECMWF Reanalysis 5th Generation (ERA5) and the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis, for verification of the numerical model. ERA5 is the latest ECMWF reanalysis; its temporal resolution is hourly, while NCEP/NCAR data are available every six hours. Atmospheric parameters such as mean sea level pressure, geopotential height, relative humidity, wind speed and direction, and sea surface temperature were selected and analyzed, for different precipitation types (rain and snow). The results showed that NCEP/NCAR better demonstrates the intensity of atmospheric systems, whereas ERA5 is suitable for extracting parameter values at specific points and is appropriate for analyzing snowfall events over the CS (snow cover and snow depth). Sea surface temperature plays the main role in generating instability over the CS, especially when cold air passes over it, and the NCEP/NCAR sea surface temperature product has low resolution near the coast. Both datasets were able to detect the meteorological synoptic patterns that led to heavy rainfall over the CS; however, due to their time lag, they are not suitable for operational forecast centers, and their application lies in research and the verification of meteorological models. Finally, ERA5 has a better resolution than the NCEP/NCAR reanalysis, but NCEP/NCAR data are available from 1948 and are appropriate for long-term research.

Keywords: synoptic patterns, heavy precipitation, reanalysis data, snow

Procedia PDF Downloads 94
244 Empirical Modeling and Optimization of Laser Welding of AISI 304 Stainless Steel

Authors: Nikhil Kumar, Asish Bandyopadhyay

Abstract:

Laser welding is a capable technology for forming automobile, microelectronics, marine, and aerospace parts. In the present work, a mathematical and statistical approach is adopted to study the laser welding of AISI 304 stainless steel. A robotically controlled 500 W pulsed Nd:YAG laser source with a 1064 nm wavelength has been used to produce butt joints. The effects of the welding parameters, namely laser power, scanning speed, and pulse width, on the seam width and depth of penetration have been investigated using empirical models developed by response surface methodology (RSM); weld quality is directly correlated with weld geometry. Twenty sets of experiments were conducted as per a central composite design (CCD) matrix, and second-order mathematical models were developed for predicting the desired responses. The ANOVA results indicate that laser power has the most significant effect on the responses. Microstructural analysis and hardness measurements of selected weld specimens were carried out to understand the metallurgical and mechanical behaviour of the weld. The average micro-hardness of the weld is observed to be higher than that of the base metal, a result of grain refinement and δ-ferrite formation in the weld structure. The results suggest that lower line energy generally produces a finer grain structure and better mechanical properties than high line energy. The combined effects of the input parameters on the responses have been analyzed with the help of 3-D response surface and contour plots. Finally, multi-objective optimization has been conducted to produce a weld joint with complete penetration, minimum seam width, and an acceptable welding profile, and confirmatory tests have been conducted at the optimum parametric conditions to validate the applied optimization technique.
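The second-order RSM model described above is an ordinary least-squares fit of a quadratic in the three coded inputs. The sketch below uses synthetic data standing in for the 20 CCD runs; the coefficients and noise level are illustrative assumptions, not values from the study.

```python
# Sketch of fitting a second-order (quadratic) response surface model:
# seam width as a quadratic function of coded laser power, scanning
# speed, and pulse width.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(20, 3))  # 20 runs, coded power / speed / pulse width
power, speed, pwidth = X.T
seam = (1.5 + 0.6 * power - 0.3 * speed + 0.2 * pwidth
        + 0.15 * power * speed - 0.1 * power**2
        + rng.normal(0, 0.01, 20))    # synthetic seam width response

quad = PolynomialFeatures(degree=2, include_bias=False)  # linear + interaction + square terms
model = LinearRegression().fit(quad.fit_transform(X), seam)
r2 = model.score(quad.transform(X), seam)
print(f"fit R^2 = {r2:.3f}")
```

In the actual study, the same fitted model would feed the 3-D response surface plots and the multi-objective optimization step.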

Keywords: ANOVA, laser welding, modeling and optimization, response surface methodology

Procedia PDF Downloads 272
243 Predicting and Optimizing the Mechanical Behavior of a Flax Reinforced Composite

Authors: Georgios Koronis, Arlindo Silva

Abstract:

This study seeks to understand the mechanical behavior of a natural fiber reinforced composite (epoxy/flax) in more depth, utilizing both experimental and numerical methods. It attempts to identify relationships between the design parameters and product performance, understand the effect of noise factors, and reduce process variations. Optimization of the mechanical performance of manufactured goods has recently been addressed by numerous studies of green composites; however, these studies are limited and have principally explored mass-production processes. Here we expect to generate knowledge about composite manufacturing that can be used to design low-batch artifacts tailored to niche markets. The goal is to reach greater consistency in performance and to understand which factors play significant roles in obtaining the best mechanical performance. A prediction of the response function of the process (under various operating conditions) is modeled by design of experiments (DoE). A full factorial experiment consists of all possible combinations of levels for all factors; an analytical assessment is, however, possible with just a fraction of the full factorial experiment. The research approach comprises evaluating the influence these variables have and how they affect the composite's mechanical behavior. The coupons will be fabricated by the vacuum infusion process, defined by three process parameters: flow rate, injection point position, and fiber treatment. Each process parameter is studied at two levels, along with their interactions. Moreover, the tensile and flexural properties will be obtained through mechanical testing to discover the key process parameters. In this setting, an experimental phase will follow in which a number of fabricated coupons will be tested to validate the setup of the designed experiment. Finally, the results are validated by running the optimum parameter set, as indicated by the DoE, in a final set of experiments. It is expected that, after good agreement between the predicted and verification experimental values, the optimal processing parameters of the biocomposite lamina will be effectively determined.
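With three factors at two levels each, the full factorial design mentioned above has 2^3 = 8 runs, and a half fraction keeps the 4 runs satisfying a defining relation such as I = ABC. The factor names come from the abstract; the coded -1/+1 levels and the choice of defining relation are standard DoE conventions, shown here for illustration.

```python
# Full 2^3 factorial design and one half fraction (defining relation I = ABC).
from itertools import product

factors = ["flow_rate", "injection_point", "fiber_treatment"]
full = list(product([-1, 1], repeat=len(factors)))      # all level combinations
half = [run for run in full if run[0] * run[1] * run[2] == 1]  # I = ABC fraction

print(len(full), "full-factorial runs")
print(len(half), "half-fraction runs")
print(half)
```

The half fraction halves the number of coupons to fabricate at the cost of confounding main effects with two-factor interactions, which is the trade-off behind the abstract's "fraction of the full factorial" remark.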

Keywords: design of experiments, flax fabrics, mechanical performance, natural fiber reinforced composites

Procedia PDF Downloads 182
242 Measures of Corporate Governance Efficiency on the Quality Level of Value Relevance Using IFRS and Corporate Governance Acts: Evidence from African Stock Exchanges

Authors: Tchapo Tchaga Sophia, Cai Chun

Abstract:

This study measures the efficiency of corporate governance in improving the quality of value relevance, addressing issues of market value efficiency, transparency, fraud risk, agency problems, investor confidence, and decision-making, using IFRS and Corporate Governance Acts (CGA). The final sample contains 3,660 firms from ten countries' stock markets over 2010 to 2020. Drawing on efficient market theory and positive accounting theory, this paper applies several econometric methods (the difference-in-differences (DID) method and multivariate and univariate regressions) and models (the Ohlson model and a compliance index model) to assess the effects of corporate governance mechanisms on value relevance under the IFRS and corporate governance regulatory framework for non-financial firms on African stock exchanges. The results show that a corporate governance system strengthened by the adoption of IFRS and the enforcement of new corporate governance regulations produces better financial statement information when its compliance level is high; this information is both value-relevant and comparable to that of more developed markets. Similar positive and significant results were obtained when predicting future book value per share and earnings per share through stock price and stock return. The findings have important implications for regulators, academics, investors, and other users regarding the effects of IFRS and the CGA on the relationship between corporate governance and the relevance of accounting information in African stock markets. The paper also contributes through the uniqueness of its data: the sample is drawn from Africa, for which existing evidence is scarce, and the DID method is applied to the relationship between corporate governance and value relevance on African stock exchanges.
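The DID design described above can be sketched in a few lines. The following is a minimal illustration on synthetic panel data; the firm counts, effect sizes, and the value-relevance proxy are invented for the example and are not drawn from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic firm-year panel: half the firms adopt IFRS/CGA ("treated"),
# and the policy takes effect in the second half of the sample period.
n_firms, n_years = 200, 10
firm = np.repeat(np.arange(n_firms), n_years)
year = np.tile(np.arange(n_years), n_firms)
treated = (firm < n_firms // 2).astype(float)
post = (year >= n_years // 2).astype(float)

true_did = 1.5  # hypothetical treatment effect on a value-relevance proxy
y = (2.0 + 0.5 * treated + 0.3 * post
     + true_did * treated * post
     + rng.normal(0, 1, n_firms * n_years))

# OLS with the DID interaction term: y ~ 1 + treated + post + treated:post.
# The coefficient on the interaction is the DID estimate.
X = np.column_stack([np.ones_like(y), treated, post, treated * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
did_estimate = beta[3]
```

In the study's setting, `y` would be a value-relevance measure and the post period would follow IFRS/CGA adoption; firm and year fixed effects, omitted here for brevity, would normally be added.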

Keywords: corporate governance value, market efficiency value, value relevance, African stock market, stock return-stock price

Procedia PDF Downloads 38
241 The Mediating Role of Social Connectivity in the Effect of Positive Personality and Alexithymia on Life Satisfaction: Analysis Based on Structural Equation Model

Authors: Yulin Zhang, Kaixi Dong, Guozhen Zhao

Abstract:

Background: Differences in life satisfaction are associated with individual differences, and understanding the mechanism behind them can help enhance well-being. On the one hand, traditional personality traits such as extraversion have, to the authors' best knowledge, been considered the most stable and effective predictors of life satisfaction. On the other hand, individual emotional differences, such as alexithymia (difficulty identifying and describing one's own feelings), are also closely related to life satisfaction. With the development of positive psychology, positive personality traits such as virtues have attracted wide attention, and according to the broaden-and-build theory, social connectivity may mediate between emotion and life satisfaction. Therefore, the current study explores the mediating role of social connectivity in the effect of positive personality and alexithymia on life satisfaction. Method: The study was conducted with 318 healthy Chinese college students aged 18 to 30. Positive personality (comprising interpersonal, vitality, and cautiousness dimensions) was measured with the Chinese version of the Values in Action Inventory of Strengths (VIA-IS), alexithymia with the Toronto Alexithymia Scale (TAS), life satisfaction with the Satisfaction With Life Scale (SWLS), and social connectivity with six items used in previous studies. Each scale showed high reliability and validity. The mediation model was examined in Mplus 7.2 within a structural equation modeling (SEM) framework. Findings: The model fitted well, and the results revealed that both positive personality (95% confidence interval of the indirect effect: [0.023, 0.097]) and alexithymia (95% CI: [-0.270, -0.089]) significantly predicted life satisfaction through social connectivity. However, only positive personality also directly predicted life satisfaction (95% confidence interval of the direct effect: [0.109, 0.260]). Conclusion: Alexithymia predicts life satisfaction only through social connectivity, which underscores the importance of social bonding for enhancing the well-being of Chinese college students with alexithymia, whereas positive personality predicts life satisfaction both directly and through social connectivity, suggesting that well-being can be enhanced by cultivating virtues and positive psychological qualities.
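The indirect (mediated) effect estimated in Mplus can be illustrated with a simple two-regression sketch of the mediation paths. The path coefficients and simulated scores below are hypothetical and are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 318  # sample size matching the study

# Hypothetical path coefficients (not the study's estimates):
# a: predictor -> mediator, b: mediator -> outcome, c: direct path.
a, b, c = 0.5, 0.6, 0.3
x = rng.normal(size=n)                              # positive personality
m = a * x + rng.normal(scale=0.5, size=n)           # social connectivity
y = b * m + c * x + rng.normal(scale=0.5, size=n)   # life satisfaction

# Path a: regress the mediator on the predictor (M ~ X).
Xa = np.column_stack([np.ones(n), x])
a_hat = np.linalg.lstsq(Xa, m, rcond=None)[0][1]

# Paths b and c': regress the outcome on mediator and predictor (Y ~ M + X).
Xb = np.column_stack([np.ones(n), m, x])
coef = np.linalg.lstsq(Xb, y, rcond=None)[0]
b_hat, c_hat = coef[1], coef[2]

indirect = a_hat * b_hat   # mediated (indirect) effect, the product a*b
direct = c_hat             # direct effect
```

A full SEM treatment, as in the study, would estimate both regressions simultaneously and bootstrap the confidence interval of the `a*b` product rather than relying on point estimates.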

Keywords: alexithymia, life satisfaction, positive personality, social connectivity

Procedia PDF Downloads 146
240 Central Vascular Function and Relaxibility in Beta-thalassemia Major Patients vs. Sickle Cell Anemia Patients by Abdominal Aorta and Aortic Root Speckle Tracking Echocardiography

Authors: Gehan Hussein, Hala Agha, Rasha Abdelraof, Marina George, Antoine Fakhri

Abstract:

Background: β-thalassemia major (TM) and sickle cell disease (SCD) are inherited hemoglobin disorders resulting in chronic hemolytic anemia. Cardiovascular involvement is an important cause of morbidity and mortality in these patients, and the border between overt myocardial dysfunction and clinically silent left ventricular (LV) and/or right ventricular (RV) dysfunction is narrow. Three-dimensional speckle tracking echocardiography (3D STE) is a novel method for detecting subclinical myocardial involvement. We aimed to study myocardial affection in SCD and TM using 3D STE, compare it with conventional echocardiography, and correlate it with serum ferritin and lactate dehydrogenase (LDH) levels. Methodology: Thirty SCD and thirty TM patients, aged 4-18 years, were compared with 30 healthy age- and sex-matched controls. Cases underwent clinical examination; laboratory measurement of hemoglobin, serum ferritin, and LDH; transthoracic color Doppler echocardiography; 3D STE; tissue Doppler echocardiography; and aortic speckle tracking. Results: Global longitudinal strain (GLS), global circumferential strain (GCS), and global area strain (GAS) were significantly reduced in SCD and TM compared with controls (p < 0.001), and aortic speckle tracking was significantly lower in TM and SCD patients than in controls (p < 0.001). LDH was significantly higher in SCD than in both TM and controls, and in SCD but not TM it correlated significantly with mitral inflow E (p = 0.022, r = 0.416 in SCD vs. p = 0.072, r = -0.333 in TM), lateral E/E' (p < 0.001, r = 0.618 vs. p = 0.818, r = -0.044), and septal E/E' (p = 0.007, r = 0.485 vs. p = 0.753, r = -0.060); the correlation between LDH and aortic root speckle tracking was weakly negative and non-significant (p = 0.681, r = -0.078).
LDH showed potential diagnostic accuracy for vascular dysfunction as represented by aortic root GCS, with a sensitivity of 74%, and aortic root GCS was predictive of LV dysfunction in SCD patients with a sensitivity of 100%. Conclusion: 3D STE detected LV and RV systolic dysfunction despite normal values on conventional echocardiography. SCD patients showed significantly lower RV function and aortic root GCS than TM patients and controls. LDH can be used to screen for cardiac dysfunction in SCD, but not in TM.
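The sensitivity figures reported above come from threshold-based classification against a reference diagnosis. As a minimal illustration, the sketch below computes sensitivity and specificity for a hypothetical LDH cut-off; all values, labels, and the cut-off are invented and are not study data:

```python
import numpy as np

# Hypothetical LDH values (U/L) and dysfunction labels; NOT study data.
ldh = np.array([380, 510, 630, 700, 880, 300, 350, 390, 450, 980])
dysfunction = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 1])

threshold = 400  # illustrative cut-off
predicted = (ldh >= threshold).astype(int)

# Confusion-matrix cells against the reference labels.
tp = np.sum((predicted == 1) & (dysfunction == 1))
fn = np.sum((predicted == 0) & (dysfunction == 1))
tn = np.sum((predicted == 0) & (dysfunction == 0))
fp = np.sum((predicted == 1) & (dysfunction == 0))

sensitivity = tp / (tp + fn)  # true-positive rate
specificity = tn / (tn + fp)  # true-negative rate
```

In practice, the cut-off would be chosen from a ROC analysis, which sweeps the threshold and reports the sensitivity/specificity trade-off.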

Keywords: thalassemia major, sickle cell disease, 3d speckle tracking echocardiography, LDH

Procedia PDF Downloads 142
239 Modeling Biomass and Biodiversity across Environmental and Management Gradients in Temperate Grasslands with Deep Learning and Sentinel-1 and -2

Authors: Javier Muro, Anja Linstadter, Florian Manner, Lisa Schwarz, Stephan Wollauer, Paul Magdon, Gohar Ghazaryan, Olena Dubovyk

Abstract:

Monitoring the trade-off between biomass production and biodiversity in grasslands is critical to evaluate the effects of management practices across environmental gradients. New generations of remote sensing sensors and machine learning approaches can model grasslands’ characteristics with varying accuracies. However, studies often fail to cover a sufficiently broad range of environmental conditions, and evidence suggests that prediction models might be case-specific. In this study, biomass production and biodiversity indices (species richness and Fisher’s α) are modeled in 150 grassland plots for three sites across Germany. These sites represent a North-South gradient and are characterized by distinct soil types, topographic properties, climatic conditions, and management intensities. Predictors are derived from Sentinel-1 and -2 and a set of topoedaphic variables. The transferability of the models is tested by training and validating at different sites. The performance of feed-forward deep neural networks (DNN) is compared to a random forest algorithm. While biomass predictions across gradients and sites were acceptable (r² ≈ 0.5), predictions of biodiversity indices were poor (r² ≈ 0.14). DNN showed higher generalization capacity than random forest when predicting biomass across gradients and sites (relative root mean squared error of 0.5 for DNN vs. 0.85 for random forest). DNN also achieved high performance when using the Sentinel-2 surface reflectance data rather than different combinations of spectral indices, Sentinel-1 data, or topoedaphic variables, reducing dimensionality. This study demonstrates the necessity of training biomass and biodiversity models using a broad range of environmental conditions and ensuring spatial independence to have realistic and transferable models where plot-level information can be upscaled to the landscape scale.
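The spatial-transferability test (training at some sites and validating at the held-out site) can be sketched as a leave-one-site-out loop. In the sketch below, a plain linear model on synthetic data stands in for the DNN and the Sentinel-derived features; site count, feature count, and effect sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic plots at 3 sites; the predictors stand in for Sentinel features.
n_per_site, n_feat = 50, 4
sites = np.repeat([0, 1, 2], n_per_site)
X = rng.normal(size=(3 * n_per_site, n_feat))
w = np.array([1.0, -0.5, 0.8, 0.2])
# Biomass depends on the features plus a small site-specific offset.
biomass = X @ w + 0.2 * sites + rng.normal(scale=0.3, size=3 * n_per_site)

# Leave-one-site-out: train on two sites, validate on the held-out one,
# so validation plots are spatially independent of the training plots.
rel_rmses = []
for held_out in [0, 1, 2]:
    train, test = sites != held_out, sites == held_out
    Xtr = np.column_stack([np.ones(train.sum()), X[train]])
    beta, *_ = np.linalg.lstsq(Xtr, biomass[train], rcond=None)
    Xte = np.column_stack([np.ones(test.sum()), X[test]])
    pred = Xte @ beta
    rmse = np.sqrt(np.mean((pred - biomass[test]) ** 2))
    rel_rmses.append(rmse / biomass[test].std())  # relative RMSE
```

Randomly splitting plots instead of holding out whole sites would leak site-level information into validation and overstate transferability, which is the pitfall the study's design avoids.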

Keywords: ecosystem services, grassland management, machine learning, remote sensing

Procedia PDF Downloads 189
238 Physics-Informed Neural Network for Predicting Strain Demand in Inelastic Pipes under Ground Movement with Geometric and Soil Resistance Nonlinearities

Authors: Pouya Taraghi, Yong Li, Nader Yoosef-Ghodsi, Muntaseer Kainat, Samer Adeeb

Abstract:

Buried pipelines play a crucial role in the transportation of energy products such as oil, gas, and various chemical fluids, ensuring their efficient and safe distribution. However, these pipelines are often susceptible to ground movements caused by geohazards like landslides, fault movements, lateral spreading, and more. Such ground movements can lead to strain-induced failures in pipes, resulting in leaks or explosions, leading to fires, financial losses, environmental contamination, and even loss of human life. Therefore, it is essential to study how buried pipelines respond when traversing geohazard-prone areas to assess the potential impact of ground movement on pipeline design. As such, this study introduces an approach called the Physics-Informed Neural Network (PINN) to predict the strain demand in inelastic pipes subjected to permanent ground displacement (PGD). This method uses a deep learning framework that does not require training data and makes it feasible to consider more realistic assumptions regarding existing nonlinearities. It leverages the underlying physics described by differential equations to approximate the solution. The study analyzes various scenarios involving different geohazard types, PGD values, and crossing angles, comparing the predictions with results obtained from finite element methods. The findings demonstrate a good agreement between the results of the proposed method and the finite element method, highlighting its potential as a simulation-free, data-free, and meshless alternative. This study paves the way for further advancements, such as the simulation-free reliability assessment of pipes subjected to PGD, as part of ongoing research that leverages the proposed method.
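The core PINN idea (minimizing the residual of the governing differential equation at collocation points rather than fitting training data) can be illustrated on a toy boundary-value problem. Here a polynomial trial function replaces the neural network so that the residual minimization reduces to dependency-free least squares; the equation and basis are illustrative, not the pipe-soil model of the study:

```python
import numpy as np

# Toy boundary-value problem standing in for the pipe's governing equation:
#   u''(x) = f(x),  u(0) = u(1) = 0,  with f(x) = -pi^2 sin(pi x),
# whose exact solution is u(x) = sin(pi x). A PINN minimizes this residual
# with a neural network; a polynomial trial function makes the same
# residual-minimization idea a linear least-squares problem.

f = lambda x: -np.pi ** 2 * np.sin(np.pi * x)

# Basis p_j(x) = x^j (1 - x), j = 1..8; each term satisfies the boundary
# conditions exactly, so only the interior residual must be minimized.
js = np.arange(1, 9)
xc = np.linspace(0.01, 0.99, 200)  # collocation points

def basis_dd(x, j):
    # Second derivative of p_j(x) = x^j - x^(j+1).
    return j * (j - 1) * x ** (j - 2) - (j + 1) * j * x ** (j - 1)

# Minimize the ODE residual ||u''(xc) - f(xc)|| over the coefficients.
A = np.column_stack([basis_dd(xc, j) for j in js])
c, *_ = np.linalg.lstsq(A, f(xc), rcond=None)

# Evaluate the trial solution and compare it with the exact solution.
xs = np.linspace(0, 1, 101)
u = sum(cj * (xs ** j - xs ** (j + 1)) for cj, j in zip(c, js))
max_err = np.max(np.abs(u - np.sin(np.pi * xs)))
```

In the study's setting, the trial function is a neural network trained by gradient descent, and the residual encodes the nonlinear pipe-soil equilibrium equations rather than this linear toy ODE; the simulation-free character comes from the same substitution of equation residuals for training data.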

Keywords: strain demand, inelastic pipe, permanent ground displacement, machine learning, physics-informed neural network

Procedia PDF Downloads 37