Search results for: statistical tables
3453 Music Genre Classification Based on Non-Negative Matrix Factorization Features
Authors: Soyon Kim, Edward Kim
Abstract:
In order to retrieve information from the massive stream of songs in the music industry, music search by title, lyrics, artist, mood, and genre has become increasingly important. Despite the subjectivity and controversy over the definition of music genres across nations and cultures, automatic genre classification systems have been developed to facilitate the process of music categorization. Genre labels assigned manually by music producers serve as statistical reference data for designing such systems. In this paper, an automatic music genre classification system utilizing non-negative matrix factorization (NMF) is proposed. Short-term characteristics of the music signal can be captured with timbre features such as the mel-frequency cepstral coefficient (MFCC), decorrelated filter bank (DFB), octave-based spectral contrast (OSC), and octave band sum (OBS). Long-term time-varying characteristics of the music signal can be summarized with (1) statistical features such as the mean, variance, minimum, and maximum of the timbre features and (2) modulation spectrum features such as the spectral flatness measure, spectral crest measure, spectral peak, spectral valley, and spectral contrast of the timbre features. In addition to these conventional long-term feature vectors, NMF-based feature vectors are proposed for use in genre classification. In the training stage, NMF basis vectors were extracted for each genre class. The NMF features were calculated in the log spectral magnitude domain (NMF-LSM) as well as in the basic feature vector domain (NMF-BFV). For NMF-LSM, the entire full-band spectrum was used. For NMF-BFV, however, only the low-band spectrum was used, since the high-frequency modulation spectrum of the basic feature vectors did not contain information important for genre classification.
In the test stage, using the set of pre-trained NMF basis vectors, the genre classification system extracted the NMF weighting values of each genre as the NMF feature vectors. A support vector machine (SVM) was used as the classifier. The GTZAN multi-genre music database, composed of 10 genres with 100 songs per genre, was used for training and testing. To increase the reliability of the experiments, 10-fold cross-validation was used. For a given input song, the extracted NMF-LSM feature vector was composed of 10 weighting values corresponding to the classification probabilities for the 10 genres. An NMF-BFV feature vector likewise had a dimensionality of 10. Combined with the basic long-term features (statistical features and modulation spectrum features), the NMF features increased accuracy with only a slight increase in feature dimensionality. The conventional basic features by themselves yielded 84.0% accuracy, whereas the basic features with NMF-LSM and with NMF-BFV provided 85.1% and 84.2% accuracy, respectively. The basic features required a dimensionality of 460, while NMF-LSM and NMF-BFV each required a dimensionality of only 10. Combining the basic features, NMF-LSM, and NMF-BFV with an SVM using a radial basis function (RBF) kernel produced a significantly higher classification accuracy of 88.3% at a feature dimensionality of 480.
Keywords: mel-frequency cepstral coefficient (MFCC), music genre classification, non-negative matrix factorization (NMF), support vector machine (SVM)
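The per-genre-basis pipeline described above can be sketched in a few lines. The sketch below is illustrative only: it uses random toy matrices in place of real MFCC/OSC features, two stand-in "genres" instead of GTZAN's ten, and a simplified non-negative projection in place of the paper's NMF weight estimation; the component count and feature summary are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-ins for per-song feature vectors (rows: songs, cols: feature bins).
X_train = np.abs(rng.normal(size=(40, 20)))
y_train = np.array([0] * 20 + [1] * 20)  # two hypothetical "genres"

# Training stage: extract one set of NMF basis vectors per genre class.
bases = {}
for genre in np.unique(y_train):
    nmf = NMF(n_components=5, init="random", random_state=0, max_iter=500)
    nmf.fit(X_train[y_train == genre])
    bases[genre] = nmf.components_  # shape (5, 20)

def nmf_feature(x, bases):
    """One summary weight per genre basis (a simplified stand-in for the
    paper's per-genre NMF weighting values)."""
    feats = []
    for H in bases.values():
        w = np.clip(x @ np.linalg.pinv(H), 0.0, None)  # non-negative weights
        feats.append(w.sum())
    return np.array(feats)

F_train = np.vstack([nmf_feature(x, bases) for x in X_train])

# Test stage: SVM with an RBF kernel on the NMF feature vectors.
clf = SVC(kernel="rbf").fit(F_train, y_train)
```

In the paper the NMF weights are concatenated with the 460-dimensional basic features before classification; here only the NMF part is shown.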
Procedia PDF Downloads 303
3452 Main Tendencies of Youth Unemployment and the Regulation Mechanisms for Decreasing Its Rate in Georgia
Authors: Nino Paresashvili, Nino Abesadze
Abstract:
The modern world faces huge challenges. Globalization has changed the socio-economic conditions of many countries. Current processes in the global environment affect countries with different cultures in different ways. However, alleviating poverty and improving living conditions remain basic challenges for the majority of countries, because much of the population still lives below the official poverty threshold. It is very important to stimulate youth employment. In order to prepare young people for the labour market, it is essential to provide them with the appropriate professional skills and knowledge. It is necessary to plan efficient activities for decreasing the unemployment rate and to develop effective mechanisms for regulating the labour market. Such planning requires thorough study and analysis of the existing situation, as well as the development of corresponding mechanisms. Statistical analysis of unemployment is one of the main platforms for regulating the labour market's key mechanisms, and the corresponding statistical methods should be used in the study process: observation, data gathering, grouping, and the calculation of generalized indicators. Unemployment is one of the most severe socioeconomic problems in Georgia. According to both past and current statistics, unemployment rates have always been the most problematic issue for policy makers to resolve. Analytical work on this problem will form the basis for the next sustainable steps toward solving it. The results of the study showed that young people's career choices are often driven neither by their inclinations and interests nor by labour market demand. This wrong professional orientation of young people in most cases leads to their unemployment. At the same time, it was shown that a number of professions in the labour market are in high demand because of a deficit of appropriately trained specialists.
To achieve healthy competitiveness in youth employment, it is necessary to formulate regional employment programs that take regional infrastructure specifications into account.
Keywords: unemployment, analysis, methods, tendencies, regulation mechanisms
Procedia PDF Downloads 377
3451 Validation of Escherichia coli O157:H7 Inactivation on Apple-Carrot Juice Treated with Manothermosonication by Kinetic Models
Authors: Ozan Kahraman, Hao Feng
Abstract:
Several models, such as the Weibull, Modified Gompertz, Biphasic linear, and Log-logistic models, have been proposed to describe non-linear inactivation kinetics and have been used to fit non-linear inactivation data for several microorganisms inactivated by heat, high-pressure processing, or pulsed electric fields. Most ultrasonic inactivation studies, by contrast, have employed first-order kinetic parameters (D-values and z-values) to describe the reduction in microbial survival counts. This study was conducted to analyze E. coli O157:H7 inactivation data using five microbial survival models: First-order, Weibull, Modified Gompertz, Biphasic linear, and Log-logistic. The residual sum of squares and the total sum of squares were used as criteria to evaluate the models. The statistical indices of the kinetic models were used to fit inactivation data for E. coli O157:H7 treated by MTS at three temperatures (40, 50, and 60 °C) and three pressures (100, 200, and 300 kPa). Based on the statistical indices and visual observations, the Weibull and Biphasic models fit the MTS treatment data best, as shown by their high R² values. The non-linear kinetic models, including the Modified Gompertz, First-order, and Log-logistic models, did not provide a better fit to the MTS data than the Weibull and Biphasic models. The data in this study did not follow first-order kinetics, possibly because cells sensitive to the ultrasound treatment were inactivated first, producing a fast initial inactivation period, while those resistant to ultrasound were killed more slowly.
The Weibull and Biphasic models were found to be more flexible for describing the survival curves of E. coli O157:H7 treated by MTS in apple-carrot juice.
Keywords: Weibull, Biphasic, MTS, kinetic models, E. coli O157:H7
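As an illustration of the fitting procedure, the sketch below fits a Mafart-type Weibull survival model to synthetic log-reduction data (generated from δ = 1.2 and p = 0.8, hypothetical values, not the study's measurements) and evaluates the fit with the same residual/total sum-of-squares criteria mentioned above.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_log_survival(t, delta, p):
    """Mafart-type Weibull model: log10(N/N0) = -(t / delta) ** p."""
    return -((t / delta) ** p)

# Synthetic log-reductions generated from delta = 1.2, p = 0.8
# (hypothetical values, standing in for measured MTS data).
t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # treatment time, min
logS = np.array([-0.50, -0.86, -1.50, -2.08, -2.62, -3.13, -3.62])

(delta, p), _ = curve_fit(weibull_log_survival, t, logS, p0=(1.0, 1.0))

# Goodness of fit from the residual and total sums of squares,
# the same criteria used to evaluate the models in the study.
pred = weibull_log_survival(t, delta, p)
ss_res = np.sum((logS - pred) ** 2)
ss_tot = np.sum((logS - logS.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

A shape parameter p < 1 indicates upward concavity (resistant subpopulation killed more slowly), which matches the interpretation offered in the abstract.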
Procedia PDF Downloads 366
3450 Statistical Approach to Identify Stress and Biases Impairing Decision-Making in High-Risk Industry
Authors: Ph. Fauquet-Alekhine
Abstract:
Decision-making occurs several times an hour when working in high-risk industry, and an erroneous choice might have undesirable outcomes for people and the environment surrounding the industrial plant. Industrial decisions are very often made in a context of acute stress. Time pressure is a crucial stressor, sometimes leading decision makers to speed up the decision-making process or, when that is not possible, to shift to the simplest strategy. We therefore found it worthwhile to update the characterization of the stress factors impairing decision-making at Chinon Nuclear Power Plant (France) in order to optimize decision-making contexts and/or the associated processes. The investigation was based on the analysis of reports addressing safety events over the last 3 years. Among 93 reports, those explicitly addressing decision-making issues were identified. Each event was characterized in terms of three criteria: stressors, biases impairing decision-making, and weaknesses of the decision-making process. The statistical analysis showed that the biases were distributed over 10 possibilities, among which the confirmation bias was clearly salient. No significant correlation was found between the criteria. The analysis indicated that the main stressor was time pressure and highlighted an unexpected form of stressor: the trust asymmetry principle of the expert. The analysis led to the conclusion that this stressor impaired decision-making from a psychological angle rather than a physiological one: it induces a defensive bias of self-esteem and self-protection associated with a confirmation bias. This leads to the hypothesis that this stressor can intervene in some cases without being detected, and that other stressors of the same kind might also occur undetected. Further investigations addressing these hypotheses are being considered.
The analysis also led to the conclusion that dealing with these issues requires i) decision-making methods that are well known to the workers and automated, and ii) decision-making tools that are well known and strictly applied. Training was adjusted accordingly.
Keywords: bias, expert, high risk industry, stress
Procedia PDF Downloads 112
3449 Water Scarcity in the Gomti Nagar Area under the Impact of Climate Changes and Assessment for Groundwater Management
Authors: Rajkumar Ghosh
Abstract:
Climate change has led to decreased water availability in the Gomti Nagar area of Uttar Pradesh, India, by reducing precipitation and increasing the rate of evaporation. The region is heavily reliant on surface water sources (the Gomti river and the Sharda Canal) and on groundwater, so efficient management of groundwater resources is crucial for addressing water shortages. Possible measures include the following. Exploring alternative water sources, such as wastewater recycling and desalination, can help augment the water supply and reduce dependency on rainfall-dependent sources. Promoting water-efficient technologies in industry and agriculture, and water-efficient infrastructure in urban areas, can reduce water demand and optimize water use. Incorporating climate change considerations into urban planning and infrastructure development can help ensure water security in the face of future climate uncertainties. Addressing water scarcity in the Gomti Nagar area requires a multi-pronged approach that combines sustainable groundwater management practices, climate change adaptation strategies, and integrated water resource management. By implementing these measures, the region can work towards a more sustainable and reliable water supply in the context of climate change. Water is the most important natural resource for the existence of living beings in the Earth's ecosystem. On Earth, 1.2 percent of water is drinkable, but only 0.3 percent is usable by people. Water scarcity is a growing concern in India due to the impact of climate change and the over-exploitation of water resources. Excess groundwater withdrawal causes regular declines in the groundwater level. Due to the expansion of city boundaries and growing urbanization, the recharge area for groundwater is shrinking.
Rainwater infiltration into the subsoil is also reduced by unplanned, uneven settlements in urban areas.
Keywords: climate change, water scarcity, groundwater, rainfall, water supply
Procedia PDF Downloads 83
3448 Earthquake Identification to Predict Tsunami in Andalas Island, Indonesia Using Back Propagation Method and Fuzzy TOPSIS Decision Seconder
Authors: Muhamad Aris Burhanudin, Angga Firmansyas, Bagus Jaya Santosa
Abstract:
Earthquakes are natural hazards that can trigger an even more dangerous hazard: tsunamis. On 26 December 2004, a giant earthquake occurred north-west of Andalas Island. It generated a giant tsunami that struck Sumatra, Bangladesh, India, Sri Lanka, Malaysia, and Singapore, and more than twenty thousand people died. The occurrence of earthquakes and tsunamis cannot be avoided, but these hazards can be mitigated by earthquake forecasting; early preparation is the key factor in reducing damage and consequences. We aim to investigate earthquake patterns quantitatively in order to identify trends, studying the earthquakes that occurred on Andalas Island, Indonesia over the last decade. Andalas is an island with high seismicity; more than a thousand events occur per year, because the island lies in the tectonic subduction zone between the Indian Ocean plate and the Eurasian plate. Tsunami forecasting is needed to support mitigation actions, and a tsunami forecasting method is therefore presented in this work. Neural networks have been used widely in research to estimate earthquakes, and it is believed that earthquakes can be predicted using the backpropagation method. First, an artificial neural network (ANN) is trained to predict the 26 December 2004 tsunami using earthquake data recorded before it. The trained ANN is then applied to predict subsequent earthquakes. Not all earthquakes trigger tsunamis; certain earthquake characteristics determine whether a tsunami is generated, and a wrong decision can cause further problems for society. A method is therefore needed to reduce the possibility of wrong decisions. Fuzzy TOPSIS is a statistical method widely used as a decision seconder based on given parameters; it can determine whether an earthquake will cause a tsunami. This work combines earthquake prediction using the neural network method with Fuzzy TOPSIS to decide whether a predicted earthquake will trigger a tsunami wave.
The neural network model is capable of capturing non-linear relationships, and Fuzzy TOPSIS determines the best decision better than other statistical methods in tsunami prediction.
Keywords: earthquake, fuzzy TOPSIS, neural network, tsunami
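For illustration, the sketch below implements a crisp (non-fuzzy) TOPSIS ranking, which is the core of the fuzzy variant the paper uses; the earthquake parameters, criterion weights, and benefit/cost assignments are all hypothetical.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Crisp TOPSIS: score alternatives by relative closeness to the ideal.
    matrix: alternatives x criteria; benefit[j] is True if higher is better."""
    M = np.asarray(matrix, dtype=float)
    R = M / np.linalg.norm(M, axis=0)          # vector-normalize each column
    V = R * np.asarray(weights, dtype=float)   # weight the criteria
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)        # closeness in [0, 1]

# Hypothetical quake parameters: magnitude, depth (km), distance to coast (km).
# Magnitude is a benefit criterion for tsunami risk; the others are costs.
quakes = [[9.1, 30, 20],    # strong, shallow, near the coast
          [6.2, 150, 300],  # moderate, deep, far away
          [7.5, 40, 60]]
scores = topsis(quakes, weights=[0.5, 0.3, 0.2], benefit=[True, False, False])
```

In the fuzzy variant, crisp criterion values are replaced by fuzzy numbers (e.g., triangular membership functions), but the distance-to-ideal ranking logic is the same.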
Procedia PDF Downloads 493
3447 Evaluation of Golden Beam Data for the Commissioning of 6 and 18 MV Photons Beams in Varian Linear Accelerator
Authors: Shoukat Ali, Abdul Qadir Jandga, Amjad Hussain
Abstract:
Objective: The main purpose of this study is to compare the percent depth doses (PDDs) and the in-plane and cross-plane profiles of the Varian golden beam data to measured data for 6 and 18 MV photons for the commissioning of the Eclipse treatment planning system. Introduction: Commissioning of a treatment planning system requires extensive acquisition of beam data for the clinical use of linear accelerators. Accurate dose delivery requires entering the PDDs, profiles, and dose-rate tables for open and wedged fields into the treatment planning system, enabling it to calculate monitor units (MUs) and dose distributions. Varian offers a generic set of beam data as reference data, but does not recommend it for clinical use. In this study, we compared the generic beam data with measured beam data to evaluate whether the generic data are reliable enough for clinical purposes. Methods and Material: PDDs and profiles of open and wedged fields for different field sizes and at different depths were measured as per Varian's algorithm commissioning guideline. The measurements were performed with a PTW 3D scanning water phantom with a semiflex ion chamber and MEPHYSTO software. The online available Varian golden beam data were compared with the measured data to evaluate whether the golden beam data are accurate enough to be used for commissioning the Eclipse treatment planning system. Results: The deviation between the measured and golden beam data was at most 2%. In the PDDs, the deviation increases more at deeper depths than at shallower depths. Similarly, the profiles show the same trend of increasing deviation at large field sizes and increasing depths. Conclusion: The study shows that the percentage deviation between the measured and golden beam data is within the acceptable tolerance, and the golden beam data can therefore be used for the commissioning process; however, verification of a small subset of acquired data against the golden beam data should be mandatory before clinical use.
Keywords: percent depth dose, flatness, symmetry, golden beam data
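The point-by-point comparison above amounts to a percent deviation relative to the golden beam values, which can be sketched as follows; the PDD samples are hypothetical, not the study's measurements.

```python
import numpy as np

# Hypothetical PDD samples (percent dose) at matched depths for one field size.
measured = np.array([100.0, 86.5, 66.7, 51.2, 38.9])  # locally measured
golden   = np.array([100.0, 86.0, 66.0, 50.5, 38.3])  # Varian golden beam data

# Point-by-point percent deviation relative to the golden beam values.
deviation = 100.0 * (measured - golden) / golden
within_tolerance = bool(np.abs(deviation).max() <= 2.0)
```

In this toy data the deviation grows with depth, mirroring the trend reported in the Results, and the maximum stays within the 2% tolerance.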
Procedia PDF Downloads 489
3446 The Effect of Core Training on Physical Fitness Characteristics in Male Volleyball Players
Authors: Sibel Karacaoglu, Fatma Ç. Kayapinar
Abstract:
The aim of the study is to investigate the effect of a core training program on physical fitness characteristics and body composition in male volleyball players. Twenty-six male university volleyball team players aged 19 to 24 years, with no health problems or injuries, participated in the study. Subjects were divided randomly into a training group (TG) and a control group (CG). Data from the twenty-one players who completed all training sessions were used for the statistical analysis (TG, n=11; CG, n=10). The core training program was applied to the training group three days a week for 10 weeks, while the control group did not receive any training. Before and after the 10-week training program, pre- and post-testing comprised body composition measurements (weight, BMI, bioelectrical impedance analysis) and physical fitness measurements, including flexibility (sit-and-reach test), muscle strength (back, leg, and grip strength by dynamometer), muscle endurance (sit-up and push-up tests), power (one-legged jump and vertical jump tests), speed (20 m and 30 m sprints), and balance (one-legged standing test). Changes between the pre- and post-test values of the groups were assessed using the dependent t-test. According to the statistical analysis, no significant difference was found in body composition in either group between the pre- and post-test values. In the training group, all physical fitness measurements improved significantly after the core training program (p<0.05) except the 30 m sprint and handgrip strength (p>0.05). In the control group, on the other hand, only the 20 m sprint values improved in the post-test period (p<0.05), while the other physical fitness test values did not differ (p>0.05) between the pre- and post-test measurements. The results suggest that the core training program has a positive effect on physical fitness characteristics in male volleyball players.
Keywords: body composition, core training, physical fitness, volleyball
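The pre/post comparisons above rely on the dependent (paired) t-test, which can be sketched as follows; the vertical-jump values are hypothetical stand-ins for the study's measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical vertical-jump heights (cm) for the 11 training-group players,
# measured before and after the 10-week core program (paired design).
pre  = np.array([42.1, 45.3, 39.8, 47.0, 41.5, 44.2, 40.9, 46.1, 43.3, 38.7, 45.0])
post = np.array([44.0, 47.1, 41.2, 49.3, 43.0, 45.8, 42.5, 48.0, 45.1, 40.2, 46.9])

# Dependent (paired) t-test, as used for the pre/post comparisons in the study.
t_stat, p_value = stats.ttest_rel(post, pre)
significant = p_value < 0.05
```

The paired design tests the per-player differences rather than the two group means, which is why the same test cannot be replaced by an independent-samples t-test here.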
Procedia PDF Downloads 346
3445 Theorizing Optimal Use of Numbers and Anecdotes: The Science of Storytelling in Newsrooms
Authors: Hai L. Tran
Abstract:
When covering events and issues, the news media often employ personal accounts as well as facts and figures. However, the use of numbers and narratives in the newsroom mostly proceeds through trial and error. There is a demonstrated need for the news industry to better understand the specific effects of storytelling and data-driven reporting on the audience, as well as the explanatory factors driving those effects. In the academic world, anecdotal evidence and statistical evidence have been studied in a mutually exclusive manner. Existing research tends to treat the pertinent effects as though the use of one form precludes the other and as if a tradeoff is required. Meanwhile, narratives and statistical facts are often combined in various communication contexts, especially in news presentations. There is value in reconceptualizing and theorizing about both the relative and the collective impacts of numbers and narratives, as well as the mechanism underlying such effects. The current undertaking seeks to link theory to practice by providing a complete picture of how and why people are influenced by information conveyed through quantitative and qualitative accounts. Specifically, cognitive-experiential theory is invoked to argue that humans employ two distinct systems to process information. The rational system processes logical evidence through effortful, analytical, affect-free cognition. The experiential system, by contrast, is intuitive, rapid, automatic, and holistic, demanding minimal cognitive resources and relating to the experience of affect. In certain situations, one system might dominate the other, but the rational and experiential modes of processing operate in parallel and at the same time. As such, anecdotes and quantified facts affect audience response differently, and a combination of data and narratives is more effective than either form of evidence alone.
In addition, the present study identifies several media variables and human factors driving the effects of statistics and anecdotes. An integrative model is proposed to explain how message characteristics (modality, vividness, salience, congruency, position) and individual differences (involvement, numeracy skills, cognitive resources, cultural orientation) impact selective exposure, which in turn activates pertinent modes of processing and thereby induces the corresponding responses. The present study represents a step toward bridging theoretical frameworks from various disciplines to better understand the specific effects, and the conditions under which the use of anecdotal and/or statistical evidence enhances or undermines information processing. In addition to its theoretical contributions, this research helps inform news professionals about the benefits and pitfalls of incorporating quantitative and qualitative accounts in reporting. It proposes a typology of possible scenarios and appropriate strategies for journalists to use when presenting news with anecdotes and numbers.
Keywords: data, narrative, number, anecdote, storytelling, news
Procedia PDF Downloads 79
3444 Comparative Assessment on the Impact of Sedatives on the Stress and Anxiety of Patients with a Heart Disease before and during Surgery in Iran
Authors: Farhad Fakoursevom
Abstract:
Heart disease is one of the most prevalent diseases today. Various types of surgeries, such as bypass, angiography, angioplasty, etc., are used to treat patients, and people may undergo such procedures, some invasive and some non-invasive, throughout their lives. Patients may experience pre-surgery anxiety and stress, which can disrupt their normal lives and even reduce the effectiveness of the surgery, so that the desired surgical outcome is not achieved. Considering this issue, the present study aimed to carry out a comparative assessment of people who received sedatives before surgery and people who did not. In terms of purpose, this is applied research; in terms of method, it is a descriptive survey. The statistical population included patients who underwent surgery in the specialist heart hospitals of Mashhad, Iran; 60 people were considered as the sample, 30 of whom received sedatives before surgery and 30 of whom did not. Valid and up-to-date articles were systematically used to assemble the theoretical foundations, and a researcher-made questionnaire was used to examine the level of stress and anxiety. The questionnaire's content validity was assessed by a panel of experts in psychology and medicine, and its construct validity was tested using statistical software. Cronbach's alpha and composite reliability were used to assess reliability, and both indicated appropriate reliability of the questionnaire. SPSS software was used to compare the results between the two groups. The research findings showed no significant association between the people who received sedatives and those who did not in terms of the amount of stress and anxiety; however, the longer the drugs were taken before the surgery, the greater the patients' mental calm.
According to the results, if the operation is not an emergency and more time is available, sedative drugs should be used at doses matched to the severity of the surgery; in a medical emergency, such as heart surgery due to a stroke, psychological services before and during the operation, together with sedative drugs, should be used so that patients can control their stress and anxiety and achieve better outcomes.
Keywords: sedative drugs, stress, anxiety, surgery
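Cronbach's alpha, used above to assess the questionnaire's reliability, can be computed directly from an item-score matrix; the Likert responses below are hypothetical.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    X = np.asarray(scores, dtype=float)
    k = X.shape[1]                           # number of questionnaire items
    item_var = X.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)    # variance of respondents' totals
    return k / (k - 1) * (1.0 - item_var / total_var)

# Hypothetical 5-point Likert responses: 6 patients x 4 anxiety items.
responses = [[4, 5, 4, 5],
             [2, 2, 3, 2],
             [5, 4, 5, 5],
             [3, 3, 3, 4],
             [1, 2, 1, 2],
             [4, 4, 5, 4]]
alpha = cronbach_alpha(responses)
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, which is the standard the study appeals to.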
Procedia PDF Downloads 99
3443 Numerical Modeling and Prediction of Nanoscale Transport Phenomena in Vertically Aligned Carbon Nanotube Catalyst Layers by the Lattice Boltzmann Simulation
Authors: Seungho Shin, Keunwoo Choi, Ali Akbar, Sukkee Um
Abstract:
In this study, the nanoscale transport properties and catalyst utilization of vertically aligned carbon nanotube (VACNT) catalyst layers are computationally predicted by three-dimensional lattice Boltzmann simulation based on a quasi-random nanostructural model, with the aim of improving fuel cell catalyst performance. A series of catalyst layers is randomly generated with statistical significance at the 95% confidence level to reflect the heterogeneity of the catalyst layer nanostructures. The nanoscale gas transport phenomena inside the catalyst layers are simulated by the D3Q19 (three-dimensional, 19 velocities) lattice Boltzmann method, and the corresponding mass transport characteristics are mathematically modeled in terms of structural properties. Considering the nanoscale reactant transport phenomena, a transport-based effective catalyst utilization factor is defined and statistically analyzed to determine the influence of structure on transport and catalyst utilization. The tortuosity of the reactant mass transport path in VACNT catalyst layers is calculated directly from the streaklines. Subsequently, the corresponding effective mass diffusion coefficient is statistically predicted by applying the pre-estimated tortuosity factors to the Knudsen diffusion coefficient in the VACNT catalyst layers. The statistical estimation results clearly indicate that the morphological structures of VACNT catalyst layers reduce the tortuosity of the reactant mass transport path compared to conventional catalyst layers and significantly improve the resulting effective mass diffusion coefficient. Furthermore, catalyst utilization of the VACNT catalyst layer is substantially improved by the enhanced mass diffusion and electric current paths, despite the relatively poor interconnection of the ion transport paths.
Keywords: Lattice Boltzmann method, nano transport phenomena, polymer electrolyte fuel cells, vertically aligned carbon nanotube
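The final step, applying a tortuosity correction to the Knudsen diffusion coefficient, can be sketched as follows; the pore size, temperature, porosity, and tortuosity values are hypothetical, and the porosity/tortuosity correction shown is one common form, not necessarily the exact one used in the study.

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def knudsen_diffusion(pore_diameter_m, temperature_k, molar_mass_kg_mol):
    """Knudsen diffusion coefficient: D_K = (d / 3) * sqrt(8 R T / (pi M))."""
    mean_speed = math.sqrt(8.0 * R * temperature_k / (math.pi * molar_mass_kg_mol))
    return pore_diameter_m / 3.0 * mean_speed

def effective_diffusion(d_k, porosity, tortuosity):
    """One common structural correction: D_eff = D_K * porosity / tortuosity."""
    return d_k * porosity / tortuosity

# Hypothetical VACNT catalyst-layer values: ~50 nm pores, O2 (0.032 kg/mol) at 353 K.
d_k = knudsen_diffusion(50e-9, 353.0, 0.032)
d_eff = effective_diffusion(d_k, porosity=0.6, tortuosity=1.2)
```

A lower tortuosity, as reported for the VACNT morphology, directly raises D_eff in this correction, which is the qualitative result the abstract describes.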
Procedia PDF Downloads 201
3442 UEMSD Risk Identification: Case Study
Authors: K. Sekulová, M. Šimon
Abstract:
The article demonstrates, through a case study, how MSD risk can be identified. It is based on a dissertation model for identifying the risk of occupational disease formation in relation to work activity, which determines which risks can endanger workers exposed to specific risk factors; the risks are evaluated through statistical calculations. These risk factors are the main cause of upper-extremity musculoskeletal disorders.
Keywords: case study, upper-extremity musculoskeletal disorders, ergonomics, risk identification
Procedia PDF Downloads 499
3441 Development of DNDC Modelling Method for Evaluation of Carbon Dioxide Emission from Arable Soils in European Russia
Authors: Olga Sukhoveeva
Abstract:
Carbon dioxide (CO2) is the main component of the carbon biogeochemical cycle and one of the most important greenhouse gases (GHG). Agriculture, and arable soils in particular, is one of the largest sources of GHG emissions to the atmosphere, including CO2. Models may be used for the estimation of GHG emissions from agriculture if they can be adapted to the conditions of different countries. The only model officially used at the national level for this purpose, in the United Kingdom and China, is DNDC (DeNitrification-DeComposition). In our research, the DNDC model is proposed for the estimation of GHG emissions from arable soils in Russia. The aim of our research was to create a method of using DNDC for the evaluation of CO2 emissions in Russia based on official statistical information. The target territory was the European part of Russia, where many field experiments are located. At the first step of the research, a database of climate, soil, and cropping characteristics for the target region was created from governmental, statistical, and literature sources. The All-Russia Research Institute of Hydrometeorological Information – World Data Centre provides open daily data on average meteorological and climatic conditions, from which spatially averaged values of maximum and minimum air temperature and precipitation must be calculated over the region. Spatially averaged values of soil characteristics (soil texture, bulk density, pH, soil organic carbon content) can be determined on the basis of the Union State Register of Soil Resources of Russia. Cropping technologies are published by agricultural research institutes and departments. We propose to define cropping system parameters (annual crop yields, amounts and types of fertilizers and manure) on the basis of Federal State Statistics Service data. The carbon content of plant biomass may be calculated via formulas developed and published by the Ministry of Natural Resources and Environment of the Russian Federation.
At the second step, CO2 emissions from soil in this region were calculated by DNDC. The modelling results were compared with empirical and literature data, and good agreement was obtained: modelled values were equivalent to the measured ones. It was revealed that the DNDC model may be used to evaluate and forecast CO2 emissions from arable soils in Russia based on official statistical information. It can also be used to design programs for decreasing GHG emissions from arable soils to the atmosphere. Financial Support: fundamental scientific researching theme 0148-2014-0005 No 01201352499 'Solution of fundamental problems of analysis and forecast of Earth climatic system condition' for 2014-2020; fundamental research program of Presidium of RAS No 51 'Climate change: causes, risks, consequences, problems of adaptation and regulation' for 2018-2020.
Keywords: arable soils, carbon dioxide emission, DNDC model, European Russia
Procedia PDF Downloads 191
3440 Electrophysiological Correlates of Statistical Learning in Children with and without Developmental Language Disorder
Authors: Ana Paula Soares, Alexandrina Lages, Helena Oliveira, Francisco-Javier Gutiérrez-Domínguez, Marisa Lousada
Abstract:
From an early age, exposure to a spoken language allows us to implicitly capture the structure underlying the succession of speech sounds in that language and to segment it into meaningful units (words). Statistical learning (SL), i.e., the ability to pick up patterns in the sensory environment even without the intention or consciousness of doing so, is thus assumed to play a central role in the acquisition of the rule-governed aspects of language, and possibly to lie behind the language difficulties exhibited by children with developmental language disorder (DLD). The research conducted so far has, however, led to inconsistent results, which might stem from the behavioral tasks used to test SL. In a classic SL experiment, participants are first exposed to a continuous stream (e.g., of syllables) in which, unbeknownst to the participants, the stimuli are grouped into triplets that always appear together in the stream (e.g., 'tokibu', 'tipolu'), with no pauses between them (e.g., 'tokibutipolugopilatokibu') and without any information regarding the task or the stimuli. Following exposure, SL is assessed by asking participants to discriminate triplets previously presented ('tokibu') from new sequences never presented together during exposure ('kipopi'), i.e., to perform a two-alternative forced-choice (2-AFC) task. Despite its widespread use to test SL, the 2-AFC task has come under increasing criticism: it is an offline, post-learning task that only assesses the end result of the learning that occurred during the exposure phase, and it might be affected by factors beyond the computation of the regularities embedded in the input, typically the likelihood of two syllables occurring together, a statistic known as transitional probability (TP).
One solution to overcome these limitations is to assess SL as exposure to the stream unfolds, using online techniques such as event-related potentials (ERPs), which are highly sensitive to the time course of learning in the brain. Here we collected ERPs to examine the neurofunctional correlates of SL in preschool children with DLD and in chronological-age-matched controls with typical language development (TLD), who were exposed to an auditory stream composed of eight three-syllable nonsense words, four presenting high TPs and the other four low TPs, to further analyze whether the ability of DLD and TLD children to extract word-like units from the stream was modulated by the words’ predictability. Moreover, to ascertain whether previous knowledge of the to-be-learned regularities affected the neural responses to high- and low-TP words, children performed the auditory SL task first under implicit and subsequently under explicit conditions. Although behavioral evidence of SL was not obtained in either group, the neural responses elicited during the exposure phases of the SL tasks differentiated children with DLD from children with TLD. Specifically, the results indicated that only children from the TLD group showed neural evidence of SL, particularly in the SL task performed under explicit conditions, first for the low-TP and subsequently for the high-TP ‘words’. Taken together, these findings support the view that children with DLD show deficits in the extraction of the regularities embedded in the auditory input, which might underlie their language difficulties. Keywords: developmental language disorder, statistical learning, transitional probabilities, word segmentation
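As a minimal sketch of the statistic at the heart of the paradigm described above, the snippet below computes transitional probabilities from a toy syllable stream built from two invented nonsense 'words' (the syllables and stream length are illustrative, not the study's stimuli):

```python
import random
from collections import Counter

def transitional_probabilities(stream):
    """TP(B|A) = count(A followed by B) / count(A followed by anything)."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Toy exposure stream: two nonsense 'words' concatenated in random
# order with no pauses, as in a classic SL exposure phase.
random.seed(0)
words = [["to", "ki", "bu"], ["ti", "po", "lu"]]
stream = [syllable for _ in range(200) for syllable in random.choice(words)]

tps = transitional_probabilities(stream)
print(tps[("to", "ki")])   # within-word TP: always 1.0 here
print(tps[("bu", "ti")])   # across-word TP: lower, since word order varies
```

High-TP triplets are exactly those whose internal pairs score near 1.0, while word boundaries show lower TPs.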
Procedia PDF Downloads 188
3439 Drug Therapy Problems and Associated Factors among Patients with Heart Failure in the Medical Ward of Arba Minch General Hospital, Ethiopia
Authors: Debalke Dale, Bezabh Geneta, Yohannes Amene, Yordanos Bergene, Mohammed Yimam
Abstract:
Background: A drug therapy problem (DTP) is an event or circumstance involving drug therapy that actually or potentially interferes with the desired outcome and requires professional judgment to resolve. Heart failure is an emerging worldwide threat whose prevalence and health-loss burden constantly increase, especially among the young and in low-to-middle-income countries. There is a lack of population-based incidence and prevalence studies of heart failure (HF) in sub-Saharan African countries, including Ethiopia. Objective: This study was designed to assess drug therapy problems and associated factors among patients with HF in the medical ward of Arba Minch General Hospital (AGH), Ethiopia, from June 5 to August 20, 2022. Methods: A retrospective cross-sectional study was conducted among 180 patients with HF who were admitted to the medical ward of AGH. Data were collected from patients' cards using questionnaires. The data were categorized and analyzed using SPSS version 25.0 software and presented in tables and text according to the nature of the data. Results: Out of the total, 85 (57.6%) were female, and 113 (75.3%) patients were aged over fifty years. Of the 150 study participants, 86 (57.3%) had at least one DTP identified, and a total of 116 DTPs were identified, i.e., 0.77 DTPs per patient. The most common type of DTP was the need for additional drug therapy (36%), followed by unnecessary drug therapy (32%) and dose too low (15%). Patients who used polypharmacy were 5.86 times (AOR) more likely to develop DTPs than those who did not (95% CI = 1.625-16.536, P = 0.005), and patients with more co-morbid conditions developed DTPs 3.68 times (AOR) more often than those with fewer co-morbidities (95% CI = 1.28-10.5, P = 0.015). Conclusion: The results of this study indicated that drug therapy problems were common among medical ward patients with heart failure.
These problems adversely affect patients' treatment outcomes, so they require the special attention of healthcare professionals to optimize treatment. Keywords: heart failure, drug therapy problems, Arba Minch General Hospital, Ethiopia
Procedia PDF Downloads 105
3438 Embedding the Dimensions of Sustainability into City Information Modelling
Authors: Ali M. Al-Shaery
Abstract:
The purpose of this paper is to address the functions of sustainability dimensions in city information modelling and to present the required sustainability criteria that support establishing a sustainable planning framework for enhancing existing cities and developing future smart cities. The paper is divided into two sections. The first section is based on the examination of a wide and extensive array of cross-disciplinary literature from the last decade and a half to conceptualize the terms ‘sustainable’ and ‘smart city’ and to map their associated criteria to city information modelling. The second section is based on analyzing two approaches to city information modelling, namely statistical and dynamic approaches, and their suitability for the development of cities’ action plans. The paper argues that the use of statistical approaches to embedding sustainability dimensions in city information modelling has limited value. Despite the popularity of such approaches in addressing other dimensions, like utility and service management, in the development and action plans of world cities, they are unable to address the dynamics across various city sectors with regard to economic, environmental, and social criteria. The paper suggests an integrative, dynamic, and cross-disciplinary planning approach to embedding sustainability dimensions in city information modelling frameworks. Such an approach will pave the way towards optimal planning and implementation of priority actions, projects, and investments. The approach can be used to achieve three main goals: (1) better development and action plans for world cities, (2) the development of an integrative, dynamic, and cross-disciplinary framework that incorporates economic, environmental, and social sustainability criteria, and (3) attention to areas that require further work in the development of future sustainable and smart cities.
The paper presents an innovative approach for city information modelling and a well-argued, balanced hierarchy of sustainability criteria that can contribute to an area of research which is still in its infancy in terms of development and management. Keywords: information modelling, smart city, sustainable city, sustainability dimensions, sustainability criteria, city development planning
Procedia PDF Downloads 327
3437 Advanced Statistical Approaches for Identifying Predictors of Poor Blood Pressure Control: A Comprehensive Analysis Using Multivariable Logistic Regression and Generalized Estimating Equations (GEE)
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
Effective management of hypertension remains a critical public health challenge, particularly among racially and ethnically diverse populations. This study employs sophisticated statistical models to rigorously investigate the predictors of poor blood pressure (BP) control, with a specific focus on demographic, socioeconomic, and clinical risk factors. Leveraging a large sample of 19,253 adults drawn from the National Health and Nutrition Examination Survey (NHANES) across three distinct time periods (2013-2014, 2015-2016, and 2017-2020), we applied multivariable logistic regression and generalized estimating equations (GEE) to account for the clustered structure of the data and potential within-subject correlations. Our multivariable models identified significant associations between poor BP control and several key predictors, including race/ethnicity, age, gender, body mass index (BMI), prevalent diabetes, and chronic kidney disease (CKD). Non-Hispanic Black individuals consistently exhibited higher odds of poor BP control across all periods (OR = 1.99; 95% CI: 1.69, 2.36 for the overall sample; OR = 2.33; 95% CI: 1.79, 3.02 for 2017-2020). Younger age groups demonstrated substantially lower odds of poor BP control compared to individuals aged 75 and older (OR = 0.15; 95% CI: 0.11, 0.20 for ages 18-44). Men also had a higher likelihood of poor BP control relative to women (OR = 1.55; 95% CI: 1.31, 1.82), while BMI ≥35 kg/m² (OR = 1.76; 95% CI: 1.40, 2.20) and the presence of diabetes (OR = 2.20; 95% CI: 1.80, 2.68) were associated with increased odds of poor BP management. Further analysis using GEE models, accounting for temporal correlations and repeated measures, confirmed the robustness of these findings. Notably, individuals with chronic kidney disease displayed markedly elevated odds of poor BP control (OR = 3.72; 95% CI: 3.09, 4.48), with significant differences across the survey periods. 
Additionally, higher education levels and better self-reported diet quality were associated with improved BP control. College graduates exhibited a reduced likelihood of poor BP control (OR = 0.64; 95% CI: 0.46, 0.89), particularly in the 2015-2016 period (OR = 0.48; 95% CI: 0.28, 0.84). Similarly, excellent dietary habits were associated with significantly lower odds of poor BP control (OR = 0.64; 95% CI: 0.44, 0.94), underscoring the importance of lifestyle factors in hypertension management. In conclusion, our findings provide compelling evidence of the complex interplay between demographic, clinical, and socioeconomic factors in predicting poor BP control. The application of advanced statistical techniques such as GEE enhances the reliability of these results by addressing the correlated nature of repeated observations. This study highlights the need for targeted interventions that consider racial/ethnic disparities, clinical comorbidities, and lifestyle modifications in improving BP control outcomes. Keywords: hypertension, blood pressure, NHANES, generalized estimating equations
Procedia PDF Downloads 10
3436 Self-Image of Police Officers
Authors: Leo Carlo B. Rondina
Abstract:
Self-image is an important factor in improving the self-esteem of personnel. The purpose of the study is to determine the self-image of the police. The respondents were 503 policemen assigned to different police stations in Davao City, chosen by random sampling. Using Exploratory Factor Analysis (EFA), latent constructs of police image were identified as follows: professionalism, obedience, morality, and justice and fairness. Further, ordinal regression indicated statistical significance for ages 21-40, suggesting that the age of the respondent statistically improves self-image. Keywords: police image, exploratory factor analysis, ordinal regression, Galatea effect
Procedia PDF Downloads 287
3435 Spatial Analysis of the Impact of City Development on Degradation of Green Space in Urban Fringe Eastern City of Yogyakarta Year 2005-2010
Authors: Pebri Nurhayati, Rozanah Ahlam Fadiyah
Abstract:
Urban development often expands into rural areas, and the land-use change this entails leads to the degradation of urban green space on the city fringe. In the long run, this degradation of green open space can contribute to ecological decline and harm psychological well-being and public health. Therefore, this research aims to (1) determine the relationship between urban development parameters and the degradation rate of green space, and (2) develop a spatial model of the impact of urban development on the degradation of green open space using remote sensing techniques and Geographical Information Systems in an integrated manner. This is a descriptive study with data collected through observation and from secondary sources. In the data analysis, ASTER imagery from 2005-2010 processed with the Normalized Difference Vegetation Index (NDVI) is required to interpret the direction of urban development and the degradation of green open space. Interpretation generates two maps: a built-up land development map and a green open space degradation map. Secondary data on the population growth rate, the level of accessibility, and the main activities of the city are processed into a population growth rate map, an accessibility map, and a main activities map. Each map is used as a parameter for the green space degradation map and analyzed with non-parametric crosstab statistics to obtain the contingency coefficient C. C values are then compared with C_max to determine the strength of the relationship. This research will produce a spatial model map of the impact of city development on the degradation of green space in the urban fringe east of Yogyakarta, 2005-2010.
In addition, this research also generates statistical test results for each parameter's relationship to the degradation of green open space in the urban fringe east of Yogyakarta, 2005-2010. Keywords: spatial analysis, urban development, degradation of green space, urban fringe
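The crosstab step above can be sketched as follows: the contingency coefficient C is derived from the chi-square statistic and compared with its maximum attainable value C_max for the table size. The 3x3 table below is illustrative only, not the study's data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical crosstab, e.g. accessibility class (rows) vs.
# green space degradation class (columns).
table = np.array([[40, 25, 10],
                  [20, 35, 20],
                  [10, 20, 45]])

chi2, p, dof, expected = chi2_contingency(table)
n = table.sum()
C = np.sqrt(chi2 / (chi2 + n))            # contingency coefficient
k = min(table.shape)
C_max = np.sqrt((k - 1) / k)              # maximum C for a k x k table
print(round(C, 3), round(C_max, 3), round(C / C_max, 3))
```

The ratio C / C_max gives a normalized measure of association strength, which is the comparison the abstract describes.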
Procedia PDF Downloads 313
3434 Application of Stochastic Models on the Portuguese Population and Distortion to Workers Compensation Pensioners Experience
Authors: Nkwenti Mbelli Njah
Abstract:
This research was motivated by a project requested by AXA on the topic of pensions payable under the workers compensation (WC) line of business. There are two types of pensions: the compulsorily recoverable and the not compulsorily recoverable. A pension is compulsorily recoverable for a victim when there is less than 30% disability and the pension amount per year is less than six times the minimum national salary. The law defines that the mathematical provisions for compulsorily recoverable pensions must be calculated by applying the following bases: mortality table TD88/90 and an interest rate of 5.25% (possibly with a management loading). Managing pensions that are not compulsorily recoverable is a more complex task, because the technical bases are not defined by law and much more complex computations are required. In particular, companies have to predict the amount of discounted payments reflecting the mortality effect for all pensioners (a task monitored monthly at AXA). The purpose of this research was thus to develop a stochastic model for the future mortality of workers' compensation pensioners, both for the Portuguese market and for the AXA portfolio. Not only is past mortality modeled; projections about future mortality are also made for the general population of Portugal as well as for the two portfolios mentioned earlier. The global model was split into two parts: a stochastic model for population mortality, which allows for forecasts, combined with a point estimate from a portfolio mortality model obtained through three different relational models (Cox proportional, Brass linear, and Workgroup PLT). The one-year death probabilities for ages 0-110 for the period 2013-2113 are obtained for the general population and the portfolios. These probabilities are used to compute various life table functions, as well as the not compulsorily recoverable reserves for each of the required models for the pensioners, their spouses, and children under 21.
The results obtained are compared with the not compulsorily recoverable reserves computed using the static mortality table (TD 73/77) currently used by AXA, to see the impact on this reserve if AXA adopted the dynamic tables. Keywords: compulsorily recoverable, life table functions, relational models, workers' compensation pensioners
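A minimal sketch of the kind of computation underlying such reserves: one-year death probabilities q_x are turned into survival numbers l_x and the expected present value of a unit life annuity. The toy Gompertz-like q_x vector below and the 5.25% statutory rate are illustrative inputs only, not the TD88/90 or TD 73/77 tables:

```python
import numpy as np

# Toy one-year death probabilities from age 60 to 110.
ages = np.arange(60, 111)
qx = np.minimum(0.01 * 1.09 ** (ages - 60), 1.0)
qx[-1] = 1.0                      # close the table at the last age

# Survivors l_x starting from a radix of 100,000 at age 60.
lx = np.concatenate([[100_000.0], 100_000 * np.cumprod(1 - qx[:-1])])

i = 0.0525                        # statutory interest rate
v = 1 / (1 + i)                   # annual discount factor
t = np.arange(len(ages))

# Expected present value at age 60 of 1 paid at the start of each
# year while alive (a life annuity-due): sum of v^t * tP60.
annuity_due = np.sum(v ** t * lx / lx[0])
print(round(annuity_due, 3))
```

Swapping a static q_x vector for dynamically projected ones (as the stochastic model provides) changes l_x and hence the reserve, which is exactly the comparison the study performs.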
Procedia PDF Downloads 164
3433 Report of a Realistic Simulation Training in Using Bougie Guide for Endotracheal Intubation
Authors: Cleto J. Sauer Jr., Rita C. Sauer, Chaider G. Andrade, Dóris F. Rabelo
Abstract:
Some patients with COVID-19 and difficult airway characteristics undergo the endotracheal intubation (ETI) procedure. The tracheal introducer, known as the bougie guide, can aid ETI in patients with a difficult airway pattern. Realistic simulation (RS) is a methodology utilized for training healthcare professionals. To improve the bougie-guide skills of physicians from the Recôncavo da Bahia region in Brazil during the COVID-19 outbreak, RS training was carried out. The simulated scenario included the Nasco Lifeform realistic simulator for ETI and a bougie guide introducer. The training was a capacitation program organized by the Health Department of Bahia State. Objective: To report the effects of RS-based training on participants' perceived self-confidence in using the bougie guide. Methods: Descriptive study with secondary data extracted from questionnaires. Main workplace and previous knowledge of the bougie were reported on a pre-participation form. Participants also completed pre- and post-training qualitative self-assessments (10-point Likert scale) regarding self-confidence in using the bougie guide. Distribution analysis for the qualitative data was performed with the Wilcoxon signed-rank test, and analysis of self-confidence increases in frequency contingency tables with Fisher's exact test. Results: From May to June 2020, a total of 36 physicians participated in the training, 25 (69%) from the primary care setting and 32 (89%) with no previous knowledge of bougie guide utilization. For those who had previous knowledge of the bougie, the pre-training self-confidence median was 6.5, versus 2 for participants who had not. Overall, there was an increase in the self-confidence median for bougie utilization. The median (range) before and after training was 2.5 (1-7) vs. 8 (4-10) (p < 0.0001). Among those who had no previous knowledge of the bougie (n = 32), an increase in self-confidence greater than 3 points for bougie utilization was reported by 31 vs. 1 participants (p = 0.71).
Conclusions: Most participants had no previous knowledge of the bougie guide. RS training contributed to an increase in self-confidence in using the bougie for the ETI procedure. RS methodology can support training in using the bougie guide for ETI during the COVID-19 outbreak. Keywords: bougie, confidence, COVID-19, endotracheal intubation, realistic simulation
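The two tests named in the methods above can be sketched as follows, on invented Likert-scale responses and an invented 2x2 table (not the training's actual data):

```python
import numpy as np
from scipy.stats import wilcoxon, fisher_exact

# Paired pre/post self-confidence scores on a 10-point Likert scale
# (synthetic values for 12 hypothetical participants).
pre = np.array([2, 3, 1, 4, 2, 5, 3, 2, 1, 2, 3, 4])
post = np.array([7, 8, 6, 9, 8, 9, 7, 8, 6, 7, 8, 9])

# Wilcoxon signed-rank test on the paired differences.
stat, p_wilcoxon = wilcoxon(pre, post)
print(p_wilcoxon)

# Fisher's exact test on a 2x2 table of prior knowledge vs. a
# self-confidence gain greater than 3 points (counts are invented).
#                 gain > 3   gain <= 3
contingency = [[31, 1],      # no previous knowledge of the bougie
               [3, 1]]       # previous knowledge
odds, p_fisher = fisher_exact(contingency)
print(odds, p_fisher)
```

The Wilcoxon test suits ordinal Likert data because it ranks paired differences rather than assuming normality; Fisher's exact test suits the small cell counts here better than a chi-square approximation.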
Procedia PDF Downloads 144
3432 Nutrition and Physical Activity Intervention on Health Screening Outcomes for Singaporean Employees: A Worksite Based Randomised Controlled Trial
Authors: Elaine Wong
Abstract:
This research protocol aims to explore and justify the need for nutrition and physical activity interventions to improve health outcomes among SME (small and medium enterprise) employees. The worksite is an ideal and convenient setting for employees to take charge of their health through active participation in health programmes, since they spend a great deal of time at their workplace. This study will examine the impact of general and/or targeted health interventions in both SME and non-SME companies utilizing the Workplace Health Promotion (WHP) grant over a 12-month period and assess the improvement in chronic disease outcomes in Singapore. Random sampling of both non-SME and SME companies will be conducted for the health intervention, and the Statistical Package for the Social Sciences (SPSS) version 25 will be used to examine the impact of both general and targeted interventions on employees who participate and those who do not, and their effects on blood glucose (BG), blood lipids, blood pressure (BP), body mass index (BMI), and body fat percentage. Using focus groups and interviews, transcribed data will be examined to investigate enablers of and barriers to workplace health interventions, as revealed by employees and WHP coordinators, that could explain the variation in health screening results across the organisations. Dietary habits and physical activity levels of employees participating and not participating in the intervention will be collected before and after the intervention to assess any changes in lifestyle practices. It makes economic sense to study the impact of these interventions on health screening outcomes across organizations that are existing grant recipients, to justify the sustainability of these programmes by the local government.
Healthcare policy makers and employers can then tailor appropriate and relevant programmes to manage these escalating chronic disease conditions, which is integral to the competitiveness and productivity of the nation's workforce. Keywords: chronic diseases, health screening, nutrition and fitness intervention, workplace health
Procedia PDF Downloads 148
3431 Green Economy and Environmental Protection Economic Policy Challenges in Georgia
Authors: Gulnaz Erkomaishvili
Abstract:
Introduction: One of the most important issues of state economic policy in the 21st century is environmental protection. The Georgian government considers the green economy one of the most important means of sustainable economic development and takes the initiative to implement voluntary measures to promote it. In this context, it is important to promote the development of ecosystem services, clean production, environmental education, and green jobs. The development of the green economy significantly reduces the inefficient use of natural resources, waste generation, emissions into the atmosphere, and the discharge of untreated water into bodies of water. It is, therefore, an important instrument in the environmental orientation of sustainable development. Objectives: The aim of the paper is to analyze the current status of the green economy in Georgia and identify effective ways to improve the environmental economic policy of sustainable development. Methodologies: This paper uses general and specific methods, in particular analysis, synthesis, induction, deduction, scientific abstraction, comparative and statistical methods, as well as expert evaluation. Bibliographic research of scientific works and reports of organizations was conducted; publications of the National Statistics Office of Georgia are used to determine the regularity between analytical and statistical estimations. Theoretical and applied research of international organizations and economists is also used. Contributions: The country should implement an economic policy that ensures the transition to a green economy, in particular by revising water, air, and waste laws, strengthening existing environmental management tools, and introducing new tools (including economic tools).
Perfecting the regulatory legal framework of the environmental impact assessment system includes harmonizing Georgian legislation with the requirements of the European Union. To ensure the protection and rational use of Georgia's forests, emphasis should be placed on sustainable forestry and on the protection and restoration of forests. Keywords: green economy, environmental protection, environmental protection economic policy, environmental protection policy challenges
Procedia PDF Downloads 65
3430 Effect of Floods on Water Quality: A Global Review and Analysis
Authors: Apoorva Bamal, Agnieszka Indiana Olbert
Abstract:
Floods are known to be among the most devastating hydro-climatic events, impacting a wide range of stakeholders through environmental, social, and economic losses. With differences in inundation duration and level of impact, flood hazards are of varying degrees and strengths. Among the domains impacted by floods, environmental degradation in the form of water quality deterioration is one of the most affected but least highlighted across the world. The degraded water quality is caused by numerous natural and anthropogenic factors, both point and non-point sources of pollution. Therefore, it is essential to understand the nature and source of water pollution due to flooding. The major impact of floods is not only on physico-chemical water quality parameters but also on biological elements, leading to a vivid influence on the aquatic ecosystem. This deteriorated water quality affects many water-use categories, viz. agriculture, drinking water, aquatic habitat, and miscellaneous services that require adequate water quality. This study identifies, reviews, evaluates, and assesses multiple studies conducted across the world to determine the impact of floods on water quality. With a detailed statistical analysis of the most relevant studies, this study is a synopsis of the methods used to assess the impact of floods on water quality in different geographies, identifying the gaps to be bridged. According to the majority of the studies, different flood magnitudes have varied impacts on water quality parameters, leading to values either above or below those recommended for various categories. There is also an evident shift of biological elements in the impacted waters, leading to a change in their phenology and inhabitants.
The physical, chemical, and biological degradation of water quality by floods depends on their duration, extent, magnitude, and flow direction. Therefore, this research provides an overview of the multiple impacts of floods on water quality, along with a roadmap toward an efficient and uniform linkage between floods and the dynamics of the water quality they impact. Keywords: floods, statistical analysis, water pollution, water quality
Procedia PDF Downloads 81
3429 Modelling Causal Effects from Complex Longitudinal Data via Point Effects of Treatments
Authors: Xiaoqin Wang, Li Yin
Abstract:
Background and purpose: In many practices, one estimates causal effects arising from a complex stochastic process, where a sequence of treatments is assigned to influence a certain outcome of interest, and there exist time-dependent covariates between treatments. When covariates are plentiful and/or continuous, statistical modeling is needed to reduce the huge dimensionality of the problem and allow for the estimation of causal effects. Recently, Wang and Yin (Annals of Statistics, 2020) derived a new general formula, which expresses these causal effects in terms of the point effects of treatments in single-point causal inference. As a result, it is possible to conduct the modeling via point effects. The purpose of this work is to study the modeling of these causal effects via point effects. Challenges and solutions: The time-dependent covariates are often influenced by earlier treatments and in turn influence subsequent treatments. Consequently, the standard parameters (i.e., the mean of the outcome given all treatments and covariates) are essentially all different (the null paradox). Furthermore, the dimension of the parameters is huge (the curse of dimensionality). Therefore, it can be difficult to conduct the modeling in terms of standard parameters. Instead of standard parameters, we use point effects of treatments to develop a likelihood-based parametric approach to the modeling of these causal effects and are able to model the causal effects of a sequence of treatments by modeling a small number of point effects of individual treatments. Achievements: We are able to conduct the modeling of the causal effects of a sequence of treatments in the familiar framework of single-point causal inference. The simulation shows that our method achieves not only an unbiased estimate of the causal effect but also the nominal level of type I error and a low level of type II error in hypothesis testing.
We applied this method to a longitudinal study of COVID-19 mortality among Scandinavian countries and found that the Swedish approach performed far worse than those of the other countries for COVID-19 mortality, and that the poor performance was largely due to its early measures during the initial period of the pandemic. Keywords: causal effect, point effect, statistical modelling, sequential causal inference
Procedia PDF Downloads 205
3428 A Comparative Study of the Proposed Models for the Components of the National Health Information System
Authors: M. Ahmadi, Sh. Damanabi, F. Sadoughi
Abstract:
The National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, by using the National Health Information System, one can improve the quality of the health data, information, and knowledge used to support decision making at all levels and areas of the health sector. Since fully identifying the components of this system seems necessary for better planning and for managing the factors that influence its performance, this study comparatively explores different perspectives on its components. Methods: This is a descriptive, comparative study. The study material comprises printed and electronic documents covering the components of the national health information system in three parts: input, process, and output. In this context, searches were conducted using library resources and the internet, and the data analysis was expressed using comparative tables and qualitative data. Results: The findings showed three different perspectives on the components of the national health information system: the Lippeveld, Sauerborn, and Bodart model of 2000; the Health Metrics Network (HMN) model of the World Health Organization of 2008; and Gattini's model of 2009. In the input section (resources and structure), all three models outlined above require components of management and leadership; planning and design of programs; and supply of staff, software and hardware facilities, and equipment. In addition, in the 'process' section, all three models point to actions ensuring the quality of the health information system, and in the output section, all models except the Lippeveld model consider information products and the usage and distribution of information as components of the national health information system.
Conclusion: The results showed that all three models discuss the components of health information in the input section only briefly. However, the Lippeveld model has overlooked the components of national health information in the process and output sections. Therefore, it seems that the Health Metrics Network model presents the components of the health system comprehensively in all three sections: input, process, and output. Keywords: National Health Information System, components of the NHIS, Lippeveld Model
Procedia PDF Downloads 420
3427 Vehicles Analysis, Assessment and Redesign Related to Ergonomics and Human Factors
Authors: Susana Aragoneses Garrido
Abstract:
Every day, roads are the scene of numerous accidents involving vehicles, producing thousands of deaths and serious injuries all over the world. Investigations have revealed that human factors (HF) are one of the main causes of road accidents in modern societies. Distracted driving (involving external or internal aspects of the vehicle), which is considered a human factor, is a serious and emergent risk to road safety. Consequently, further analysis of this issue is essential due to its significance for today's society. The objectives of this investigation are the detection and assessment of HF in order to provide solutions (including better vehicle design) that might mitigate road accidents. The methodology of the project is divided into different phases. First, a statistical analysis of public databases from Spain and the UK is provided. Second, the data are classified in order to analyse the major causes involved in road accidents. Third, a simulation across different paths and vehicles is presented. The causes related to HF are assessed by Failure Mode and Effects Analysis (FMEA). Fourth, different car models are evaluated using the Rapid Upper Limb Assessment (RULA). Additionally, the JACK SIEMENS PLM tool is used with the intention of evaluating the human factor causes and informing the redesign of the vehicles. Finally, improvements in the car design are proposed with the intention of reducing the implication of HF in traffic accidents. The results from the statistical analysis, the simulations, and the evaluations confirm that accidents are an important issue in today's society, especially accidents caused by HF such as distractions. The results explore the reduction of external and internal HF through a global risk analysis of vehicle accidents.
Moreover, the evaluation of the different car models using RULA method and the JACK SIEMENS PLM prove the importance of having a good regulation of the driver’s seat in order to avoid harmful postures and therefore distractions. For this reason, a car redesign is proposed for the driver to acquire the optimum position and consequently reducing the human factors in road accidents.Keywords: analysis vehicles, asssesment, ergonomics, car redesign
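FMEA ranks each failure mode (here, each HF-related accident cause) by a Risk Priority Number, RPN = severity × occurrence × detection, with each factor scored on a 1-10 scale. A minimal sketch of that ranking step; the causes and scores below are illustrative assumptions, not values from the study:

```python
# FMEA sketch: rank human-factor accident causes by Risk Priority Number.
# RPN = severity * occurrence * detection (each scored 1-10; a high
# detection score means the failure is hard to detect before it causes harm).
# The causes and scores here are hypothetical, not the study's data.
causes = {
    # cause: (severity, occurrence, detection)
    "mobile phone use":       (9, 8, 6),
    "adjusting infotainment": (7, 6, 5),
    "poor seat position":     (6, 7, 8),
}

rpn = {cause: s * o * d for cause, (s, o, d) in causes.items()}
ranked = sorted(rpn.items(), key=lambda kv: kv[1], reverse=True)

for cause, value in ranked:
    print(f"{cause}: RPN = {value}")
```

Causes with the highest RPN would then be the first targets for redesign measures.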
Procedia PDF Downloads 335
3426 Spatial Data Science for Data Driven Urban Planning: The Youth Economic Discomfort Index for Rome
Authors: Iacopo Testi, Diego Pajarito, Nicoletta Roberto, Carmen Greco
Abstract:
Today, a consistent segment of the world's population lives in urban areas, and this proportion will increase vastly in the coming decades. Therefore, understanding the key trends in urbanization likely to unfold over the coming years is crucial to the implementation of sustainable urban strategies. In parallel, the daily amount of digital data produced will expand at an exponential rate. The analysis of various types of data sets and their derived applications has incredible potential across crucial sectors such as healthcare, housing, transportation, energy, and education. Nevertheless, in city development, architects and urban planners appear to rely mostly on traditional and analogue techniques of data collection. This paper investigates the prospects of the data science field, which appears to be a formidable resource for assisting city managers in identifying strategies to enhance the social, economic, and environmental sustainability of our urban areas. The collection of new layers of information would certainly enhance planners' capability to comprehend urban phenomena such as gentrification, land use definition, mobility, or critical infrastructural issues in more depth. Specifically, the research correlates economic, commercial, demographic, and housing data with the purpose of defining a youth economic discomfort index. This statistical composite index provides insights into the economic disadvantage of citizens aged between 18 and 29 years, and the results clearly show that central urban zones are more disadvantaged than peripheral ones. The experimental set-up selected the city of Rome as the testing ground for the whole investigation.
The methodology applies statistical and spatial analysis to construct a composite index supporting informed data-driven decisions for urban planning.
Keywords: data science, spatial analysis, composite index, Rome, urban planning, youth economic discomfort index
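The abstract does not specify how the composite index is assembled; a common recipe for such indices is z-score normalisation of each indicator followed by a weighted sum per zone. A minimal sketch with hypothetical indicators, values, and weights (none of these are the Rome data):

```python
import numpy as np

# Rows = urban zones, columns = hypothetical discomfort indicators
# (e.g. youth unemployment rate, rent-to-income ratio, NEET share).
# Values and weights are illustrative, not the study's data.
indicators = np.array([
    [0.12, 0.45, 0.10],   # zone A
    [0.22, 0.60, 0.18],   # zone B
    [0.08, 0.30, 0.06],   # zone C
])
weights = np.array([0.4, 0.4, 0.2])  # assumed weights, summing to 1

# Standardise each indicator (z-score), then combine into one index per zone.
z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
discomfort_index = z @ weights       # higher value = more economic discomfort

for zone, value in zip("ABC", discomfort_index):
    print(f"zone {zone}: {value:+.2f}")
```

Mapping the resulting per-zone scores is then a standard spatial-analysis step (e.g. a choropleth over the city's zoning geometry).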
Procedia PDF Downloads 135
3425 Impact of Green Bonds Issuance on Stock Prices: An Event Study on Respective Indian Companies
Authors: S. L. Tulasi Devi, Shivam Azad
Abstract:
The primary objective of this study is to analyze the impact of green bond issuance on the stock prices of the respective Indian companies. An event study methodology has been employed to study the effect of green bond issuance. For an in-depth analysis, this paper used different window frames, including 15-15 days, 10-10 days, 7-7 days, 6-6 days, and 5-5 days; for additional clarity, it also used an uneven window of 7-5 days. The study covered the companies that issued green bonds during 2017-2022: Adani Green Energy, State Bank of India, Power Finance Corporation, Jain Irrigation, and Rural Electrification Corporation; Indian Renewable Energy Development Agency and Indian Railway Finance Corporation were excluded because of data unavailability. The paper used all three event study methods discussed in the earlier literature: 1) the constant return model, 2) the market-adjusted model, and 3) the capital asset pricing model (CAPM). For a meaningful comparison of results, the study considered the cumulative average return (CAR) and buy-and-hold average return (BHAR) methodologies. A two-tailed t-statistic was used to check statistical significance, and all statistical calculations were performed in Microsoft Excel 2016. The study found that all companies showed positive returns on the event day except the State Bank of India. The results demonstrated that the constant return model outperformed the market-adjusted model and the CAPM. The p-values derived from all three methods showed an almost insignificant impact of green bond issuance on the stock prices of the respective companies. The overall analysis indicates that there is not much improvement in the market efficiency of the Indian stock markets.
Keywords: green bonds, event study methodology, constant return model, market-adjusted model, CAPM
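The three expected-return models and the CAR can be sketched as follows. The authors worked in Excel; this is a minimal NumPy illustration on synthetic daily returns, not the study's data, and the 120-day estimation window and the OLS market model standing in for the CAPM are assumptions:

```python
import numpy as np

def abnormal_returns(stock, market, est_len, method):
    """Abnormal returns over the event window under one of three models.

    stock, market : daily returns, estimation window followed by event window
    est_len       : number of days in the estimation window
    """
    est_s, est_m = stock[:est_len], market[:est_len]
    evt_s, evt_m = stock[est_len:], market[est_len:]
    if method == "constant":            # constant (mean) return model
        expected = np.full_like(evt_s, est_s.mean())
    elif method == "market_adjusted":   # expected return = market return
        expected = evt_m
    elif method == "market_model":      # OLS alpha/beta, a common CAPM proxy
        beta, alpha = np.polyfit(est_m, est_s, 1)
        expected = alpha + beta * evt_m
    else:
        raise ValueError(method)
    return evt_s - expected

def car(ar):
    """Cumulative abnormal return: sum of daily abnormal returns."""
    return float(ar.sum())

# Synthetic example: 120 estimation days + a 15-15 day event window (31 days).
rng = np.random.default_rng(42)
mkt = rng.normal(0.0005, 0.01, 151)
stk = 0.0002 + 1.2 * mkt + rng.normal(0.0, 0.005, 151)

for m in ("constant", "market_adjusted", "market_model"):
    ar = abnormal_returns(stk, mkt, est_len=120, method=m)
    print(f"{m}: CAR = {car(ar):+.4f}")
```

The significance test then compares the CAR against its standard error estimated from the estimation-window abnormal returns.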
Procedia PDF Downloads 97
3424 Waist Circumference-Related Performance of Tense Indices during Varying Pediatric Obesity States and Metabolic Syndrome
Authors: Mustafa Metin Donma
Abstract:
Obesity increases the risk of elevated blood pressure, which is a metabolic syndrome (MetS) component. Waist circumference (WC) is accepted as an indispensable parameter for the evaluation of these health problems. The close relationship of height with blood pressure values revealed the necessity of including height in tense indices, and the association of tense indices with WC has become an increasingly important topic. The purpose of this study was to develop a tense index that could contribute more to the differential diagnosis of MetS than the previously introduced indices. One hundred and ninety-four children, aged 6-11 years, were assigned to four groups: normal-weight (Group 1), overweight+obese (Group 2), and morbidly obese children without (Group 3) and with (Group 4) MetS findings. Children were assigned to the groups according to the recommendations of the World Health Organization, based on age- and gender-dependent body mass index percentiles. For the MetS group, the well-established MetS components were considered. Anthropometric measurements as well as blood pressure values were taken, and tense indices were computed. The formula for the first tense index was (SP+DP)/2. The second index was the Advanced Donma Tense Index (ADTI), with the formula [(SP+DP)/2] * Height. Statistical calculations were performed, with p < 0.05 accepted as statistically significant. There were no statistically significant differences between the groups for pulse pressure, the systolic-to-diastolic pressure ratio, or the tense index. Increasing values from Group 1 to Group 4 were observed for mean arterial blood pressure and ADTI, which was highly correlated with WC in all groups except Group 1. Both the tense index and ADTI exhibited significant correlations with WC in Group 3.
However, in Group 4, ADTI, which includes the height parameter in its equation, was unique in establishing a strong correlation with WC. In conclusion, ADTI is suggested as the tense index of choice when investigating children with MetS.
Keywords: blood pressure, child, height, metabolic syndrome, waist circumference
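The two indices defined above reduce to one-line formulas; a minimal sketch (SP and DP in mmHg; the abstract does not state the height unit, so metres are assumed, and the example values are hypothetical):

```python
def tense_index(sp, dp):
    """Classic tense index: mean of systolic (SP) and diastolic (DP) pressure."""
    return (sp + dp) / 2

def adti(sp, dp, height):
    """Advanced Donma Tense Index: the tense index scaled by height."""
    return tense_index(sp, dp) * height

# Hypothetical child: SP 110 mmHg, DP 70 mmHg, height 1.50 m.
print(tense_index(110, 70))   # 90.0
print(adti(110, 70, 1.50))    # 135.0
```

Note that (SP+DP)/2 differs from mean arterial pressure, which weights the diastolic phase more heavily (commonly DP + (SP-DP)/3).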
Procedia PDF Downloads 56