Search results for: predictive accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4531

271 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics

Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee

Abstract:

Recent advances in Generative Artificial Intelligence (GenAI), in particular Large Language Models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably the so-called "hallucination" problem, the generation of outputs that are not grounded in the input data, which hinders their adoption into production. A common practice to mitigate the hallucination problem is to use a Retrieval Augmented Generation (RAG) system to ground LLMs' responses in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts using vector similarity between the user's query and the documents, then generates a response that is based not only on the model's pre-trained knowledge but also on the specific information from the retrieved documents. However, the RAG approach is not suitable for tabular data and subsequent data analysis tasks for multiple reasons, such as information loss, data format, and the retrieval mechanism. In this study, we have explored a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements, then convert them into executable segments of code. In the final step, it generates the complete response from the output of the executed code. When deployed as a beta version on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results, as it was able to serve market insight and data visualization needs with high accuracy and extensive coverage by abstracting the complexities for real-estate agents and developers from non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding in market understanding and enhancement without the need for programming skills. The implication extends beyond immediate analytics, paving the way for a new era in the real estate industry characterized by efficiency and advanced data utilization.
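
A minimal sketch of the planning-and-execution plus code-generation loop described above, under stated assumptions: the abstract does not disclose the prompts or the LLM API used, so `call_llm` is a hypothetical stand-in for any chat-completion client, and the pandas-based execution step is only illustrative.

```python
# Sketch only: decompose a question into sub-tasks, generate pandas code per
# sub-task, execute it, and synthesize the final answer from the outputs.
import pandas as pd

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with an actual chat-completion client."""
    raise NotImplementedError

def analyse(question: str, df: pd.DataFrame) -> str:
    # 1. Planning: ask the LLM to split the analytical task into simple sub-tasks.
    plan = call_llm(f"Split this analysis task into numbered sub-tasks:\n{question}")
    sub_tasks = [line for line in plan.splitlines() if line.strip()]

    # 2. Execution: generate and run a pandas snippet for each sub-task.
    results = {}
    for task in sub_tasks:
        code = call_llm(
            "Write Python using a DataFrame named `df` "
            f"(columns: {list(df.columns)}) that computes: {task}. "
            "Assign the answer to a variable named `result`."
        )
        scope = {"df": df, "pd": pd}
        exec(code, scope)              # executes the generated segment (sandbox in practice)
        results[task] = scope.get("result")

    # 3. Synthesis: compose the final natural-language response from the outputs.
    return call_llm(f"Question: {question}\nSub-task outputs: {results}\nWrite the answer.")
```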

Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, data sense, PropertyGuru

Procedia PDF Downloads 87
270 Machine Learning Prediction of Diabetes Prevalence in the U.S. Using Demographic, Physical, and Lifestyle Indicators: A Study Based on NHANES 2009-2018

Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei

Abstract:

The aim of this study was to develop a machine learning model to predict diabetes (DM) prevalence in the U.S. population using demographic characteristics, physical indicators, and lifestyle habits, and to analyze how these factors contribute to the likelihood of diabetes. We analyzed data from 23,546 participants aged 20 and older, who were non-pregnant, from the 2009-2018 National Health and Nutrition Examination Survey (NHANES). The dataset included key demographic (age, sex, ethnicity), physical (BMI, leg length, total cholesterol [TCHOL], fasting plasma glucose), and lifestyle indicators (smoking habits). A weighted sample was used to account for NHANES survey design features such as stratification and clustering. A classification machine learning model was trained to predict diabetes status. The target variable was binary (diabetes or non-diabetes) based on fasting plasma glucose measurements. The following models were evaluated: Logistic Regression (baseline), Random Forest Classifier, Gradient Boosting Machine (GBM), and Support Vector Machine (SVM). Model performance was assessed using accuracy, F1-score, AUC-ROC, and precision-recall metrics. Feature importance was analyzed using SHAP values to interpret the contributions of variables such as age, BMI, ethnicity, and smoking status. The Gradient Boosting Machine (GBM) model outperformed the other classifiers with an AUC-ROC score of 0.85. Feature importance analysis revealed the following key predictors: age, the most significant predictor, with diabetes prevalence increasing with age and peaking around the 60s for males and the 70s for females; BMI, with higher BMI strongly associated with a higher risk of diabetes; ethnicity, with Black participants showing the highest predicted prevalence of diabetes (14.6%), followed by Mexican-Americans (13.5%) and Whites (10.6%); TCHOL, with diabetics having lower total cholesterol levels, particularly among White participants (mean decline of 23.6 mg/dL); and smoking, which showed a slight increase in diabetes risk among Whites (0.2%) but had a limited effect in other ethnic groups. Using machine learning models, we identified key demographic, physical, and lifestyle predictors of diabetes in the U.S. population. The results confirm that diabetes prevalence varies significantly across age, BMI, and ethnic groups, with lifestyle factors such as smoking contributing differently by ethnicity. These findings provide a basis for more targeted public health interventions and resource allocation for diabetes management.
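
A minimal sketch of the modelling step, under stated assumptions: scikit-learn's GradientBoostingClassifier stands in for the GBM, the column names are illustrative rather than the actual NHANES variable codes, categorical variables are assumed to be numerically encoded, and the survey weight is applied only as a per-sample training weight.

```python
# Gradient boosting on tabular NHANES-style features with AUC-ROC evaluation
# and SHAP feature attribution. File and column names are assumptions.
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

features = ["age", "sex", "ethnicity", "bmi", "leg_length", "tchol", "smoking"]
df = pd.read_csv("nhanes_2009_2018.csv")          # assumed pre-built analysis file

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["diabetes"], test_size=0.2, stratify=df["diabetes"], random_state=0
)

gbm = GradientBoostingClassifier(random_state=0)
gbm.fit(X_train, y_train, sample_weight=df.loc[X_train.index, "survey_weight"])

auc = roc_auc_score(y_test, gbm.predict_proba(X_test)[:, 1])
print(f"AUC-ROC: {auc:.2f}")

# SHAP values attribute each feature's contribution to the predicted risk.
explainer = shap.TreeExplainer(gbm)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)
```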

Keywords: diabetes, NHANES, random forest, gradient boosting machine, support vector machine

Procedia PDF Downloads 8
269 Construction and Validation of Allied Bank-Teller Aptitude Test

Authors: Muhammad Kashif Fida

Abstract:

In a bank, the teller's job (cash officer) is highly important and critical, as at one end it requires soft and brisk customer service and, on the other, handling cash with integrity. It is always challenging for recruiters to hire competent and trustworthy tellers. To the author's knowledge, there is no comprehensive test available that provides assistance in recruitment in Pakistan, so there is a dire need for a psychometric battery that could support the recruitment of potential candidates for the teller's position. The aim of the present study was therefore to construct the ABL-Teller Aptitude Test (ABL-TApT). Three major phases were designed following the American Psychological Association's guidelines. The first phase was qualitative; indicators of the test were explored by content analysis of a) tellers' job descriptions (n=3), b) interviews with senior tellers (n=6) and c) interviews with HR personnel (n=4). This content analysis yielded three broader constructs: i) personality, ii) integrity/honesty, iii) professional work aptitude. The identified indicators were operationalized, and statements (k=170) were generated using verbatim material. These were then forwarded to five experts for review of content validity, who finalized 156 items. In the second phase, the ABL-TApT (k=156) was administered to 323 participants through a computer application. The overall reliability of the test shows a significant alpha coefficient (α=.81), and the subscales also show significant alpha coefficients. Confirmatory Factor Analysis (CFA), performed to estimate construct validity, confirms four main factors comprising eight personality traits (confidence, organized, compliance, goal-oriented, persistent, forecasting, patience, caution), one integrity/honesty factor, four factors of professional work aptitude (basic numerical ability and perceptual accuracy of letters, numbers and signatures) and two factors for customer services (customer service, emotional maturity). Values of GFI, AGFI, NNFI, CFI, RFI and RMSEA are in the recommended range, indicating good model fit. In the third phase, concurrent validity evidence was pursued. The personality and integrity parts of this scale have significant correlations with the 'conscientiousness' factor of the NEO-PI-R, reflecting strong concurrent validity. Customer services and emotional maturity have significant correlations with the Bar-On EQ-i, providing further evidence of strong concurrent validity. It is concluded that the ABL-TApT is a reliable and valid battery of tests that will assist in the objective recruitment of tellers and help recruiters find more suitable human resources.
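
A minimal sketch of the reliability computation behind the alpha coefficients reported above, using the standard Cronbach's alpha formula. The item-response matrix below is a random placeholder, not the study's data.

```python
# Cronbach's alpha for an item-response matrix (rows = respondents, columns = items),
# as used for the overall (alpha = .81) and subscale reliabilities.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(323, 156)).astype(float)  # 323 respondents, 156 items (placeholder)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```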

Keywords: concurrent validity, construct validity, content validity, reliability, teller aptitude test, objective recruitment

Procedia PDF Downloads 226
268 A First-Principles Investigation of Magnesium-Hydrogen System: From Bulk to Nano

Authors: Paramita Banerjee, K. R. S. Chandrakumar, G. P. Das

Abstract:

Bulk MgH2 has drawn much attention for the purpose of hydrogen storage because of its high hydrogen storage capacity (~7.7 wt %) as well as low cost and abundant availability. However, its practical usage has been hindered because of its high hydrogen desorption enthalpy (~0.8 eV/H2 molecule), which results in an undesirable desorption temperature of 300 °C at 1 bar H2 pressure. To surmount the limitations of bulk MgH2 for the purpose of hydrogen storage, a detailed first-principles density functional theory (DFT) based study on the structure and stability of neutral (Mgm) and positively charged (Mgm+) Mg nanoclusters of different sizes (m = 2, 4, 8 and 12), as well as their interaction with molecular hydrogen (H2), is reported here. It has been found that, due to the absence of d-electrons within the Mg atoms, hydrogen remains in molecular form even after its interaction with neutral and charged Mg nanoclusters. Interestingly, the H2 molecules do not enter into the interstitial positions of the nanoclusters. Rather, they remain on the surface by ornamenting these nanoclusters and forming new structures with a gravimetric density higher than 15 wt %. Our observation is that the inclusion of Grimme’s DFT-D3 dispersion correction in this weakly interacting system has a significant effect on the binding of the H2 molecules with these nanoclusters. The dispersion-corrected interaction energy (IE) values (0.1-0.14 eV/H2 molecule) fall in the right energy window that is ideal for hydrogen storage. These IE values are further verified by using high-level coupled-cluster calculations with non-iterative triples corrections, i.e., CCSD(T) (which is considered a highly accurate quantum chemical method), thereby confirming the accuracy of our dispersion-correction-incorporated DFT calculations. The significance of the polarization and dispersion energy in the binding of the H2 molecules is confirmed by performing energy decomposition analysis (EDA). A total of 16, 24, 32 and 36 H2 molecules can be attached to the neutral and charged nanoclusters of size m = 2, 4, 8 and 12, respectively. Ab initio molecular dynamics (AIMD) simulation shows that the outermost H2 molecules are desorbed at a rather low temperature, viz. 150 K (-123 °C), which is expected. However, complete dehydrogenation of these nanoclusters occurs at around 100 °C. Most importantly, the host nanoclusters remain stable up to ~500 K (227 °C). All these results on the adsorption and desorption of molecular hydrogen with neutral and charged Mg nanocluster systems indicate the possibility of reducing the dehydrogenation temperature of bulk MgH2 by designing new Mg-based nanomaterials which will be able to adsorb molecular hydrogen via this weak Mg-H2 interaction, rather than the strong Mg-H bonding. Notwithstanding the fact that in practical applications these interactions will be further complicated by the effect of substrates as well as interactions with other clusters, the present study has implications for our fundamental understanding of this problem.
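
A small worked illustration of the interaction energy per H2 quoted above, following the usual definition IE = [E(cluster) + n·E(H2) - E(cluster + n·H2)] / n. The total energies below are made-up placeholders; in the study they come from dispersion-corrected DFT (and CCSD(T) for verification).

```python
# Interaction energy per adsorbed H2 molecule from total energies (placeholder values in eV).
def interaction_energy_per_h2(e_cluster: float, e_h2: float, e_complex: float, n_h2: int) -> float:
    return (e_cluster + n_h2 * e_h2 - e_complex) / n_h2

# Example with illustrative energies for a small cluster decorated by 24 H2 molecules:
ie = interaction_energy_per_h2(e_cluster=-50.00, e_h2=-6.77, e_complex=-215.33, n_h2=24)
print(f"IE = {ie:.3f} eV per H2")   # ~0.12 eV/H2, i.e., within the 0.1-0.14 eV window cited
```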

Keywords: density functional theory, DFT, hydrogen storage, molecular dynamics, molecular hydrogen adsorption, nanoclusters, physisorption

Procedia PDF Downloads 415
267 Medial Temporal Tau Predicts Memory Decline in Cognitively Unimpaired Elderly

Authors: Angela T. H. Kwan, Saman Arfaie, Joseph Therriault, Zahra Azizi, Firoza Z. Lussier, Cecile Tissot, Mira Chamoun, Gleb Bezgin, Stijn Servaes, Jenna Stevenon, Nesrine Rahmouni, Vanessa Pallen, Serge Gauthier, Pedro Rosa-Neto

Abstract:

Alzheimer’s disease (AD) can be detected in living people using in vivo biomarkers of amyloid-β (Aβ) and tau, even in the absence of cognitive impairment during the preclinical phase. [¹⁸F]-MK-6240 is a high-affinity positron emission tomography (PET) tracer that quantifies tau neurofibrillary tangles, but its ability to predict cognitive changes associated with early AD symptoms, such as memory decline, is unclear. Here, we assess the prognostic accuracy of baseline [¹⁸F]-MK-6240 tau PET for predicting longitudinal memory decline in asymptomatic elderly individuals. In a longitudinal observational study, we evaluated a cohort of cognitively normal elderly participants (n = 111) from the Translational Biomarkers in Aging and Dementia (TRIAD) study (data collected between October 2017 and July 2020, with a follow-up period of 12 months). All participants underwent tau PET with [¹⁸F]-MK-6240 and Aβ PET with [¹⁸F]-AZD-4694. The exclusion criteria included the presence of head trauma, stroke, or other neurological disorders. There were 111 eligible participants who were chosen based on the availability of Aβ PET, tau PET, magnetic resonance imaging (MRI), and APOEε4 genotyping. Among these participants, the mean (SD) age was 70.1 (8.6) years; 20 (18%) were tau PET positive, and 71 of 111 (63.9%) were women. A significant association between baseline Braak I-II [¹⁸F]-MK-6240 SUVR positivity and change in composite memory score was observed at the 12-month follow-up, after correcting for age, sex, and years of education (Logical Memory and RAVLT, standardized beta = -0.52 (-0.82 to -0.21), p < 0.001, for dichotomized tau PET and -1.22 (-1.84 to -0.61), p < 0.0001, for continuous tau PET). Moderate cognitive decline was observed for A+T+ over the follow-up period, whereas no significant change was observed for A-T+, A+T-, and A-T-, though it should be noted that the A-T+ group was small. Our results indicate that baseline tau neurofibrillary tangle pathology is associated with longitudinal changes in memory function, supporting the use of [¹⁸F]-MK-6240 PET to predict the likelihood of asymptomatic elderly individuals experiencing future memory decline. Overall, [¹⁸F]-MK-6240 PET is a promising tool for predicting memory decline in older adults without cognitive impairment at baseline. This is of critical relevance as the field is shifting towards a biological model of AD defined by the aggregation of pathologic tau. Therefore, early detection of tau pathology using [¹⁸F]-MK-6240 PET provides us with the hope that living patients with AD may be diagnosed during the preclinical phase before it is too late.
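
A minimal sketch of the covariate-adjusted association reported above, assuming a statsmodels ordinary-least-squares model with hypothetical column names; the abstract reports standardized betas, which would additionally require z-scoring the continuous variables before fitting.

```python
# Change in composite memory score regressed on baseline Braak I-II tau-PET SUVR,
# adjusting for age, sex and years of education. File and column names are assumed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("triad_baseline_followup.csv")   # hypothetical analysis file

# Continuous tau-PET predictor
model_cont = smf.ols("memory_change ~ braak12_suvr + age + sex + education", data=df).fit()

# Dichotomised tau-PET positivity (0/1)
model_dich = smf.ols("memory_change ~ tau_positive + age + sex + education", data=df).fit()

print(model_cont.params["braak12_suvr"], model_cont.pvalues["braak12_suvr"])
print(model_dich.summary())
```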

Keywords: alzheimer’s disease, braak I-II, in vivo biomarkers, memory, PET, tau

Procedia PDF Downloads 76
266 Radar on Bike: Coarse Classification based on Multi-Level Clustering for Cyclist Safety Enhancement

Authors: Asma Omri, Noureddine Benothman, Sofiane Sayahi, Fethi Tlili, Hichem Besbes

Abstract:

Cycling, a popular mode of transportation, can also be perilous due to cyclists' vulnerability to collisions with vehicles and obstacles. This paper presents an innovative cyclist safety system based on radar technology designed to offer real-time collision risk warnings to cyclists. The system incorporates a low-power radar sensor affixed to the bicycle and connected to a microcontroller. It leverages radar point cloud detections, a clustering algorithm, and a supervised classifier. These algorithms are optimized for efficiency to run on the TI’s AWR 1843 BOOST radar, utilizing a coarse classification approach distinguishing between cars, trucks, two-wheeled vehicles, and other objects. To enhance the performance of clustering techniques, we propose a 2-Level clustering approach. This approach builds on the state-of-the-art Density-based spatial clustering of applications with noise (DBSCAN). The objective is to first cluster objects based on their velocity, then refine the analysis by clustering based on position. The initial level identifies groups of objects with similar velocities and movement patterns. The subsequent level refines the analysis by considering the spatial distribution of these objects. The clusters obtained from the first level serve as input for the second level of clustering. Our proposed technique surpasses the classical DBSCAN algorithm in terms of geometrical metrics, including homogeneity, completeness, and V-score. Relevant cluster features are extracted and utilized to classify objects using an SVM classifier. Potential obstacles are identified based on their velocity and proximity to the cyclist. To optimize the system, we used the View of Delft dataset for hyperparameter selection and SVM classifier training. The system's performance was assessed using our collected dataset of radar point clouds synchronized with a camera on an Nvidia Jetson Nano board. The radar-based cyclist safety system is a practical solution that can be easily installed on any bicycle and connected to smartphones or other devices, offering real-time feedback and navigation assistance to cyclists. We conducted experiments to validate the system's feasibility, achieving an impressive 85% accuracy in the classification task. This system has the potential to significantly reduce the number of accidents involving cyclists and enhance their safety on the road.
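
A minimal sketch of the 2-level clustering described above, assuming scikit-learn's DBSCAN, an (N, 3) radar point cloud laid out as [x, y, radial_velocity], and illustrative eps/min_samples values; the abstract does not give the actual parameterisation.

```python
# Level 1: DBSCAN on radial velocity; Level 2: DBSCAN on position within each velocity group.
import numpy as np
from sklearn.cluster import DBSCAN

def two_level_clustering(points: np.ndarray,
                         eps_v: float = 0.5, eps_xy: float = 1.0,
                         min_samples: int = 3) -> np.ndarray:
    labels = -np.ones(len(points), dtype=int)      # -1 marks noise
    next_label = 0

    # Level 1: group detections with similar radial velocity.
    vel_labels = DBSCAN(eps=eps_v, min_samples=min_samples).fit_predict(points[:, 2:3])

    # Level 2: within each velocity group, refine by spatial position.
    for v in set(vel_labels) - {-1}:
        idx = np.where(vel_labels == v)[0]
        xy_labels = DBSCAN(eps=eps_xy, min_samples=min_samples).fit_predict(points[idx, :2])
        for c in set(xy_labels) - {-1}:
            labels[idx[xy_labels == c]] = next_label
            next_label += 1
    return labels

# Example usage on a random point cloud:
# labels = two_level_clustering(np.random.rand(50, 3))
```

Cluster-level features (extent, mean velocity, point count, range to the cyclist) would then feed the SVM classifier mentioned above.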

Keywords: 2-level clustering, coarse classification, cyclist safety, warning system based on radar technology

Procedia PDF Downloads 80
265 Real-Time Working Environment Risk Analysis with Smart Textiles

Authors: Jose A. Diaz-Olivares, Nafise Mahdavian, Farhad Abtahi, Kaj Lindecrantz, Abdelakram Hafid, Fernando Seoane

Abstract:

Despite new recommendations and guidelines for the evaluation of occupational risk assessments and their prevention, work-related musculoskeletal disorders are still one of the biggest causes of work activity disruption, productivity loss, sick leave and chronic work disability. They affect millions of workers throughout Europe, with a large-scale economic and social burden. Efforts to date have failed to produce significant results, probably due to the limited availability and high costs of occupational risk assessment at work, especially when the methods are complex, consume excessive resources or depend on self-evaluations and observations of poor accuracy. To overcome these limitations, a pervasive system of real-time risk assessment tools has been developed, which has the characteristics of a systematic approach, with good precision, usability and resource efficiency, essential to facilitate the prevention of musculoskeletal disorders in the long term. The system allows the combination of different wearable sensors, placed on different limbs, to be used for data collection and evaluation by a software solution, according to the needs and requirements of each individual working environment. This is done in a non-disruptive manner for both the occupational health expert and the workers. This solution allows us to address different research activities that require, as an essential starting point, the recording of data with ergonomic value of very diverse origin, especially in real work environments. The software platform is presented here together with a complementary smart clothing system for data acquisition, comprised of a T-shirt containing inertial measurement units (IMUs), a vest sensorized with textile electronics, a wireless electrocardiogram (ECG) and thoracic electrical bio-impedance (TEB) recorder, and a glove sensorized with variable resistors dependent on the angular position of the wrist. The collected data are processed in real time through a mobile application software solution, implemented on commercially available Android-based smartphone and tablet platforms. Based on the collection of this information and its analysis, real-time risk assessment and feedback about postural improvement are possible, adapted to different contexts. The result is a tool which provides added value to ergonomists and occupational health agents, as in situ analysis of postural behavior can assist in a quantitative manner in the evaluation of work techniques and the occupational environment.

Keywords: ergonomics, mobile technologies, risk assessment, smart textiles

Procedia PDF Downloads 117
264 Data Refinement Enhances The Accuracy of Short-Term Traffic Latency Prediction

Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong

Abstract:

Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost, which yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance score in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than the one-step prediction of the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
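
A minimal sketch of the setup described above, under stated assumptions: the feature and file names are illustrative rather than the actual dataset fields, and XGBoost and scikit-learn's mutual information estimator stand in for the tools named in the abstract.

```python
# XGBoost regression of the 5-minute median latency, compared against the
# "median latency 15 minutes ago" baseline, plus mutual-information feature ranking.
import pandas as pd
from xgboost import XGBRegressor
from sklearn.feature_selection import mutual_info_regression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

df = pd.read_csv("freeway_segment_features.csv")   # assumed pre-aggregated 5-minute data
features = ["median_latency_lag5", "median_latency_lag15", "total_accumulation",
            "flux_seg1", "flux_seg2", "entrance_rate", "exit_rate", "weekday_profile"]
target = "median_latency_next5"

X_train, X_test, y_train, y_test = train_test_split(df[features], df[target], shuffle=False)

model = XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.05)
model.fit(X_train, y_train)

# Baseline: simply reuse the median latency observed 15 minutes earlier.
baseline_mse = mean_squared_error(y_test, X_test["median_latency_lag15"])
model_mse = mean_squared_error(y_test, model.predict(X_test))
print(f"baseline MSE {baseline_mse:.1f}  vs  XGBoost MSE {model_mse:.1f}")

# Rank inputs by how informative they are of the latency to be predicted.
mi = mutual_info_regression(df[features], df[target])
print(sorted(zip(features, mi), key=lambda t: -t[1]))
```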

Keywords: data refinement, machine learning, mutual information, short-term latency prediction

Procedia PDF Downloads 169
263 Storms Dynamics in the Black Sea in the Context of the Climate Changes

Authors: Eugen Rusu

Abstract:

The objective of the work proposed is to perform an analysis of the wave conditions in the Black Sea basin. This is especially focused on the spatial and temporal occurrences and on the dynamics of the most extreme storms in the context of climate change. A numerical modelling system, based on the spectral phase-averaged wave model SWAN, has been implemented and validated against both in situ measurements and remotely sensed data, all along the sea. Moreover, a successive correction method for the assimilation of the satellite data has been associated with the wave modelling system. This is based on the optimal interpolation of the satellite data. Previous studies show that the process of data assimilation improves considerably the reliability of the results provided by the modelling system. This especially concerns the cases that are most sensitive from the point of view of wave prediction accuracy, such as extreme storm situations. Following this numerical approach, it has to be highlighted that the results provided by the wave modelling system described above are generally in line with those provided by similar wave prediction systems implemented in enclosed or semi-enclosed sea basins. Simulations of this wave modelling system with data assimilation have been performed for the 30-year period 1987-2016. Considering this database, the next step was to analyze the intensity and the dynamics of the strongest storms encountered in this period. According to the data resulting from the model simulations, the western side of the sea is considerably more energetic than the rest of the basin. In this western region, regular strong storms usually produce significant wave heights greater than 8 m. This may lead to maximum wave heights even greater than 15 m. Such regular strong storms may occur several times in one year, usually in the wintertime or in late autumn, and it can be noticed that their frequency has become higher in the last decade. As regards the most extreme storms, significant wave heights greater than 10 m and maximum wave heights close to 20 m (and even greater) may occur. Such extreme storms, which in the past were noticed only once in four or five years, have more recently been faced almost every year in the Black Sea, and this seems to be a consequence of climate change. The analysis performed also included the dynamics of the monthly and annual significant wave height maxima as well as the identification of the most probable spatial and temporal occurrences of the extreme storm events. Finally, it can be concluded that the present work provides valuable information related to the characteristics of the storm conditions and their dynamics in the Black Sea. This environment is currently subjected to high navigation traffic and intense offshore and nearshore activities, and the strong storms that systematically occur may produce accidents with very serious consequences.
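
An illustrative sketch of a single successive-correction assimilation step, under stated assumptions: the abstract only states that a successive correction method based on optimal interpolation of satellite data is used, so the Cressman-type distance weighting and influence radius R below are assumptions chosen purely for illustration.

```python
# One successive-correction pass: adjust the model background wave field towards
# satellite observations using distance-based (Cressman-type) weights.
import numpy as np

def successive_correction(background: np.ndarray, grid_xy: np.ndarray,
                          obs: np.ndarray, obs_xy: np.ndarray, R: float) -> np.ndarray:
    analysis = background.copy()
    innovations = obs - _sample_background(background, grid_xy, obs_xy)  # obs minus background
    for i, x in enumerate(grid_xy):
        d2 = np.sum((obs_xy - x) ** 2, axis=1)
        w = np.where(d2 < R**2, (R**2 - d2) / (R**2 + d2), 0.0)          # Cressman weights (assumed)
        if w.sum() > 0:
            analysis[i] += np.sum(w * innovations) / w.sum()
    return analysis

def _sample_background(field, grid_xy, obs_xy):
    # Nearest-grid-point sampling of the background at observation locations.
    idx = np.argmin(((grid_xy[None, :, :] - obs_xy[:, None, :]) ** 2).sum(-1), axis=1)
    return field[idx]
```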

Keywords: Black Sea, extreme storms, SWAN simulations, waves

Procedia PDF Downloads 248
262 The Challenge of Assessing Social AI Threats

Authors: Kitty Kioskli, Theofanis Fotis, Nineta Polemi

Abstract:

The European Union (EU) Artificial Intelligence (AI) Act requires in Article 9 that risk management of AI systems include both technical and human oversight, while NIST_AI_RFM (Appendix C) and the ENISA AI Framework recommendations claim that further research is needed to understand the current limitations of social threats and human-AI interaction. AI threats within social contexts significantly affect the security and trustworthiness of AI systems; they are interrelated and trigger technical threats as well. For example, a lack of explainability (e.g., the complexity of models can be challenging for stakeholders to grasp) leads to misunderstandings, biases, and erroneous decisions, which in turn impact the privacy, security, and accountability of the AI systems. Based on the four fundamental NIST criteria for explainability, explainability threats can be classified into four (4) sub-categories: a) Lack of supporting evidence: AI systems must provide supporting evidence or reasons for all their outputs. b) Lack of understandability: explanations offered by systems should be comprehensible to individual users. c) Lack of accuracy: the provided explanation should accurately represent the system's process of generating outputs. d) Out of scope: the system should only function within its designated conditions or when it possesses sufficient confidence in its outputs. Biases may also stem from historical data reflecting undesired behaviors. When present in the data, biases can permeate the models trained on them, thereby influencing the security and trustworthiness of the AI systems. Social-related AI threats are recognized by various initiatives (e.g., the EU Ethics Guidelines for Trustworthy AI), standards (e.g., ISO/IEC TR 24368:2022 on AI ethical concerns, ISO/IEC AWI 42105 on guidance for human oversight of AI systems) and EU legislation (e.g., the General Data Protection Regulation 2016/679, the NIS 2 Directive 2022/2555, the Directive on the Resilience of Critical Entities 2022/2557, the EU AI Act, the Cyber Resilience Act). Measuring social threats, estimating the risks they pose to AI systems and mitigating them is a research challenge. This paper presents the efforts of two European Commission projects (FAITH and THEMIS) from the Horizon Europe programme that analyse social threats by building cyber-social exercises in order to study human behaviour, traits, cognitive ability, personality, attitudes, interests, and other socio-technical profile characteristics. The research in these projects also includes the development of measurements and scales (psychometrics) for human-related vulnerabilities that can be used to estimate vulnerability severity more realistically, enhancing the CVSS 4.0 measurement.

Keywords: social threats, artificial Intelligence, mitigation, social experiment

Procedia PDF Downloads 65
261 Investigating Early Markers of Alzheimer’s Disease Using a Combination of Cognitive Tests and MRI to Probe Changes in Hippocampal Anatomy and Functionality

Authors: Netasha Shaikh, Bryony Wood, Demitra Tsivos, Michael Knight, Risto Kauppinen, Elizabeth Coulthard

Abstract:

Background: Effective treatment of dementia will require early diagnosis, before significant brain damage has accumulated. Memory loss is an early symptom of Alzheimer’s disease (AD). The hippocampus, a brain area critical for memory, degenerates early in the course of AD. The hippocampus comprises several subfields. In contrast to healthy aging, where CA3 and the dentate gyrus are the hippocampal subfields with the most prominent atrophy, in AD the CA1 and subiculum are thought to be affected early. Conventional clinical structural neuroimaging is not sufficiently sensitive to identify preferential atrophy in individual subfields. Here, we will explore the sensitivity of new magnetic resonance imaging (MRI) sequences designed to interrogate medial temporal regions as an early marker of Alzheimer’s. As it is likely that a combination of tests may predict early Alzheimer’s disease (AD) better than any single test, we look at the potential efficacy of such imaging alone and in combination with standard and novel cognitive tasks of hippocampal-dependent memory. Methods: 20 patients with mild cognitive impairment (MCI), 20 with mild-moderate AD and 20 age-matched healthy elderly controls (HC) are being recruited to undergo 3T MRI (with sequences designed to allow volumetric analysis of hippocampal subfields) and a battery of cognitive tasks (including Paired Associative Learning from CANTAB, the Hopkins Verbal Learning Test and a novel hippocampal-dependent abstract word memory task). AD participants and healthy controls are being tested just once, whereas patients with MCI will be tested twice, a year apart. We will compare subfield size between groups and correlate subfield size with cognitive performance on our tasks. In the MCI group, we will explore the relationship between subfield volume, cognitive test performance and deterioration in clinical condition over a year. Results: Preliminary data (currently on 16 participants: 2 AD; 4 MCI; 9 HC) have revealed subfield size differences between subject groups. Patients with AD perform with less accuracy on tasks of hippocampal-dependent memory, and MCI patient performance and reaction times also differ from those of healthy controls. With further testing, we hope to delineate how subfield-specific atrophy corresponds with changes in cognitive function, and characterise how this progresses over the time course of the disease. Conclusion: Novel sequences on an MRI scanner, such as those en route to clinical use, can be used to delineate hippocampal subfields in patients with and without dementia. Preliminary data suggest that such subfield analysis, perhaps in combination with cognitive tasks, may be an early marker of AD.

Keywords: Alzheimer's disease, dementia, memory, cognition, hippocampus

Procedia PDF Downloads 573
260 An Investigation on Opportunities and Obstacles on Implementation of Building Information Modelling for Pre-fabrication in Small and Medium Sized Construction Companies in Germany: A Practical Approach

Authors: Nijanthan Mohan, Rolf Gross, Fabian Theis

Abstract:

The conventional method used in the construction industry has often resulted in significant rework, since most decisions are taken on site under the pressure of project deadlines and with improper information flow, which results in ineffective coordination. However, today’s architecture, engineering, and construction (AEC) stakeholders demand faster and more accurate deliverables, efficient buildings, and smart processes, which turns out to be a tall order. Hence, the building information modelling (BIM) concept was developed as a solution to fulfill the above-mentioned necessities. Even though BIM is successfully implemented in much of the world, it is still in the early stages in Germany, since stakeholders are sceptical of its reliability and efficiency. Due to the huge capital requirement, small and medium-sized construction companies are still reluctant to implement BIM workflows in their projects. The purpose of this paper is to analyse the opportunities and obstacles to implementing BIM for prefabrication. Among all the advantages of BIM, prefabrication is chosen for this paper because it has a major impact on both the time and cost factors of a construction project. The positive impact of prefabrication can be explicitly observed by the project stakeholders and participants, which helps overcome the scepticism among small-scale construction companies. The analysis consists of the development of a process workflow for implementing prefabrication in building construction, followed by a practical approach, which was executed with two case studies. The first case study represents on-site prefabrication, and the second was done for off-site prefabrication. The first case study was planned to give workers at the site first-hand experience with the BIM model, so that they could make full use of the created model, which is a better representation than the traditional 2D plan. The main aim of the first case study was to build confidence in the implementation of BIM models, and it was followed by the execution of off-site prefabrication in the second case study. Based on the case studies, a cost and time analysis was made, and it is inferred that the implementation of BIM for prefabrication can reduce construction time and ensure minimal or no waste, better accuracy, and less problem-solving at the construction site. It is also observed that this process requires more planning time and better communication and coordination between different disciplines such as mechanical, electrical, plumbing, architecture, etc., which is the major obstacle to successful implementation. This work was carried out from the perspective of small and medium-sized mechanical contracting companies in the private building sector in Germany.

Keywords: building information modelling, construction wastes, pre-fabrication, small and medium sized company

Procedia PDF Downloads 113
259 Covid Medical Imaging Trial: Utilising Artificial Intelligence to Identify Changes on Chest X-Ray of COVID

Authors: Leonard Tiong, Sonit Singh, Kevin Ho Shon, Sarah Lewis

Abstract:

Investigation into the use of artificial intelligence in radiology continues to develop at a rapid rate. During the coronavirus pandemic, the combination of an exponential increase in chest x-rays and unpredictable staff shortages resulted in a huge strain on the department's workload. There is a World Health Organisation estimate that two-thirds of the global population does not have access to diagnostic radiology. Therefore, there could be demand for a program that could detect acute changes in imaging compatible with infection to assist with screening. We generated a convolutional neural network and tested its efficacy in recognizing changes compatible with coronavirus infection. Following ethics approval, a deidentified set of 77 normal and 77 abnormal chest x-rays in patients with confirmed coronavirus infection was used to train, validate and then test the algorithm. DICOM and PNG image formats were selected because they are lossless file formats. The model was trained with 100 images (50 positive, 50 negative), validated against 28 samples (14 positive, 14 negative), and tested against 26 samples (13 positive, 13 negative). The initial training of the model involved teaching the convolutional neural network what constituted a normal study and what changes on the x-rays were compatible with coronavirus infection. The weightings were then modified, and the model was executed again. The training samples were in batch sizes of 8 and underwent 25 epochs of training. The results trended towards an 85.71% true positive/true negative detection rate and an area under the curve trending towards 0.95, indicating approximately 95% accuracy in detecting changes on chest x-rays compatible with coronavirus infection. Study limitations include access to only a small dataset and no specificity in the diagnosis. Following a discussion with our programmer, there are areas where modifications in the weighting of the algorithm can be made in order to improve the detection rates. Given the high detection rate of the program, and the potential ease of implementation, this would be effective in assisting staff who are not trained in radiology in detecting otherwise subtle changes that might not be appreciated on imaging. Limitations include the lack of a differential diagnosis and application of the appropriate clinical history, although this may be less of a problem in day-to-day clinical practice. It is nonetheless our belief that implementing this program and widening its scope to detecting multiple pathologies such as lung masses will greatly assist both the radiology department and our colleagues in improving workflow and detection rates.
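
A minimal sketch of a binary chest-x-ray classifier with the training settings stated above (batch size 8, 25 epochs). The abstract does not disclose the actual architecture, so the layer stack, image size, and dataset layout below are assumptions for illustration only.

```python
# Small convolutional network for normal-vs-COVID chest x-ray classification (Keras).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(256, 256, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of COVID-compatible changes
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])

# train_ds / val_ds are assumed tf.data pipelines (batch size 8) built from the
# de-identified PNG exports, e.g. with tf.keras.utils.image_dataset_from_directory.
# model.fit(train_ds, validation_data=val_ds, epochs=25)
```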

Keywords: artificial intelligence, COVID, neural network, machine learning

Procedia PDF Downloads 93
258 Cross-Validation of the Data Obtained for ω-6 Linoleic and ω-3 α-Linolenic Acids Concentration of Hemp Oil Using Jackknife and Bootstrap Resampling

Authors: Vibha Devi, Shabina Khanam

Abstract:

Hemp (Cannabis sativa) possesses a rich content of ω-6 linoleic and ω-3 α-linolenic essential fatty acids in a ratio of 3:1, a rare and highly desired ratio that enhances the quality of hemp oil. These components are beneficial for cell development and body growth, strengthen the immune system, possess anti-inflammatory action, lower the risk of heart problems owing to their anti-clotting property, and serve as a remedy for arthritis and various disorders. The present study employs a supercritical fluid extraction (SFE) approach on hemp seed at various parameter conditions: temperature (40-80) °C, pressure (200-350) bar, flow rate (5-15) g/min, particle size (0.430-1.015) mm and amount of co-solvent (0-10) % of solvent flow rate, arranged through a central composite design (CCD). The CCD suggested 32 sets of experiments, which were carried out. As the SFE process includes a large number of variables, the present study recommends the application of resampling techniques for cross-validation of the obtained data. Cross-validation refits the model on each resampled dataset to obtain information regarding the error, variability, deviation, etc. Bootstrap and jackknife are the most popular resampling techniques, which create a large number of datasets through resampling from the original dataset and analyze them to check the validity of the obtained data. Jackknife resampling is based on eliminating one observation at a time from the original sample of size N, without replacement. For jackknife resampling, the sample size is 31 (eliminating one observation), which is repeated 32 times. Bootstrap is a frequently used statistical approach for estimating the sampling distribution of an estimator by resampling with replacement from the original sample. For bootstrap resampling, the sample size is 32, which was repeated 100 times. The estimands considered for these resampling techniques are the mean, standard deviation, coefficient of variation and standard error of the mean. For the ω-6 linoleic acid concentration, the mean value was approx. 58.5 for both resampling methods, which is the average (central value) of the sample means of all data points. Similarly, for the ω-3 α-linolenic acid concentration, the mean was observed as 22.5 through both resampling methods. Variance reflects the spread of the data around its mean. A greater variance indicates a larger range of output data, which is 18 for ω-6 linoleic acid (ranging from 48.85 to 63.66%) and 6 for ω-3 α-linolenic acid (ranging from 16.71 to 26.2%). Further, the low standard deviation (approx. 1%), low standard error of the mean (< 0.8) and low coefficient of variation (< 0.2) reflect the accuracy of the sample for prediction. All the estimator values of the coefficient of variation, standard deviation and standard error of the mean are found within the 95% confidence interval.
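
A minimal sketch of the two resampling schemes applied to the 32 CCD measurements of one fatty-acid concentration. The data array is a random placeholder, and the summary statistics shown are the simple descriptive versions of the estimands named above, not bias-corrected jackknife variance estimators.

```python
# Jackknife (leave-one-out, 32 samples of size 31) and bootstrap (100 resamples
# of size 32 with replacement) of a concentration measurement series.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=58.5, scale=1.0, size=32)    # placeholder values, e.g. omega-6 linoleic %

# Jackknife: eliminate one observation at a time, without replacement.
jack_means = np.array([np.delete(data, i).mean() for i in range(len(data))])

# Bootstrap: resample 32 values with replacement, repeated 100 times.
boot_means = np.array([rng.choice(data, size=len(data), replace=True).mean()
                       for _ in range(100)])

for name, means in [("jackknife", jack_means), ("bootstrap", boot_means)]:
    print(name,
          "mean", round(means.mean(), 2),
          "sd", round(means.std(ddof=1), 3),
          "se", round(means.std(ddof=1) / np.sqrt(len(means)), 3),
          "cv", round(means.std(ddof=1) / means.mean(), 4))
```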

Keywords: resampling, supercritical fluid extraction, hemp oil, cross-validation

Procedia PDF Downloads 141
257 Explore and Reduce the Performance Gap between Building Modelling Simulations and the Real World: Case Study

Authors: B. Salehi, D. Andrews, I. Chaer, A. Gillich, A. Chalk, D. Bush

Abstract:

With the rapid increase of energy consumption in buildings in recent years, especially with the rise in population and growing economies, the importance of energy savings in buildings becomes more critical. One of the key factors in ensuring energy consumption is controlled and kept at a minimum is to utilise building energy modelling at the very early stages of the design. So, building modelling and simulation is a growing discipline. During the design phase of construction, modelling software can be used to estimate a building’s projected energy consumption, as well as building performance. The growth in the use of building modelling software packages opens the door for improvements in the design and also in the modelling itself by introducing novel methods, such as building information modelling-based software packages which promote conventional building energy modelling into the digital building design process. To understand the most effective implementation tools, research projects undertaken should include elements of real-world experiments and not just rely on theoretical and simulated approaches. Upon review of the related studies undertaken, it is evident that they are mostly based on modelling and simulation, which can be due to various reasons such as the more expensive and time-consuming nature of real-time data-based studies. Taking into account the recent rise of building energy modelling software packages and the increasing number of studies utilising these methods in their projects and research, the accuracy and reliability of these modelling software packages have become even more crucial and critical. The 'energy performance gap' refers to the discrepancy between the predicted energy savings and the realised actual savings, especially after buildings implement energy-efficient technologies. There are many different software packages available which are either free or have commercial versions. In this study, IES VE (Integrated Environmental Solutions Virtual Environment) is used, as it is a commonly used building energy modelling and simulation software package in the UK. This paper describes a study that compares real-time results with those of a virtual model to illustrate this gap. The subject of the study is a north-west (345°) facing, naturally ventilated conservatory within a domestic building in London, which is monitored during summer to capture real-time data. These results are then compared to the virtual results of IES VE. In this project, the effect of incorrect blind positioning on overheating is studied, as well as providing new evidence of the performance gap. Furthermore, the challenges of drawing the input of solar shading products in IES VE will be considered.

Keywords: building energy modelling and simulation, integrated environmental solutions virtual environment, IES VE, performance gap, real time data, solar shading products

Procedia PDF Downloads 139
256 Erosion Modeling of Surface Water Systems for Long Term Simulations

Authors: Devika Nair, Sean Bellairs, Ken Evans

Abstract:

Flow and erosion modeling provides an avenue for simulating the fine suspended sediment in surface water systems like streams and creeks. Fine suspended sediment is highly mobile, and many contaminants that may have been released by any sort of catchment disturbance attach themselves to these sediments. Therefore, a knowledge of fine suspended sediment transport is important in assessing contaminant transport. The CAESAR-Lisflood Landform Evolution Model, which includes a hydrologic model (TOPMODEL) and a hydraulic model (Lisflood), is being used to assess the sediment movement in tropical streams on account of a disturbance in the catchment of the creek and to determine the dynamics of sediment quantity in the creek through the years by simulating the model for future years. The accuracy of future simulations depends on the calibration and validation of the model against past and present events. Calibration and validation of the model involve finding a combination of model parameters which, when applied and simulated, gives model outputs similar to those observed for the real site scenario for the corresponding input data. Calibrating the sediment output of the CAESAR-Lisflood model at the catchment level and using it for studying the equilibrium conditions of the landform is an area yet to be explored. Therefore, the aim of the study was to calibrate the CAESAR-Lisflood model and then validate it so that it could be run for future simulations to study how the landform evolves over time. To achieve this, the model was run for a rainfall event with a set of parameters, plus discharge and sediment data for the input point of the catchment, to analyze how closely the model output matched the observed discharge and sediment data at the output point of the catchment. The model parameters were then adjusted until the model closely approximated the real site values of the catchment. It was then validated by running the model for a different set of events and checking that the model gave similar results to the real site values. The outcomes demonstrated that while the model can be calibrated to a greater extent for hydrology (discharge output) throughout the year, the sediment output calibration may be slightly improved by having the ability to change parameters to take into account the seasonal vegetation growth during the start and end of the wet season. This study is important for assessing hydrology and sediment movement in seasonal biomes. The understanding of sediment-associated metal dispersion processes in rivers can be used in a practical way to help river basin managers more effectively control and remediate catchments affected by present and historical metal mining.
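
A generic sketch of the calibration loop described above, under stated assumptions: CAESAR-Lisflood is driven externally, so `run_caesar_lisflood` is a hypothetical wrapper around the model executable, and the candidate parameter values are purely illustrative, not those used in the study.

```python
# Sweep candidate parameter sets, run the model for a gauged rainfall event, and
# keep the set whose simulated outlet sediment best matches the observations.
import itertools
import numpy as np

def run_caesar_lisflood(event: str, m: float, erosion_rate: float) -> dict:
    """Hypothetical wrapper: write a parameter file, run the model executable for
    the event, and read back the outlet discharge/sediment time series."""
    raise NotImplementedError

def rmse(sim: np.ndarray, obs: np.ndarray) -> float:
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def calibrate(observed_sediment: np.ndarray, rainfall_event: str) -> dict:
    m_values = [0.005, 0.01, 0.02]          # TOPMODEL m parameter candidates (illustrative)
    erosion_rates = [1e-5, 5e-5, 1e-4]      # sediment transport coefficient candidates (illustrative)
    best = {"error": np.inf}
    for m, k in itertools.product(m_values, erosion_rates):
        sim = run_caesar_lisflood(rainfall_event, m=m, erosion_rate=k)
        err = rmse(sim["sediment_out"], observed_sediment)
        if err < best["error"]:
            best = {"m": m, "erosion_rate": k, "error": err}
    return best
```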

Keywords: erosion modelling, fine suspended sediments, hydrology, surface water systems

Procedia PDF Downloads 84
255 Development of an Appropriate Method for the Determination of Multiple Mycotoxins in Pork Processing Products by UHPLC-TCFLD

Authors: Jason Gica, Yi-Hsieng Samuel Wu, Deng-Jye Yang, Yi-Chen Chen

Abstract:

Mycotoxins, harmful secondary metabolites produced by certain fungi species, pose significant risks to animals and humans worldwide. Their stable properties lead to contamination during grain harvesting, transportation, and storage, as well as in processed food products. The prevalence of mycotoxin contamination has attracted significant attention due to its adverse impact on food safety and global trade. The secondary contamination pathway from animal products has been identified as an important route of exposure, posing health risks for livestock and humans consuming contaminated products. Pork, one of the highly consumed meat products in Taiwan according to the National Food Consumption Database, plays a critical role in the nation's diet and economy. Given its substantial consumption, pork processing products are a significant component of the food supply chain and a potential source of mycotoxin contamination. This study is paramount for formulating effective regulations and strategies to mitigate mycotoxin-related risks in the food supply chain. By establishing a reliable analytical method, this research contributes to safeguarding public health and enhancing the quality of pork processing products. The findings will serve as valuable guidance for policymakers, food industries, and consumers to ensure a safer food supply chain in the face of emerging mycotoxin challenges. An innovative and efficient analytical approach is proposed using Ultra-High Performance Liquid Chromatography coupled with Temperature Control Fluorescence Detector Light (UHPLC-TCFLD) to determine multiple mycotoxins in pork meat samples due to its exceptional capacity to detect multiple mycotoxins at the lowest levels of concentration, making it highly sensitive and reliable for comprehensive mycotoxin analysis. Additionally, its ability to simultaneously detect multiple mycotoxins in a single run significantly reduces the time and resources required for analysis, making it a cost-effective solution for monitoring mycotoxin contamination in pork processing products. The research aims to optimize the efficient mycotoxin QuEChERs extraction method and rigorously validate its accuracy and precision. The results will provide crucial insights into mycotoxin levels in pork processing products.

Keywords: multiple-mycotoxin analysis, pork processing products, QuEChERs, UHPLC-TCFLD, validation

Procedia PDF Downloads 69
254 Using Soil Texture Field Observations as Ordinal Qualitative Variables for Digital Soil Mapping

Authors: Anne C. Richer-De-Forges, Dominique Arrouays, Songchao Chen, Mercedes Roman Dobarco

Abstract:

Most of the digital soil mapping (DSM) products rely on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTFs), in which calibration data come from soil analyses performed in labs. However, many other observations (often qualitative, nominal, or ordinal) could be used as proxies of lab measurements or as input data for ML or PTF predictions. DSM and ML are briefly described with some examples taken from the literature. Then, we explore the potential of an ordinal qualitative variable, i.e., hand-feel soil texture (HFST), which estimates the mineral particle-size distribution (PSD): % of clay (0-2 µm), silt (2-50 µm) and sand (50-2000 µm) in 15 classes. The PSD can also be measured by lab measurements (LAST) to determine the exact proportions of these particle sizes. However, due to cost constraints, HFST observations are much more numerous and spatially dense than LAST. Soil texture (ST) is a very important soil parameter to map, as it controls many soil properties and functions. Therefore, an essential question arises: is it possible to use HFST as a proxy of LAST for calibration and/or validation of DSM predictions of ST? To answer this question, the first step is to compare HFST with LAST on a representative set where both types of information are available. This comparison was made on ca. 17,400 samples representative of a French region (34,000 km²). The accuracy of HFST was assessed, and each HFST class was characterized by a probability distribution function (PDF) of its LAST values. This makes it possible to randomly replace HFST observations by LAST values while respecting the previously calculated PDF, and results in a very large increase in the observations available for the calibration or validation of PTF and ML predictions. Some preliminary results are shown. First, the comparison between HFST classes and LAST analyses showed that accuracies could be considered very good when compared to other studies. The causes of some inconsistencies were explored, and most of them were well explained by other soil characteristics. Then we show some examples applying these relationships and the increased data volume to several issues related to DSM. The first issue is: do the established PDFs enable the use of HFST class observations to improve LAST soil texture prediction? For this objective, we replaced all topsoil HFST observations by values drawn from the PDFs (100 replicates). Results were promising for the PTF we tested (a PTF predicting soil water holding capacity). For the question related to the ML prediction of LAST soil texture on the region, we did the same kind of replacement, but we implemented a 10-fold cross-validation using points where we had LAST values. We obtained only preliminary results, but they were rather promising. Then we show another example illustrating the potential of using HFST as validation data. As HFST observations are very numerous in many countries, these promising results pave the way to an important improvement of DSM products in all the countries of the world.
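
A minimal sketch of the replacement step described above, under stated assumptions: column and file names are hypothetical, and the per-class PDF is approximated by resampling the observed LAST values of that class rather than by a fitted parametric distribution.

```python
# For each hand-feel texture (HFST) class, pool the lab-measured (LAST) clay/silt/sand
# values from the paired samples, then replace each HFST-only observation by a random
# draw from its class pool; repeating this (e.g. 100 times) yields the replicates.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

paired = pd.read_csv("paired_hfst_last.csv")       # samples with both HFST class and LAST values (assumed)
hfst_only = pd.read_csv("hfst_only_sites.csv")     # dense observations with HFST class only (assumed)

# Empirical distribution per HFST class: the paired LAST rows belonging to that class.
class_pools = {c: g[["clay", "silt", "sand"]].to_numpy()
               for c, g in paired.groupby("hfst_class")}

def draw_replicate(sites: pd.DataFrame) -> pd.DataFrame:
    """One replicate: each HFST observation replaced by a sampled LAST triplet."""
    draws = [class_pools[c][rng.integers(len(class_pools[c]))] for c in sites["hfst_class"]]
    return pd.DataFrame(draws, columns=["clay", "silt", "sand"], index=sites.index)

replicates = [draw_replicate(hfst_only) for _ in range(100)]   # 100 replicates as in the study
```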

Keywords: digital soil mapping, improvement of digital soil mapping predictions, potential of using hand-feel soil texture, soil texture prediction

Procedia PDF Downloads 224
253 The Validation of RadCalc for Clinical Use: An Independent Monitor Unit Verification Software

Authors: Junior Akunzi

Abstract:

In patient treatment planning quality assurance for 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT or RapidArc), the independent monitor unit verification calculation (MUVC) is an indispensable part of the process. For 3D-CRT treatment planning, the MUVC can be performed manually by applying the standard ESTRO formalism. However, due to the complex shapes and the number of beams in advanced treatment planning techniques such as RapidArc, manual independent MUVC is inadequate. Therefore, commercially available software such as RadCalc can be used to perform the MUVC for complex treatment plans. Indeed, RadCalc (version 6.3, LifeLine Inc.) uses a simplified Clarkson algorithm to compute the dose contribution of individual RapidArc fields to the isocenter. The purpose of this project is the validation of RadCalc in 3D-CRT and RapidArc for treatment planning dosimetry quality assurance at the Centre Antoine Lacassagne (Nice, France). Firstly, the interfaces between RadCalc and our treatment planning systems (TPS), Isogray (version 4.2) and Eclipse (version 13.6), were checked for data transfer accuracy. Secondly, we created test plans in both Isogray and Eclipse featuring open fields, wedged fields, and irregular MLC fields. These test plans were transferred from the TPSs to RadCalc and to the linac via Mosaiq (version 2.5), according to the DICOM RT protocol. Measurements were performed in a water phantom using a PTW cylindrical semiflex ionisation chamber (0.3 cm³, 31010) and compared with the TPS and RadCalc calculations. Finally, 30 3D-CRT plans and 40 RapidArc plans created from patient CT scans were recalculated using the CT scan of a solid PMMA water-equivalent phantom for 3D-CRT and the Octavius II phantom (PTW) CT scan for RapidArc. Next, we measured the doses delivered to these phantoms for each plan with a 0.3 cm³ PTW 31010 cylindrical semiflex ionisation chamber (3D-CRT) and a 0.015 cm³ PTW PinPoint ionisation chamber (RapidArc). For our test plans, good agreements were found between calculation (RadCalc and TPSs) and measurement (mean: 1.3%; standard deviation: ± 0.8%). Regarding the patient plans, the measured doses were compared to the calculations in RadCalc and in our TPSs. Moreover, RadCalc calculations were compared to Isogray and Eclipse ones. Agreements better than (2.8%; ± 1.2%) were found between RadCalc and the TPSs. As for the comparison between calculation and measurement, the agreement for all of our plans was better than (2.3%; ± 1.1%). The independent MU verification calculation software RadCalc has been validated for clinical use and for both 3D-CRT and RapidArc techniques. A perspective of this project is the validation of RadCalc for the TomoTherapy machine installed at the Centre Antoine Lacassagne.
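
A minimal sketch of the agreement statistic reported above, under stated assumptions: the abstract does not give the exact formula, so the conventional percentage difference between calculated and measured dose, summarised as mean ± standard deviation, is used with placeholder dose values.

```python
# Percentage difference between calculated (TPS or RadCalc) and measured dose,
# summarised as mean +/- SD over a set of plans. Dose values are placeholders.
import numpy as np

def agreement(calculated: np.ndarray, measured: np.ndarray) -> tuple[float, float]:
    pct_diff = 100.0 * (calculated - measured) / measured
    return float(np.mean(pct_diff)), float(np.std(pct_diff, ddof=1))

measured = np.array([2.00, 1.98, 2.05, 1.95])   # Gy, illustrative chamber readings
radcalc = np.array([2.03, 2.01, 2.08, 1.99])    # Gy, illustrative RadCalc values
mean_diff, sd_diff = agreement(radcalc, measured)
print(f"{mean_diff:.1f}% ± {sd_diff:.1f}%")
```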

Keywords: 3D conformal radiotherapy, intensity modulated radiotherapy, monitor unit calculation, dosimetry quality assurance

Procedia PDF Downloads 216
252 Effects of Oxytocin on Neural Response to Facial Emotion Recognition in Schizophrenia

Authors: Avyarthana Dey, Naren P. Rao, Arpitha Jacob, Chaitra V. Hiremath, Shivarama Varambally, Ganesan Venkatasubramanian, Rose Dawn Bharath, Bangalore N. Gangadhar

Abstract:

Objective: Impaired facial emotion recognition is widely reported in schizophrenia. The neuropeptide oxytocin is known to modulate brain regions involved in facial emotion recognition, notably the amygdala, in healthy volunteers. However, its effect on the facial emotion recognition deficits seen in schizophrenia is not well explored. In this study, we examined the effect of intranasal OXT on the processing of facial emotions and its neural correlates in patients with schizophrenia. Method: 12 male patients (age = 31.08 ± 7.61 years, education = 14.50 ± 2.20 years) participated in this single-blind, counterbalanced functional magnetic resonance imaging (fMRI) study. All participants underwent three fMRI scans: one at baseline and one each after a single dose of 24 IU intranasal OXT and intranasal placebo. The order of administration of OXT and placebo was counterbalanced, and subjects were blind to the drug administered. Participants performed a facial emotion recognition task presented in a block design with six alternating blocks of faces and shapes. The faces depicted happy, angry, or fearful emotions. The images were preprocessed and analyzed using SPM 12. First-level contrasts comparing recognition of emotions and shapes were modelled at the individual subject level. A group-level analysis was performed using the contrasts generated at the first level to compare the effects of intranasal OXT and placebo. The results were thresholded at uncorrected p < 0.001 with a cluster size of 6 voxels. Results: Compared to placebo, intranasal OXT attenuated activity in the inferior temporal, fusiform, and parahippocampal gyri (BA 20), premotor cortex (BA 6), middle frontal gyrus (BA 10), and anterior cingulate gyrus (BA 24), and enhanced activity in the middle occipital gyrus (BA 18), inferior occipital gyrus (BA 19), and superior temporal gyrus (BA 22). There were no significant differences in emotion recognition accuracy between the baseline (77.3 ± 18.38), oxytocin (82.63 ± 10.92), and placebo (76.62 ± 22.67) conditions. Conclusion: Our results provide further evidence of the modulatory effect of oxytocin in patients with schizophrenia. A single dose of oxytocin resulted in significant changes in the activity of brain regions involved in emotion processing. Future studies need to examine the effectiveness of long-term treatment with OXT for emotion recognition deficits in patients with schizophrenia.

Keywords: recognition, functional connectivity, oxytocin, schizophrenia, social cognition

Procedia PDF Downloads 220
251 Prediction of Time to Crack Reinforced Concrete by Chloride Induced Corrosion

Authors: Anuruddha Jayasuriya, Thanakorn Pheeraphan

Abstract:

In this paper, different mathematical models that can be used as prediction tools to assess the time to cracking of reinforced concrete (RC) due to corrosion are reviewed. This review leads to an experimental study to validate a selected prediction model. Most of these mathematical models depend on the mechanical, chemical, electrochemical, or geometric aspects of the RC members during the corrosion process. The experimental program is designed to verify the accuracy of a model selected through a rigorous literature study. The program covers both one-dimensional chloride diffusion, using square RC slab elements of 500 mm by 500 mm, and two-dimensional chloride diffusion, using square RC column elements of 225 mm by 225 mm by 500 mm. Each set consists of three water-to-cement ratios (w/c) of 0.4, 0.5, and 0.6 and two cover depths of 25 mm and 50 mm. 12 mm bars are used for the column elements and 16 mm bars for the slab elements. All samples are subjected to accelerated chloride corrosion in a bath of 5% (w/w) sodium chloride (NaCl) solution. Based on a pre-screening of different models, the selected model accounts for mechanical properties, chemical and electrochemical properties, the nature of corrosion (accelerated or natural), and the porous zone that rust products can fill before exerting expansive pressure on the surrounding concrete. The experimental results show that the selected model agrees with the experimental output within ±20% for one-dimensional and ±10% for two-dimensional chloride diffusion. Half-cell potential readings are also used to assess the corrosion probability, and the results show that mass loss is proportional to the magnitude of the negative half-cell potential readings obtained. Additionally, a statistical analysis is carried out to determine the most influential factor affecting the time to reinforcement corrosion due to chloride diffusion. The factors considered are w/c, bar diameter, and cover depth. The analysis, performed with Minitab statistical software, shows that cover depth has a more significant effect on the time to cracking from chloride-induced corrosion than the other factors considered. Thus, time-to-cracking predictions can be made with the selected mathematical model, as it covers a wide range of factors affecting the corrosion process, and it can be used to assess in advance the durability of RC structures that are vulnerable to chloride exposure. It is further concluded that cover thickness plays a vital role in durability with respect to chloride diffusion.
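
The factor screening described above can be reproduced with a simple main-effects ANOVA. The sketch below uses entirely hypothetical time-to-cracking values over the stated factorial layout (w/c, cover depth, bar diameter) to show the kind of analysis run in Minitab, here with statsmodels:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical time-to-cracking results (days) for the factorial layout:
# three w/c ratios x two cover depths x two bar diameters.
df = pd.DataFrame({
    "wc":      [0.4, 0.4, 0.4, 0.4, 0.5, 0.5, 0.5, 0.5, 0.6, 0.6, 0.6, 0.6],
    "cover":   [25,  25,  50,  50,  25,  25,  50,  50,  25,  25,  50,  50],
    "bar":     [12,  16,  12,  16,  12,  16,  12,  16,  12,  16,  12,  16],
    "t_crack": [38,  35,  71,  66,  31,  29,  60,  55,  26,  24,  51,  47],
})

# Treat each factor as categorical and fit a main-effects model.
model = smf.ols("t_crack ~ C(wc) + C(cover) + C(bar)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)
print(anova)  # the factor with the largest F (smallest p) dominates the time to cracking
```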

Keywords: accelerated corrosion, chloride diffusion, corrosion cracks, passivation layer, reinforcement corrosion

Procedia PDF Downloads 218
250 An A-Star Approach for the Quickest Path Problem with Time Windows

Authors: Christofas Stergianos, Jason Atkin, Herve Morvan

Abstract:

As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to have a reliable solution that can be implemented for ground movement operations. The ground movement of aircraft at an airport, allocating a path for each aircraft to follow in order to reach its destination (e.g. runway or gate), is one process that could be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm has been developed to provide conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase its accuracy for airport applications. These modifications take into consideration specific characteristics of the problem, such as: the pushback process, which accounts for the extra time needed to push back an aircraft and turn its engines on; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the optimized take-off sequence of the aircraft has to be respected. QPPTW searches for the quickest path by expanding the search in all directions, similar to Dijkstra's algorithm. Finding a way to direct the expansion can potentially assist the search and achieve better performance. We have further modified the QPPTW algorithm to use a heuristic approach in order to guide the search. This new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) in order to assess how far the target is. It is important to consider the remaining time needed to reach the target, so that delays caused by other aircraft can be part of the optimization. All of the other characteristics are still considered, and time windows are still used in order to route multiple aircraft rather than a single aircraft. In this way the quickest path is found for each aircraft while taking into account the movements of the previously routed aircraft. After running experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) was found to route aircraft much more quickly, being especially fast in routing departing aircraft, where pushback delays are significant. On average, A-star QPPTW could route a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total, the routing of a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original algorithm. For real-time application, the algorithm needs to be very fast, and this speed increase will allow us to add additional features and complexity, allowing further integration with other processes in airports and leading to more optimized and environmentally friendly airports.
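
To make the core idea concrete, here is a minimal sketch of A-star search over a taxiway graph whose edge weights are traversal times and whose heuristic estimates the remaining time to the target. The node names and times are illustrative, and the time-window checks against previously routed aircraft, pushback, stand holding, and runway sequencing features of the full algorithm are omitted:

```python
import heapq

def a_star_quickest_path(graph, heuristic_time, start, goal, start_time=0.0):
    """Minimal A* over a taxiway graph whose edge weights are traversal times.

    graph:          dict node -> list of (neighbour, traversal_time)
    heuristic_time: dict node -> estimated remaining time to the goal
                    (admissible if it never overestimates)
    Returns (arrival_time, path).
    """
    open_set = [(start_time + heuristic_time[start], start_time, start, [start])]
    best_arrival = {start: start_time}
    while open_set:
        _, t, node, path = heapq.heappop(open_set)
        if node == goal:
            return t, path
        for neighbour, dt in graph[node]:
            arrival = t + dt
            if arrival < best_arrival.get(neighbour, float("inf")):
                best_arrival[neighbour] = arrival
                heapq.heappush(
                    open_set,
                    (arrival + heuristic_time[neighbour], arrival, neighbour, path + [neighbour]),
                )
    return float("inf"), []

# Toy taxiway graph (times in seconds) with a free-flow time-to-goal heuristic.
graph = {"gate": [("A", 30), ("B", 50)], "A": [("runway", 60)], "B": [("runway", 20)], "runway": []}
h = {"gate": 60, "A": 55, "B": 20, "runway": 0}
print(a_star_quickest_path(graph, h, "gate", "runway"))  # (70.0, ['gate', 'B', 'runway'])
```

With an admissible remaining-time heuristic, the expansion is directed towards the target, which is where the reported speed-up over the Dijkstra-like expansion of the original QPPTW comes from.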

Keywords: a-star search, airport operations, ground movement optimization, routing and scheduling

Procedia PDF Downloads 231
249 Sizing Residential Solar Power Systems Based on Site-Specific Energy Statistics

Authors: Maria Arechavaleta, Mark Halpin

Abstract:

In the United States, costs of solar energy systems have declined to the point that they are viable options for most consumers. However, there are no consistent procedures for specifying sufficient systems. The factors that must be considered are energy consumption, potential solar energy production, and cost. The traditional method of specifying solar energy systems is based on assumed daily levels of available solar energy and average amounts of daily energy consumption. The mismatches between energy production and consumption are usually mitigated using battery energy storage systems, and energy use is curtailed when necessary. The main consumer decision that drives the total system cost is how much unserved (or curtailed) energy is acceptable. Of course, additional solar conversion equipment can be installed to provide greater peak energy production, and extra energy storage capability can be added to mitigate longer-lasting periods of low solar energy production. Each option increases total cost and provides a benefit that is difficult to quantify accurately. An approach to quantify the cost-benefit of adding resources, either production or storage or both, based on the statistical concepts of loss-of-energy probability and expected unserved energy, is presented in this paper. Relatively simple calculations, based on site-specific energy availability and consumption data, can be used to show the value of each additional increment of production or storage. With this incremental benefit-cost information, consumers can select the best overall performance combination for their application at a cost they are comfortable paying. The approach is based on a statistical analysis of energy consumption and production characteristics over time. The characteristics take the form of curves, with each point representing an energy consumption or production value over a period of time; a one-minute period is used for the work in this paper. These curves are measured at the consumer location under the conditions that exist at the site, and the duration of the measurements is a minimum of one week. While greater accuracy could be obtained with longer recording periods, the examples in this paper are based on a single week for demonstration purposes. The weekly consumption and production curves are overlaid on each other, and the mismatches are used to size the battery energy storage system. Loss-of-energy probability and expected unserved energy indices are calculated in addition to the total system cost. These indices allow the consumer to recognize and quantify the benefit (typically a reduction in energy consumption curtailment) available for a given increase in cost. Consumers can then make informed decisions that are accurate for their location and conditions and which are consistent with their available funds.
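
A minimal sketch of the index calculation, assuming one-minute production and consumption series for a week and a simple ideal battery model (the function name, units, and battery model are illustrative simplifications, not the paper's exact procedure):

```python
import numpy as np

def size_metrics(production_kw, consumption_kw, battery_kwh, dt_hours=1 / 60):
    """Run one-minute production/consumption data through an ideal battery and
    report loss-of-energy probability (LOEP) and expected unserved energy (EUE)."""
    soc = battery_kwh                       # state of charge, start full
    unserved = np.zeros_like(consumption_kw, dtype=float)
    for i, (p, c) in enumerate(zip(production_kw, consumption_kw)):
        net = (p - c) * dt_hours            # kWh surplus (+) or deficit (-) this minute
        soc = min(battery_kwh, soc + net) if net >= 0 else soc + net
        if soc < 0:                          # battery empty: this demand is curtailed
            unserved[i] = -soc
            soc = 0.0
    loep = np.count_nonzero(unserved) / len(unserved)   # fraction of intervals curtailed
    eue = unserved.sum()                                  # kWh not served over the week
    return loep, eue

# Example with synthetic one-week, one-minute profiles (10080 points).
minutes = np.arange(7 * 24 * 60)
production = np.clip(4.0 * np.sin(2 * np.pi * (minutes % 1440 - 360) / 1440), 0, None)
consumption = 1.0 + 0.5 * np.sin(2 * np.pi * minutes / 1440)
print(size_metrics(production, consumption, battery_kwh=10.0))
```

Sweeping the battery capacity or scaling the production curve in small increments and recomputing the two indices gives the incremental benefit against which the incremental cost can be weighed.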

Keywords: battery energy storage systems, loss of load probability, residential renewable energy, solar energy systems

Procedia PDF Downloads 234
248 The Reliability and Shape of the Force-Power-Velocity Relationship of Strength-Trained Males Using an Instrumented Leg Press Machine

Authors: Mark Ashton Newman, Richard Blagrove, Jonathan Folland

Abstract:

The force-velocity (F-V) profile of an individual has been shown to influence success in ballistic movements, independent of the individual's maximal power output; therefore, effective and accurate evaluation of an individual's F-V characteristics, and not solely maximal power output, is important. The relatively narrow range of loads typically utilised during force-velocity profiling protocols, due to the difficulty in obtaining force data at high velocities, may bring into question the accuracy of the F-V slope, along with predictions of the maximum force the system can produce at zero velocity (F₀) and the theoretical maximum velocity against no load (V₀). As such, the reliability of the slope of the force-velocity profile, as well as of V₀, has been shown to be relatively poor in comparison with F₀ and maximal power, and it has been recommended to assess velocity at loads closer to both F₀ and V₀. The aim of the present study was to assess the relative and absolute reliability of a novel instrumented leg press machine that enables the assessment of force and velocity data at loads equivalent to ≤ 10% of one repetition maximum (1RM) through to 1RM during a ballistic leg press movement. The reliability of maximal and mean force, velocity, and power, as well as of the respective force-velocity and power-velocity relationships and the linearity of the force-velocity relationship, was evaluated. Sixteen strength-trained males (23.6 ± 4.1 years; 177.1 ± 7.0 cm; 80.0 ± 10.8 kg) attended four sessions; during the initial visit, participants were familiarised with the leg press, modified to include a mounted force plate (Type SP3949, Force Logic, Berkshire, UK) and a Micro-Epsilon WDS-2500-P96 linear positional transducer (LPT) (Micro-Epsilon, Merseyside, UK). Peak isometric force (IsoMax) and a dynamic 1RM, both from a starting position of 81% leg length, were recorded for the dominant leg. During visits two to four, participants carried out the leg press movement at loads equivalent to ≤ 10%, 30%, 50%, 70%, and 90% 1RM. IsoMax was recorded during each testing visit prior to the dynamic F-V profiling repetitions. The novel leg press machine used in the present study appears to be a reliable tool for measuring force- and velocity-related variables across a range of loads, including velocities closer to V₀, when compared with some of the findings in the published literature. Both linear and polynomial models demonstrated good to excellent levels of reliability for the F-V slope (SFV) and F₀, respectively, with reliability for V₀ being good using a linear model but poor using a 2nd-order polynomial model. As such, a polynomial regression model may be most appropriate when using a similar unilateral leg press setup to predict maximal force production capabilities, given a difference of only 5% between F₀ and the obtained IsoMax values, with a linear model being best suited to predicting V₀.
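
For reference, the linear F-V profile variables can be extracted from load-averaged force and velocity pairs with an ordinary least-squares fit; the values below are hypothetical and only illustrate the calculation of F₀, V₀, the slope, and maximal power:

```python
import numpy as np

# Hypothetical mean force (N) and velocity (m/s) at loads from ~10% to 90% 1RM.
force = np.array([520, 980, 1450, 1900, 2300], dtype=float)
velocity = np.array([1.65, 1.30, 0.95, 0.60, 0.30], dtype=float)

# Linear force-velocity model: F = F0 + Sfv * v  (Sfv is negative).
sfv, f0 = np.polyfit(velocity, force, 1)
v0 = -f0 / sfv          # velocity at which the predicted force reaches zero
p_max = f0 * v0 / 4.0   # apex of the parabolic power-velocity curve for a linear F-V profile

print(f"F0 = {f0:.0f} N, V0 = {v0:.2f} m/s, Sfv = {sfv:.0f} N·s/m, Pmax = {p_max:.0f} W")
```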

Keywords: force-velocity, leg-press, power-velocity, profiling, reliability

Procedia PDF Downloads 58
247 Empirical Testing of Hofstede’s Measures of National Culture: A Study in Four Countries

Authors: Nebojša Janićijević

Abstract:

At the end of the 1970s, the Dutch researcher Geert Hofstede conducted an extensive empirical study of the differences between national cultures. In this research, he identified four dimensions along which national cultures differ and determined an index for each dimension for every country that took part in the study. The index shows a country's position on the continuum between the two extreme poles of a cultural dimension. Since more than 40 years have passed since Hofstede's research, there is doubt whether, due to the changes in national cultures during that period, his indices are still a good basis for research. The aim of this research is to check the validity of Hofstede's indices of national culture. The empirical study, conducted in the branches of a multinational company in Serbia, France, the Netherlands, and Denmark, aimed to determine whether Hofstede's measures of national culture dimensions are still valid. The sample consisted of 155 employees of one multinational company: 40 employees from each of three countries and 35 employees from Serbia. The questionnaire that analyzed the positions of the national cultures on Hofstede's four dimensions was formulated on the basis of Hofstede's original questionnaire, but it was much shorter and significantly simplified compared with the original. This instrument had already been used in earlier research. The statistical analysis of the questionnaire results consisted of a simple calculation of the frequencies of the provided answers. Due to limitations in methodology, sample size, instrument, and applied statistical methods, the aim of the study was not to explicitly test the accuracy of Hofstede's indices but to illuminate the general position of the four observed countries on the national culture dimensions and their mutual relations. The study results indicate that the position of the four observed national cultures (Serbia, France, the Netherlands, and Denmark) is the same in three out of four dimensions as Hofstede described in his research. Furthermore, the differences between the national cultures and the relative relations between their positions in three dimensions of national culture correspond to Hofstede's results. The only deviation from Hofstede's results concerns the masculinity-femininity dimension. In addition, the study revealed that the degree of power distance is a determinant when choosing leadership style. It was found that national cultures with high power distance, like Serbia and France, favor one of the two authoritative leadership styles. On the other hand, countries with low power distance, such as the Netherlands and Denmark, prefer one of the forms of democratic leadership style. This confirms Hofstede's premises about the impact of power distance on leadership style. The key contribution of the study is that Hofstede's national culture indices are still a reliable tool for measuring the positions of countries on the national culture dimensions, and they can be applied in cross-cultural research in management, at least for the four observed countries: Serbia, France, the Netherlands, and Denmark.

Keywords: national culture, leadership styles, power distance, collectivism, masculinity, uncertainty avoidance

Procedia PDF Downloads 74
246 Brazilian Transmission System Efficient Contracting: Regulatory Impact Analysis of Economic Incentives

Authors: Thelma Maria Melo Pinheiro, Guilherme Raposo Diniz Vieira, Sidney Matos da Silva, Leonardo Mendonça de Oliveira Queiroz, Mateus Sousa Pinheiro, Danyllo Wenceslau de Oliveira Lopes

Abstract:

This article describes the regulatory impact analysis (RIA) of the efficiency of contracting Brazilian transmission system usage. This contracting is made by users connected to the main transmission network and is used to guide the investments necessary to supply the electrical energy demand. Inefficient contracting of this amount therefore distorts the real need for grid capacity, affecting the accuracy of sector planning and the optimization of resources. In order to promote efficiency, the Brazilian Electricity Regulatory Agency (ANEEL) approved Normative Resolution (NR) No. 666 of July 23rd, 2015, which consolidated the procedures for contracting transmission system usage and for verifying contracting efficiency. Aiming at a more efficient and rational contracting of transmission system usage, the resolution established economic incentives denominated the inefficiency installment for excess (IIE) and the inefficiency installment for over-contracting (IIOC). The first, IIE, applies when the contracted demand exceeds the established regulatory limit; it applies to consumer units, generators, and distribution companies. The second, IIOC, applies when distributors over-contract their demand. Thus, the establishment of the inefficiency installments IIE and IIOC is intended to prevent agents from contracting less capacity than necessary or more than is needed. Since an RIA evaluates a regulatory intervention to verify whether its goals were achieved, the results of applying the resolution to the Brazilian transmission sector were analyzed through indicators created for this RIA to evaluate the efficiency of transmission system usage contracting, using real data from before and after the approval of the resolution in 2015. The indicators used were the efficiency contracting indicator (ECI), the excess of demand indicator (EDI), and the over-contracting of demand indicator (ODI). The ECI analysis showed a decrease in contracting efficiency, a trend that had been occurring even before the 2015 resolution. On the other hand, the EDI showed a considerable decrease in excess demand for distributors and a small reduction for generators; moreover, the ODI decreased notably, which optimizes the usage of the transmission installations. Hence, based on the complete evaluation of the data and indicators, it was possible to conclude that the IIE is a relevant incentive for more efficient contracting, signalling to agents that their contracted values are not adequate to maintain the service they provide to their users. The IIOC is also relevant, in that it shows distributors that their contracted values are overestimated.
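
Purely as an illustration of how the two incentives distinguish the cases described above (the names, tolerance, and data are hypothetical; the actual limits and charge formulas are those defined in NR 666/2015), a contract can be screened as follows:

```python
from dataclasses import dataclass

@dataclass
class UsageContract:
    agent: str
    contracted_mw: float   # contracted transmission system usage
    verified_mw: float     # maximum verified (measured) demand in the period

# Illustrative tolerance only; the regulatory limits are set by NR 666/2015.
TOLERANCE = 0.05

def flag_inefficiency(c: UsageContract) -> str:
    """Classify a contract in the spirit of the IIE / IIOC incentives:
    exceeding the contracted amount or over-contracting both attract a charge."""
    if c.verified_mw > c.contracted_mw * (1 + TOLERANCE):
        return "excess -> IIE applies"
    if c.contracted_mw > c.verified_mw * (1 + TOLERANCE):
        return "over-contracting -> IIOC may apply"
    return "efficient contracting"

for contract in [UsageContract("Distributor A", 120.0, 150.0),
                 UsageContract("Distributor B", 200.0, 160.0),
                 UsageContract("Generator C", 100.0, 101.0)]:
    print(contract.agent, "->", flag_inefficiency(contract))
```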

Keywords: contracting, electricity regulation, evaluation, regulatory impact analysis, transmission power system

Procedia PDF Downloads 121
245 A Prospective Study of a Clinically Significant Anatomical Change in Head and Neck Intensity-Modulated Radiation Therapy Using Transit Electronic Portal Imaging Device Images

Authors: Wilai Masanga, Chirapha Tannanonta, Sangutid Thongsawad, Sasikarn Chamchod, Todsaporn Fuangrod

Abstract:

The major factors in radiotherapy for head and neck (HN) cancers include the patient's anatomical changes and tumour shrinkage. These changes can significantly affect the planned dose distribution and cause the treatment plan to deteriorate. Comparing measured transit EPID images with predicted EPID images using gamma analysis has been clinically implemented to verify dose accuracy as part of an adaptive radiotherapy protocol. However, a global gamma analysis is not sensitive to some critical organ changes, because the entire treatment field is compared. The objective of this feasibility study is to evaluate the dosimetric response to patient anatomical changes during the treatment course in HN IMRT (head and neck intensity-modulated radiation therapy) using a novel comparison method: organ-of-interest gamma analysis. This method provides more sensitive detection of changes in specific organs. Five HN IMRT patients who were replanned because of tumour shrinkage and weight loss critically affecting parotid size were randomly selected, and their transit dosimetry was evaluated. A comprehensive physics-based model was used to generate a series of predicted transit EPID images for each gantry angle from the original computed tomography (CT) and replan CT datasets. The patient structures, including the left and right parotids, spinal cord, and planning target volume (PTV56), were projected to the EPID level. The agreement between the transit images generated from the original CT and the replan CT was quantified using gamma analysis with 3%/3 mm criteria; the gamma pass-rate was then calculated only within each projected structure. The gamma pass-rates in the right parotid and PTV56 between the transit images predicted from the original CT and the replan CT were 42.8% (± 17.2%) and 54.7% (± 21.5%), respectively. The gamma pass-rates for the other projected organs were greater than 80%. Additionally, the results of the organ-of-interest gamma analysis were compared with 3-dimensional cone-beam computed tomography (3D-CBCT) and the rationale for replanning given by the radiation oncologists. This showed that registration of 3D-CBCT to the original CT alone does not reveal the dosimetric impact of anatomical changes. Using transit EPID images with organ-of-interest gamma analysis can provide additional information for assessing treatment plan suitability.
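
Given a per-pixel gamma map from the 3%/3 mm comparison and the structure outlines projected to the EPID plane, the organ-of-interest pass-rate is simply the pass fraction restricted to each mask. The sketch below uses toy arrays; the function and variable names are assumptions for illustration, and the gamma map itself would come from a separate gamma calculation:

```python
import numpy as np

def organ_pass_rates(gamma_map: np.ndarray, organ_masks: dict, dose_mask: np.ndarray) -> dict:
    """Per-structure gamma pass-rate: fraction of in-field pixels inside each
    projected organ mask whose gamma index is <= 1."""
    rates = {}
    for name, mask in organ_masks.items():
        pixels = gamma_map[mask & dose_mask]
        rates[name] = np.count_nonzero(pixels <= 1.0) / pixels.size if pixels.size else float("nan")
    return rates

# Toy example: a random gamma map and two rectangular "projected structures".
gamma_map = np.random.default_rng(0).uniform(0.0, 2.0, size=(64, 64))
masks = {"right_parotid": np.zeros((64, 64), bool), "ptv56": np.zeros((64, 64), bool)}
masks["right_parotid"][10:25, 10:25] = True
masks["ptv56"][30:55, 20:50] = True
print(organ_pass_rates(gamma_map, masks, np.ones((64, 64), bool)))
```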

Keywords: re-plan, anatomical change, transit electronic portal imaging device (EPID), head and neck

Procedia PDF Downloads 216
244 Space Tourism Pricing Model Revolution from Time Independent Model to Time-Space Model

Authors: Kang Lin Peng

Abstract:

Space tourism emerged in 2001 and became prominent in 2021, following the development of space technology. The space tourism market is distorted by excess demand. Space tourism is currently rare and extremely expensive, with biased luxury product pricing: a seller's market in which consumers cannot bargain. Spaceship companies such as Virgin Galactic, Blue Origin, and SpaceX have charged space tourism prices from 200 thousand to 55 million US dollars, depending on the altitude reached. Prices should instead be set on a fair and reasonable basis. This study aims to derive a spacetime pricing model, which differs from the general pricing models used on the earth's surface. We apply general relativity theory to derive the mathematical formula for the space tourism pricing model, which contains the traditional time-independent model as a special case. In the future, the price of space travel will differ from that of current flight travel when space travel is measured in lightyear units. The pricing of general commodities mainly considers the general equilibrium of supply and demand. Pricing models that consider risks and returns as functions of an independent time variable are acceptable when commodities are on the earth's surface, i.e., in approximately flat spacetime. Current economic theories, based on an independent time scale in flat spacetime, do not consider the curvature of spacetime. Current flight services, flying at altitudes of 6, 12, and 19 kilometres, are priced with models that treat the time coordinate independently. However, space tourism involves flying at altitudes of 100 to 550 kilometres, which, in the authors' framing, enlarges the spacetime curvature: tourists move from the assumed zero curvature on the earth's surface to the larger curvature of space. Different spacetime spans should therefore be considered in the pricing model of space travel, echoing general relativity theory. Intuitively, pricing this spacetime commodity requires accounting for the change in spacetime curvature from the earth to space. We can assume that each unit of spacetime curvature has a value corresponding to the gradient of the Ricci or energy-momentum tensor; the total price is then obtained by integrating along the path from the earth into space. The concept is to add a price component corresponding to general relativity. In the flat-spacetime limit, the space travel pricing model degenerates into the time-independent model of traditional commodity pricing. The contribution is that deriving a space tourism pricing model is a breakthrough on both philosophical and practical issues for space travel. The results of the space tourism pricing model extend the traditional time-independent, flat-spacetime model. A pricing model that embeds spacetime, as in general relativity, can better reflect the rationality and accuracy of space travel pricing at the universal scale. Moving from an independent time scale to a spacetime scale brings a brand-new pricing concept for space travel commodities. Fair and efficient spacetime economics will also benefit human travel when we can travel across lightyear distances in the future.
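
As a purely illustrative sketch of the kind of formula the abstract describes (an assumed form, not the authors' exact derivation): a conventional time-dependent price plus a component obtained by integrating a curvature-related quantity along the flight path, which reduces to the traditional model when curvature vanishes:

```latex
% Schematic form, for illustration only:
% P_flat(t) is the conventional time-dependent price, R(x) a curvature scalar
% (e.g. derived from the Ricci tensor), k a conversion factor from curvature
% units to monetary units, and ds the line element along the trajectory gamma
% from the earth's surface to the target altitude.
\[
  P \;=\; P_{\text{flat}}(t) \;+\; k \int_{\gamma} R(x)\,\mathrm{d}s .
\]
% In the flat-spacetime limit R \to 0, the integral vanishes and the model
% degenerates into the time-independent/traditional pricing term P_flat(t).
```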

Keywords: space tourism, spacetime pricing model, general relativity theory, spacetime curvature

Procedia PDF Downloads 129
243 A Study of Kinematical Parameters in Instep Kicking in Soccer

Authors: Abdolrasoul Daneshjoo

Abstract:

Introduction: Soccer is a game that draws great attention in many countries, especially Brazil. Among the different skills in soccer, kicking plays an essential role in the success of a team. Points are gained by sending the ball over the goal line, either by shooting during attacks or during penalty kicks. Accordingly, identifying the factors that affect instep kicking at different distances, whether shooting with maximum force and high accuracy or passing and taking penalty kicks, may assist coaches and players in raising the quality of skill performance. Purpose: The aim of the present study was to examine selected kinematical parameters of instep kicking from 3 and 5 meter distances among male and female elite soccer players. Methods: 24 subjects with a dominant right lower limb (12 males and 12 females) were recruited from elite Tehran soccer players, with mean ± standard deviation age of (22.5 ± 1.5) and (22.08 ± 1.31) years, height of (179.5 ± 5.81) and (164.3 ± 4.09) cm, weight of (69.66 ± 4.09) and (53.16 ± 3.51) kg, BMI of (21.06 ± 0.731) and (19.67 ± 0.709), and playing history of (4 ± 0.73) and (3.08 ± 0.66) years, respectively. They had at least two years of continuous playing experience in the Tehran soccer league. To capture the players' kicks, a Kinemetrix motion analysis system with three cameras sampling at 500 Hz was used. Five reflective markers were placed laterally on the kicking leg over anatomical landmarks (the iliac crest, greater trochanter, lateral epicondyle of the femur, lateral malleolus, and lateral aspect of the distal head of the fifth metatarsal). The instep kick was filmed with a one-step approach at a 30 to 45 degree angle to a stationary ball. Three kicks were filmed, and one was selected for further analysis. Using Kinemetrix 3D motion analysis software, the positions of the markers were analyzed. Descriptive statistics (mean and standard deviation) were calculated, while analysis of variance and independent t-tests (p < 0.05) were used to compare the kinematic parameters between the two genders. Results and Discussion: Among the evaluated parameters, the knee acceleration, thigh angular velocity, and knee angle showed significant relationships with kick outcome. When comparing performance at 5 m between the two genders, significant differences were observed in the internal-external displacement of the toe, ankle, and hip; the velocity of the toe and ankle; the acceleration of the toe; and the angular velocity of the pelvis and thigh before ball contact. Significant differences were also found in the internal-external displacement of the toe, ankle, knee, hip, and iliac crest; the velocity of the toe and ankle; the acceleration of the ankle; and the angular velocity of the pelvis and knee.

Keywords: biomechanics, kinematics, soccer, instep kick, male, female

Procedia PDF Downloads 415
242 Machine Learning Techniques for Estimating Ground Motion Parameters

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site conditions. Intensity measures such as peak ground acceleration and peak ground velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially in subsequent risk assessment of different types of structures. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques, such as Artificial Neural Networks, Random Forests, and Support Vector Machines, as statistical methods in ground motion prediction. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing them as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitudes of 3 to 5.8, recorded over the hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. The choice of this database stems from the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for them. Accuracy of the models in predicting intensity measures, generalization capability of the models for future data, as well as usability of the models, are discussed in the evaluation process. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method and that, in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data are available.
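
A minimal sketch of the Random Forest alternative on synthetic data (the predictors, the synthetic ground-motion relation, and the hyperparameters are assumptions chosen only to illustrate the workflow; the random-effects treatment of event and site terms is omitted):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Hypothetical records: magnitude, log10 hypocentral distance (km), Vs30 site proxy (m/s).
rng = np.random.default_rng(0)
n = 2000
mag = rng.uniform(3.0, 5.8, n)
log_r = np.log10(rng.uniform(4.0, 500.0, n))
vs30 = rng.uniform(200.0, 900.0, n)
# Synthetic log-PGA with magnitude scaling, distance attenuation, a site effect, and noise.
log_pga = 0.8 * mag - 1.6 * log_r - 0.0005 * vs30 + rng.normal(0, 0.3, n)

X = np.column_stack([mag, log_r, vs30])
X_train, X_test, y_train, y_test = train_test_split(X, log_pga, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", round(r2_score(y_test, model.predict(X_test)), 3))
print("Feature importances (M, logR, Vs30):", model.feature_importances_.round(2))
```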

Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine

Procedia PDF Downloads 122