Search results for: well data integration
22290 Training AI to Be Empathetic and Determining the Psychotype of a Person During a Conversation with a Chatbot
Authors: Aliya Grig, Konstantin Sokolov, Igor Shatalin
Abstract:
The report describes the methodology for collecting data and building an ML model that determines the personality psychotype, using profiling and personality-traits methods, from several short messages of a user communicating on an arbitrary topic with a chitchat bot. In the course of the experiments, the minimum amount of text needed to confidently determine aspects of personality was identified. Model accuracy is 85%, and the users' language of communication is English. The aim is AI for personalized communication with a user based on the user's mood, personality, and current emotional state. Features investigated during the research: personalized communication; providing empathy; adaptation to a user; predictive analytics. In the report, we describe the process that captures both structured and unstructured data pertaining to a user in large quantities and diverse forms. This data is then processed through ML tools to construct a knowledge graph and draw comprehensive inferences about the users behind the text messages. Specifically, the system analyzes users' behavioral patterns and predicts future scenarios based on this analysis. As a result of the experiments, we outline directions for further research on training AI models to be empathetic and creating personalized communication for a user.
Keywords: AI, empathetic, chatbot, AI models
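The abstract does not describe the model itself. As a hedged illustration of the kind of short-message classifier it mentions, a minimal multinomial naive Bayes sketch might look like this; the labels, training messages, and smoothing choice below are all hypothetical, not from the study:

```python
import math
from collections import Counter, defaultdict

# Toy training data: short messages with a hypothetical psychotype label.
train = [
    ("i love meeting new people at parties", "extravert"),
    ("big crowds give me so much energy", "extravert"),
    ("i prefer a quiet evening with a book", "introvert"),
    ("too much small talk drains me", "introvert"),
]

def fit_naive_bayes(data):
    """Multinomial naive Bayes with add-one (Laplace) smoothing."""
    word_counts = defaultdict(Counter)   # label -> word -> count
    label_counts = Counter()
    vocab = set()
    for text, label in data:
        label_counts[label] += 1
        for w in text.split():
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, label_counts, vocab

def predict(model, text):
    """Return the label with the highest smoothed log-probability."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best_label, best_logp = None, float("-inf")
    for label in label_counts:
        logp = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            logp += math.log((word_counts[label][w] + 1) / denom)
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

model = fit_naive_bayes(train)
```

A real system would, as the abstract says, feed such predictions into a knowledge graph rather than stop at a single label.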
Procedia PDF Downloads 93
22289 Anemia Among Pregnant Women in Kuwait: Findings from Kuwait Birth Cohort Study
Authors: Majeda Hammoud
Abstract:
Background: Anemia during pregnancy increases the risk of delivery by cesarean section, low birth weight, preterm birth, perinatal mortality, stillbirth, and maternal mortality. In this study, we aimed to assess the prevalence of anemia in pregnant women and its associated factors in the Kuwait birth cohort study. Methods: The Kuwait birth cohort (N=1108) was a prospective cohort study in which pregnant women were recruited in the third trimester. Data were collected through personal interviews with mothers who attended antenatal care visits, including data on socio-economic status and lifestyle factors. Blood samples were taken after recruitment to measure multiple laboratory indicators. Clinical data, including data on comorbidities, were extracted from the medical records by a clinician. Anemia was defined as having hemoglobin (Hb) <110 g/L, with further classification as mild (100-109 g/L), moderate (70-99 g/L), or severe (<70 g/L). Predictors of anemia were classified as underlying or direct factors, and logistic regression was used to investigate their association with anemia. Results: The mean Hb level in the study group was 115.21 g/L (95% CI: 114.56-115.87 g/L), with significant differences between age groups (p=0.034). The prevalence of anemia was 28.16% (95% CI: 25.53-30.91%), with no significant difference by age group (p=0.164). Of all 1108 pregnant women, 8.75% had moderate anemia and 19.40% had mild anemia, but none had severe anemia. In multivariable analysis, getting pregnant while using contraception, adjusted odds ratio (AOR) 1.73 (95% CI: 1.01-2.96), p=0.046, and current use of supplements, AOR 0.50 (95% CI: 0.26-0.95), p=0.035, were significantly associated with anemia (underlying factors). Among the direct factors, only iron and ferritin levels were significantly associated with anemia (p<0.001).
Conclusion: Although severe anemia is rare among pregnant women in Kuwait, mild and moderate anemia remain a significant health problem despite free access to antenatal care.
Keywords: anemia, pregnancy, hemoglobin, ferritin
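The abstract reports adjusted odds ratios from multivariable logistic regression. As a hedged, much simpler illustration of the underlying quantity, the crude (unadjusted) odds ratio for a 2x2 table, with a Woolf log-scale 95% confidence interval, can be computed as follows; the counts are hypothetical, not from the Kuwait cohort:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio for a 2x2 table with a Woolf (log-scale) 95% CI.

        exposed:   cases = a, non-cases = b
        unexposed: cases = c, non-cases = d
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for one binary exposure vs. anemia status.
or_, lo, hi = odds_ratio_ci(30, 70, 50, 200)
```

A multivariable model additionally adjusts each exposure for the others, which is why the study reports AORs rather than crude ratios like this one.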
Procedia PDF Downloads 50
22288 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison
Authors: Xiangtuo Chen, Paul-Henry Cournéde
Abstract:
Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their modeling methodologies. The model-driven approaches are based on crop mechanistic modeling: they describe crop growth in interaction with the environment as dynamical systems. But calibrating the dynamical system is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach for yield prediction is free of the complex biophysical process, but it has strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso regression, principal components regression, and partial least squares regression) and machine learning methods (Random Forest, k-nearest neighbors, artificial neural networks, and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, root mean square error of prediction (RMSEP) and mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity.
The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method for calibrating the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to underline the stresses suffered by the crop or to identify biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest
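The evaluation protocol above (k-fold cross-validation scored with RMSEP and MAEP) can be sketched in a few lines. This is a minimal stand-in, not the paper's pipeline: it uses a plain k-nearest-neighbour regressor on synthetic 1-D data in place of Random Forest on USDA climate features:

```python
import math
import random

def kfold_indices(n, k, seed=0):
    """Shuffle indices and split them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def knn_predict(train_x, train_y, x, k=3):
    """Plain k-nearest-neighbour regression on 1-D features."""
    ranked = sorted(range(len(train_x)), key=lambda i: abs(train_x[i] - x))
    return sum(train_y[i] for i in ranked[:k]) / k

def cross_validate(xs, ys, k=5):
    """Return (RMSEP, MAEP) over k-fold cross-validation."""
    sq_err, abs_err, n = 0.0, 0.0, 0
    for fold in kfold_indices(len(xs), k):
        test = set(fold)
        tr_x = [xs[i] for i in range(len(xs)) if i not in test]
        tr_y = [ys[i] for i in range(len(xs)) if i not in test]
        for i in fold:
            pred = knn_predict(tr_x, tr_y, xs[i])
            sq_err += (pred - ys[i]) ** 2
            abs_err += abs(pred - ys[i])
            n += 1
    return math.sqrt(sq_err / n), abs_err / n

# Synthetic stand-in for (climate feature, yield) records.
xs = [i / 10 for i in range(100)]
ys = [2 * x + 1 for x in xs]
rmsep, maep = cross_validate(xs, ys)
```

By construction RMSEP is never smaller than MAEP; the gap between the two indicates how much large errors dominate.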
Procedia PDF Downloads 231
22287 Academic Goal Setting Practices of University Students in Lagos State, Nigeria: Implications for Counselling
Authors: Asikhia Olubusayo Aduke
Abstract:
Students' inability to set data-based (specific, measurable, attainable, reliable, and time-bound) personal improvement goals threatens their academic success. Hence, the study aimed to investigate year-one students' academic goal-setting practices at Lagos State University of Education, Nigeria. Descriptive survey research was used in carrying out this study. The study population consisted of 3,101 year-one students of the University, from which a sample of five hundred and one (501) participants was selected through a proportional and simple random sampling technique. The Formative Goal Setting Questionnaire (FGSQ) developed by Research Collaboration (2015) was adapted and used as the instrument for the study. Two main research questions were answered, while two null hypotheses were formulated and tested. The study revealed higher data-based goal-setting than personal improvement goal-setting for all students; nevertheless, both data-based and personal improvement goal-setting were higher for female students than for male students. One-sample tests and ANOVA, used to analyse the data for the two hypotheses, also revealed that the mean difference between male and female year-one students' data-based and personal improvement goal formation was statistically significant (p < 0.05); that is, year-one students' data-based and personal improvement goals showed significant gender differences. Based on the findings of this study, it was recommended, among others, that therapeutic techniques that can help change students' faulty thinking and challenge their lack of desire for personal improvement be sought for students who have problems setting high personal improvement goals. Counsellors also need to advocate continued research into how to increase the goal-setting ability of male students and should focus more on counselling male students on goal-setting.
The main contribution of the study is the finding that higher institutions must prioritize early intervention in first-year students' academic goal setting. Researching gender differences in this practice reveals a crucial insight: male students often lag behind in setting meaningful goals, which affects their motivation and performance. Focusing on this demographic with data-driven personal improvement goals can be transformative. By promoting goal setting that is specific, measurable, and focused on self-growth rather than competition, male students can unlock their full potential. Researchers and counsellors play a vital role in detecting and supporting students with lower goal-setting tendencies. By prioritizing this intervention, we can empower all students to set ambitious, personalized goals that ignite their passion for learning and pave the way for academic success.
Keywords: academic goal setting, counselling, practice, university, year one students
Procedia PDF Downloads 62
22286 Estimating X-Ray Spectra for Digital Mammography by Using the Expectation Maximization Algorithm: A Monte Carlo Simulation Study
Authors: Chieh-Chun Chang, Cheng-Ting Shih, Yan-Lin Liu, Shu-Jun Chang, Jay Wu
Abstract:
With the widespread use of digital mammography (DM), evaluation of the radiation dose to the breast has become important. The X-ray spectrum is one of the key factors that influence the absorbed dose of glandular tissue. In this study, we estimated the X-ray spectrum of DM using the expectation maximization (EM) algorithm with transmission measurement data. The interpolating polynomial model proposed by Boone was applied to generate the initial guess of the DM spectrum for the target/filter combination of Mo/Mo and a tube voltage of 26 kVp. The Monte Carlo N-particle code (MCNP5) was used to tally the transmission data through aluminum sheets of 0.2 to 3 mm. The X-ray spectrum was then reconstructed iteratively with the EM algorithm, and the influence of the initial guess on the EM reconstruction was evaluated. The percentage error of the average energy between the reference spectrum input to the Monte Carlo simulation and the spectrum estimated by the EM algorithm was -0.14%. The normalized root mean square error (NRMSE) and the normalized root max square error (NRMaSE) between the two spectra were 0.6% and 2.3%, respectively. We conclude that the EM algorithm with transmission measurement data is a convenient and useful tool for estimating X-ray spectra for DM in clinical practice.
Keywords: digital mammography, expectation maximization algorithm, X-ray spectrum, X-ray
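The core of the method, an EM-style multiplicative update that adjusts spectrum weights until the modelled transmissions match the measurements, can be sketched on a toy problem. This is a hedged two-bin illustration with made-up attenuation coefficients, not the study's full spectrum, Boone initial guess, or MCNP5 data:

```python
import math

# Toy forward model: two energy bins, five aluminium thicknesses (mm).
# mu holds hypothetical per-bin attenuation coefficients (1/mm).
mu = [1.0, 0.3]
thicknesses = [0.2, 0.5, 1.0, 2.0, 3.0]
A = [[math.exp(-m * t) for m in mu] for t in thicknesses]

true_s = [0.4, 0.6]                               # "unknown" spectrum weights
meas = [sum(a * s for a, s in zip(row, true_s)) for row in A]

def em_spectrum(A, meas, n_iter=2000):
    """MLEM-style update: s_j <- s_j * sum_i A_ij * m_i/(As)_i / sum_i A_ij."""
    n_bins = len(A[0])
    s = [1.0 / n_bins] * n_bins                   # flat initial guess
    col_sum = [sum(row[j] for row in A) for j in range(n_bins)]
    for _ in range(n_iter):
        est = [sum(a * sj for a, sj in zip(row, s)) for row in A]
        ratio = [m / e for m, e in zip(meas, est)]
        s = [s[j] * sum(A[i][j] * ratio[i] for i in range(len(A))) / col_sum[j]
             for j in range(n_bins)]
    return s

s_hat = em_spectrum(A, meas)
```

Because the update is multiplicative, the reconstruction stays non-negative, which is one reason EM is attractive for spectra.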
Procedia PDF Downloads 730
22285 Single Imputation for Audiograms
Authors: Sarah Beaver, Renee Bryce
Abstract:
Audiograms detect hearing impairment, but missing values pose problems. This work explores imputation in an attempt to improve accuracy. It implements Linear Regression, Lasso, Linear Support Vector Regression, Bayesian Ridge, K Nearest Neighbors (KNN), and Random Forest machine learning techniques to impute audiogram frequencies ranging from 125 Hz to 8000 Hz. The data come from patients who had or were candidates for cochlear implants; over 4000 audiograms from 800 unique patients were used. Accuracy is compared across two different nested cross-validation k values. Additionally, training compares models built on combined left- and right-ear audiograms with models built on single-ear audiograms. The Root Mean Square Error (RMSE) values for the best Random Forest models range from 4.74 to 6.37, with R² values from 0.91 to 0.96. The RMSE values for the best KNN models range from 5.00 to 7.72, with R² values from 0.89 to 0.95. Overall, the best imputation models achieved R² between 0.89 and 0.96 and RMSE values under 8 dB. We also show that classification predictive models performed better with our best imputation models than with constant imputation, by two percentage points.
Keywords: machine learning, audiograms, data imputations, single imputations
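Among the techniques listed, KNN imputation is the simplest to show. Below is a hedged sketch of single imputation of a missing audiogram threshold from the k nearest complete records; the threshold values are hypothetical, and a real implementation would use all frequencies from 125 Hz to 8000 Hz:

```python
def knn_impute(rows, k=2):
    """Fill None entries using the mean of the k nearest complete rows,
    with distance computed only over columns both rows have observed."""
    def dist(a, b):
        shared = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
        return (sum((x - y) ** 2 for x, y in shared) / len(shared)) ** 0.5

    complete = [r for r in rows if None not in r]
    filled = []
    for r in rows:
        if None not in r:
            filled.append(list(r))
            continue
        neighbours = sorted(complete, key=lambda c: dist(r, c))[:k]
        filled.append([x if x is not None
                       else sum(c[j] for c in neighbours) / k
                       for j, x in enumerate(r)])
    return filled

# Hypothetical audiogram thresholds (dB HL) at four frequencies;
# None marks a missing measurement.
audiograms = [
    [20, 25, 30, 40],
    [22, 27, 33, 45],
    [60, 65, 70, 80],
    [21, 26, None, 42],
]
imputed = knn_impute(audiograms)
```

The incomplete patient is close to the two mild-loss records, so the missing mid-frequency threshold is filled from their average rather than from the severe-loss record.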
Procedia PDF Downloads 82
22284 Role of Imaging in Alzheimer's Disease Trials: Impact on Trial Planning, Patient Recruitment and Retention
Authors: Kohkan Shamsi
Abstract:
Background: MRI and PET are now extensively utilized in Alzheimer's disease (AD) trials for patient eligibility, efficacy assessment, and safety evaluations, but including imaging in AD trials impacts the site selection process, patient recruitment, and patient retention. Methods: PET/MRI are performed at baseline and at multiple follow-up timepoints. This requires prospective site imaging qualification, evaluation of phantom data, training, and continuous monitoring of machines for the acquisition of standardized and consistent data. It also requires prospective patient/caregiver training, as patients must go to multiple facilities for imaging examinations. We will share our experience from one of the largest AD programs. Lessons learned: Many neurological diseases have a presentation similar to AD or could confound the assessment of drug therapy. The inclusion of the wrong patients has ethical and legal implications, and their data could be excluded from the analysis; a centralized eligibility evaluation read process will be discussed. Amyloid-related imaging abnormalities (ARIA) were observed in amyloid-β trials, and the FDA recommended regular monitoring of ARIA. Our experience with ARIA evaluations in a large phase III study at more than 350 sites will be presented. Efficacy evaluation: MRI is utilized to evaluate various brain volumes, and FDG PET or amyloid PET agents have been used in AD trials; we will share our experience with site reads and central independent reads. Imaging logistics issues that need to be handled in the planning phase will also be discussed, as they can impact patient compliance, thereby increasing missing data and affecting study results. Conclusion: Imaging must be prospectively planned, including standardized imaging methodologies, the site selection process, and the selection of assessment criteria. Training should be transparently conducted and documented.
Prospective patient/caregiver awareness of imaging requirements is essential for patient compliance and for reducing missing imaging data.
Keywords: Alzheimer's disease, ARIA, MRI, PET, patient recruitment, retention
Procedia PDF Downloads 115
22283 The Thoughts and Feelings of 60-72 Month Old Children about School and Teacher
Authors: Ayse Ozturk Samur, Gozde Inal Kiziltepe
Abstract:
No matter the level of education, starting school is an exciting process, as it involves new experiences. In this process, the child steps beyond the family, the institution into which he was born and in which he feels secure, into a different environment: a social environment with its own rules, with duties and responsibilities that must be fulfilled, and with new vital experiences. Children who have a positive attitude towards school and like school are more enthusiastic and eager to participate in classroom activities. Moreover, a close relationship with the teacher gives the child positive emotions and ideas about the teacher and school and helps the child adapt to school easily. This study aims to identify children's perceptions of academic competence, attitudes towards school, and ideas about their teachers. In accordance with this aim, a mixed method including both qualitative and quantitative data collection was used, the quantitative data being supported with qualitative data. The study group consists of 250 randomly chosen children, aged 60-72 months, attending a preschool institution in a city center in the West Anatolian region of Turkey. Quantitative data were collected using the Feelings About School (FAS) scale, which consists of 12 items and 4 dimensions: school, teacher, mathematics, and literacy. A reliability and validity study of the scale was conducted by the researchers with 318 children aged 60-72 months: experts' opinions were sought for content validity, confirmatory factor analysis was utilized for construct validity, and reliability was examined by calculating the internal consistency coefficient (Cronbach's alpha).
The analyses found that the FAS is a valid and reliable instrument for identifying 60-72-month-old children's perceptions of their academic competency, attitudes toward school, and ideas about their teachers. For the qualitative dimension of the study, semi-structured interviews were conducted with 30 children aged 60-72 months. The study identified that children's perceptions of their academic competencies and attitudes towards school were medium-level, while their ideas about their teachers were high. The semi-structured interviews likewise showed that children have a positive perception of school and teacher; that is, the quantitative data are supported by the qualitative data.
Keywords: feelings, preschool education, school, teacher, thoughts
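The reliability statistic used above, Cronbach's alpha, is simple enough to compute directly: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A hedged sketch with hypothetical responses (not the study's 318-child dataset):

```python
def cronbach_alpha(items):
    """items: one inner list of scores per scale item, respondents aligned.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    using sample (n-1) variances.
    """
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical responses of 6 children to a 3-item subscale (1-5 Likert).
scores = [
    [3, 4, 2, 5, 4, 3],
    [3, 5, 2, 4, 4, 3],
    [2, 4, 3, 5, 3, 3],
]
alpha = cronbach_alpha(scores)
```

Values around 0.7 or above are conventionally read as acceptable internal consistency for a subscale.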
Procedia PDF Downloads 225
22282 Technical Sustainable Management: An Instrument to Increase Energy Efficiency in Wastewater Treatment Plants, a Case Study in Jordan
Authors: Dirk Winkler, Leon Koevener, Lamees AlHayary
Abstract:
This paper contributes to the improvement of the municipal wastewater systems in Jordan. An important goal is increased energy efficiency in wastewater treatment plants and therefore lower expenses due to reduced electricity consumption. The chosen way to achieve this goal is the implementation of Technical Sustainable Management adapted to the Jordanian context. Three wastewater treatment plants in Jordan were chosen as a case study for the investigation. These choices were supported by the fact that the three treatment plants are of average performance and size and that an energy assessment had recently been conducted in those facilities. The project succeeded in proving the following hypothesis: energy efficiency in wastewater treatment plants can be improved by implementing principles of Technical Sustainable Management adapted to the Jordanian context. In this case study, a significant increase in energy efficiency was achieved by optimizing operational performance, identifying and eliminating shortcomings, and managing the plants appropriately. Implementing Technical Sustainable Management as a low-cost tool with a comparably small workload provides several benefits beyond increased energy efficiency, including compliance with all legal and technical requirements, process optimization, increased work safety, and more convenient working conditions. Research in this field continues, because there are indications that the adapted tool could be integrated into other regions and sectors: the concept of Technical Sustainable Management adapted to the Jordanian context could be extended to other wastewater treatment plants in all regions of Jordan, but also to other sectors, including water treatment, water distribution, wastewater networks, desalination, and the chemical industry.
Keywords: energy efficiency, quality management system, technical sustainable management, wastewater treatment
Procedia PDF Downloads 162
22281 Determination of Optimum Torque of an Internal Combustion Engine by Exergy Analysis
Authors: Veena Chaudhary, Rakesh P. Gakkhar
Abstract:
In this study, energy and exergy analyses are applied to the experimental data of an internal combustion engine operating on the conventional diesel cycle. The experimental data were collected using an engine unit that enables accurate measurements of fuel flow rate, combustion air flow rate, engine load, engine speed, and all relevant temperatures. First and second law efficiencies are calculated for different engine speeds and compared. Results indicate that the first law (energy) efficiency is maximum at 1700 rpm, whereas the exergy efficiency is maximum and the exergy destruction minimum at 1900 rpm.
Keywords: diesel engine, exergy destruction, exergy efficiency, second law of thermodynamics
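The two efficiencies compared above differ only in the denominator: the first-law efficiency divides brake power by the fuel energy rate (mass flow times lower heating value), while the second-law efficiency divides it by the fuel exergy rate, commonly approximated for diesel as about 1.065 times the LHV. A hedged worked example with hypothetical operating-point numbers, not the study's measurements:

```python
# Hypothetical operating point, not measured values from the study.
brake_power_kw = 40.0      # engine brake power
fuel_flow_kg_s = 0.0025    # fuel mass flow rate
lhv_kj_kg = 42_500.0       # lower heating value of diesel
phi = 1.065                # commonly quoted fuel exergy-to-LHV ratio for diesel

energy_in_kw = fuel_flow_kg_s * lhv_kj_kg        # first-law input
exergy_in_kw = phi * energy_in_kw                # second-law input

first_law_eff = brake_power_kw / energy_in_kw
second_law_eff = brake_power_kw / exergy_in_kw

# Simplified destruction estimate that ignores exhaust and heat-transfer
# exergy leaving the engine; a full analysis accounts for those streams.
exergy_destroyed_kw = exergy_in_kw - brake_power_kw
```

Because the fuel's exergy exceeds its LHV, the second-law efficiency is always slightly below the first-law value for the same operating point.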
Procedia PDF Downloads 330
22280 Estimation Atmospheric parameters for Weather Study and Forecast over Equatorial Regions Using Ground-Based Global Position System
Authors: Asmamaw Yehun, Tsegaye Kassa, Addisu Hunegnaw, Martin Vermeer
Abstract:
There are various ways to estimate the neutral atmospheric parameter values, such as in-situ measurements and reanalysis datasets from numerical models. Accurate estimates of the atmospheric parameters are useful for weather forecasting, climate modeling, and the monitoring of climate change. Recently, Global Navigation Satellite System (GNSS) measurements have been applied to atmospheric sounding owing to their robust data quality and wide horizontal and vertical coverage, and Global Positioning System (GPS) solutions that include tropospheric parameters constitute a reliable set of data for assimilation into climate models. The objective of this paper is to estimate neutral atmospheric parameters, namely the Wet Zenith Delay (WZD), Precipitable Water Vapour (PWV), and Total Zenith Delay (TZD), using observational data from 2012 to 2015 at six selected GPS stations in the equatorial region, more precisely the Ethiopian GPS stations. Based on the historical GPS-derived values of PWV, we forecast the PWV from 2015 to 2030. For data processing and analysis, we applied the GAMIT/GLOBK software packages to estimate the atmospheric parameters. As a result, we found that the annual averaged minimum PWV is 9.72 mm (station IISC) and the maximum is 50.37 mm (station BJCO), while the annual averaged minimum WZD is 6 cm (IISC) and the maximum is 31 cm (BDMT). Over the full series of observations (2012 to 2015), we also found trends and cyclic patterns in WZD, PWV, and TZD at all stations.
Keywords: atmosphere, GNSS, neutral atmosphere, precipitable water vapour
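The link between the two quantities reported above is the standard conversion PWV = Pi * ZWD, where the dimensionless factor Pi (roughly 0.15, i.e. PWV is about ZWD/6.5) depends on the atmosphere's weighted mean temperature. A hedged sketch using textbook refractivity constants (after Bevis et al.) and an assumed mean temperature, not the paper's processing chain:

```python
def pwv_from_zwd(zwd_m, tm_k=270.0):
    """Convert zenith wet delay (m) to precipitable water vapour (m).

    PWV = Pi * ZWD with Pi = 1e6 / (rho_w * R_v * (k3/Tm + k2')).
    tm_k is the weighted mean temperature of the atmosphere (assumed here).
    """
    rho_w = 1000.0    # water density, kg/m^3
    r_v = 461.5       # specific gas constant of water vapour, J/(kg K)
    k3 = 3.739e3      # K^2/Pa  (3.739e5 K^2/hPa)
    k2p = 0.221       # K/Pa    (22.1 K/hPa)
    pi_factor = 1e6 / (rho_w * r_v * (k3 / tm_k + k2p))
    return pi_factor * zwd_m

# The 31 cm maximum wet delay reported for BDMT maps to roughly 48 mm of PWV,
# consistent with the ~50 mm maximum PWV the abstract reports.
pwv_mm = pwv_from_zwd(0.31) * 1000
```

In practice Tm is estimated per station and epoch (e.g. from surface temperature), which shifts Pi by a few percent.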
Procedia PDF Downloads 61
22279 Design and Implementation a Platform for Adaptive Online Learning Based on Fuzzy Logic
Authors: Budoor Al Abid
Abstract:
Educational systems are increasingly provided as open online services, offering guidance and support to individual learners. To make a learning system adaptive, a proper evaluation must be made. This paper builds the evaluation model Fuzzy C-Means Adaptive System (FCMAS), based on data mining techniques, to assess the difficulty of questions. The following steps are implemented. First, a dataset from an international online learning system called slepemapy.cz is used; it contains over 1,300,000 records with 9 features covering student, question, and answer information with feedback evaluation. Next, normalization is applied as a preprocessing step. Then FCM clustering is used to adapt the difficulty of the questions: the result is data labeled with three clusters according to the highest membership weight (easy, intermediate, difficult), the FCM algorithm assigning a label to every question one by one. Finally, a Random Forest (RF) classifier is constructed on the clustered dataset, using 70% of the dataset for training and 30% for testing; the model achieves a 99.9% accuracy rate. This approach improves the adaptive e-learning system because it depends on student behavior and gives more accurate results in the evaluation process than an evaluation system that depends on feedback only.
Keywords: machine learning, adaptive, fuzzy logic, data mining
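The FCM step above alternates two updates: memberships from distances to the centres, then centres as membership-weighted means. A hedged minimal sketch on 1-D data, using hypothetical per-question error rates as the difficulty proxy rather than the paper's 9-feature records:

```python
import random

def fcm(points, c=3, m=2.0, n_iter=100, seed=0):
    """Fuzzy C-Means on 1-D data: returns (centres, membership matrix)."""
    rng = random.Random(seed)
    centres = rng.sample(points, c)
    u = [[0.0] * c for _ in points]
    for _ in range(n_iter):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
        for i, x in enumerate(points):
            d = [abs(x - ck) or 1e-12 for ck in centres]   # avoid zero distance
            for k in range(c):
                u[i][k] = 1.0 / sum((d[k] / d[j]) ** (2 / (m - 1))
                                    for j in range(c))
        # Centre update: membership-weighted mean with weights u^m.
        for k in range(c):
            w = [u[i][k] ** m for i in range(len(points))]
            centres[k] = sum(wi * x for wi, x in zip(w, points)) / sum(w)
    return centres, u

# Hypothetical per-question error rates as a 1-D difficulty proxy.
difficulty = [0.1, 0.12, 0.15, 0.45, 0.5, 0.55, 0.8, 0.85, 0.9]
centres, memberships = fcm(difficulty)
```

Each question's label (easy, intermediate, difficult) is then the cluster with its highest membership weight, matching the labeling step the abstract describes.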
Procedia PDF Downloads 196
22278 Lessons of Passive Environmental Design in the Sarabhai and Shodan Houses by Le Corbusier
Authors: Juan Sebastián Rivera Soriano, Rosa Urbano Gutiérrez
Abstract:
The Shodan House and the Sarabhai House (Ahmedabad, India, 1954 and 1955, respectively) are considered among the most important works of the last stage of Le Corbusier's career. Some academic publications study the compositional and formal aspects of their architectural design, but there has been no in-depth investigation into how the climatic conditions of this region were a determining factor in the design decisions implemented in these projects. This paper argues that Le Corbusier developed a specific architectural design strategy for these buildings based on scientific research on climate in the Indian context. This new language was informed by a pioneering study and interpretation of climatic data as a design methodology, one that would even involve the development of new design tools. This study investigated whether that use of climatic data meets the values and levels of accuracy obtained with contemporary instruments and tools, such as EnergyPlus weather data files and Climate Consultant. It also set out to determine whether the intentions and decisions of Le Corbusier's office were indeed appropriate and efficient for those climatic conditions, by assessing the projects with BIM models and energy performance simulations in DesignBuilder. Accurate models were built using original historical data gathered through archival research. The outcome is a new understanding of the environment of these houses through the combination of modern building science and architectural history. The results confirm that these houses achieved a model of low energy consumption. The paper thus contributes new evidence not only on exemplary modern architecture concerned with environmental performance but also on how it developed progressive thinking in this direction.
Keywords: bioclimatic architecture, Le Corbusier, Shodan, Sarabhai Houses
Procedia PDF Downloads 65
22277 Impact of Applying Bag House Filter Technology in Cement Industry on Ambient Air Quality - Case Study: Alexandria Cement Company
Authors: Haggag H. Mohamed, Ghatass F. Zekry, Shalaby A. Elsayed
Abstract:
Most sources of air pollution in Egypt are of anthropogenic origin. Alexandria Governorate is located in the north of Egypt, and the main sectors contributing to air pollution in Alexandria are industry, transportation, and area sources due to human activities; Alexandria hosts more than 40% of the industrial activities in Egypt, and cement manufacturing contributes a significant amount to the particulate pollution load. The surroundings of the Alexandria Portland Cement Company (APCC) were selected as the study area. Continuous monitoring data of Total Suspended Particulates (TSP) from the APCC main kiln stack were collected to assess the dust emission control technology. An electrostatic precipitator (ESP) had been fitted to the cement kiln since 2002; the TSP data for the first quarter of 2012 were compared to those for the first quarter of 2013, after installation of the new baghouse filter. In the present study, based on these monitoring data and meteorological data, a detailed air dispersion modeling investigation was carried out using the Industrial Source Complex Short Term model (ISC3-ST) to determine the impact of the new baghouse filter control technology on the neighborhood's ambient air quality. The model results show a drastic reduction of the ambient TSP hourly average concentration, from 44.94 μg/m3 to 5.78 μg/m3, which confirms the strong positive impact of applying baghouse filter technology to the APCC cement kiln.
Keywords: air pollution modeling, ambient air quality, baghouse filter, cement industry
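ISC3-ST is built around the steady-state Gaussian plume equation. A hedged sketch of that core equation follows; the stack and dispersion numbers are hypothetical, not APCC's, and a real ISC3-ST run additionally handles stability classes, building downwash, terrain, and hourly meteorology:

```python
import math

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration (g/m^3) with ground reflection.

    q: emission rate (g/s); u: wind speed (m/s); h: effective stack height (m);
    y, z: crosswind and vertical receptor coordinates (m). sigma_y and sigma_z
    are taken as given here; ISC3-ST derives them from downwind distance and
    atmospheric stability class.
    """
    lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (math.exp(-(z - h) ** 2 / (2 * sigma_z ** 2))
                + math.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical kiln-stack numbers; ground-level concentration on the plume axis.
c_ug_m3 = gaussian_plume(q=50.0, u=4.0, y=0.0, z=0.0, h=60.0,
                         sigma_y=80.0, sigma_z=40.0) * 1e6
```

The second exponential in `vertical` is the image-source term that reflects the plume off the ground, which is why ground-level receptors see both contributions.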
Procedia PDF Downloads 269
22276 Wake Effects of Wind Turbines and Its Impacts on Power Curve Measurements
Authors: Sajan Antony Mathew, Bhukya Ramdas
Abstract:
The impetus of wind energy deployment over the last few decades has seen potential sites being harvested very actively for wind farm development. Due to the scarce availability of highly potential sites, turbine siting is increasingly optimized, with minimum spacing between turbines adopted without compromising the optimization of energy yield. The optimization of the energy yield from a wind turbine is achieved by effective micrositing techniques. These time-tested techniques, applied from site to site under terrain conditions that meet the requirements of the international standard for power performance measurements of wind turbines, result in the positioning of wind turbines for optimized energy yields. The international standard for power curve measurements specifies rules of procedure and a methodology to evaluate the terrain, obstacles, and sector for measurements, and there are many challenges at real sites in complying with these requirements. Attempts are being made to carry out these measurements within the scope of the international standard, as the alternative procedures specified in other standards, and the integration of LIDAR in power curve measurements, are still at a nascent stage. The paper argues that if the positioning of a wind turbine at a site is based on optimized output, then no wake effects are seen on the power curve of an adjacent wind turbine. It also demonstrates that a sector normally invalid for measurements could be used in the analysis, in deviation from the requirements of the international standard for power performance measurements.
Therefore, the paper strives, firstly, to demonstrate that if a wind turbine is optimally positioned, no wake effects are seen, and secondly, that the sector for measurements in such a case could include sectors which would otherwise have to be excluded under the requirements of the international standard for power performance measurements.
Keywords: micrositing, optimization, power performance, wake effects
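Power curves in such measurement campaigns are conventionally built with the method of bins: 10-minute averages are grouped into wind-speed bins and the power is averaged per bin. A hedged minimal sketch with hypothetical samples (the standard additionally normalizes for air density and sets minimum counts per bin):

```python
def method_of_bins(samples, bin_width=0.5):
    """Average measured power in fixed-width wind-speed bins.

    samples: (wind_speed_m_s, power_kw) pairs, e.g. 10-minute averages;
    returns {bin_centre: mean_power_kw}.
    """
    sums, counts = {}, {}
    for v, p in samples:
        centre = (int(v / bin_width) + 0.5) * bin_width
        sums[centre] = sums.get(centre, 0.0) + p
        counts[centre] = counts.get(centre, 0) + 1
    return {c: sums[c] / counts[c] for c in sums}

# Hypothetical 10-minute averages from a met-mast/turbine pair.
samples = [(5.1, 210.0), (5.3, 230.0), (5.8, 300.0), (6.2, 380.0), (6.4, 400.0)]
curve = method_of_bins(samples)
```

Excluding or including a measurement sector, the point the paper debates, simply changes which samples enter this binning.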
Procedia PDF Downloads 461
22275 Application of Single Subject Experimental Designs in Adapted Physical Activity Research: A Descriptive Analysis
Authors: Jiabei Zhang, Ying Qi
Abstract:
The purpose of this study was to develop a descriptive profile of adapted physical activity research using single subject experimental designs. All research articles using single subject experimental designs published in the journal Adapted Physical Activity Quarterly from 1984 to 2013 served as the data source. Each article was coded into one subcategory within each of seven categories: (a) the size of the sample; (b) the age of the participants; (c) the type of disability; (d) the type of data analysis; (e) the type of design; (f) the independent variable; and (g) the dependent variable. Frequencies, percentages, and trend inspection were used to analyze the data and develop a profile. The profile shows that a small portion of research articles used single subject designs; within them, most researchers used a small sample size, recruited children as subjects, emphasized learning and behavior impairments, selected visual inspection with descriptive statistics, preferred a multiple baseline design, focused on the effects of therapy, inclusion, and strategy, and most often measured desired behaviors, with a decreasing trend over the years.
Keywords: adapted physical activity research, single subject experimental designs, physical education, sport science
Procedia PDF Downloads 467
22274 An Alternative Credit Scoring System in China's Consumer Lending Market: A System Based on Digital Footprint Data
Authors: Minjuan Sun
Abstract:
Ever since the late 1990s, China has experienced explosive growth in consumer lending, especially in short-term consumer loans, among which the growth rate of non-bank lending has surpassed bank lending due to developments in financial technology. On the other hand, China does not have a universal credit scoring and registration system that can guide lenders during the processes of credit evaluation and risk control; for example, an individual’s bank credit records are not available for online lenders to see, and vice versa. Given this context, the purpose of this paper is three-fold. First, we explore if and how alternative digital footprint data can be utilized to assess a borrower’s creditworthiness. Then, we perform a comparative analysis of machine learning methods for the canonical problem of credit default prediction. Finally, we analyze, from an institutional point of view, the necessity of establishing a viable and nationally universal credit registration and scoring system utilizing online digital footprints, so that more people in China can have better access to the consumption loan market. Two different types of digital footprint data are utilized and matched with a bank’s loan default records. Each separately captures distinct dimensions of a person’s characteristics, such as shopping patterns and certain aspects of personality or inferred demographics revealed by social media features like profile image and nickname. We find that both datasets can generate acceptable to excellent prediction results, and that the different types of data tend to complement each other to achieve better performance.
Typically, the traditional types of data that banks use, such as income, occupation, and credit history, update over longer cycles, so they cannot reflect more immediate changes, such as a change in financial status caused by a business crisis; digital footprints, by contrast, can update daily, weekly, or monthly, and are thus capable of providing a more comprehensive and current profile of the borrower’s credit capabilities and risks. From the empirical and quantitative examination, we believe digital footprints can become an alternative information source for creditworthiness assessment because of their near-universal data coverage and because they can by and large resolve the "thin-file" issue, given that digital footprints come in much larger volume and at higher frequency.
Keywords: credit score, digital footprint, Fintech, machine learning
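The comparative analysis of machine learning methods described in this abstract can be sketched as a standard default-prediction benchmark. The snippet below is a minimal illustration, not the paper's actual pipeline: the features are synthetic stand-ins (the study's real digital-footprint and bank default data are not public), and it compares a logistic regression against gradient boosting by out-of-sample AUC.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# hypothetical stand-in features: e.g., shopping regularity, night-time
# activity share, a social-media profile signal
X = rng.normal(size=(n, 3))
true_logit = 0.8 * X[:, 0] - 1.2 * X[:, 1] + 0.5 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
aucs = {}
for name, model in [("logistic regression", LogisticRegression()),
                    ("gradient boosting", GradientBoostingClassifier(random_state=0))]:
    model.fit(X_tr, y_tr)
    aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {aucs[name]:.3f}")
```

On real footprint data, comparing models by held-out AUC in this way is what makes the "different data types complement each other" claim testable: concatenating the two feature sets and re-running the same loop should beat either set alone.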
Procedia PDF Downloads 162
22273 The Revised Completion of Student Internship Report by Goal Mapping
Authors: Faizah Herman
Abstract:
This study aims to explore the attitudes and behavior of goal mapping performed by students in completing the revised internship report on time. The approach is phenomenological research with qualitative methods. Data sources include observation, interviews, questionnaires, and focus group discussions. The research subjects were five students who completed the internship report revisions in a timely manner. The analysis technique is the interactive model of Miles and Huberman. The results showed that the students have a goal mapping that includes the ultimate goal and formulates goals by identifying what things need to be done, what action should be taken, and what kind of support is needed from the environment.
Keywords: goal mapping, revision internship report, students, Brawijaya
Procedia PDF Downloads 396
22272 Conceptualizing a Strategic Facilities Management Decision Framework for Heritage Building Maintenance Management
Authors: Adegoriola Mayowa I., Lai Joseph H. K., Yung Esther H. K., Chan Edwin H. K.
Abstract:
Heritage buildings (HBs) are structures with historical and architectural relevance that form an integral part of contemporary society. These buildings deserve to be protected for as long as possible to retain their significance. Therefore, the need to prioritize HB maintenance management (HBMM) is pertinent. However, the decision-making process of HBMM can be relatively daunting. The decision-making challenge may be attributed to the multiple stakeholders' expectations and requirements, which need to be met. To this end, professionals in the built environment have identified the need to apply the strategic concept of facilities management (FM) in decision making. Furthermore, different maintenance dimensions have been applied to the maintenance management of residential, commercial, and health facilities. Unfortunately, these different maintenance approaches, such as FM, sustainable FM, urban FM, green FM, and strategic FM, are yet to be fully explored in the decision-making process of HBMM. To bridge this gap, this study focuses on developing a framework for strategic decision-making in HBMM, which helps achieve HBMM sustainability. At the study's inception, a review of relevant literature in the domains of HBMM and FM was conducted. This review helped in the identification of contemporary maintenance practices and their applicability to HBMM. Afterward, a conceptual framework to aid decision-making in HBMM was developed. This framework integrates the FM scope (people, place, process, and technology) while ensuring that decision plans are made at the strategic, tactical, and operational levels. Also, the different characteristics of HBs and stakeholders' requirements were considered in the framework. The conceptual framework presents a holistic guide for professionals in HBMM to ensure that decision processes and outcomes are practical and efficient. It also contributes to the existing body of knowledge on the integration of FM in HBMM.
Furthermore, it will serve as a basis for future studies by applying the conceptualized framework in actual cases.
Keywords: decision-making, facility management, strategy, sustainability, heritage building, maintenance
Procedia PDF Downloads 138
22271 The Relationships between Market Orientation and Competitiveness of Companies in Banking Sector
Authors: Patrik Jangl, Milan Mikuláštík
Abstract:
The objective of the paper is to measure and compare the market orientation of Swiss and Czech banks, as well as to examine statistically the degree of influence it has on the competitiveness of the institutions. The analysis of market orientation is based on the collection, analysis, and correct interpretation of the data. A descriptive analysis of market orientation describes the current situation. Research on the relation between competitiveness and market orientation in the sector of big international banks is proposed, with the expectation that a strong relationship exists. In part, the work served as a reconfirmation of the suitability of classic methodologies for measuring banks’ market orientation. Two types of data were gathered: firstly, by measuring the subjectively perceived market orientation of a company, and secondly, by quantifying its competitiveness. All data were collected from a sample of small, mid-sized, and large banks. We used numerical secondary data from the international statistical financial Bureau van Dijk BANKSCOPE database. Statistical analysis led to the following results. Assuming classical market orientation measures to be scientifically justified, Czech banks are statistically less market-oriented than Swiss banks. Secondly, among small Swiss banks, which are not broadly internationally active, only a weak relationship exists between market orientation measures and market-share-based competitiveness measures. Thirdly, among all Swiss banks, a strong relationship exists between market orientation measures and market-share-based competitiveness measures. The above results imply the existence of a strong relationship in the sector of big international banks.
A strong statistical relationship has been proven to exist between market orientation measures and equity/total assets ratio in Switzerland.
Keywords: market orientation, competitiveness, marketing strategy, measurement of market orientation, relation between market orientation and competitiveness, banking sector
Procedia PDF Downloads 476
22270 Development of Academic Software for Medial Axis Determination of Porous Media from High-Resolution X-Ray Microtomography Data
Authors: S. Jurado, E. Pazmino
Abstract:
Determination of the medial axis of a porous media sample is a non-trivial problem of interest to several disciplines, e.g., hydrology, fluid dynamics, contaminant transport, filtration, oil extraction, etc. However, the computational tools available to researchers are limited and restricted. The primary aim of this work was to develop a series of algorithms to extract porosity, medial axis structure, and pore-throat size distributions from porous media domains. A complementary objective was to provide the algorithms as free computational software available to the academic community of researchers and students interested in 3D data processing. The burn algorithm was tested on porous media data obtained from High-Resolution X-Ray Microtomography (HRXMT) and on idealized computer-generated domains. The real data and idealized domains were discretized into voxel domains of 550³ elements and binarized to denote solid and void regions in order to determine porosity. Subsequently, the algorithm identifies the layer of void voxels next to the solid boundaries. An iterative process removes, or 'burns', void voxels layer by layer until all the void space is characterized. Multiple strategies were tested to optimize the execution time and use of computer memory, i.e., segmentation of the overall domain into subdomains, vectorization of operations, and extraction of single burn-layer data during the iterative process. The medial axis determination was conducted by identifying regions where burnt layers collide. The final medial axis structure was refined to avoid concave-grain effects and utilized to determine the pore-throat size distribution. A graphical user interface was developed to encompass all these algorithms, including the generation of idealized porous media domains. The software allows input of HRXMT data to calculate porosity, medial axis, and pore-throat size distribution and provides output in tabular and graphical formats.
Preliminary tests of the software developed during this study achieved medial axis, pore-throat size distribution, and porosity determination of 100³, 320³, and 550³ voxel porous media domains in 2, 22, and 45 minutes, respectively, on a personal computer (Intel i7 processor, 16 GB RAM). These results indicate that the software is a practical and accessible tool for postprocessing HRXMT data for the academic community.
Keywords: medial axis, pore-throat distribution, porosity, porous media
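The layer-by-layer 'burn' described in this abstract is essentially a grassfire transform. The sketch below is a minimal illustration of that idea, not the authors' software: each void voxel is labeled with the iteration at which it is burnt away from the solid boundary, and local maxima of these burn numbers approximate the medial axis.

```python
import numpy as np

def burn_numbers(void):
    """Grassfire ('burn') transform on a binary pore-space array.

    void: boolean ndarray, True where the voxel is pore (void) space.
    Returns an integer array: 0 for solid, otherwise the layer index
    at which the voxel is burnt, peeling inward from solid boundaries.
    """
    burn = np.zeros(void.shape, dtype=int)
    remaining = void.astype(bool).copy()
    core = tuple(slice(1, -1) for _ in range(void.ndim))
    layer = 0
    while remaining.any():
        layer += 1
        # pad with solid so the domain edge also acts as a boundary
        padded = np.pad(remaining, 1, constant_values=False)
        all_neighbors_void = np.ones(void.shape, dtype=bool)
        for axis in range(void.ndim):
            for neighbor in (slice(0, -2), slice(2, None)):
                sl = list(core)
                sl[axis] = neighbor
                all_neighbors_void &= padded[tuple(sl)]
        # burn voxels that touch solid or an already-burnt layer
        boundary = remaining & ~all_neighbors_void
        burn[boundary] = layer
        remaining &= ~boundary
    return burn

pore = np.ones((5, 5), dtype=bool)  # toy all-void domain
print(burn_numbers(pore))
```

The vectorized neighbor check mirrors the abstract's optimization strategies (vectorization of operations, extraction of one burn layer per iteration); for a real 550³ HRXMT domain one would add subdomain segmentation as the authors describe.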
Procedia PDF Downloads 116
22269 Bridging the Gap Between Student Needs and Labor Market Requirements in the Translation Industry in Saudi Arabia
Authors: Sultan Samah A Almjlad
Abstract:
The translation industry in Saudi Arabia is experiencing significant shifts driven by Vision 2030, which aims to diversify the economy and enhance international engagement. This change highlights the need for translators who are skilled in various languages and cultures, playing a crucial role in the nation's global integration efforts. However, there is a notable gap between the skills taught in academic institutions and what the job market demands. Many translation programs in Saudi universities do not align well with industry needs, resulting in graduates who may not meet employer expectations. To tackle this challenge, it is essential to thoroughly analyze the market to identify the key skills required, especially in sectors like legal, medical, technical, and audiovisual translation. At the same time, existing translation programs need to be evaluated to see whether they cover the necessary topics and provide practical training. Involving stakeholders such as translation agencies, professionals, and students is crucial to gather diverse perspectives. Identifying discrepancies between academic offerings and market demands will guide the development of targeted strategies. These strategies may include enriching curricula with industry-specific content, integrating emerging technologies like machine translation and CAT tools, and establishing partnerships with industry players to offer practical training opportunities and internships. Industry-led workshops and seminars can provide students with valuable insights, and certification programs can validate their skills. By aligning academic programs with industry needs, Saudi Arabia can build a skilled workforce of translators, supporting its economic diversification goals under Vision 2030. This alignment benefits both students and the industry, contributing to the growth of the translation sector and the overall development of the country.
Keywords: translation industry, bridging the gap, labor market, requirements
Procedia PDF Downloads 37
22268 Seismological Studies in Some Areas in Egypt
Authors: Gamal Seliem, Hassan Seliem
Abstract:
The Aswan area is one of the important areas in Egypt; because it encompasses the vital engineering structure of the High Dam, it has been selected for the present study. The study of the crustal deformation and gravity changes associated with earthquake activity in the High Dam area is of great importance for the safety of the High Dam and its economic resources. This paper deals with using micro-gravity, precise leveling, and GPS data for geophysical and geodetic studies. To carry out the detailed gravity survey, stations were established across the area for studying the subsurface structures. To study the recent vertical movements, a profile of 10 km length joining the High Dam and the Aswan Old Dam was established along the road connecting the two dams. This profile consists of 35 GPS/leveling stations extending along the two sides of the road and on the High Dam body. Precise leveling was carried out together with GPS and repeated micro-gravity surveys at the same time. A GPS network consisting of nine stations was established for studying the recent crustal movements. Many campaigns from December 2001 to December 2014 were performed to collect the gravity, leveling, and GPS data. The main aim of this work is to study the structural features and the behavior of the area, as depicted from repeated micro-gravity, precise leveling, and GPS measurements. The present work focuses on the analysis of the gravity, leveling, and GPS data. The gravity results of the present study were used to investigate and analyze the subsurface geologic structures and reveal minor structures; features and anomalies trend in W-E and N-S directions. The geodetic results indicated low rates of vertical and horizontal displacement and low strain values. This may be related to the stability of the area.
Keywords: repeated micro-gravity changes, precise leveling, GPS data, Aswan High Dam
Procedia PDF Downloads 448
22267 HcDD: The Hybrid Combination of Disk Drives in Active Storage Systems
Authors: Shu Yin, Zhiyang Ding, Jianzhong Huang, Xiaojun Ruan, Xiaomin Zhu, Xiao Qin
Abstract:
Since large-scale and data-intensive applications have been widely deployed, there is a growing demand for high-performance storage systems to support data-intensive applications. Compared with traditional storage systems, next-generation systems will embrace dedicated processors to reduce the computational load of host machines and will have hybrid combinations of different storage devices. The advent of the flash-memory-based solid state disk has played a critical role in revolutionizing the storage world. However, instead of simply replacing the traditional magnetic hard disk with the solid state disk, it is believed that finding a complementary approach to incorporate both of them is more challenging and attractive. This paper explores an idea of active storage, an emerging new storage configuration, in terms of the architecture and design, the parallel processing capability, the cooperation of other machines in a cluster computing environment, and a disk configuration, the hybrid combination of different types of disk drives. Experimental results indicate that the proposed HcDD achieves better I/O performance and longer storage system lifespan.
Keywords: parallel storage system, hybrid storage system, data-intensive, solid state disks, reliability
Procedia PDF Downloads 448
22266 Regional Flood-Duration-Frequency Models for Norway
Authors: Danielle M. Barna, Kolbjørn Engeland, Thordis Thorarinsdottir, Chong-Yu Xu
Abstract:
Design flood values give estimates of flood magnitude within a given return period and are essential to making adaptive decisions around land use planning, infrastructure design, and disaster mitigation. Often design flood values are needed at locations with insufficient data. Additionally, in hydrologic applications where flood retention is important (e.g., floodplain management and reservoir design), design flood values are required at different flood durations. A statistical approach to this problem is the development of a regression model for extremes where some of the parameters are dependent on flood duration in addition to being covariate-dependent. In hydrology, this is called a regional flood-duration-frequency (regional QDF) model. Typically, the underlying statistical distribution is chosen to be the Generalized Extreme Value (GEV) distribution. However, as the support of the GEV distribution depends on both its parameters and the range of the data, special care must be taken with the development of the regional model. In particular, we find that the GEV is problematic when developing a GAMLSS-type analysis due to the difficulty of proposing a link function that is independent of the unknown parameters and the observed data. We discuss these challenges in the context of developing a regional QDF model for Norway.
Keywords: design flood values, Bayesian statistics, regression modeling of extremes, extreme value analysis, GEV
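As a minimal illustration of the at-site building block behind such a model (not the authors' regional method), the sketch below fits a GEV distribution to a synthetic annual-maximum series and reads off a 100-year design flood as the 0.99 quantile. Note that SciPy's `genextreme` uses a shape parameter `c` equal to the negative of the usual GEV shape ξ; all numbers here are hypothetical.

```python
import numpy as np
from scipy.stats import genextreme

# synthetic annual-maximum flood series (hypothetical units, m^3/s)
rng = np.random.default_rng(42)
sample = genextreme.rvs(c=-0.1, loc=100.0, scale=20.0, size=200,
                        random_state=rng)

# maximum-likelihood fit of the three GEV parameters
c_hat, loc_hat, scale_hat = genextreme.fit(sample)

# T-year design flood = quantile at non-exceedance probability 1 - 1/T
T = 100
design_flood = genextreme.ppf(1.0 - 1.0 / T, c_hat,
                              loc=loc_hat, scale=scale_hat)
print(f"estimated {T}-year design flood: {design_flood:.1f}")
```

A regional QDF model replaces the three fitted constants with duration- and covariate-dependent functions, which is where the link-function difficulty discussed in the abstract arises.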
Procedia PDF Downloads 72
22265 Healthy and Smart Building Projects
Authors: Ali A. Karakhan
Abstract:
Stakeholders in the architecture, engineering, and construction (AEC) industry have always been searching for strategies to develop, design, and construct healthy and smart building projects. Healthy and smart building projects require that the building process, including design and construction, be altered and carefully implemented in order to bring about sustainable outcomes throughout the facility lifecycle. Healthy and smart building projects are expected to positively influence organizational success and facility performance across the project lifecycle, leading to superior outcomes in terms of people, economy, and the environment. The present study aims to identify potential strategies that AEC organizations can implement to achieve healthy and smart building projects. Drivers of and barriers to healthy and smart building features are also examined. The study findings indicate that there are three strategies to advance the development of healthy and smart building projects: (1) the incorporation of high-quality products and low chemical-emitting materials, (2) the integration of innovative designs, methods, and practices, and (3) the adoption of smart technology throughout the facility lifecycle. Satisfying external demands, achievement of a third-party certification, obtaining financial incentives, and a desire to fulfill professional duty are identified as the key drivers for developing healthy and smart building features; whereas lack of knowledge and training, time/cost constraints, preference for or adherence to customary practices, and an unclear business case for why healthy buildings are advantageous are recognized as the primary barriers to a wider diffusion of healthy and smart building projects.
The present study, grounded in previous engineering, medical, and public health research, provides valuable technical and practical recommendations for facility owners and industry professionals interested in pursuing sustainable, yet healthy and smart, building projects.
Keywords: healthy buildings, smart construction, innovative designs, sustainable projects
Procedia PDF Downloads 159
22264 EDM for Prediction of Academic Trends and Patterns
Authors: Trupti Diwan
Abstract:
Predicting student failure at school has become a difficult challenge due to both the large number of factors that can affect the reduced performance of students and the imbalanced nature of these kinds of data sets. This paper surveys the two elements needed to make predictions of students' academic performance: parameters and methods. This paper also proposes a framework for predicting the performance of engineering students. Genetic programming can be used to predict student failure/success. A ranking algorithm is used to rank students according to their credit points. The framework can be used as a basis for system implementation and prediction of students' academic performance in higher learning institutes.
Keywords: classification, educational data mining, student failure, grammar-based genetic programming
Procedia PDF Downloads 422
22263 An Overview of Pakistani Shales for Shale Gas Exploration and Comparison to North American Shale Plays
Authors: Ghulam Sohail, Christopher Hawkes
Abstract:
Pakistan has been facing a growing energy crisis for the last decade, and the government is seeking new horizons for increasing oil and gas production to reduce the gap between supply and demand. Recent developments in technologies to produce natural gas from shales at economical rates have unlocked new horizons for hydrocarbon exploration and development throughout the world. Operating companies in the U.S.A. and Canada have been particularly successful at producing shale gas, so comparison against the properties of shale gas reservoirs in these countries is used for an initial assessment of prospective shale gas reservoirs in other parts of the world. In this study, selected source rocks of Pakistan are evaluated for their shale gas potential using analogs selected from various North American shales for which data have been published. Published data for Pakistani shales were compiled, then assessed and supplemented through consultation with industry professionals. The Pakistani formations reviewed are the Datta (shaly sandstone), Hangu (sandy shale), Patala (sandy shale), Ranikot (shaly sandstone), Sembar (sandy shale), and Lower Goru (shaly sandstone) formations, all of which are known source rocks in the Indus Basin. For this study, available geological, geochemical, petrophysical, and elastic parameters have been investigated and are correlated specifically with the eight most active shale gas plays of the U.S.A., while data for other North American shale gas plays are used for general discussion of prospective Pakistani shales. The results show that the geological and geochemical parameters of all the Pakistani shales reviewed in this work are promising regarding their shale gas potential. However, more petrophysical and geomechanical data are required before conclusions on economic production from these shales can be made with confidence.
Keywords: Canada shale gas, Indus Basin, Pakistani shales, U.S.A. shale gas
Procedia PDF Downloads 205
22262 Minimum Vertices Dominating Set Algorithm for Secret Sharing Scheme
Authors: N. M. G. Al-Saidi, K. A. Kadhim, N. A. Rajab
Abstract:
Over the past decades, computer networks and data communication systems have been developing fast, so the necessity to protect transmitted data is a challenging issue, and data security has become a serious problem nowadays. A secret sharing scheme is a method which allows a master key to be distributed among a finite set of participants in such a way that only certain authorized subsets of participants can reconstruct the original master key. To create a secret sharing scheme, many mathematical structures have been used; the most widely used structure is the one based on graph theory (graph access structure). Subsequently, many researchers have tried to find efficient schemes based on graph access structures. In this paper, we propose a novel, efficient construction of a perfect secret sharing scheme for a uniform access structure. The dominating set of vertices in a regular graph is used for this construction in the following way: each vertex represents a participant, and each minimum independent dominating subset represents a minimal qualified subset. Some relations between the dominating set, graph order, and regularity are derived and can be used to demonstrate the possibility of using a dominating set to construct a secret sharing scheme. The information rate, which is used as a measure of the efficiency of such systems, is calculated to show that the proposed method has improved values.
Keywords: secret sharing scheme, dominating set, information rate, access structure, rank
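The vertex-to-participant mapping described in this abstract hinges on finding minimum independent dominating sets. The brute-force sketch below is illustrative only (the abstract does not publish the authors' construction): it enumerates those sets for a small regular graph, where each set returned would correspond to a minimal qualified subset of participants.

```python
from itertools import combinations

def is_independent(graph, subset):
    # no two chosen vertices are adjacent
    return all(v not in graph[u] for u, v in combinations(subset, 2))

def dominates(graph, subset):
    # every vertex is either in the subset or adjacent to it
    covered = set(subset)
    for u in subset:
        covered |= graph[u]
    return covered == set(graph)

def min_independent_dominating_sets(graph):
    """All minimum-size independent dominating sets (brute force)."""
    for k in range(1, len(graph) + 1):
        found = [set(s) for s in combinations(graph, k)
                 if is_independent(graph, s) and dominates(graph, s)]
        if found:
            return found
    return []

# 2-regular graph (a 4-cycle); vertices play the role of participants
cycle4 = {'a': {'b', 'd'}, 'b': {'a', 'c'},
          'c': {'b', 'd'}, 'd': {'a', 'c'}}
qualified = min_independent_dominating_sets(cycle4)
print([sorted(s) for s in qualified])  # [['a', 'c'], ['b', 'd']]
```

Brute force is exponential; the point of exploiting regularity, as the paper proposes, is precisely to relate dominating-set size to graph order without this enumeration.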
Procedia PDF Downloads 393
22261 Towards a Biologically Relevant Tumor-on-a-Chip: Multiplex Microfluidic Platform to Study Breast Cancer Drug Response
Authors: Soroosh Torabi, Brad Berron, Ren Xu, Christine Trinkle
Abstract:
Microfluidics integrated with 3D cell culture is a powerful technology to mimic the cellular environment and can be used to study cell activities such as proliferation, migration, and response to drugs. This technology has gained increasing attention in cancer studies over the past years, and many organ-on-a-chip systems have been developed to study cancer cell behaviors in an ex-vivo tumor microenvironment. However, there are still some barriers to adoption, which include low throughput, complexity in 3D cell culture integration, and limitations on non-optical analysis of cells. In this study, a user-friendly microfluidic multi-well plate was developed to mimic the in vivo tumor microenvironment. The microfluidic platform feeds multiple 3D cell culture sites at the same time, which enhances the throughput of the system. The platform uses hydrophobic Cassie-Baxter surfaces created by microchannels to enable convenient loading of hydrogel/cell suspensions into the device, while providing barrier-free placement of the hydrogel and cells adjacent to the fluidic path. The microchannels support convective flow and diffusion of nutrients to the cells, and a removable lid enables further chemical and physiological analysis of the cells. Different breast cancer cell lines were cultured in the device and then monitored to characterize nutrient delivery to the cells as well as cell invasion and proliferation. In addition, the drug response of breast cancer cell lines cultured in the device was compared to the response of xenograft models to the same drugs to analyze the relevance of this platform for use in future drug-response studies.
Keywords: microfluidics, multi-well 3D cell culture, tumor microenvironment, tumor-on-a-chip
Procedia PDF Downloads 264