Search results for: factor models
8284 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks
Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez
Abstract:
Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients that represents the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in these low-resource scenarios, building an ASR model is a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches therefore become necessary given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. This semi-supervised learning approach consists of: (a) training a seed ASR model with a DNN using a set of audios and their respective transcriptions; the DNN was initialized with a single hidden layer, and the number of hidden layers was increased during training to five; a refinement of the weight matrices and bias terms, together with Stochastic Gradient Descent (SGD) training, was also performed, using the cross-entropy criterion as the objective function; (b) decoding/testing a set of unlabeled data with the obtained seed model; and (c) selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three lattice-based confidence scores (based on the graph cost, the acoustic cost, and a combination of both) were used as the selection technique. The performance of the ASR system was evaluated by means of the Word Error Rate (WER). The test dataset was renewed in order to exclude the transcriptions newly added to the training dataset. Several experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the proposed DNN system was also made under the same conditions. Results showed that the semi-supervised ASR model based on DNNs outperformed the GMM model, in terms of WER, in all tested cases. The best result was an improvement of 6% relative WER. Hence, these promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments. Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning
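The selection and retraining loop in steps (b) and (c) can be illustrated with a short sketch. The following Python code is not the authors' implementation (a real system would sit on top of an ASR toolkit such as Kaldi); it is a minimal, hypothetical illustration of confidence-based pseudo-label selection, where decode(), retrain(), the cost blending, and the threshold are assumed placeholders.

```python
# Hypothetical sketch of the semi-supervised selection loop described above.
# decode() and retrain() stand in for a real ASR toolkit; they are assumptions,
# not the authors' actual code.

def combined_confidence(graph_cost, acoustic_cost, alpha=0.5):
    """Blend lattice graph and acoustic costs into one confidence score.
    Lower costs are treated as higher confidence (assumption)."""
    return -(alpha * graph_cost + (1.0 - alpha) * acoustic_cost)

def select_pseudo_labels(unlabeled_utts, seed_model, threshold):
    """Decode unlabeled audio with the seed model and keep only hypotheses
    whose lattice-based confidence exceeds the threshold."""
    selected = []
    for utt in unlabeled_utts:
        hyp, graph_cost, acoustic_cost = seed_model.decode(utt)
        if combined_confidence(graph_cost, acoustic_cost) >= threshold:
            selected.append((utt, hyp))          # treat the hypothesis as a label
    return selected

def semi_supervised_round(labeled, unlabeled, seed_model, threshold):
    """One round of the procedure: decode, filter, retrain (SGD, cross-entropy)."""
    pseudo = select_pseudo_labels(unlabeled, seed_model, threshold)
    return seed_model.retrain(labeled + pseudo)
```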
Procedia PDF Downloads 339
8283 Associated Factors of Hypertension, Hypercholesterolemia and Double Burden Hypertension-Hypercholesterolemia in Patients With Congestive Heart Failure: Hospital Based Study
Authors: Pierre Mintom, William Djeukeu Asongni, Michelle Moni, William Dakam, Christine Fernande Nyangono Biyegue.
Abstract:
Background: In order to prevent congestive heart failure, control of hypertension and hypercholesterolemia is necessary because those risk factors frequently occur in combination. Objective: The aim of the study was to determine the prevalence and risk factors of hypertension, hypercholesterolemia and the double burden of hypertension-hypercholesterolemia in patients with congestive heart failure. Methodology: A database of 98 patients suffering from congestive heart failure was used. The patients were recruited from August 15, 2017, to March 5, 2018, in the cardiology department of the Deido District Hospital of Douala. This database provides information on sociodemographic parameters, biochemical examinations, characteristics of heart failure and food consumption. The ESC/ESH and NCEP-ATP III definitions were used to define hypercholesterolemia (total cholesterol ≥200 mg/dl) and hypertension (SBP ≥140 mmHg and/or DBP ≥90 mmHg). The double burden of hypertension-hypercholesterolemia was defined as follows: total cholesterol (CT) ≥200 mg/dl, SBP ≥140 mmHg and DBP ≥90 mmHg. Results: The prevalences of hypertension (HTA), hypercholesterolemia (hyperchol) and the double burden HTA-hyperchol were 61.2%, 66.3% and 45.9%, respectively. No sociodemographic factor was associated with hypertension or the double burden, but male gender was significantly associated (p<0.05) with hypercholesterolemia. HypoHDLemia significantly increased the risk of hypercholesterolemia and of the double burden, by 19.664 times (p=0.001) and 14.968 times (p=0.021), respectively. Regarding dietary habits, the consumption of rice, of peanuts and their derivatives, and of cottonseed oil was significantly (p<0.05) associated with the occurrence of hypertension. The consumption of tomatoes, green bananas, corn and its derivatives, peanuts and their derivatives, and cottonseed oil was significantly (p<0.05) associated with the occurrence of hypercholesterolemia. The consumption of palm oil and cottonseed oil was associated with the occurrence of the double burden of hypertension-hypercholesterolemia. Consumption of eggs appeared protective against hypercholesterolemia, and consumption of peanuts and tomatoes appeared protective against the double burden. Conclusion: Hypercholesterolemia associated with hypertension appears to be a complicating factor of congestive heart failure. The key risk factors are mainly diet-based, suggesting the importance of nutritional education for patients. New management protocols emphasizing diet should be considered. Keywords: risk factors, hypertension, hypercholesterolemia, congestive heart failure
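Associations reported as a risk "increased by 19.664 times" are typically odds ratios from a logistic regression. The Python sketch below (statsmodels) shows how such an odds ratio could be computed; the file name and column names are hypothetical and do not come from the study's database.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data frame: one row per patient, binary outcome and predictors.
df = pd.read_csv("chf_patients.csv")           # assumed file, not the study data
y = df["hypercholesterolemia"]                 # 1 = total cholesterol >= 200 mg/dl
X = sm.add_constant(df[["hypo_hdl", "male", "age"]])

model = sm.Logit(y, X).fit(disp=False)
summary = pd.DataFrame({"OR": np.exp(model.params),   # odds ratios
                        "p_value": model.pvalues})
print(summary)
```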
Procedia PDF Downloads 68
8282 Cognitive Performance and Everyday Functionality in Healthy Greek Seniors
Authors: George Pavlidis, Ana Vivas
Abstract:
The demographic change into an aging population has stimulated the examination of seniors' mental health and ability to live independently. The corresponding literature describes the relation between cognitive decline and everyday functionality with aging, focusing largely on individuals who are reaching or have crossed the threshold of various forms of neuropathology and disability. In this context, a recent meta-analysis indicates a moderate relation between cognitive performance and everyday functionality in AD sufferers. However, there has not been an analogous effort to examine this relation in the healthy spectrum of aging (i.e., in samples that are not challenged by a neurodegenerative disease). There is a consensus that the assessment tools designed to detect neuropathology are distinct from those that assess cognitive performance in healthy adults; thus their universal use in cognitively challenged and in healthy adults is not always valid. The same holds for the assessment of everyday functionality. In addition, it is argued that everyday functionality should be examined with culturally adjusted assessment tools, since many vital everyday tasks are heterotypical among distinct cultures. Therefore, this study set out to examine the relation between cognitive performance and everyday functionality (a) in the healthy spectrum of aging and (b) by adjusting the everyday functionality tools EPT and OTDL-R to the Greek cultural context. In Greece, 107 cognitively healthy seniors (mean age = 62.24) completed a battery of neuropsychological tests and everyday functionality tests. Both were carefully chosen to be sensitive to fluctuations of performance within the healthy spectrum of cognitive performance and everyday functionality. The everyday functionality assessment tools were modified to reflect the local cultural context (i.e., EPT-G and OTDL-G). The results showed that performance on all everyday functionality measures declines with age (r = .197 to .509). Statistically significant correlations emerged between cognitive performance and everyday functionality assessments, ranging from r = 0.202 to r = 0.510. A series of independent regression analyses including the cognitive assessment scores yielded statistically significant models that explained 20.9% to 32.4% (adjusted R²) of the variance in the everyday functionality indexes. All everyday functionality measures were independently predicted by the TMT B-A index, an indicator of executive function. Stepwise regression analyses showed that TMT B-A and age were statistically significant independent predictors of EPT-G and OTDL-G. It was concluded that everyday functionality declines with age and that cognitive performance and everyday functionality may be related in the healthy spectrum of aging. Age seems not to be the sole contributing factor in the decline of everyday functionality; executive control contributes as well. Moreover, it was concluded that the EPT-G and OTDL-G are valuable tools to assess everyday functionality in Greek seniors who are not cognitively challenged, especially for research purposes. Future research should examine the factors that contribute to better cognitive vitality, especially executive control, as vital for the maintenance of independent living capacity with aging. Keywords: cognition, everyday functionality, aging, cognitive decline, healthy aging, Greece
Procedia PDF Downloads 523
8281 Modelling of Pipe Jacked Twin Tunnels in a Very Soft Clay
Authors: Hojjat Mohammadi, Randall Divito, Gary J. E. Kramer
Abstract:
Tunnelling and pipe jacking in very soft soils (fat clays), even with an Earth Pressure Balance tunnel boring machine (EPBM), can cause large ground displacements. In this study, the short-term and long-term ground and tunnel response is predicted for twin, pipe-jacked, 3-meter-diameter EPBM tunnels with a narrow pillar width. Initial modelling indicated complete closure of the annulus gap at the tail shield onto the centrifugally cast, glass-fiber-reinforced, polymer mortar jacking pipe (FRP). Numerical modelling was employed to simulate the excavation and support installation sequence, examine the ground response during excavation, confirm the adequacy of the pillar width, and check the structural adequacy of the installed pipe. In the numerical models, a Mohr-Coulomb constitutive model with the effect of unloading was adopted for the fat clays, while the generalized Hoek-Brown model was employed for the bedrock layer. The numerical models considered explicit excavation sequences and different levels of ground convergence prior to support installation. The carefully studied excavation sequences made the analysis of this very soft clay possible; otherwise, obtaining convergence in the numerical analysis would not have been achievable. The predicted results indicate that the ground displacements around the tunnels and their effect on the pipe would be acceptable, despite predictions of large zones of plastic behaviour around the tunnels and within the entire pillar between them due to excavation-induced ground movements. Keywords: finite element modeling (FEM), pipe-jacked tunneling, very soft clay, EPBM
Procedia PDF Downloads 82
8280 Interactions between Residential Mobility, Car Ownership and Commute Mode: The Case for Melbourne
Authors: Solmaz Jahed Shiran, John Hearne, Tayebeh Saghapour
Abstract:
Daily travel behavior is strongly influenced by the location of the places of residence, education, and employment. Hence, a change in those locations due to a move or a change of occupation leads to a change in travel behavior. Given the interaction between housing mobility and travel behavior, the hypothesis is that a mobile housing market allows households to move in response to any change in their life course, allowing them to be closer to central services, public transport facilities and the workplace, and hence reducing the time individuals spend on daily travel. Conversely, household immobility may lead to longer commutes, for example after a change of job or when new services are needed, such as schools for children who have reached school age. This paper aims to investigate the association between residential mobility and travel behavior. The Victorian Integrated Survey of Travel and Activity (VISTA) data is used for the empirical analysis. Car ownership and the journey-to-work time and distance of employed people are used as indicators of travel behavior. A change of usual residence within the last five years was used to identify movers and non-movers. Statistical analysis, including regression models, is used to compare the travel behavior of movers and non-movers. The results show that travel time and distance do not differ between movers and non-movers. However, this is not the case when taking the residence tenure type into account. In addition, the car ownership rate and the number of cars owned were found to be significantly higher for non-movers. It is hoped that the results from this study will contribute to a better understanding of factors other than common socioeconomic and built environment features influencing travel behavior. Keywords: journey to work, regression models, residential mobility, commute mode, car ownership
Procedia PDF Downloads 133
8279 Architectural Visualization: From Ancient Civilizations to the Roman Empire
Authors: Matthias Stange
Abstract:
Architectural visualization has been practiced for as long as there have been buildings. Visualization (Latin visibilis, "visible") generally refers to bringing abstract data and relationships into a graphically, visually comprehensible form. In particular, visualization refers to the process of translating relationships that are difficult to formulate linguistically or logically into visual media (e.g., drawings or models) to make them comprehensible. Building owners have always been interested in knowing how their building will look before it is built. In the empirical part of this study, the roots of architectural visualization are examined, from the ancient civilizations to the end of the Roman Empire. Extensive literature research on architectural theory and architectural history forms the basis for this analysis. The focus of the analysis is basic research ranging from the emergence of the first two-dimensional drawings in the Neolithic period to the triggers of significant further developments in architectural representation, as well as their importance for subsequent methods and for the transmission of knowledge over the following epochs. The analysis concentrates on the development of analog methods of representation, from the first Neolithic house floor plans to the detailed Greek stone models and the paper drawings of the Roman Empire. In particular, the question of socio-cultural, socio-political, and economic changes as possible triggers for the development of representational media and methods is analyzed. The study has shown that the development of visual building representation has been driven by scientific, technological, and social developments since the emergence of the first civilizations more than 6000 years ago, beginning with the change in the human subsistence strategy from food appropriation by hunting and gathering to food production by agriculture and livestock, and with the sedentary lifestyle this change required. Keywords: ancient Greece, ancient orient, Roman Empire, architectural visualization
Procedia PDF Downloads 116
8278 Construction and Validation of a Hybrid Lumbar Spine Model for the Fast Evaluation of Intradiscal Pressure and Mobility
Authors: Dicko Ali Hamadi, Tong-Yette Nicolas, Gilles Benjamin, Faure Francois, Palombi Olivier
Abstract:
A novel hybrid model of the lumbar spine, allowing fast static and dynamic simulations of the disc pressure and the spine mobility, is introduced in this work. Our contribution is to combine rigid bodies, deformable finite elements, articular constraints, and springs into a unique model of the spine. Each vertebra is represented by a rigid body controlling a surface mesh to model contacts on the facet joints and the spinous process. The discs are modeled using a heterogeneous tetrahedral finite element model. The facet joints are represented as elastic joints with six degrees of freedom, while the ligaments are modeled using non-linear one-dimensional elastic elements. The challenge we tackle is to make these different models efficiently interact while respecting the principles of Anatomy and Mechanics. The mobility, the intradiscal pressure, the facet joint force and the instantaneous center of rotation of the lumbar spine are validated against the experimental and theoretical results of the literature on flexion, extension, lateral bending as well as axial rotation. Our hybrid model greatly simplifies the modeling task and dramatically accelerates the simulation of pressure within the discs, as well as the evaluation of the range of motion and the instantaneous centers of rotation, without penalizing precision. These results suggest that for some types of biomechanical simulations, simplified models allow far easier modeling and faster simulations compared to usual full-FEM approaches without any loss of accuracy.Keywords: hybrid, modeling, fast simulation, lumbar spine
Procedia PDF Downloads 306
8277 Dissolution Kinetics of Chevreul’s Salt in Ammonium Chloride Solutions
Authors: Mustafa Sertçelik, Turan Çalban, Hacali Necefoğlu, Sabri Çolak
Abstract:
In this study, Chevreul's salt solubility and its dissolution kinetics in ammonium chloride solutions were investigated. The Chevreul's salt used in this study was obtained under the optimum conditions determined by T. Çalban et al. (ammonium sulphide concentration: 0.4 M, copper sulphate concentration: 0.25 M, temperature: 60°C, stirring speed: 600 rev/min, pH: 4, reaction time: 15 min). Chevreul's salt solubility in ammonium chloride solutions and the kinetics of dissolution were investigated. The selected parameters affecting solubility were the reaction temperature, the concentration of ammonium chloride, the stirring speed, and the solid/liquid ratio. Correlation of the experimental results was achieved using linear regression implemented in the statistical package Statistica. The effects of the parameters on Chevreul's salt solubility were examined, and the integrated rate expression for the dissolution rate was found using kinetic models for solid-liquid heterogeneous reactions. The results revealed that the dissolution rate of Chevreul's salt increases with increasing temperature, ammonium chloride concentration and stirring speed, and decreases with increasing solid/liquid ratio. Based on the application of the experimental results to the kinetic models, we can deduce that the dissolution rate of Chevreul's salt is controlled by diffusion through the ash (or product) layer. The activation energy of the dissolution reaction was found to be 74.83 kJ/mol. The integrated rate expression, together with the effects of the parameters on Chevreul's salt solubility, was found to be: 1 − 3(1−X)^(2/3) + 2(1−X) = [2.96×10¹³ · (C_A)^3.08 · (S/L)^(−0.38) · (W)^1.23 · e^(−9001.2/T)] · t. Keywords: Chevreul's salt, copper, ammonium chloride, ammonium sulphide, dissolution kinetics
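The integrated rate expression above has the form of the standard shrinking-core model with ash-layer diffusion control, 1 − 3(1−X)^(2/3) + 2(1−X) = k·t. The Python sketch below shows how conversion X could be recovered numerically from that expression; the parameter values inserted are illustrative assumptions (units of W and the choice of temperature are not specified in the abstract), and the root-finding approach is only one way to invert the relation.

```python
import numpy as np
from scipy.optimize import brentq

def rate_constant(C_A, S_L, W, T):
    """Apparent rate constant from the fitted expression in the abstract:
    k = 2.96e13 * C_A**3.08 * (S/L)**-0.38 * W**1.23 * exp(-9001.2 / T)."""
    return 2.96e13 * C_A**3.08 * S_L**-0.38 * W**1.23 * np.exp(-9001.2 / T)

def conversion(t, k):
    """Solve 1 - 3(1-X)^(2/3) + 2(1-X) = k*t for X (ash-layer diffusion control)."""
    if k * t >= 1.0:          # the particle is fully converted within time t
        return 1.0
    g = lambda X: 1 - 3 * (1 - X)**(2 / 3) + 2 * (1 - X) - k * t
    return brentq(g, 0.0, 1.0)

# Illustrative (assumed) conditions: 1 M NH4Cl, S/L = 0.01, 600 rev/min, 298 K.
k = rate_constant(C_A=1.0, S_L=0.01, W=600.0, T=298.0)
for t in (10, 30, 60):        # assumed time units consistent with k
    print(t, conversion(t, k))
```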
Procedia PDF Downloads 308
8276 Analysis of Capillarity Phenomenon Models in Primary and Secondary Education in Spain: A Case Study on the Design, Implementation, and Analysis of an Inquiry-Based Teaching Sequence
Authors: E. Cascarosa-Salillas, J. Pozuelo-Muñoz, C. Rodríguez-Casals, A. de Echave
Abstract:
This study focuses on improving the understanding of the capillarity phenomenon among Primary and Secondary Education students. Despite being a common concept in daily life and covered in various subjects, students’ comprehension remains limited. This work explores inquiry-based teaching methods to build a conceptual foundation of capillarity by examining the forces involved. The study adopts an inquiry-based teaching approach supported by research emphasizing the importance of modeling in science education. Scientific modeling aids students in applying knowledge across varied contexts and developing systemic thinking, allowing them to construct scientific models applicable to everyday situations. This methodology fosters the development of scientific competencies such as observation, hypothesis formulation, and communication. The research was structured as a case study with activities designed for Spanish Primary and Secondary Education students aged 9 to 13. The process included curriculum analysis, the design of an activity sequence, and its implementation in classrooms. Implementation began with questions that students needed to resolve using available materials, encouraging observation, experimentation, and the re-contextualization of activities to everyday phenomena where capillarity is observed. Data collection tools included audio and video recordings of the sessions, which were transcribed and analyzed alongside the students' written work. Students' drawings on capillarity were also collected and categorized. Qualitative analyses of the activities showed that, through inquiry, students managed to construct various models of capillarity, reflecting an improved understanding of the phenomenon. Initial activities allowed students to express prior ideas and formulate hypotheses, which were then refined and expanded in subsequent sessions. The generalization and use of graphical representations of their ideas on capillarity, analyzed alongside their written work, enabled the categorization of capillarity models: Intuitive Model: A visual and straightforward representation without explanations of how or why it occurs. Simple symbolic elements, such as arrows to indicate water rising, are used without detailed or causal understanding. It reflects an initial, immediate perception of the phenomenon, interpreted as something that happens "on its own" without delving into the microscopic level. Explanatory Intuitive Model: Students begin to incorporate causal explanations, though still limited and without complete scientific accuracy. They represent the role of materials and use basic terms such as ‘absorption’ or ‘attraction’ to describe the rise of water. This model shows a more complex understanding where the phenomenon is not only observed but also partially explained in terms of interaction, though without microscopic detail. School Scientific Model: This model reflects a more advanced and detailed understanding. Students represent the phenomenon using specific scientific concepts like ‘surface tension,’ cohesion,’ and ‘adhesion,’ including structured explanations connecting microscopic and macroscopic levels. At this level, students model the phenomenon as a coherent system, demonstrating how various forces or properties interact in the capillarity process, with representations on a microscopic level. 
The study demonstrated that the capillarity phenomenon can be effectively approached in class through the experimental observation of everyday phenomena, explained through guided inquiry learning. The methodology facilitated students’ construction of capillarity models and served to analyze an interaction phenomenon of different forces occurring at the microscopic level.Keywords: capillarity, inquiry-based learning, scientific modeling, primary and secondary education, conceptual understanding, Drawing analysis.
Procedia PDF Downloads 14
8275 Recovery and Identification of Phenolic Acids in Honey Samples from Different Floral Sources of Pakistan Having Antimicrobial Activity
Authors: Samiyah Tasleem, Muhammad Abdul Haq, Syed Baqir Shyum Naqvi, Muhammad Abid Husnain, Sajjad Haider Naqvi
Abstract:
The objectives of the present study were: a) to investigate the antimicrobial activity of honey samples from different floral sources of Pakistan, and b) to recover the phenolic acids in them as a possible contributing factor to the antimicrobial activity. Six honey samples from different floral sources, namely Trachyspermum copticum, Acacia species, Helianthus annuus, Carissa opaca, Zizyphus and Mangifera indica, were used. The antimicrobial activity was investigated by the disc diffusion method against eight freshly isolated clinical isolates (Staphylococcus aureus, Staphylococcus epidermidis, Streptococcus faecalis, Pseudomonas aeruginosa, Klebsiella pneumoniae, Escherichia coli, Proteus vulgaris and Candida albicans). The antimicrobial activity of honey was compared with five commercial antibiotics, namely doxycycline (DO, 30 µg/mL), oxytetracycline (OT, 30 µg/mL), clarithromycin (CLR, 15 µg/mL), moxifloxacin (MXF, 5 µg/mL) and nystatin (NT, 100 UT). The fractions responsible for the antimicrobial activity were extracted using ethyl acetate. Solid phase extraction (SPE) was used to recover the phenolic acids from the honey samples. Identification was carried out via High-Performance Liquid Chromatography (HPLC). The results indicated that antimicrobial activity was present in all honey samples and was comparable to that of the antibiotics used in the study. In the microbiological assay, the ethyl acetate honey extract was found to exhibit very promising antimicrobial activity against all the microorganisms tested, indicating the presence of phenolic compounds. Six phenolic acids, namely gallic, caffeic, ferulic, vanillic, benzoic and cinnamic acids, were identified by HPLC, along with some unknown substances. In conclusion, the Pakistani honey samples showed broad-spectrum antibacterial and promising antifungal activity. The identification of six different phenolic acids shows that Pakistani honey samples are rich sources of phenolic compounds, which could be the contributing factor to the antimicrobial activity. Keywords: Pakistani honey, antimicrobial activity, phenolic acids (gallic, caffeic, ferulic, vanillic, benzoic and cinnamic acids)
Procedia PDF Downloads 549
8274 Kýklos Dimensional Geometry: Entity Specific Core Measurement System
Authors: Steven D. P Moore
Abstract:
A novel method referred to as Kýklos (Ky) dimensional geometry is proposed as an entity-specific core geometric dimensional measurement system. Ky geometric measures can construct scaled multi-dimensional models using regular and irregular sets in IRⁿ. This entity-specific derived geometric measurement system shares similarities with fractal methods, in which a 'fractal transformation operator' is applied to a set S to produce a union of N copies. The Kýklos inputs use 1D geometry as a core measure. One-dimensional inputs include the radius interval of a circle/sphere or the semiminor/semimajor axis intervals of an ellipse or spheroid. These geometric inputs have finite values that can be measured in SI distance units. The outputs for each interval are divided and subdivided 1D subcomponents whose union is equal to the interval geometry/length. Setting a limit on the subdivision iterations creates a finite value for each 1D subcomponent. The uniqueness of this method is captured by allowing the simplest 1D inputs to define entity-specific subclass geometric core measurements that can also be used to derive length measures. Current methodologies for celestial-based measurement of time, as defined within SI units, fit within this methodology, thus combining spatial and temporal features into geometric core measures. The novel Ky method discussed here offers geometric measures with which to construct scaled multi-dimensional structures, even models. Ky classes proposed for consideration range from the celestial to the subatomic. The application of this method offers incredible possibilities, for example, geometric architecture that can represent scaled celestial models incorporating planets (spheroids) and celestial motion (elliptical orbits). Keywords: Kyklos, geometry, measurement, celestial, dimension
Procedia PDF Downloads 166
8273 Ethnic Identity as an Asset: Linking Ethnic Identity, Perceived Social Support, and Mental Health among Indigenous Adults in Taiwan
Authors: A.H.Y. Lai, C. Teyra
Abstract:
In Taiwan, there are 16 official indigenous groups, accounting for 2.3% of the total population. Like other indigenous populations worldwide, indigenous peoples in Taiwan have poorer mental health because of their history of oppression and colonisation. Amid the negative narratives, the ethnic identity of cultural minorities is their unique psychological and cultural asset. Moreover, positive socialisation is found to be related to strong ethnic identity. Based on Phinney's theory of ethnic identity development and on social support theory, this study adopted a strength-based approach conceptualising ethnic identity as the central organising principle linking perceived social support and mental health among indigenous adults in Taiwan. Aims: The overall aim is to examine the effect of ethnic identity and social support on mental health. The specific aims were to examine: (1) the association between ethnic identity and mental health; (2) the association between perceived social support and mental health; and (3) the indirect effect of ethnic identity linking perceived social support and mental health. Methods: Participants were indigenous adults in Taiwan (n=200; mean age=29.51; female=31%, male=61%, others=8%). A cross-sectional quantitative design was implemented using data collected in the year 2020. Respondent-driven sampling was used. The standardised measurements were the Ethnic Identity Scale (6 items), the Social Support Questionnaire-SF (6 items), the Patient Health Questionnaire (9 items), and the Generalised Anxiety Disorder scale (7 items). Covariates were age, gender and economic satisfaction. A four-stage structural equation modelling (SEM) approach with robust maximum likelihood estimation was employed using Mplus 8.0. Step 1: A measurement model was built and tested using confirmatory factor analysis (CFA). Step 2: Factor covariances were re-specified as direct effects in the SEM, and covariates were added. The direct effects of (1) ethnic identity and social support on depression and anxiety and (2) social support on ethnic identity were tested. The indirect effect of ethnic identity was examined with the bootstrapping technique. Results: The CFA model showed satisfactory fit statistics: χ²(df) = 869.69 (608), p<.05; comparative fit index (CFI)/Tucker-Lewis index (TLI) = 0.95/0.94; root mean square error of approximation (RMSEA) = 0.05; standardized root mean square residual (SRMR) = 0.05. Ethnic identity is represented by two latent factors: ethnic identity-commitment and ethnic identity-exploration. Depression, anxiety and social support are single-factor latent variables. For the SEM, the model fit statistics were: χ²(df) = 779.26 (527), p<.05; CFI/TLI = 0.94/0.93; RMSEA = 0.05; SRMR = 0.05. Ethnic identity-commitment (b=-0.30) and social support (b=-0.33) had direct negative effects on depression, but ethnic identity-exploration did not. Ethnic identity-commitment (b=-0.43) and social support (b=-0.31) had direct negative effects on anxiety, while identity-exploration (b=0.24) demonstrated a positive effect. Social support had direct positive effects on ethnic identity-exploration (b=0.26) and ethnic identity-commitment (b=0.31). Mediation analysis demonstrated the indirect effect of ethnic identity-commitment linking social support and depression (b=0.22). Implications: The results underscore the role of social support in preventing depression via ethnic identity commitment among indigenous adults in Taiwan. Adopting the strength-based approach, mental health practitioners can mobilise indigenous peoples' commitment to their group to promote their well-being. Keywords: ethnic identity, indigenous population, mental health, perceived social support
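The indirect effect reported above (social support, via ethnic identity-commitment, to depression) was estimated in Mplus with bootstrapping. As a rough illustration of the same idea, the Python sketch below bootstraps the product of the two regression paths on observed composite scores; this uses ordinary least squares rather than the latent-variable SEM actually fitted, and all variable names are hypothetical.

```python
import numpy as np

def indirect_effect(support, commitment, depression):
    """Product-of-coefficients estimate a*b for support -> commitment -> depression.
    Inputs are 1-D numpy arrays of composite scores."""
    a = np.polyfit(support, commitment, 1)[0]              # path a: support -> commitment
    X = np.column_stack([np.ones_like(support), commitment, support])
    b = np.linalg.lstsq(X, depression, rcond=None)[0][1]   # path b: commitment -> depression, adjusting for support
    return a * b

def bootstrap_ci(support, commitment, depression, n_boot=5000, seed=0):
    """95% percentile bootstrap confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(support)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                        # resample with replacement
        estimates.append(indirect_effect(support[idx], commitment[idx], depression[idx]))
    return np.percentile(estimates, [2.5, 97.5])
```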
Procedia PDF Downloads 103
8272 High Performance Computing Enhancement of Agent-Based Economic Models
Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna
Abstract:
This research presents the details of the implementation of high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to study the economy as a dynamic system of interacting heterogeneous agents, and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, like major disasters, changes in policies, exogenous shocks, etc., on the economy of the country or the region, it is pertinent to study how the disruptions cascade through every single economic entity affecting its decisions and interactions, and eventually affect the economic macro parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using message passing interface (MPI). A balanced distribution of computational load among MPI-processes (i.e. CPU cores) of computer clusters while taking all the interactions among agents into account is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g. credit networks, etc.) whereas others are dense with random links (e.g. consumption markets, etc.). The agents are partitioned into mutually-exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions like the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI-process, are adopted. Efficient communication among MPI-processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e. about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is further being enhanced to simulate 1:1 model of Euro-zone (i.e. 322 million agents).Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process
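The agent-partitioning and message-exchange pattern described above can be sketched in a few lines with mpi4py. This is not the project's implementation (which is far more elaborate and presumably written in a compiled language); it is a hypothetical minimal illustration of distributing agents over MPI ranks and exchanging only the cross-partition interactions.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Partition agents so each rank owns a mutually exclusive subset. In the project
# this follows a representative employer-employee graph; here a simple modulo
# split is used purely for illustration.
N_AGENTS = 1_000_000
my_agents = {i: {"cash": 100.0} for i in range(rank, N_AGENTS, size)}

def owner(agent_id):
    return agent_id % size

def step(interactions):
    """One simulated step: local updates plus cross-rank message exchange.
    `interactions` is a list of (payer_id, payee_id, amount) tuples."""
    outbox = [[] for _ in range(size)]            # one message list per destination rank
    for payer, payee, amount in interactions:
        if payer in my_agents:
            my_agents[payer]["cash"] -= amount
            outbox[owner(payee)].append((payee, amount))
    inbox = comm.alltoall(outbox)                 # exchange cross-partition payments
    for messages in inbox:
        for payee, amount in messages:
            if payee in my_agents:
                my_agents[payee]["cash"] += amount
```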
Procedia PDF Downloads 128
8271 Climate Changes in Albania and Their Effect on Cereal Yield
Authors: Lule Basha, Eralda Gjika
Abstract:
This study is focused on analyzing climate change in Albania and its potential effects on cereal yields. Initially, monthly temperature and rainfall in Albania were studied for the period 1960-2021. Climatic variables are important when trying to model cereal yield behavior, especially when significant changes in weather conditions are observed. For this purpose, in the second part of the study, linear and nonlinear models explaining cereal yield are constructed for the same period, 1960-2021. Multiple linear regression analysis and the lasso regression method are applied to the relationship between cereal yield and each independent variable: average temperature, average rainfall, fertilizer consumption, arable land, land under cereal production, and nitrous oxide emissions. In our regression model, heteroscedasticity is not observed, the data follow a normal distribution, and there is low correlation between factors, so we do not have the problem of multicollinearity. Machine-learning methods, such as random forest, are used to predict cereal yield responses to climatic and other variables. Random forest showed high accuracy compared to the other statistical models in the prediction of cereal yield. We found that changes in average temperature negatively affect cereal yield. The coefficients of fertilizer consumption, arable land, and land under cereal production affect production positively. Our results show that the random forest method is an effective and versatile machine-learning method for cereal yield prediction compared to the other two methods. Keywords: cereal yield, climate change, machine learning, multiple regression model, random forest
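A compact sketch of the random forest step, in Python with scikit-learn, is shown below. The file name and column names are assumptions for illustration; the abstract does not specify the authors' exact feature names or software.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical yearly data set, 1960-2021, one row per year.
df = pd.read_csv("albania_cereal.csv")                 # assumed file name
features = ["avg_temperature", "avg_rainfall", "fertilizer_consumption",
            "arable_land", "land_under_cereal", "n2o_emissions"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["cereal_yield"], test_size=0.2, random_state=42)

rf = RandomForestRegressor(n_estimators=500, random_state=42)
rf.fit(X_train, y_train)
print("R2 on held-out years:", r2_score(y_test, rf.predict(X_test)))
print(dict(zip(features, rf.feature_importances_)))    # variable importance
```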
Procedia PDF Downloads 92
8270 The Predictive Utility of Subjective Cognitive Decline Using Item Level Data from the Everyday Cognition (ECog) Scales
Authors: J. Fox, J. Randhawa, M. Chan, L. Campbell, A. Weakely, D. J. Harvey, S. Tomaszewski Farias
Abstract:
Early identification of individuals at risk for conversion to dementia provides an opportunity for preventative treatment. Many older adults (30-60%) report specific subjective cognitive decline (SCD); however, previous research is inconsistent in terms of what types of complaints predict future cognitive decline. The purpose of this study is to identify which specific complaints from the Everyday Cognition Scales (ECog) scales, a measure of self-reported concerns for everyday abilities across six cognitive domains, are associated with: 1) conversion from a clinical diagnosis of normal to either MCI or dementia (categorical variable) and 2) progressive cognitive decline in memory and executive function (continuous variables). 415 cognitively normal older adults were monitored annually for an average of 5 years. Cox proportional hazards models were used to assess associations between self-reported ECog items and progression to impairment (MCI or dementia). A total of 114 individuals progressed to impairment; the mean time to progression was 4.9 years (SD=3.4 years, range=0.8-13.8). Follow-up models were run controlling for depression. A subset of individuals (n=352) underwent repeat cognitive assessments for an average of 5.3 years. For those individuals, mixed effects models with random intercepts and slopes were used to assess associations between ECog items and change in neuropsychological measures of episodic memory or executive function. Prior to controlling for depression, subjective concerns on five of the eight Everyday Memory items, three of the nine Everyday Language items, one of the seven Everyday Visuospatial items, two of the five Everyday Planning items, and one of the six Everyday Organization items were associated with subsequent diagnostic conversion (HR=1.25 to 1.59, p=0.003 to 0.03). However, after controlling for depression, only two specific complaints of remembering appointments, meetings, and engagements and understanding spoken directions and instructions were associated with subsequent diagnostic conversion. Episodic memory in individuals reporting no concern on ECog items did not significantly change over time (p>0.4). More complaints on seven of the eight Everyday Memory items, three of the nine Everyday Language items, and three of the seven Everyday Visuospatial items were associated with a decline in episodic memory (Interaction estimate=-0.055 to 0.001, p=0.003 to 0.04). Executive function in those reporting no concern on ECog items declined slightly (p <0.001 to 0.06). More complaints on three of the eight Everyday Memory items and three of the nine Everyday Language items were associated with a decline in executive function (Interaction estimate=-0.021 to -0.012, p=0.002 to 0.04). These findings suggest that specific complaints across several cognitive domains are associated with diagnostic conversion. Specific complaints in the domains of Everyday Memory and Language are associated with a decline in both episodic memory and executive function. Increased monitoring and treatment of individuals with these specific SCD may be warranted.Keywords: alzheimer’s disease, dementia, memory complaints, mild cognitive impairment, risk factors, subjective cognitive decline
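Time-to-conversion analyses of this kind are commonly fitted as Cox proportional hazards models. The Python sketch below, using the lifelines package, shows the general shape of such a model for a single ECog item while adjusting for depression; the file name and column names are hypothetical and not the study's data.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical data: one row per participant.
df = pd.read_csv("ecog_followup.csv")        # assumed file
cols = ["years_followed",                    # time to conversion or censoring
        "converted",                         # 1 = progressed to MCI/dementia
        "ecog_remember_appointments",        # example ECog item score
        "depression_score"]                  # covariate used in follow-up models

cph = CoxPHFitter()
cph.fit(df[cols], duration_col="years_followed", event_col="converted")
cph.print_summary()                          # hazard ratios (HR) and p-values per predictor
```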
Procedia PDF Downloads 80
8269 Satellite Images to Determine Levels of Fire Severity in a Native Chilean Forest: Assessing the Responses of Soil Mesofauna Diversity to a Fire Event
Authors: Carolina Morales, Ricardo Castro-Huerta, Enrique A. Mundaca
Abstract:
The edaphic fauna is the main factor involved in the transformation of nutrients and soil decomposition processes. Edaphic organisms are highly sensitive to soil disturbances, which normally causes changes in the composition and abundance of such organisms. Fire is known to be a disturbing factor since it affects the physical, chemical and biological properties of the soil and the whole ecosystem. During the summer (December-March) of 2017, Chile suffered the major fire events recorded in its modern history, which affected a vast area and a number of ecosystem types. The objective of this study was first to use remote sensing satellite images and GIS (Geographic Information Systems) to assess and identify levels of fire severity in disturbed areas and to compare the responses of the soil mesofauna diversity among such areas. We identified four areas (treatments) with an ascending level of severity, namely: mild, medium, high severity, and free of fire. A non-affected patch of forest was established as a control. Three samples from each treatment were collected in the form of a soil cube (10x10x10 cm). Edaphic mesofauna was obtained from each sample through the Berlese-Tullgren funnel method. Collected specimens were quantified and identified, using the RTU (Recognisable Taxonomic Unit) criterion. Diversity was analysed using inferential statistics to compare Simpson and Shannon-Wiener indexes across treatments. As predicted, the unburned forest patch (control) exhibited higher diversity values than the treatments. Significantly higher diversity values were recorded in those treatments subjected to lower fire severity. We conclude that remote sensing zoning is an adequate tool to identify different levels of fire severity and that an edaphic mesofauna is a group of organisms that qualify as good bioindicators for monitoring soil recovery after fire events.Keywords: bioindicator, Chile, fire severity level, soil
Procedia PDF Downloads 160
8268 Assessment of Climate Change Impact on Meteorological Droughts
Authors: Alireza Nikbakht Shahbazi
Abstract:
Various factors are affected by climate change; drought is one of them. Efficient methods for estimating climate change impacts on drought therefore need to be investigated. The aim of this paper is to investigate climate change impacts on drought in the Karoon3 watershed, located in south-western Iran, in future periods. Atmospheric general circulation model (GCM) data under Intergovernmental Panel on Climate Change (IPCC) scenarios were used for this purpose. In this study, watershed drought under climate change impacts was simulated for future periods (2011 to 2099). The Standardized Precipitation Index (SPI) was selected as the drought index and calculated using mean monthly precipitation data in the Karoon3 watershed. SPI was calculated for 6-, 12- and 24-month periods. Statistical analysis of daily precipitation and minimum and maximum daily temperature was performed. LARS-WG5 was used to determine the feasibility of producing meteorological data for the future periods. Model calibration and verification were performed for the base period (1980-2007). Meteorological data simulation for the future periods under the general circulation models and the IPCC climate change scenarios was performed, and the drought status under climate change effects was then analyzed using SPI. The results showed that the differences between monthly maximum and minimum temperature will decrease under climate change, and that spring precipitation will increase while summer and autumn rainfall will decrease. Precipitation occurs mainly between January and May in the future periods, and the decline in summer and autumn precipitation leads to short-term drought in the study region. The normal and wet SPI categories are more frequent under the B1 and A2 emission scenarios than under A1B. Keywords: climate change impact, drought severity, drought frequency, Karoon3 watershed
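The SPI itself is computed by fitting a probability distribution (commonly a gamma distribution) to precipitation aggregated over the chosen window and transforming the cumulative probabilities to standard normal quantiles. The Python sketch below illustrates this standard construction; it is a generic SPI recipe, not the authors' code, and it omits the zero-precipitation correction that a production implementation would include.

```python
import numpy as np
from scipy import stats

def spi(monthly_precip, window=12):
    """Standardized Precipitation Index for a monthly precipitation series.
    Aggregates over `window` months, fits a gamma distribution, and maps
    cumulative probabilities to standard normal quantiles."""
    p = np.asarray(monthly_precip, dtype=float)
    rolling = np.convolve(p, np.ones(window), mode="valid")    # moving sums
    shape, loc, scale = stats.gamma.fit(rolling, floc=0)        # gamma fit, location fixed at 0
    cdf = stats.gamma.cdf(rolling, shape, loc=loc, scale=scale)
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))         # SPI values

# Example with synthetic monthly precipitation (mm) for 62 years.
rng = np.random.default_rng(1)
precip = rng.gamma(shape=2.0, scale=40.0, size=62 * 12)
print(spi(precip, window=12)[:5])
```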
Procedia PDF Downloads 240
8267 Implementation of Free-Field Boundary Condition for 2D Site Response Analysis in OpenSees
Authors: M. Eskandarighadi, C. R. McGann
Abstract:
It is observed from past earthquake experience that local site conditions can significantly affect the strong ground motion characteristics experienced at a site. One-dimensional seismic site response analysis is the most common approach for investigating site response. This approach assumes that the soil is homogeneous and infinitely extended in the horizontal direction. Therefore, tying the side boundaries together is one way to model this behavior, as the wave passage is assumed to be only vertical. However, 1D analysis cannot capture the 2D nature of wave propagation, soil heterogeneity, and 2D soil profiles with features such as inclined layer boundaries. In contrast, 2D seismic site response modeling can consider all of the mentioned factors to better understand local site effects on strong ground motions. Two-dimensional wave propagation, and the fact that the soil profiles on the two sides of the model may not be identical, make it important to use a boundary condition on each side that can minimize unwanted reflections from the edges of the model and input appropriate loading conditions. Ideally, the model should be sufficiently large to minimize wave reflections; however, due to computational limitations, increasing the model size is impractical in some cases. Another approach is to employ free-field boundary conditions, which take into account the free-field motion that would exist far from the model domain and apply it to the sides of the model. This research focuses on implementing free-field boundary conditions in OpenSees for 2D site response analysis. Comparisons are made between 1D models and 2D models with various boundary conditions, and the details and limitations of the developed free-field boundary modeling approach are discussed. Keywords: boundary condition, free-field, OpenSees, site response analysis, wave propagation
Procedia PDF Downloads 158
8266 Assessment of Soil Quality Indicators in Rice Soil of Tamil Nadu
Authors: Kaleeswari R. K., Seevagan L .
Abstract:
Soil quality in an agroecosystem is influenced by the cropping system and by water and soil fertility management. A valid soil quality index would help to assess soil and crop management practices for desired productivity and soil health. Soil quality indices also provide an early indication of soil degradation and of needed remedial and rehabilitation measures. Imbalanced fertilization and inadequate organic carbon dynamics deteriorate soil quality in an intensive cropping system. The rice soil ecosystem is different from other arable systems since rice is grown under submergence, which requires a different set of key soil attributes for enhancing soil quality and productivity. Assessment of a soil quality index involves indicator selection, indicator scoring, and the combination of the scores into one comprehensive index. The most appropriate indicators to evaluate soil quality can be selected by establishing a minimum data set, which can be screened by linear and multiple regression, factor analysis and score functions. This investigation was carried out in the intensive rice-cultivating regions (having >1.0 lakh hectares) of Tamil Nadu, viz., the Thanjavur, Thiruvarur, Nagapattinam, Villupuram, Thiruvannamalai, Cuddalore and Ramanathapuram districts. In each district, an intensive rice-growing block was identified. In each block, two sampling grids (10 x 10 sq. km) were used, with a sampling depth of 10-15 cm. Using GIS coordinates, soil sampling was carried out at various locations in the study area. The numbers of soil sampling points were 41, 28, 28, 32, 37, 29 and 29 in the Thanjavur, Thiruvarur, Nagapattinam, Cuddalore, Villupuram, Thiruvannamalai and Ramanathapuram districts, respectively. Principal Component Analysis (PCA) is a data reduction tool used to select some of the potential indicators. A principal component is a linear combination of different variables that represents the maximum variance of the dataset. Principal components with eigenvalues equal to or higher than 1.0 were taken for the minimum data set. PCA was used to select the representative soil quality indicators in rice soils based on factor loading values and contribution percent values. Variables having significant differences within the production system were used for the preparation of the minimum data set. Each principal component explained a certain amount of variation (%) in the total dataset, and this percentage provided the weight for the variables. The final PCA-based soil quality equation is SQI = Σᵢ (Wᵢ × Sᵢ), where Sᵢ is the score for the subscripted variable and Wᵢ is the weighting factor derived from the PCA. Higher index scores mean better soil quality. Soil respiration, soil available nitrogen and potentially mineralizable nitrogen were assessed as soil quality indicators in the rice soils of the Cauvery Delta zone covering the Thanjavur, Thiruvarur and Nagapattinam districts. Soil available phosphorus could be used as a soil quality indicator for the rice soils of the Cuddalore district. In the rain-fed rice ecosystems of coastal sandy soil, DTPA-Zn could be used as an effective soil quality indicator. Among the soil parameters selected from the PCA, microbial biomass nitrogen could be used as a quality indicator for the rice soils of the Villupuram district. The Cauvery Delta zone has a better SQI as compared with the other intensive rice-growing zones of Tamil Nadu. Keywords: soil quality index, soil attributes, soil mapping, rice soil
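The indicator-selection and weighting logic described above (retain components with eigenvalue ≥ 1, weight indicators by explained variance, then SQI = Σ Wᵢ·Sᵢ) can be sketched as follows with scikit-learn. The indicator names and the simple "more is better" scoring are placeholders; real applications use indicator-specific scoring curves (more is better, less is better, optimum range).

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler, minmax_scale

# Hypothetical table: rows = sampling points, columns = measured soil attributes.
df = pd.read_csv("rice_soil_attributes.csv")        # assumed file
Z = StandardScaler().fit_transform(df.values)

pca = PCA().fit(Z)
keep = pca.explained_variance_ >= 1.0               # components with eigenvalue >= 1
weights = pca.explained_variance_ratio_[keep]
weights = weights / weights.sum()                   # normalise weights to sum to 1

# For each retained component, take the attribute with the highest absolute
# loading as the representative indicator of the minimum data set.
loadings = pca.components_[keep]
indicators = [df.columns[np.argmax(np.abs(row))] for row in loadings]

# Score each indicator 0-1 ("more is better" assumed) and combine into the SQI.
scores = minmax_scale(df[indicators].values, axis=0)
sqi = scores @ weights                              # SQI = sum_i W_i * S_i per sample
print(list(zip(indicators, weights)))
print("Mean SQI:", sqi.mean())
```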
Procedia PDF Downloads 86
8265 Tests for Zero Inflation in Count Data with Measurement Error in Covariates
Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao
Abstract:
Health service utilization is an important determinant of medical resource expenditures on colorectal cancer (CRC) care. A better understanding of increased utilization of health services is essential for optimizing the allocation of healthcare resources and thus for enhancing service quality, especially in regions with high expenditure on CRC care, such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models can be used, which account for overdispersion or extra zero counts. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. In addition, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than would be expected from a Poisson distribution with mean equal to 1.33 (expected frequency of zero counts = 156). This suggests that excess zero counts may exist. Therefore, we study tests for detecting zero inflation in models with measurement error in covariates. Method: Under the classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained; the approximate maximum likelihood estimator (AMLE) can then be derived accordingly, and it is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on the AMLE, a score test is proposed to detect the zero-inflation effect in the ZIP model with measurement error. The proposed test asymptotically follows a standard normal distribution under H0, and it is consistent with the test proposed for the zero-inflation effect when there is no measurement error. Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero inflation in the ZIP model with measurement error. In the real data analysis, with or without considering measurement error in covariates, the existing tests and our proposed test all imply that H0 should be rejected with a p-value less than 0.001; i.e., the zero-inflation effect is highly significant, and the ZIP model is superior to the Poisson model for analyzing these data. However, if measurement error in covariates is not considered, only one covariate is significant; if measurement error in covariates is considered, only another covariate is significant. Moreover, the directions of the coefficient estimates for these two covariates differ between the ZIP regression models with and without measurement error. Conclusion: In our study, the ZIP model should be chosen over the Poisson model when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account will yield statistically more reliable and precise information. Keywords: count data, measurement error, score test, zero inflation
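The excess-zero diagnostic quoted above (206 observed zeros versus about 156 expected under a Poisson model) is straightforward to reproduce in principle. The Python sketch below fits a Poisson regression and compares observed with expected zero counts; it is a generic illustration with hypothetical variable names, not the authors' score test, which additionally accounts for measurement error in the covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: number of clinical consultations and HRQOL-related covariates.
df = pd.read_csv("crc_consultations.csv")           # assumed file
y = df["n_consultations"]
X = sm.add_constant(df[["hrqol_score", "age"]])

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
mu = fit.fittedvalues                               # fitted Poisson means, one per patient

observed_zeros = int((y == 0).sum())
expected_zeros = float(np.exp(-mu).sum())           # sum of P(Y_i = 0) = exp(-mu_i)
print(observed_zeros, round(expected_zeros, 1))     # a large gap suggests zero inflation
```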
Procedia PDF Downloads 288
8264 An Investigation on Opportunities and Obstacles on Implementation of Building Information Modelling for Pre-fabrication in Small and Medium Sized Construction Companies in Germany: A Practical Approach
Authors: Nijanthan Mohan, Rolf Gross, Fabian Theis
Abstract:
The conventional method used in the construction industries often resulted in significant rework since most of the decisions were taken onsite under the pressure of project deadlines and also due to the improper information flow, which results in ineffective coordination. However, today’s architecture, engineering, and construction (AEC) stakeholders demand faster and accurate deliverables, efficient buildings, and smart processes, which turns out to be a tall order. Hence, the building information modelling (BIM) concept was developed as a solution to fulfill the above-mentioned necessities. Even though BIM is successfully implemented in most of the world, it is still in the early stages in Germany, since the stakeholders are sceptical of its reliability and efficiency. Due to the huge capital requirement, the small and medium-sized construction companies are still reluctant to implement BIM workflow in their projects. The purpose of this paper is to analyse the opportunities and obstacles to implementing BIM for prefabrication. Among all other advantages of BIM, pre-fabrication is chosen for this paper because it plays a vital role in creating an impact on time as well as cost factors of a construction project. The positive impact of prefabrication can be explicitly observed by the project stakeholders and participants, which enables the breakthrough of the skepticism factor among the small scale construction companies. The analysis consists of the development of a process workflow for implementing prefabrication in building construction, followed by a practical approach, which was executed with two case studies. The first case study represents on-site prefabrication, and the second was done for off-site prefabrication. It was planned in such a way that the first case study gives a first-hand experience for the workers at the site on the BIM model so that they can make much use of the created BIM model, which is a better representation compared to the traditional 2D plan. The main aim of the first case study is to create a belief in the implementation of BIM models, which was succeeded by the execution of offshore prefabrication in the second case study. Based on the case studies, the cost and time analysis was made, and it is inferred that the implementation of BIM for prefabrication can reduce construction time, ensures minimal or no wastes, better accuracy, less problem-solving at the construction site. It is also observed that this process requires more planning time, better communication, and coordination between different disciplines such as mechanical, electrical, plumbing, architecture, etc., which was the major obstacle for successful implementation. This paper was carried out in the perspective of small and medium-sized mechanical contracting companies for the private building sector in Germany.Keywords: building information modelling, construction wastes, pre-fabrication, small and medium sized company
Procedia PDF Downloads 113
8263 Improving Patient-Care Services at an Oncology Center with a Flexible Adaptive Scheduling Procedure
Authors: P. Hooshangitabrizi, I. Contreras, N. Bhuiyan
Abstract:
This work presents an online scheduling problem which accommodates multiple requests of patients for chemotherapy treatments in a cancer center of a major metropolitan hospital in Canada. To solve the problem, an adaptive flexible approach is proposed which systematically combines two optimization models. The first model is intended to dynamically schedule arriving requests in the form of waiting lists whereas the second model is used to reschedule the already booked patients with the goal of finding better resource allocations when new information becomes available. Both models are created as mixed integer programming formulations. Various controllable and flexible parameters such as deviating the prescribed target dates by a pre-determined threshold, changing the start time of already booked appointments and the maximum number of appointments to move in the schedule are included in the proposed approach to have sufficient degrees of flexibility in handling arrival requests and unexpected changes. Several computational experiments are conducted to evaluate the performance of the proposed approach using historical data provided by the oncology clinic. Our approach achieves outstandingly better results as compared to those of the scheduling system being used in practice. Moreover, several analyses are conducted to evaluate the effect of considering different levels of flexibility on the obtained results and to assess the performance of the proposed approach in dealing with last-minute changes. We strongly believe that the proposed flexible adaptive approach is very well-suited for implementation at the clinic to provide better patient-care services and to utilize available resource more efficiently.Keywords: chemotherapy scheduling, multi-appointment modeling, optimization of resources, satisfaction of patients, mixed integer programming
Procedia PDF Downloads 168
8262 Advancing Trustworthy Human-robot Collaboration: Challenges and Opportunities in Diverse European Industrial Settings
Authors: Margarida Porfírio Tomás, Paula Pereira, José Manuel Palma Oliveira
Abstract:
The decline in employment rates across sectors like industry and construction is exacerbated by an aging workforce. This has far-reaching implications for the economy, including skills gaps, labour shortages, productivity challenges due to physical limitations, and workplace safety concerns. To sustain the workforce and pension systems, technology plays a pivotal role. Robots provide valuable support to human workers, and effective human-robot interaction is essential. FORTIS, a Horizon project, aims to address these challenges by creating a comprehensive Human-Robot Interaction (HRI) solution. This solution focuses on multi-modal communication and multi-aspect interaction, with a primary goal of maintaining a human-centric approach. By meeting the needs of both human workers and robots, FORTIS aims to facilitate efficient and safe collaboration. The project encompasses three key activities: 1) A Human-Centric Approach involving data collection, annotation, understanding human behavioural cognition, and contextual human-robot information exchange. 2) A Robotic-Centric Focus addressing the unique requirements of robots during the perception and evaluation of human behaviour. 3) Ensuring Human-Robot Trustworthiness through measures such as human-robot digital twins, safety protocols, and resource allocation. Factor Social, a project partner, will analyse psycho-physiological signals that influence human factors, particularly in hazardous working conditions. The analysis will be conducted using a combination of case studies, structured interviews, questionnaires, and a comprehensive literature review. However, the adoption of novel technologies, particularly those involving human-robot interaction, often faces hurdles related to acceptance. To address this challenge, FORTIS will draw upon insights from Social Sciences and Humanities (SSH), including risk perception and technology acceptance models. Throughout its lifecycle, FORTIS will uphold a human-centric approach, leveraging SSH methodologies to inform the design and development of solutions. This project received funding from the European Union’s Horizon 2020/Horizon Europe research and innovation program under grant agreement No 101135707 (FORTIS).
Keywords: skills gaps, productivity challenges, workplace safety, human-robot interaction, human-centric approach, social sciences and humanities, risk perception
Procedia PDF Downloads 52
8261 Variable Refrigerant Flow (VRF) Zonal Load Prediction Using a Transfer Learning-Based Framework
Authors: Junyu Chen, Peng Xu
Abstract:
In the context of global efforts to enhance building energy efficiency, accurate thermal load forecasting is crucial for both device sizing and predictive control. Variable Refrigerant Flow (VRF) systems are widely used in buildings around the world, yet VRF zonal load prediction has received limited attention. Because individual VRF zones differ from one another in ways that building-level prediction methods cannot capture, zone-level load forecasting could significantly enhance accuracy. Given that modern VRF systems generate high-quality data, this paper introduces transfer learning to leverage these data and further improve prediction performance. The framework also addresses the challenge of predicting loads for building zones with no historical data, offering greater accuracy and usability than pure white-box models. The study first establishes an initial variable set for VRF zonal building loads and generates a foundational white-box database using EnergyPlus. Key variables for VRF zonal loads are identified using methods including SRRC, PRCC, and Random Forest. XGBoost and LSTM are employed to generate pre-trained black-box models based on the white-box database. Finally, real-world data are incorporated into the pre-trained model using transfer learning to enhance its performance in operational buildings. In this paper, zone-level load prediction is thus integrated with transfer learning, and a framework is proposed to improve the accuracy and applicability of VRF zonal load prediction.
Keywords: zonal load prediction, variable refrigerant flow (VRF) system, transfer learning, energyplus
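The abstract does not specify how the transfer step is implemented. One common way to realise "pre-train on simulation, then fine-tune on measurements" with XGBoost is continued boosting from a pre-trained model; the sketch below is a minimal illustration under that assumption, with synthetic arrays standing in for the EnergyPlus-generated database and the operational VRF measurements.

# Minimal sketch of the transfer step (assumptions: synthetic data, XGBoost continued training)
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
# stand-in for the white-box (simulated) database: features such as weather, occupancy, hour
X_sim = rng.normal(size=(5000, 4))
y_sim = 2.0 * X_sim[:, 0] + 1.5 * X_sim[:, 1] + rng.normal(scale=0.1, size=5000)
# small amount of measured data from the target zone (slightly different relationship)
X_real = rng.normal(size=(200, 4))
y_real = 1.8 * X_real[:, 0] + 1.7 * X_real[:, 1] + rng.normal(scale=0.1, size=200)

params = {"objective": "reg:squarederror", "max_depth": 4, "eta": 0.1}

# step 1: pre-train a black-box model on the simulated database
pretrained = xgb.train(params, xgb.DMatrix(X_sim, label=y_sim), num_boost_round=200)

# step 2: transfer - continue boosting on the (small) real dataset
fine_tuned = xgb.train(params, xgb.DMatrix(X_real, label=y_real),
                       num_boost_round=50, xgb_model=pretrained)

preds = fine_tuned.predict(xgb.DMatrix(X_real))
print("RMSE on real data:", float(np.sqrt(np.mean((preds - y_real) ** 2))))

For an LSTM pre-trained model, the analogous step would be to reload the simulated-data weights and fine-tune some or all layers on the measured sequences.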
Procedia PDF Downloads 28
8260 Regret-Regression for Multi-Armed Bandit Problem
Authors: Deyadeen Ali Alshibani
Abstract:
In the literature, the multi-armed bandit problem is treated as a statistical decision model of an agent trying to optimize its decisions while improving its information at the same time. Several different algorithms and models, and their applications to this problem, have been proposed. In this paper, we evaluate the regret-regression approach by comparing it with the Q-learning method. A simulation on the determination of an optimal treatment regime is presented in detail.
Keywords: optimal, bandit problem, optimization, dynamic programming
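The regret-regression method itself is not described in the abstract, so the sketch below is only a generic illustration of how cumulative regret is tracked in a multi-armed bandit simulation, assuming Bernoulli arms and a simple epsilon-greedy agent; it is not the authors' algorithm.

# Minimal bandit-regret sketch (assumptions: Bernoulli arms, epsilon-greedy agent)
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.7])         # hypothetical arm reward probabilities
n_arms, horizon, eps = len(true_means), 5000, 0.1

counts = np.zeros(n_arms)
values = np.zeros(n_arms)                      # running mean reward per arm
cumulative_regret = np.zeros(horizon)
best_mean = true_means.max()

for t in range(horizon):
    # epsilon-greedy choice: explore with probability eps, otherwise exploit
    arm = int(rng.integers(n_arms)) if rng.random() < eps else int(values.argmax())
    reward = float(rng.random() < true_means[arm])
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean update
    # expected (pseudo-)regret of this pull against the best arm in hindsight
    step_regret = best_mean - true_means[arm]
    cumulative_regret[t] = (cumulative_regret[t - 1] if t else 0.0) + step_regret

print("cumulative regret after", horizon, "pulls:", cumulative_regret[-1])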
Procedia PDF Downloads 453
8259 Fear-of-Failure and Woman Entrepreneurship: Comparative Analysis Austria Versus USA
Authors: Magdalena Meusburger, Caroline Hofer
Abstract:
The advancement of woman entrepreneurship in the last decade has been a vital driver for social and economic development. Despite the positive evolution, women entrepreneurs are still underrepresented in entrepreneurial ecosystems. Fear-of-failure is a major factor affecting their entrepreneurial activity. This survey-based research focuses on aspiring and established entrepreneurial women in Austria and in the USA. It explores and compares the extent to which fear-of-failure influences their self-employment and their aspirations to become self-employed.
Keywords: entrepreneurial ecosystems, fear-of-failure, female entrepreneurship, woman entrepreneurship
Procedia PDF Downloads 364
8258 Application of a Universal Distortion Correction Method in Stereo-Based Digital Image Correlation Measurement
Authors: Hu Zhenxing, Gao Jianxin
Abstract:
Stereo-based digital image correlation (also referred to as three-dimensional (3D) digital image correlation (DIC)) is a technique for both 3D shape and surface deformation measurement of a component, and it has found increasing applications in academia and industry. The accuracy of the reconstructed coordinates depends on many factors, such as the configuration of the setup, stereo-matching, and distortion. Most of these factors have been investigated in the literature. For instance, the configuration of a binocular vision system determines the systematic errors, and the stereo-matching errors depend on the speckle quality and the matching algorithm, which can only be controlled within a limited range. Distortion, in particular, is non-linear in a complex image acquisition system, so distortion correction must be carefully considered. Moreover, the distortion function is difficult to formulate with conventional models in complex image acquisition systems where microscopes and other complex lenses are involved. Errors in the distortion correction propagate to the reconstructed 3D coordinates. To address this problem, an accurate mapping method based on 2D B-spline functions is proposed in this study. The mapping functions convert the distorted coordinates onto an ideal plane free of distortion. This approach is suitable for any image acquisition distortion model. It is used as a prior step that converts distorted coordinates to ideal positions, which enables the camera to conform to the pin-hole model. A procedure of this approach is presented for stereo-based DIC. Using 3D speckle image generation, numerical simulations were carried out to compare the accuracy of the conventional method and the proposed approach.
Keywords: distortion, stereo-based digital image correlation, b-spline, 3D, 2D
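The paper's exact fitting procedure is not given in the abstract. The sketch below illustrates the general idea under the assumption that calibration-target points provide pairs of distorted and ideal image coordinates, using SciPy's bivariate B-splines as a stand-in for the 2D B-spline mapping functions; the radial distortion used to generate the data is synthetic.

# Minimal sketch of a B-spline distortion mapping (assumptions: SciPy splines, synthetic distortion)
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# ideal (pin-hole) grid points of a calibration target, in pixels
xi, yi = np.meshgrid(np.linspace(0, 640, 17), np.linspace(0, 480, 13))
xi, yi = xi.ravel(), yi.ravel()
# their observed, distorted image positions (here: a synthetic radial distortion)
r2 = ((xi - 320) ** 2 + (yi - 240) ** 2) / 320.0 ** 2
u = xi + 0.05 * (xi - 320) * r2
v = yi + 0.05 * (yi - 240) * r2

# fit two cubic B-spline surfaces mapping distorted (u, v) to ideal x and ideal y
map_x = SmoothBivariateSpline(u, v, xi, kx=3, ky=3)
map_y = SmoothBivariateSpline(u, v, yi, kx=3, ky=3)

# correct an arbitrary distorted coordinate before stereo matching / triangulation
u0, v0 = 100.0, 50.0
x0, y0 = float(map_x.ev(u0, v0)), float(map_y.ev(u0, v0))
print(f"distorted ({u0}, {v0}) -> corrected ({x0:.2f}, {y0:.2f})")

Because the mapping is fitted directly from point correspondences, the same procedure applies regardless of the underlying lens model, which is the appeal of the B-spline approach described in the abstract.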
Procedia PDF Downloads 498
8257 Knowledge of Risk Factors and Health Implications of Fast Food Consumption among Undergraduate in Nigerian Polytechnic
Authors: Adebusoye Michael, Anthony Gloria, Fasan Temitope, Jacob Anayo
Abstract:
Background: The culture of fast food consumption has gradually become a common lifestyle in Nigeria, especially among young people in urban areas, in spite of the associated adverse health consequences. The adolescent pattern of fast food consumption, and their perception of this practice as a risk factor for Non-Communicable Diseases (NCDs), have not been fully explored. This study was designed to assess the fast food consumption pattern and the perception of it as a risk factor for NCDs among undergraduates of Federal Polytechnic, Bauchi. Methodology: The study used a descriptive cross-sectional design. One hundred and eighty-five students were recruited from the two halls of residence using a systematic random sampling method. A structured questionnaire was used to assess the consumption pattern of fast foods. Data collected from the questionnaires were analysed using the Statistical Package for the Social Sciences (SPSS) version 16. Simple descriptive statistics, such as frequency counts and percentages, were used to interpret the data. Results: The age range of respondents was 18-34 years; 58.4% were males, 93.5% were single, and 51.4% of their parents were employed. All respondents (100%) were aware of fast foods, and 75% agreed that their consumption has implications for NCDs. The fast food consumption distribution included meat pie (4.9%), beef roll/sausage (2.7%), egg roll (13.5%), doughnut (16.2%), noodles (18%) and carbonated drinks (3.8%); 30.3% consumed fast food three times a week, and 71% attributed high consumption of fast food to workload. Conclusion: It was revealed that social pressure from peers, time constraints, class pressure, and the school programme strongly influence the high percentage of higher-institution students who consume fast foods; therefore, nutrition education campaigns for campus food outlets and vendors, and behavioural change communication on healthy nutrition and lifestyles among young people, are hereby advocated.
Keywords: fast food consumption, Nigerian polytechnic, risk factors, undergraduate
Procedia PDF Downloads 471
8256 Estimating Housing Prices Using Automatic Linear Modeling in the Metropolis of Mashhad, Iran
Authors: Mohammad Rahim Rahnama
Abstract:
The market-transaction price of housing is the main criterion for determining municipal taxes and is determined and announced on an annual basis. There is, of course, a discrepancy between the transaction value recorded by the Bureau of Finance (P for short) or the municipality (P´ for short) and the real price on the market (P˝). The present research aims to determine the real price of housing in the metropolis of Mashhad, to pinpoint the price gap with the figures of the aforementioned bodies, and to identify the factors affecting it. To reach this practical objective, Automatic Linear Modeling was utilized within an explanatory research design. The population of the research consisted of all the residential units in Mashhad, from which 317 residential units were randomly selected. Through cluster sampling, out of the 170 income blocks defined by the municipality, three blocks from the high-income (Kosar), middle-income (Elahieh), and low-income (Seyyedi) strata were surveyed using questionnaires during February and March of 2015, and information regarding the price and specifications of the residential units was gathered. To estimate the effect of various factors on the price, the relationship between the independent variables (8 variables) and the dependent variable, housing price, was calculated using Automatic Linear Modeling in SPSS. The results revealed that the average housing price is $788 per square meter, compared to the Bureau of Finance's price of $10 and the municipality's price of $378. The coefficient of determination between the dependent and independent variables was calculated to be R² = 0.81. Out of the eight initial variables, three were omitted. The most influential factor affecting housing prices is the quality of construction (ordinary, full, luxury), and the least important factor is the number of sides. The price gap between the low-income (Seyyedi) and middle-income (Elahieh) districts was not confirmed via one-way ANOVA, but their gap with the high-income district (Kosar) was confirmed. It is therefore suggested that the city be divided into two sections, low-income and high-income, rather than three, in terms of housing prices.
Keywords: automatic linear modeling, housing prices, Mashhad, Iran
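SPSS Automatic Linear Modeling also performs automatic data preparation and category merging, which is not reproduced here. The sketch below only illustrates the variable-selection idea with a plain OLS model and backward elimination on p-values, using synthetic data and hypothetical predictors rather than the study's eight variables.

# Minimal sketch of linear price modeling with variable selection (assumptions: synthetic data)
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 317
df = pd.DataFrame({
    "area_m2": rng.normal(100, 25, n),
    "quality": rng.integers(1, 4, n),          # 1 = ordinary, 2 = full, 3 = luxury
    "age_years": rng.integers(0, 30, n),
    "n_sides": rng.integers(1, 4, n),          # an example of a weak predictor
})
price = (300 + 250 * df["quality"] + 2 * df["area_m2"]
         - 3 * df["age_years"] + rng.normal(0, 80, n))   # synthetic $/m2

predictors = list(df.columns)
while True:
    X = sm.add_constant(df[predictors])
    model = sm.OLS(price, X).fit()
    worst = model.pvalues.drop("const").idxmax()
    if model.pvalues[worst] < 0.05 or len(predictors) == 1:
        break
    predictors.remove(worst)                   # drop the least significant variable

print("retained predictors:", predictors)
print("R^2 =", round(model.rsquared, 2))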
Procedia PDF Downloads 255
8255 Simulation of the Flow in a Circular Vertical Spillway Using a Numerical Model
Authors: Mohammad Zamani, Ramin Mansouri
Abstract:
Spillways are among the most important hydraulic structures of dams, providing the stability of the dam and downstream areas at the time of flood. A circular vertical spillway with various inlet forms is very effective when there is not enough space for other spillway types. Hydraulic flow in a vertical circular spillway falls into three regimes: free, orifice, and under pressure (submerged). In this research, the hydraulic flow characteristics of a circular vertical spillway are investigated with a CFD model. The two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for the velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power-law scheme was used for the discretization of the momentum, k, ε, and ω equations, and the VOF method (geometric reconstruction algorithm) was adopted for interface simulation. Three types of computational grids (coarse, intermediate, and fine) were used to discretize the simulation domain. To simulate the flow, the k-ε (Standard, RNG, Realizable) and k-ω (Standard and SST) models were used, and to find the best wall function, two types, the standard and the non-equilibrium wall function, were investigated. The laminar model did not produce satisfactory flow depth and velocity along the morning-glory spillway. The results of the most commonly used two-equation turbulence models (k-ε and k-ω) were identical, and the standard wall function produced better results than the non-equilibrium wall function. Thus, for the other simulations, the standard k-ε model with the standard wall function was preferred. The comparison criterion in this study is the trajectory profile of the water jet. The results show that the fine computational grid, a velocity condition at the flow inlet boundary, and a pressure condition at the boundaries in contact with air provide the best possible results. The standard wall function is chosen for the wall treatment, and the standard k-ε turbulence model gives results most consistent with the experiments. As the jet approaches the end of the basin, the difference between the computational and experimental results increases. The mesh with 10602 nodes, the standard k-ε turbulence model, and the standard wall function provide the best results for modeling the flow in a vertical circular spillway. There was good agreement between numerical and experimental results for the upper and lower nappe profiles. Regarding the water level over the crest and the discharge, at low water levels the numerical results are in good agreement with the experimental data, but as the water level increases, the difference between the numerical and experimental discharge grows. Regarding the flow coefficient, as the P/R ratio decreases, the difference between the numerical and experimental results increases.
Keywords: circular vertical, spillway, numerical model, boundary conditions
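The abstract uses the jet trajectory (nappe) profile as the comparison criterion. The sketch below shows one simple way such a comparison could be made, assuming the numerical and experimental profiles are available as (x, z) point sets; the arrays are synthetic placeholders, not the study's data.

# Minimal sketch of a nappe-profile comparison (assumptions: synthetic profiles, RMSE measure)
import numpy as np

# experimental upper-nappe profile: horizontal distance x (m) vs. elevation z (m)
x_exp = np.linspace(0.0, 1.0, 11)
z_exp = np.array([0.00, -0.01, -0.04, -0.09, -0.16, -0.25,
                  -0.36, -0.49, -0.64, -0.81, -1.00])
# numerical profile extracted from the VOF free surface (different x stations)
x_num = np.linspace(0.0, 1.0, 41)
z_num = -x_num ** 2 + 0.01 * np.sin(8 * x_num)   # placeholder CFD result

# interpolate the numerical profile onto the experimental stations and compare
z_num_at_exp = np.interp(x_exp, x_num, z_num)
rmse = float(np.sqrt(np.mean((z_num_at_exp - z_exp) ** 2)))
max_dev = float(np.max(np.abs(z_num_at_exp - z_exp)))
print(f"RMSE = {rmse:.4f} m, max deviation = {max_dev:.4f} m")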
Procedia PDF Downloads 86