Search results for: predictive biomarker
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1181

251 A Framework on Data and Remote Sensing for Humanitarian Logistics

Authors: Vishnu Nagendra, Marten Van Der Veen, Stefania Giodini

Abstract:

Effective humanitarian logistics operations are a cornerstone in the success of disaster relief operations. However, to be effective, they need to be demand driven and supported by adequate data for prioritization. Without this data, operations are carried out in an ad hoc manner and eventually become chaotic. The current availability of geospatial data helps in creating models for predictive damage and vulnerability assessment, which can be of great advantage to logisticians to gain an understanding of the nature and extent of the disaster damage. This translates into actionable information on the demand for relief goods, the state of the transport infrastructure and subsequently the priority areas for relief delivery. However, due to the unpredictable nature of disasters, the accuracy of the models needs improvement, which can be achieved using remote sensing data from UAVs (Unmanned Aerial Vehicles) or satellite imagery, which again come with certain limitations. This research addresses the need for a framework to combine data from different sources to support humanitarian logistics operations and prediction models. The focus is on developing a workflow to combine data from satellites and UAVs after a disaster strikes. A three-step approach is followed: first, the data requirements for logistics activities are made explicit, which is done by carrying out semi-structured interviews with on-field logistics workers. Second, the limitations in current data collection tools are analyzed to develop workaround solutions by following a systems design approach. Third, the data requirements and the developed workaround solutions are fitted together into a coherent workflow. The outcome of this research will provide a new method for logisticians to have immediate access to accurate and reliable data to support data-driven decision making.

Keywords: unmanned aerial vehicles, damage prediction models, remote sensing, data driven decision making

Procedia PDF Downloads 355
250 Computational Approaches to Study Lineage Plasticity in Human Pancreatic Ductal Adenocarcinoma

Authors: Almudena Espin Perez, Tyler Risom, Carl Pelz, Isabel English, Robert M. Angelo, Rosalie Sears, Andrew J. Gentles

Abstract:

Pancreatic ductal adenocarcinoma (PDAC) is one of the deadliest malignancies. The role of the tumor microenvironment (TME) is gaining significant attention in cancer research. Despite ongoing efforts, the nature of the interactions between tumors, immune cells, and stromal cells remains poorly understood. The cell-intrinsic properties that govern cell lineage plasticity in PDAC and extrinsic influences of immune populations require technically challenging approaches due to the inherently heterogeneous nature of PDAC. Understanding the cell lineage plasticity of PDAC will improve the development of novel strategies that could be translated to the clinic. Members of the team have demonstrated that the acquisition of ductal to neuroendocrine lineage plasticity in PDAC confers therapeutic resistance and is a biomarker of poor outcomes in patients. Our approach combines computational methods for deconvolving bulk transcriptomic cancer data using CIBERSORTx and high-throughput single-cell imaging using Multiplexed Ion Beam Imaging (MIBI) to study lineage plasticity in PDAC and its relationship to the infiltrating immune system. The CIBERSORTx algorithm uses signature matrices from immune cells and stroma from sorted and single-cell data in order to 1) infer the fractions of different immune cell types and stromal cells in bulk gene expression data and 2) impute a representative transcriptome profile for each cell type. We studied a unique set of 300 genomically well-characterized primary PDAC samples with rich clinical annotation. We deconvolved the PDAC transcriptome profiles using CIBERSORTx, leveraging publicly available single-cell RNA-seq data from normal pancreatic tissue and PDAC to estimate cell type proportions in PDAC, and digitally reconstructed cell-specific transcriptional profiles from our study dataset. We built signature matrices and optimized them by simulations and comparison to ground truth data. We identified cell-type-specific transcriptional programs that contribute to cancer cell lineage plasticity, especially in the ductal compartment. We also studied cell differentiation hierarchies using CytoTRACE and predicted cell lineage trajectories for acinar and ductal cells that we believe pinpoint relevant information on PDAC progression. Collaborators (Angelo lab, Stanford University) have led the development of the Multiplexed Ion Beam Imaging (MIBI) platform for spatial proteomics. In the near future, we will use MIBI data from a tissue microarray of 40 PDAC samples to understand the spatial relationship between cancer cell lineage plasticity and stromal cells, focused on infiltrating immune cells, using the relevant markers of PDAC plasticity identified from the RNA-seq analysis.
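
As context for the deconvolution step: CIBERSORTx is a published tool whose internals (nu-support vector regression, batch correction) are not reproduced here. The sketch below only illustrates the underlying signature-matrix idea, estimating cell-type fractions from a bulk profile by non-negative least squares on simulated data; all matrices, dimensions and fractions are invented.

```python
import numpy as np
from scipy.optimize import nnls

# Illustrative dimensions: 500 signature genes, 6 cell types
rng = np.random.default_rng(0)
S = rng.gamma(shape=2.0, scale=1.0, size=(500, 6))    # signature matrix (genes x cell types)
true_frac = np.array([0.35, 0.25, 0.15, 0.10, 0.10, 0.05])
bulk = S @ true_frac + rng.normal(0, 0.05, size=500)  # simulated bulk expression profile

# Non-negative least squares: bulk ~= S @ f, with f >= 0
f, _ = nnls(S, bulk)
fractions = f / f.sum()        # renormalize so estimated fractions sum to 1
print(np.round(fractions, 3))  # should approximately recover true_frac
```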

Keywords: deconvolution, imaging, microenvironment, PDAC

Procedia PDF Downloads 98
249 Application of the Flory-Patterson Theory to the Volumetric Properties of Liquid Mixtures: 1,2-Dichloroethane with Aliphatic and Cyclic Ethers

Authors: Linda Boussaid, Farid Brahim Belaribi

Abstract:

The physico-chemical properties of liquid materials in the industrial field in general, and in the chemical industries in particular, constitute a prerequisite for the design of equipment, for the resolution of specific problems (related to purification and separation techniques, risks in the transport of certain materials, etc.) and, therefore, for the production stage. Chloroalkanes and ethers are chemical families of industrial, theoretical and environmental interest. For example, these compounds are used in various applications in the chemical and pharmaceutical industries. In addition, they contribute to the particular thermodynamic behavior (deviation from ideality, association, etc.) of certain mixtures, which constitutes a severe test for predictive theoretical models. Finally, due to the degradation of the environment in the world, a renewed interest is observed in ethers, because some of their physicochemical properties could contribute to lower pollution (ethers can be used as additives in aqueous fuels). This work is a thermodynamic, experimental and theoretical study of the volumetric properties of binary liquid systems formed from compounds belonging to the chemical families of chloroalkanes and ethers, having an industrial, theoretical and environmental interest. Densities and excess volumes of the systems studied were determined experimentally at different temperatures in the interval [278.15-333.15] K and at atmospheric pressure, using an Anton Paar DMA 5000 vibrating-tube densimeter. This contribution of experimental data on the volumetric properties of the binary liquid mixtures of 1,2-dichloroethane with an ether, supplemented by an application of the theoretical Prigogine-Flory-Patterson (PFP) model, will contribute to the enrichment of the thermodynamic database and to the further development of Flory's theory in its Prigogine-Flory-Patterson (PFP) version, for a better understanding of the thermodynamic behavior of these binary liquid mixtures.
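
For reference, the excess molar volume obtained from such density measurements is normally computed with the standard relation below (a textbook definition, not a formula quoted from the abstract), where x_i, M_i and rho_i are the mole fraction, molar mass and density of pure component i, and rho is the measured density of the mixture:

$$ V_m^{E} = \frac{x_1 M_1 + x_2 M_2}{\rho} - \frac{x_1 M_1}{\rho_1} - \frac{x_2 M_2}{\rho_2} $$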

Keywords: prigogine-flory-patterson (pfp), volumetric properties, excess volume, ethers

Procedia PDF Downloads 71
248 A Trend Based Forecasting Framework of the ATA Method and Its Performance on the M3-Competition Data

Authors: H. Taylan Selamlar, I. Yavuz, G. Yapar

Abstract:

It is difficult to make predictions, especially about the future, and making accurate predictions is not always easy. However, better predictions remain the foundation of all science; therefore, the development of accurate, robust and reliable forecasting methods is very important. Numerous forecasting methods have been proposed and studied in the literature. There are still two dominant major forecasting methods, Box-Jenkins ARIMA and Exponential Smoothing (ES), and new methods are still being derived from or inspired by them. After more than 50 years of widespread use, exponential smoothing is still one of the most practically relevant forecasting methods available due to its simplicity, robustness and accuracy as an automatic forecasting procedure, especially in the famous M-Competitions. Despite its success and widespread use in many areas, ES models have some shortcomings that negatively affect the accuracy of forecasts. Therefore, a new forecasting method, called the ATA method, is proposed in this study to cope with these shortcomings. This new method is obtained from traditional ES models by modifying the smoothing parameters; therefore, both methods have similar structural forms, and ATA can be easily adapted to all of the individual ES models. However, ATA has many advantages due to its innovative new weighting scheme. In this paper, the focus is on modeling the trend component and handling seasonality patterns by utilizing classical decomposition. Therefore, the ATA method is expanded to higher order ES methods for additive, multiplicative, additive damped and multiplicative damped trend components. The proposed models are called ATA trended models, and their predictive performances are compared to their counterpart ES models on the M3 competition data set, since it is still the most recent and comprehensive time-series data collection available. It is shown that the models outperform their counterparts in almost all settings, and when a model selection is carried out amongst these trended models, ATA outperforms all of the competitors in the M3-competition for both short term and long term forecasting horizons when the models’ forecasting accuracies are compared based on popular error metrics.
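
The defining idea of the ATA method, as described above, is to replace the fixed smoothing constants of exponential smoothing with time-varying weights of the form p/t and q/t. The sketch below is a simplified, illustrative recursion for the additive-trend case under that assumption; the initialization and parameter selection of the actual ATA trended models may differ.

```python
import numpy as np

def ata_additive(x, p, q):
    """Illustrative ATA-style recursion with an additive trend.

    x is a 1-D series; p and q are integer smoothing parameters.
    The fixed alpha/beta of Holt's method are replaced by the
    time-varying weights p/t and q/t (the core ATA idea).
    """
    x = np.asarray(x, dtype=float)
    level, trend = x[0], x[1] - x[0]        # simple, assumed initialization
    for t in range(2, len(x) + 1):
        obs = x[t - 1]
        w_level = p / t if t > p else 1.0
        w_trend = q / t if t > q else 1.0
        prev_level = level
        level = w_level * obs + (1.0 - w_level) * (prev_level + trend)
        trend = w_trend * (level - prev_level) + (1.0 - w_trend) * trend
    return level, trend

level, trend = ata_additive([10, 12, 13, 15, 18, 21], p=3, q=1)
print([round(level + h * trend, 2) for h in (1, 2, 3)])  # h-step-ahead forecasts
```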

Keywords: accuracy, exponential smoothing, forecasting, initial value

Procedia PDF Downloads 157
247 Systematic Review of Associations between Interoception, Vagal Tone, and Emotional Regulation

Authors: Darren Edwards, Thomas Pinna

Abstract:

Background: Interoception and heart rate variability have been found to predict outcomes of mental health and well-being. However, these have usually been investigated independently of one another. Objectives: This review aimed to explore the associations of interoception and heart rate variability (HRV) with emotion regulation (ER) and ER strategies within the existing literature, utilizing systematic review methodology. Methods: The process of article retrieval and selection followed the preferred reporting items for systematic review and meta-analyses (PRISMA) guidelines. The databases PsychINFO, Web of Science, PubMed, CINAHL, and MEDLINE were searched for published papers. Preliminary inclusion and exclusion criteria were specified following the patient, intervention, comparison, and outcome (PICO) framework, whilst the checklist for critical appraisal and data extraction for systematic reviews of prediction modeling studies (CHARMS) framework was used to help formulate the research question and to critically assess for bias in the identified full-length articles. Results: 237 studies were identified after initial database searches. Of these, eight studies were included in the final selection. Six studies explored the associations between HRV and ER, whilst three investigated the associations between interoception and ER (one of which was included in the HRV selection too). Overall, the results seem to show that greater HRV and interoception are associated with better ER. Specifically, high parasympathetic activity largely predicted the use of adaptive ER strategies such as reappraisal, and better acceptance of emotions. High interoception, instead, was predictive of effective down-regulation of negative emotions and handling of social uncertainty, although there was no association with any specific ER strategy. Conclusions: Awareness of one’s own bodily feelings and vagal activation seem to be of central importance for the effective regulation of emotional responses.

Keywords: emotional regulation, vagal tone, interoception, chronic conditions, health and well-being, psychological flexibility

Procedia PDF Downloads 91
246 Developing a DNN Model for the Production of Biogas From a Hybrid BO-TPE System in an Anaerobic Wastewater Treatment Plant

Authors: Hadjer Sadoune, Liza Lamini, Scherazade Krim, Amel Djouadi, Rachida Rihani

Abstract:

Deep neural networks are highly regarded for their accuracy in predicting intricate fermentation processes. Their ability to learn from large amounts of data through artificial intelligence makes them particularly effective models. The primary obstacle in improving the performance of these models is carefully choosing suitable hyperparameters, including the neural network architecture (number of hidden layers and hidden units), activation function, optimizer, learning rate, and other relevant factors. This study predicts biogas production from real wastewater treatment plant data using a sophisticated approach: hybrid Bayesian optimization with a tree-structured Parzen estimator (BO-TPE) for an optimised deep neural network (DNN) model. The plant utilizes an Upflow Anaerobic Sludge Blanket (UASB) digester that treats industrial wastewater from soft drinks and breweries. The digester has a working volume of 1574 m3 and a total volume of 1914 m3. Its internal diameter and height were 19 and 7.14 m, respectively. The data preprocessing was conducted with meticulous attention to preserving data quality while avoiding data reduction. Three normalization techniques were applied to the pre-processed data (MinMaxScaler, RobustScaler and StandardScaler) and compared with the non-normalized data. The RobustScaler approach showed strong predictive ability for estimating the volume of biogas produced. The highest predicted biogas volume was 2236.105 Nm³/d, with coefficient of determination (R2), mean absolute error (MAE), and root mean square error (RMSE) values of 0.712, 164.610, and 223.429, respectively.
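
The abstract does not name the software stack; purely as an illustration of TPE-driven hyperparameter search for a small feed-forward regressor, the sketch below uses Optuna (whose TPE sampler implements the tree-structured Parzen estimator) together with scikit-learn on synthetic stand-in data. All feature names and search ranges are assumptions.

```python
import optuna
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler

# Stand-in data for plant measurements (flow, COD, pH, temperature, ...) -> biogas volume
X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)

def objective(trial):
    n_layers = trial.suggest_int("n_hidden_layers", 1, 3)
    units = trial.suggest_int("hidden_units", 16, 128, log=True)
    lr = trial.suggest_float("learning_rate_init", 1e-4, 1e-1, log=True)
    activation = trial.suggest_categorical("activation", ["relu", "tanh"])
    model = make_pipeline(
        RobustScaler(),                       # scaling choice compared in the study
        MLPRegressor(hidden_layer_sizes=(units,) * n_layers,
                     activation=activation,
                     learning_rate_init=lr,
                     max_iter=2000, random_state=0),
    )
    # negative RMSE via cross-validation; the study is set up to maximize this value
    return cross_val_score(model, X, y, cv=3,
                           scoring="neg_root_mean_squared_error").mean()

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=25)
print(study.best_params)
```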

Keywords: anaerobic digestion, biogas production, deep neural network, hybrid bo-tpe, hyperparameters tuning

Procedia PDF Downloads 16
245 Factors Associated with Acute Kidney Injury in Multiple Trauma Patients with Rhabdomyolysis

Authors: Yong Hwang, Kang Yeol Suh, Yundeok Jang, Tae Hoon Kim

Abstract:

Introduction: Rhabdomyolysis is a syndrome characterized by muscle necrosis and the release of intracellular muscle constituents into the circulation. Acute kidney injury (AKI) is a potential complication of severe rhabdomyolysis, and the prognosis is substantially worse if renal failure develops. We tried to identify the factors that were predictive of AKI in severe trauma patients with rhabdomyolysis. Methods: This retrospective study was conducted at the emergency department of a level I trauma center. Patients enrolled were adults (older than 18 years) with acute multiple trauma whose initial creatine phosphokinase (CPK) levels were higher than 1000 IU, treated from Oct. 2012 to June 2016. We collected demographic data (age, gender, length of hospital stay, and patient outcome), laboratory data (ABGA, lactate, hemoglobin, hematocrit, platelets, LDH, myoglobin, liver enzymes, and BUN/Cr), and clinical data (injury mechanism, RTS, ISS, AIS, and TRISS). The data were compared and analyzed between the AKI and non-AKI groups. Statistical analyses were performed using IBM SPSS Statistics 20.0 for Windows. Results: Three hundred sixty-four patients were enrolled: ninety-six in the AKI group and two hundred sixty-eight in the non-AKI group. The base excess (HCO3), AST/ALT, LDH, and myoglobin in the AKI group were significantly higher than in the non-AKI group in the laboratory data (p ≤ 0.05). The injury severity score (ISS), revised trauma score (RTS), and Abbreviated Injury Scale 3 and 4 (AIS 3 and 4) showed significant results in the clinical data. The CPK level increased over the first and second days but slightly decreased from the third day in both groups. Seven patients in the AKI group received hemodialysis treatment despite the bleeding risk and survived. Conclusion: We recommend that HCO3, CPK, LDH, and myoglobin be checked, and that ISS, RTS, and AIS together with the injury mechanism be considered, at the early stage of treatment in the emergency department.

Keywords: acute kidney injury, emergencies, multiple trauma, rhabdomyolysis

Procedia PDF Downloads 311
244 Optimization of Ultrasound Assisted Extraction of Polysaccharides from Plant Waste Materials: Selected Model Material is Hazelnut Skin

Authors: T. Yılmaz, Ş. Tavman

Abstract:

In this study, the optimization of ultrasound assisted extraction (UAE) of hemicellulose-based polysaccharides from plant waste material has been studied. The selected material is hazelnut skin. Extraction variables for the operation are extraction time, amplitude and application temperature. Optimum conditions have been evaluated based on responses such as the amount of wet crude polysaccharide, total carbohydrate content and dried sample. Pretreated hazelnut skin powders were used for the experiments. 10 grams of sample were suspended in 100 ml of water in a jacketed vessel with additional magnetic stirring. The mixture was sonicated with an immersed ultrasonic probe processor. After the extraction procedures, the ethanol-soluble and insoluble fractions were separated for further examination. The obtained experimental data were analyzed by analysis of variance (ANOVA). Second order polynomial models were developed using multiple regression analysis. The individual and interactive effects of the applied variables were evaluated by a Box-Behnken design. The models developed from the experimental design were predictive and fitted the experimental data well, with a high correlation coefficient (R2 more than 0.95). The polysaccharides extracted from hazelnut skin are assumed to be pectic polysaccharides according to a literature survey of the Fourier Transform Infrared (FTIR) spectroscopy results. No further change was observed between spectra obtained at different sonication times. Application of UAE at the optimized condition has an important effect on the extraction of hemicellulose from plant material by providing partial hydrolysis to break the bonds with other components in the plant cell wall material. This effect can be explained by the varied intensity of microjets and microstreaming at varied sonication conditions.
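
As an illustration of the response-surface step (a Box-Behnken design analysed with a second-order polynomial), the sketch below fits a full quadratic model by ordinary least squares in statsmodels; the design is in coded units and the response values are placeholders, not the study's measurements.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder three-factor Box-Behnken design in coded units (-1, 0, +1) with a fake response
data = pd.DataFrame({
    "time":   [-1, 1, -1, 1, -1, 1, -1, 1, 0, 0, 0, 0, 0, 0, 0],
    "amp":    [-1, -1, 1, 1, 0, 0, 0, 0, -1, 1, -1, 1, 0, 0, 0],
    "temp":   [0, 0, 0, 0, -1, -1, 1, 1, -1, -1, 1, 1, 0, 0, 0],
    "yield_": [4.1, 5.0, 4.6, 5.8, 3.9, 4.8, 4.4, 5.5, 4.0, 4.7, 4.2, 5.1, 5.9, 6.0, 5.8],
})

# Full quadratic model: linear, two-way interaction and squared terms
model = smf.ols(
    "yield_ ~ time + amp + temp + time:amp + time:temp + amp:temp"
    " + I(time**2) + I(amp**2) + I(temp**2)",
    data=data,
).fit()
print(model.rsquared)   # analogue of the R2 criterion reported in the abstract
print(model.summary())  # term-by-term significance, as in the ANOVA step
```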

Keywords: hazelnut skin, optimization, polysaccharide, ultrasound assisted extraction

Procedia PDF Downloads 309
243 Simulation of GAG-Analogue Biomimetics for Intervertebral Disc Repair

Authors: Dafna Knani, Sarit S. Sivan

Abstract:

Aggrecan, one of the main components of the intervertebral disc (IVD), belongs to the family of proteoglycans (PGs) that are composed of glycosaminoglycan (GAG) chains covalently attached to a core protein. Its primary function is to maintain tissue hydration, and hence disc height, under the high loads imposed by muscle activity and body weight. Significant PG loss is one of the first indications of disc degeneration. A possible solution to recover disc function is to inject a synthetic hydrogel into the joint cavity, hence mimicking the role of PGs. One class of hydrogels proposed is GAG analogues, based on sulfate-containing polymers, which are responsible for hydration in disc tissue. In the present work, we used molecular dynamics (MD) to study the effect of the hydrogel crosslinking (type and degree) on the swelling behavior of the suggested GAG-analogue biomimetics by calculation of the cohesive energy density (CED), solubility parameter, enthalpy of mixing (ΔEmix) and the interactions between the molecules in the pure form and as a mixture with water. The simulation results showed that hydrophobicity plays an important role in the swelling of the hydrogel, as indicated by the linear correlation observed between the solubility parameter values of the copolymers and the crosslinker weight ratio (w/w); this correlation was found useful in predicting the amount of PEGDA needed for the desirable hydration behavior of (CS)₄-peptide. Enthalpy of mixing calculations showed that all the GAG analogues, (CS)₄ and (CS)₄-peptide, are water-soluble; radial distribution function analysis revealed that they form interactions with water molecules, which is important for the hydration process. To conclude, our simulation results, beyond supporting the experimental data, can be used as a useful predictive tool in the future development of biomaterials, such as disc replacements.
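
For context, the quantities named above are commonly related through the Hildebrand-Scatchard framework (standard relations, not equations taken from the study): the solubility parameter is the square root of the cohesive energy density, and the energy of mixing per unit volume follows from the mismatch in solubility parameters,

$$ \delta = \sqrt{\mathrm{CED}} = \sqrt{\frac{E_{coh}}{V_m}}, \qquad \frac{\Delta E_{mix}}{V} \approx \phi_1 \, \phi_2 \, (\delta_1 - \delta_2)^2 , $$

where E_coh is the cohesive energy, V_m the molar volume, phi_i the volume fractions and delta_i the solubility parameters of the two components.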

Keywords: molecular dynamics, proteoglycans, enthalpy of mixing, swelling

Procedia PDF Downloads 49
242 The Utility of Sonographic Features of Lymph Nodes during EBUS-TBNA for Predicting Malignancy

Authors: Atefeh Abedini, Fatemeh Razavi, Mihan Pourabdollah Toutkaboni, Hossein Mehravaran, Arda Kiani

Abstract:

In countries with the highest prevalence of tuberculosis, such as Iran, the differentiation of malignant tumors from non-malignant ones is very important. In this study, which was conducted for the first time among the Iranian population, the ultrasonographic morphological characteristics of lymph nodes in patients undergoing EBUS were used to distinguish malignant from non-malignant lymph nodes. The morphological characteristics of the lymph nodes, which consist of size, shape, vascular pattern, echogenicity, margin, coagulation necrosis sign, calcification, and central hilar structure, were obtained during Endobronchial Ultrasound-Guided Trans-Bronchial Needle Aspiration and were compared with the final pathology results. During the study period, a total of 253 lymph nodes were evaluated in 93 cases. Round shape, non-hilar vascular pattern, heterogeneous echogenicity, hyperechogenicity, distinct margin, and the presence of the necrosis sign were significantly more frequent in malignant nodes. On the other hand, the presence of calcification and of a central hilar structure was significantly more frequent in benign nodes (p-value < 0.05). Multivariate logistic regression showed that size > 1 cm, heterogeneous echogenicity, hyperechogenicity, the presence of necrosis signs, and the absence of a central hilar structure are independent predictive factors for malignancy. The accuracy of each of the aforementioned factors is 42.29%, 71.54%, 71.90%, 73.51%, and 65.61%, respectively. Of 74 malignant lymph nodes, 100% had at least one of these independent factors. According to our results, the morphological characteristics of lymph nodes based on Endobronchial Ultrasound-Guided Trans-Bronchial Needle Aspiration can play a role in the prediction of malignancy.

Keywords: EBUS-TBNA, malignancy, nodal characteristics, pathology

Procedia PDF Downloads 113
241 Nutritional Profile and Food Intake Trends amongst Hospital Dieted Diabetic Eye Disease Patients of India

Authors: Parmeet Kaur, Nighat Yaseen Sofi, Shakti Kumar Gupta, Veena Pandey, Rajvaedhan Azad

Abstract:

Nutritional status and prevailing blood glucose trends among hospitalized patients have been linked to clinical outcome. Therefore, the present study was undertaken to assess the anthropometric and dietary intake trends of hospitalized Diabetic Eye Disease (DED) patients. DED patients with type 1 or 2 diabetes, aged over 20 years, were enrolled. Actual food intake was determined by the weighed food record method. The Mifflin-St Jeor predictive equation, multiplied by a combined stress and activity factor of 1.3, was applied to estimate caloric needs. A questionnaire was further administered to obtain the reasons for inadequate dietary intake. The results indicated the validity of joint analyses of body mass index in combination with waist circumference for clinical risk prediction. Dietary data showed a significant difference (p < 0.0005) between the average daily caloric and carbohydrate intake and the actual daily caloric and carbohydrate needs. Mean fasting and post-prandial plasma glucose levels were 150.71 ± 72.200 mg/dL and 219.76 ± 97.365 mg/dL, respectively. Improvements in food delivery systems and nutrition education were indicated for reducing plate waste and enabling a better understanding of the dietary aspects of diabetes management. A team approach of nurses, physicians and other health care providers is required besides the expertise of dietetics professionals. To conclude, the findings of the present study will be useful in planning the nutrition care process (NCP) for optimizing glucose control as a component of quality medical nutrition therapy (MNT) in hospitalized DED patients.
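
For reference, the Mifflin-St Jeor equation estimates resting energy expenditure (REE, in kcal/day) from weight W (kg), height H (cm) and age A (years) as

$$ \mathrm{REE}_{men} = 10W + 6.25H - 5A + 5, \qquad \mathrm{REE}_{women} = 10W + 6.25H - 5A - 161, $$

and, as described above, the daily caloric need in this study was estimated by multiplying REE by a combined stress and activity factor of 1.3.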

Keywords: nutritional status, diabetic eye disease, nutrition care process, medical nutrition therapy

Procedia PDF Downloads 334
240 An Automated Stock Investment System Using Machine Learning Techniques: An Application in Australia

Authors: Carol Anne Hargreaves

Abstract:

A key issue in stock investment is how to select representative features for stock selection. The first objective of this paper is to determine whether an automated stock investment system, using machine learning techniques, may be used to identify a portfolio of growth stocks that are highly likely to provide returns better than the stock market index. The second objective is to identify the technical features that best characterize whether a stock’s price is likely to go up, and to identify the most important factors and their contribution to predicting the likelihood of the stock price going up. Unsupervised machine learning techniques, such as cluster analysis, were applied to the stock data to identify a cluster of stocks that was likely to go up in price – portfolio 1. Next, the principal component analysis technique was used to select stocks that were rated high on component one and component two – portfolio 2. Thirdly, a supervised machine learning technique, the logistic regression method, was used to select stocks with a high probability of their price going up – portfolio 3. The predictive models were validated with metrics such as sensitivity (recall), specificity and overall accuracy. All accuracy measures were above 70%. All portfolios outperformed the market by more than eight times. The top three stocks were selected for each of the three stock portfolios and traded in the market for one month. After one month, the return for each stock portfolio was computed and compared with the stock market index return. The returns were 23.87% for the principal component analysis portfolio, 11.65% for the logistic regression portfolio and 8.88% for the K-means cluster portfolio, while the stock market return was 0.38%. This study confirms that an automated stock investment system using machine learning techniques can identify top performing stock portfolios that outperform the stock market.
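
The paper does not include code; the sketch below illustrates, on synthetic feature data, the three portfolio-construction routes described (k-means clustering, ranking on the first two principal components, and logistic-regression probability of a price rise) using scikit-learn. Feature names, thresholds and portfolio sizes are invented.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Synthetic technical features for 200 stocks
X = pd.DataFrame(rng.normal(size=(200, 4)),
                 columns=["momentum", "rsi", "vol_change", "volatility"])
y = (X["momentum"] + 0.5 * X["vol_change"] + rng.normal(0, 0.5, 200) > 0).astype(int)  # 1 = price went up

Xs = StandardScaler().fit_transform(X)

# Portfolio 1: cluster stocks and keep the cluster with the highest mean momentum
labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(Xs)
best_cluster = X["momentum"].groupby(labels).mean().idxmax()
portfolio1 = X.index[labels == best_cluster]

# Portfolio 2: rank stocks on the sum of the first two principal components
pcs = PCA(n_components=2).fit_transform(Xs)
portfolio2 = X.index[np.argsort(-(pcs[:, 0] + pcs[:, 1]))[:10]]

# Portfolio 3: stocks with a high predicted probability of going up
clf = LogisticRegression().fit(Xs, y)
portfolio3 = X.index[clf.predict_proba(Xs)[:, 1] > 0.7]

print(len(portfolio1), len(portfolio2), len(portfolio3))
```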

Keywords: machine learning, stock market trading, logistic regression, cluster analysis, factor analysis, decision trees, neural networks, automated stock investment system

Procedia PDF Downloads 135
239 Derivatives Balance Method for Linear and Nonlinear Control Systems

Authors: Musaab Mohammed Ahmed Ali, Vladimir Vodichev

Abstract:

This work deals with a universal control technique, or single controller, for linear and nonlinear stabilization and tracking control systems. These systems may be structured as SISO or MIMO, and the parameters of the controlled plants can vary over a wide range. A novel control system design method is introduced: the construction of stable platform orbits using derivative balance, which solves the transfer-function stability preservation problem of a linear system under partial substitution of a rational function. The universal controller is proposed as a polar system with multiple orbits to simplify the design procedure, where each orbit represents a single order of the controller transfer function. The designed controller consists of proportional, integral and derivative terms and multiple feedback and feedforward loops. The controller parameter synthesis method is presented. In general, the controller parameters are obtained from a new polynomial equation in which all parameters are interrelated and take fixed values, with no retuning required. The simulation results show that the proposed universal controller can stabilize an unlimited number of linear and nonlinear plants and shape the desired, previously specified performance. It has been proven that sensor errors and poor performance are completely compensated and cannot affect system performance. The effect of disturbances and noise on the controller loop is fully rejected. The technical and economic effect of using the proposed controller has been investigated and compared to adaptive, predictive, and robust controllers. The economic analysis shows the advantage of a single controller with fixed parameters driving an unlimited number of plants compared to the above-mentioned control techniques.

Keywords: derivative balance, fixed parameters, stable platform, universal control

Procedia PDF Downloads 113
238 Uterine Cervical Cancer: Early Treatment Assessment with T2- and Diffusion-Weighted MRI

Authors: Susanne Fridsten, Kristina Hellman, Anders Sundin, Lennart Blomqvist

Abstract:

Background: Patients diagnosed with locally advanced cervical carcinoma are treated with definitive concomitant chemo-radiotherapy. Treatment failure occurs in 30-50% of patients, with a very poor prognosis. The treatment is standardized, with a risk of both over- and undertreatment. Consequently, there is a great need for biomarkers able to predict therapy outcomes to allow for individualized treatment. Aim: To explore the role of T2- and diffusion-weighted magnetic resonance imaging (MRI) for early prediction of therapy outcome and the optimal time point for assessment. Methods: A pilot study including 15 patients with cervical carcinoma stage IIB-IIIB (FIGO 2009) undergoing definitive chemoradiotherapy. All patients underwent MRI four times: at baseline, 3 weeks, 5 weeks, and 12 weeks after treatment started. Tumour size, size change (∆size), visibility on diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC) and change of ADC (∆ADC) at the different time points were recorded. Results: 7/15 patients relapsed during the study period, referred to as "poor prognosis" (PP); the remaining eight patients are referred to as "good prognosis" (GP). The tumour size was larger at all time points for PP than for GP. The ∆size between any of the four time points was the same for PP and GP patients. The sensitivity and specificity for predicting the prognostic group based on a remaining tumour on DWI were highest at 5 weeks, at 83% (5/6) and 63% (5/8), respectively. The combination of tumour size at baseline and remaining tumour on DWI at 5 weeks in ROC analysis reached an area under the curve (AUC) of 0.83. After 12 weeks, no remaining tumour was seen on DWI among patients with GP, as opposed to 2/7 PP patients. Adding ADC to the tumour size measurements did not improve the predictive value at any time point. Conclusion: A large tumour at baseline MRI combined with a remaining tumour on DWI at 5 weeks predicted a poor prognosis.

Keywords: chemoradiotherapy, diffusion-weighted imaging, magnetic resonance imaging, uterine cervical carcinoma

Procedia PDF Downloads 123
237 Linking Enhanced Resting-State Brain Connectivity with the Benefit of Desirable Difficulty to Motor Learning: A Functional Magnetic Resonance Imaging Study

Authors: Chien-Ho Lin, Ho-Ching Yang, Barbara Knowlton, Shin-Leh Huang, Ming-Chang Chiang

Abstract:

Practicing motor tasks arranged in an interleaved order (interleaved practice, or IP) generally leads to better learning than practicing tasks in a repetitive order (repetitive practice, or RP), an example of how desirable difficulty during practice benefits learning. Greater difficulty during practice, e.g. IP, is associated with greater brain activity measured by higher blood-oxygen-level dependent (BOLD) signal in functional magnetic resonance imaging (fMRI) in the sensorimotor areas of the brain. In this study resting-state fMRI was applied to investigate whether increase in resting-state brain connectivity immediately after practice predicts the benefit of desirable difficulty to motor learning. 26 healthy adults (11M/15F, age = 23.3±1.3 years) practiced two sets of three sequences arranged in a repetitive or an interleaved order over 2 days, followed by a retention test on Day 5 to evaluate learning. On each practice day, fMRI data were acquired in a resting state after practice. The resting-state fMRI data was decomposed using a group-level spatial independent component analysis (ICA), yielding 9 independent components (IC) matched to the precuneus network, primary visual networks (two ICs, denoted by I and II respectively), sensorimotor networks (two ICs, denoted by I and II respectively), the right and the left frontoparietal networks, occipito-temporal network, and the frontal network. A weighted resting-state functional connectivity (wRSFC) was then defined to incorporate information from within- and between-network brain connectivity. The within-network functional connectivity between a voxel and an IC was gauged by a z-score derived from the Fisher transformation of the IC map. The between-network connectivity was derived from the cross-correlation of time courses across all possible pairs of ICs, leading to a symmetric nc x nc matrix of cross-correlation coefficients, denoted by C = (pᵢⱼ). Here pᵢⱼ is the extremum of cross-correlation between ICs i and j; nc = 9 is the number of ICs. This component-wise cross-correlation matrix C was then projected to the voxel space, with the weights for each voxel set to the z-score that represents the above within-network functional connectivity. The wRSFC map incorporates the global characteristics of brain networks measured by the between-network connectivity, and the spatial information contained in the IC maps measured by the within-network connectivity. Pearson correlation analysis revealed that greater IP-minus-RP difference in wRSFC was positively correlated with the RP-minus-IP difference in the response time on Day 5, particularly in brain regions crucial for motor learning, such as the right dorsolateral prefrontal cortex (DLPFC), and the right premotor and supplementary motor cortices. This indicates that enhanced resting brain connectivity during the early phase of memory consolidation is associated with enhanced learning following interleaved practice, and as such wRSFC could be applied as a biomarker that measures the beneficial effects of desirable difficulty on motor sequence learning.

Keywords: desirable difficulty, functional magnetic resonance imaging, independent component analysis, resting-state networks

Procedia PDF Downloads 181
236 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite

Authors: F. Lazzeri, I. Reiter

Abstract:

Energy production optimization has been traditionally very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there are a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true also in microgrids where many elements have to adjust their performance depending on the future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R and web services built and deployed with different components of Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that enables to easily build, deploy, and share predictive analytics solutions; SQL database, a Microsoft database service for app developers; and PowerBI, a suite of business analytics tools to analyze data and share insights. Our results show that Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful to predict hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity and dew point) can play a crucial role in improving the accuracy of the forecasting solution. Data cleaning and feature engineering methods performed in R and different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile and ARIMA) will be presented, and results and performance metrics discussed.
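
The study itself was built in R and Cortana Intelligence Suite (Azure Machine Learning, SQL Database and PowerBI); as a language-neutral illustration of the modelling idea (boosted trees with weather regressors, plus a quantile variant in the spirit of Fast Forest Quantile regression), the sketch below uses scikit-learn on synthetic hourly data. All column names and parameters are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 24 * 365
hours = np.arange(n)
df = pd.DataFrame({
    "hour_of_day": hours % 24,
    "temperature": 15 + 10 * np.sin(2 * np.pi * hours / (24 * 365)) + rng.normal(0, 2, n),
    "humidity": rng.uniform(30, 90, n),
    "wind": rng.gamma(2.0, 2.0, n),
})
# Synthetic hourly load driven by time of day and weather
df["load"] = (100 + 20 * np.sin(2 * np.pi * df["hour_of_day"] / 24)
              + 1.5 * (df["temperature"] - 15).abs() + rng.normal(0, 5, n))

X, y = df.drop(columns="load"), df["load"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.2)

point = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)   # point forecast
upper = GradientBoostingRegressor(loss="quantile", alpha=0.9,
                                  random_state=0).fit(X_tr, y_tr)   # 90th-percentile forecast
print(point.score(X_te, y_te))   # R^2 of the point forecast on the hold-out period
print(upper.predict(X_te[:24]))  # upper-quantile estimates for the next day
```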

Keywords: time-series, features engineering methods for forecasting, energy demand forecasting, Azure Machine Learning

Procedia PDF Downloads 278
235 Project Progress Prediction in Software Development Integrating Time Prediction Algorithms and Large Language Modeling

Authors: Dong Wu, Michael Grenn

Abstract:

Managing software projects effectively is crucial for meeting deadlines, ensuring quality, and managing resources well. Traditional methods often struggle to predict project timelines accurately due to uncertain schedules and complex data. This study addresses these challenges by combining time prediction algorithms with Large Language Models (LLMs). It makes use of real-world software project data to construct and validate a model. The model takes detailed project progress data, such as task completion dynamics, team interaction and development metrics, as its input and outputs predictions of project timelines. To evaluate the effectiveness of this model, a comprehensive methodology is employed, involving simulations and practical applications in a variety of real-world software project scenarios. This multifaceted evaluation strategy is designed to validate the model's significant role in enhancing forecast accuracy and elevating overall management efficiency, particularly in complex software project environments. The results indicate that the integration of time prediction algorithms with LLMs has the potential to optimize software project progress management. These quantitative results suggest the effectiveness of the method in practical applications. In conclusion, this study demonstrates that integrating time prediction algorithms with LLMs can significantly improve the predictive accuracy and efficiency of software project management. This offers an advanced project management tool for the industry, with the potential to improve operational efficiency, optimize resource allocation, and ensure timely project completion.

Keywords: software project management, time prediction algorithms, large language models (LLMS), forecast accuracy, project progress prediction

Procedia PDF Downloads 53
234 The Budget Impact of the DISCERN™ Diagnostic Test for Alzheimer’s Disease in the United States

Authors: Frederick Huie, Lauren Fusfeld, William Burchenal, Scott Howell, Alyssa McVey, Thomas F. Goss

Abstract:

Alzheimer’s Disease (AD) is a degenerative brain disease characterized by memory loss and cognitive decline that presents a substantial economic burden for patients and health insurers in the US. This study evaluates the payer budget impact of the DISCERN™ test in the diagnosis and management of patients with symptoms of dementia evaluated for AD. DISCERN™ comprises three assays that assess critical factors related to AD that regulate memory, formation of synaptic connections among neurons, and levels of amyloid plaques and neurofibrillary tangles in the brain and can provide a quicker, more accurate diagnosis than tests in the current diagnostic pathway (CDP). An Excel-based model with a three-year horizon was developed to assess the budget impact of DISCERN™ compared with CDP in a Medicare Advantage plan with 1M beneficiaries. Model parameters were identified through a literature review and were verified through consultation with clinicians experienced in diagnosis and management of AD. The model assesses direct medical costs/savings for patients based on the following categories: •Diagnosis: costs of diagnosis using DISCERN™ and CDP. •False Negative (FN) diagnosis: incremental cost of care avoidable with a correct AD diagnosis and appropriately directed medication. •True Positive (TP) diagnosis: AD medication costs; cost from a later TP diagnosis with the CDP versus DISCERN™ in the year of diagnosis, and savings from the delay in AD progression due to appropriate AD medication in patients who are correctly diagnosed after an FN diagnosis. •False Positive (FP) diagnosis: cost of AD medication for patients who do not have AD. A one-way sensitivity analysis was conducted to assess the effect of varying key clinical and cost parameters ±10%. An additional scenario analysis was developed to evaluate the impact of individual inputs. In the base scenario, DISCERN™ is estimated to decrease costs by $4.75M over three years, equating to approximately $63.11 saved per test per year for a cohort followed over three years. While the diagnosis cost is higher with DISCERN™ than with CDP modalities, this cost is offset by the higher overall costs associated with CDP due to the longer time needed to receive a TP diagnosis and the larger number of patients who receive an FN diagnosis and progress more rapidly than if they had received appropriate AD medication. The sensitivity analysis shows that the three parameters with the greatest impact on savings are: reduced sensitivity of DISCERN™, improved sensitivity of the CDP, and a reduction in the percentage of disease progression that is avoided with appropriate AD medication. A scenario analysis in which DISCERN™ reduces patients' utilization of computed tomography from 21% in the base case to 16%, magnetic resonance imaging from 37% to 27%, and cerebrospinal fluid biomarker testing, positron emission tomography, electroencephalograms, and polysomnography testing from 4%, 5%, 10%, and 8%, respectively, in the base case to 0%, results in an overall three-year net savings of $14.5M. DISCERN™ improves the rate of accurate, definitive diagnosis of AD earlier in the disease and may generate savings for Medicare Advantage plans.

Keywords: Alzheimer’s disease, budget, dementia, diagnosis

Procedia PDF Downloads 120
233 Evaluation of Classification Algorithms for Diagnosis of Asthma in Iranian Patients

Authors: Taha SamadSoltani, Peyman Rezaei Hachesu, Marjan GhaziSaeedi, Maryam Zolnoori

Abstract:

Introduction: Data mining is defined as a process of finding patterns and relationships in data in order to build predictive models. The application of data mining has extended into vast sectors such as healthcare services. Medical data mining aims to solve real-world problems in the diagnosis and treatment of diseases. This approach applies various techniques and algorithms which differ in accuracy and precision. The purpose of this study was to apply knowledge discovery and data mining techniques to the diagnosis of asthma based on patient symptoms and history. Method: Data mining involves several steps and decisions to be made by the user; the process starts by creating an understanding of the scope and of previous knowledge in the area and identifying the knowledge discovery process from the point of view of the stakeholders, and finishes by acting on the discovered knowledge: conducting it, integrating it with other systems, and documenting and reporting it. In this study, a stepwise methodology was followed to achieve a logical outcome. Results: The sensitivity, specificity and accuracy of the KNN, SVM, Naïve Bayes, NN, classification tree and CN2 algorithms and of related similar studies were evaluated, and ROC curves were plotted to show the performance of the system. Conclusion: The results show that we can accurately diagnose asthma, with approximately ninety percent accuracy, based on demographic and clinical data. The study also showed that methods based on pattern discovery and data mining have a higher sensitivity compared to expert and knowledge-based systems. On the other hand, medical guidelines and evidence-based medicine should remain the basis of diagnostic methods; it is therefore recommended that machine learning algorithms be used in combination with knowledge-based algorithms.
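
As a minimal sketch of the evaluation described (sensitivity, specificity, overall accuracy and ROC analysis for several classifiers), the code below uses scikit-learn on placeholder data standing in for the demographic and clinical features; it is illustrative only, not the study's pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder data standing in for symptoms and history features
X, y = make_classification(n_samples=600, n_features=12, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(probability=True),
    "NaiveBayes": GaussianNB(),
    "Tree": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
    sens = tp / (tp + fn)                  # sensitivity (recall)
    spec = tn / (tn + fp)                  # specificity
    acc = (tp + tn) / (tp + tn + fp + fn)  # overall accuracy
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: sens={sens:.2f} spec={spec:.2f} acc={acc:.2f} auc={auc:.2f}")
```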

Keywords: asthma, datamining, classification, machine learning

Procedia PDF Downloads 426
232 Domains of Socialization Interview: Development and Psychometric Properties

Authors: Dilek Saritas Atalar, Cansu Alsancak Akbulut, İrem Metin Orta, Feyza Yön, Zeynep Yenen, Joan Grusec

Abstract:

Objective: The aim of this study was to develop a semi-structured Domains of Socialization Interview and its coding manual and to test their psychometric properties. The Domains of Socialization Interview was designed to assess maternal awareness regarding effective parenting in five socialization domains (protection, mutual reciprocity, control, guided learning, and group participation) within the framework of the domains-of-socialization approach. Method: A series of two studies were conducted to develop and validate the interview and its coding manual. The pilot study, which sampled 13 mothers of preschool-aged children, was conducted to develop the assessment tools and to test their function and clarity. Participants of the main study were 82 Turkish mothers (Xage = 34.25, SD = 3.53) who have children aged between 35-76 months (Xage = 50.75, SD = 11.24). Mothers filled in a questionnaire package including the Coping with Children’s Negative Emotions Questionnaire, Social Competence and Behavior Evaluation-30, Child Rearing Questionnaire, and Two Dimensional Social Desirability Questionnaire. Afterward, interviews were conducted online by a single interviewer. Interviews were rated independently by two graduate students based on the coding manual. Results: The relationships of the awareness of effective parenting scores to the other measures demonstrate the convergent, discriminant, and predictive validity of the coding manual. Intra-class correlation coefficient estimates ranged between 0.82 and 0.90, showing high interrater reliability of the coding manual. Conclusion: Taken as a whole, the results of these studies demonstrate the validity and reliability of a new and useful interview to measure maternal awareness regarding effective parenting within the framework of the domains-of-socialization approach.

Keywords: domains of socialization, parenting, interview, assessment

Procedia PDF Downloads 157
231 Simulation and Analysis of Passive Parameters of Building in eQuest: A Case Study in Istanbul, Turkey

Authors: Mahdiyeh Zafaranchi

Abstract:

With the rapid development of urbanization and the improvement of living standards in the world, energy consumption and carbon emissions of the building sector are expected to increase in the near future; because of that, energy-saving issues have become more important among engineers. Moreover, the building sector is a major contributor to energy consumption and carbon emissions. The concept of the efficient building appeared as a response to the need to reduce energy demand in this sector, with the main purpose of shifting from standard buildings to low-energy buildings. Although energy saving should happen in all steps of a building's life cycle (material production, construction, demolition), the main concept of the energy-efficient building is saving energy during the life expectancy of a building by using passive and active systems, without sacrificing comfort and quality to reach these goals. The main aim of this study is to investigate passive strategies (which do not need energy consumption or which use renewable energy) to achieve energy-efficient buildings. Energy retrofit measures were explored with the eQuest software using a case study as a base model. The study investigates the predictive impact of major factors such as the thermal transmittance (U-value) of materials, windows, shading devices, thermal insulation, the exposed-envelope ratio, the window/wall ratio, and the lighting system on the energy consumption of the building. The base model was located in Istanbul, Turkey. The impact of eight passive parameters on energy consumption was indicated. After analyzing the base model in eQuest, a final scenario with good energy performance was suggested. The results showed that decreases in the U-values of materials, the exposed-envelope ratio, and the window properties had a significant effect on energy consumption. Finally, annual savings of about 10.5% in electricity consumption and about 8.37% in gas consumption were achieved in the suggested model.

Keywords: efficient building, electric and gas consumption, eQuest, Passive parameters

Procedia PDF Downloads 89
230 Review on Implementation of Artificial Intelligence and Machine Learning for Controlling Traffic and Avoiding Accidents

Authors: Neha Singh, Shristi Singh

Abstract:

Accidents involving motor vehicles are more likely to cause serious injuries and fatalities. Road transport also has a host of other perpetual issues, such as the regular loss of life and goods in accidents. To solve these issues, appropriate measures must be implemented, such as establishing an autonomous incident detection system that makes use of machine learning and artificial intelligence. With the aim of reducing traffic accidents, this article presents an overview of artificial intelligence and machine learning in autonomous event detection systems. The paper explores the major issues, prospective solutions, and the use of artificial intelligence and machine learning in road transportation systems for minimising traffic accidents. There is extensive discussion of additional, fresh, and developing approaches that make accidents in the transportation industry less frequent. The study is structured around the following subtopics: traffic management using machine learning and artificial intelligence, and incident detection with these two technologies. The internet of vehicles and vehicular ad hoc networks, the use of wireless communication technologies such as 5G networks, and the use of machine learning and artificial intelligence for the planning of road transportation systems are elaborated. In addition, safety is the primary concern of road transportation. Route optimization, cargo volume forecasting, predictive fleet maintenance, real-time vehicle tracking, and traffic management, according to the review's key conclusions, are essential for ensuring the safety of road transportation networks. In addition to highlighting research trends, unanswered problems, and key research conclusions, the study also discusses the difficulties in applying artificial intelligence to road transport systems. Those planning and managing road transportation systems may use this work as a resource.

Keywords: artificial intelligence, machine learning, incident detector, road transport systems, traffic management, automatic incident detection, deep learning

Procedia PDF Downloads 78
229 Finite Element Modeling of Aortic Intramural Haematoma Shows Size Matters

Authors: Aihong Zhao, Priya Sastry, Mark L Field, Mohamad Bashir, Arvind Singh, David Richens

Abstract:

Objectives: Intramural haematoma (IMH) is one of the pathologies, along with acute aortic dissection, that present as Acute Aortic Syndrome (AAS). Evidence suggests that unlike aortic dissection, some intramural haematomas may regress with medical management. However, intramural haematomas have been traditionally managed like acute aortic dissections. Given that some of these pathologies may regress with conservative management, it would be useful to be able to identify which of these may not need high risk emergency intervention. A computational aortic model was used in this study to try and identify intramural haematomas with risk of progression to aortic dissection. Methods: We created a computational model of the aorta with luminal blood flow. Reports in the literature have identified 11 mm as the radial clot thickness that is associated with heightened risk of progression of intramural haematoma. Accordingly, haematomas of varying sizes were implanted in the modeled aortic wall to test this hypothesis. The model was exposed to physiological blood flows and the stresses and strains in each layer of the aortic wall were recorded. Results: Size and shape of clot were seen to affect the magnitude of aortic stresses. The greatest stresses and strains were recorded in the intima of the model. When the haematoma exceeded 10 mm in all dimensions, the stress on the intima reached breaking point. Conclusion: Intramural clot size appears to be a contributory factor affecting aortic wall stress. Our computer simulation corroborates clinical evidence in the literature proposing that IMH diameter greater than 11 mm may be predictive of progression. This preliminary report suggests finite element modelling of the aortic wall may be a useful process by which to examine putative variables important in predicting progression or regression of intramural haematoma.

Keywords: intramural haematoma, acute aortic syndrome, finite element analysis

Procedia PDF Downloads 412
228 Role of Energy Storage in Renewable Electricity Systems in the Grid of Ethiopia

Authors: Dawit Abay Tesfamariam

Abstract:

Ethiopia’s Climate-Resilient Green Economy (ECRGE) strategy focuses mainly on the generation and proper utilization of renewable energy (RE). Nonetheless, the current electricity generation of the country is dominated by hydropower. The data collected in 2016 by Ethiopian Electric Power (EEP) indicate that the intermittent RE sources, solar and wind energy, accounted for only 8%. On the other hand, the EEP electricity generation plan for 2030 indicates that 36.1% of the generation share will be covered by solar and wind sources. Thus, a case study was initiated to model and compute the balance and consumption of electricity in three different scenarios (2016, 2025, and 2030) using the EnergyPLAN Model (EPM). Initially, the model was validated using the 2016 annual power generation data before conducting the EnergyPLAN (EP) analysis for the two predictive scenarios. The EP simulation for 2016 showed that there was no significant excess power generated. The EPM was then applied to analyze the role of energy storage in RE in the Ethiopian grid system. The results of the EP simulation showed that there will be excess production of 402 MW on average, and 7963 MW at maximum, in 2025. The excess power occurs in the three rainy months of the year (June, July, and August). The outcome of the model also showed that in the dry seasons of the year, there would be excess power production in the country. Consequently, based on the validated outcomes of the EP model, there is good reason to think about alternatives for the utilization and storage of excess RE. Thus, from the scenarios and model results obtained, it is realistic to infer that if the excess power is utilized with a storage system, it can stabilize the grid and be exported to support the economy. Therefore, researchers must continue to upgrade current and upcoming storage systems to synchronize with the potential that can be generated from renewable energy.

Keywords: renewable energy, power, storage, wind, energy plan

Procedia PDF Downloads 53
227 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks

Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar

Abstract:

A DNA barcode is a short mitochondrial DNA fragment whose nucleotides are each made up of three subunits: a phosphate group, a sugar and one of the nucleic bases (A, T, C, and G). Barcodes provide good sources of the information needed to classify living species. Such intuition has been confirmed by many experimental results. Species classification with DNA barcode sequences has been studied by several researchers. The classification problem assigns unknown species to known ones by analyzing their barcode. This task has to be supported by reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be simultaneously compared using Multiple Sequence Alignment, which is known to be NP-complete. To make this type of analysis feasible, heuristics, like progressive alignment, have been developed. Another tool for similarity search against a database of sequences is BLAST, which outputs shorter regions of high similarity between a query sequence and matched sequences in the database. However, all these methods are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable. This method avoids the complex problem of form and structure in different classes of organisms. The approach is evaluated on empirical data, and its classification performance is compared with other methods. Our system consists of three phases. The first is called transformation and is composed of three steps: Electron-Ion Interaction Pseudopotential (EIIP) codification of the DNA barcodes, Fourier transform, and power spectrum signal processing. The second is called approximation, which is empowered by the use of Multi Library Wavelet Neural Networks (MLWNN). The third is the classification of DNA barcodes, which is realized by applying a hierarchical classification algorithm.
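
A minimal sketch of the transformation phase described above (EIIP codification of a barcode followed by a Fourier power spectrum); the EIIP nucleotide values used below are the commonly cited ones, and the sequence is a made-up fragment, not a real barcode.

```python
import numpy as np

# Commonly cited EIIP values for the four nucleotides (electron-ion interaction pseudopotential)
EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def power_spectrum(seq):
    """Map a DNA barcode to its EIIP numerical signal and return the FFT power spectrum."""
    signal = np.array([EIIP[base] for base in seq.upper() if base in EIIP])
    signal = signal - signal.mean()               # remove the DC component before the transform
    return np.abs(np.fft.rfft(signal)) ** 2

barcode = "ACTGGTACCATTGGAACTGGATGATCCTCC"         # made-up fragment for illustration
ps = power_spectrum(barcode)
print(ps[:5])   # low-frequency part of the spectrum, used as features downstream
```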

Keywords: DNA barcode, electron-ion interaction pseudopotential, Multi Library Wavelet Neural Networks (MLWNN)

Procedia PDF Downloads 294
226 Optimizing Energy Efficiency: Leveraging Big Data Analytics and AWS Services for Buildings and Industries

Authors: Gaurav Kumar Sinha

Abstract:

In an era marked by increasing concerns about energy sustainability, this research endeavors to address the pressing challenge of energy consumption in buildings and industries. This study delves into the transformative potential of AWS services in optimizing energy efficiency. The research is founded on the recognition that effective management of energy consumption is imperative for both environmental conservation and economic viability. Buildings and industries account for a substantial portion of global energy use, making it crucial to develop advanced techniques for analysis and reduction. This study sets out to explore the integration of AWS services with big data analytics to provide innovative solutions for energy consumption analysis. Leveraging AWS's cloud computing capabilities, scalable infrastructure, and data analytics tools, the research aims to develop efficient methods for collecting, processing, and analyzing energy data from diverse sources. The core focus is on creating predictive models and real-time monitoring systems that enable proactive energy management. By harnessing AWS's machine learning and data analytics capabilities, the research seeks to identify patterns, anomalies, and optimization opportunities within energy consumption data. Furthermore, this study aims to propose actionable recommendations for reducing energy consumption in buildings and industries. By combining AWS services with metrics-driven insights, the research strives to facilitate the implementation of energy-efficient practices, ultimately leading to reduced carbon emissions and cost savings. The integration of AWS services not only enhances the analytical capabilities but also offers scalable solutions that can be customized for different building and industrial contexts. The research also recognizes the potential for AWS-powered solutions to promote sustainable practices and support environmental stewardship.
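As a purely illustrative sketch of the kind of anomaly check such a pipeline could run once energy readings have been collected, the fragment below flags outliers with a rolling z-score; the data, column layout, and threshold are hypothetical and do not represent the AWS architecture described above.

```python
import numpy as np
import pandas as pd

# Illustrative only: flag anomalous energy readings with a rolling z-score.
# In the pipeline described above this kind of check would run on data
# already landed in cloud storage; the series and threshold are placeholders.

def flag_anomalies(readings: pd.Series, window: int = 24, threshold: float = 3.0) -> pd.Series:
    """Mark readings more than `threshold` rolling std-devs away from the rolling mean."""
    rolling_mean = readings.rolling(window, min_periods=window).mean()
    rolling_std = readings.rolling(window, min_periods=window).std()
    zscore = (readings - rolling_mean) / rolling_std
    return zscore.abs() > threshold

# Hypothetical hourly consumption for one building (kWh).
idx = pd.date_range("2024-01-01", periods=24 * 30, freq="h")
rng = np.random.default_rng(1)
load = pd.Series(50 + 10 * np.sin(np.arange(len(idx)) * 2 * np.pi / 24)
                 + rng.normal(0, 2, len(idx)), index=idx)
load.iloc[200] += 40  # inject a fault to detect

print(load[flag_anomalies(load)])
```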

Keywords: energy consumption analysis, big data analytics, AWS services, energy efficiency

Procedia PDF Downloads 38
225 Geosynthetic Tubes in Coastal Structures a Better Substitute for Shorter Planning Horizon: A Case Study

Authors: Pietro Rimoldi, Anilkumar Gopinath, Minimol Korulla

Abstract:

Coastal engineering structures are conventionally designed for a short planning horizon, usually 20 years. During their design life, these structures are subjected to different offshore climatic externalities such as waves, tides, and tsunamis. The probability of occurrence of these externalities varies. The impact they frequently cause on the structures is of concern because it has a significant bearing on the capital/operating cost of the project. There can also be repeated, closely spaced occurrences of these externalities within the assumed planning horizon, which can cause heavy damage to conventional coastal structures that are mainly made of rock. Replacing the damaged portion to prevent complete collapse is time-consuming and expensive for hard rock structures. If, however, coastal structures are made of geosynthetic containment systems, such replacement is quickly possible in the period between two successive occurrences. In order to improve knowledge of these occurrences and to enhance predictive capacity, this study estimates the risk of encountering the various externalities within the design life period based on the concept of the exponential distribution. This gives an idea of the frequency of occurrences, which in turn indicates whether replacement is necessary and, if so, at what time interval such replacements have to be effected. To validate this theoretical finding, a pilot project has been taken up in the field so that the impact of the externalities can be studied for both a hard rock structure and a geosynthetic tube structure. The paper brings out the salient features of a case study pertaining to a project in which geosynthetic tubes have been used for the reformation of a seawall adjacent to a conventional rock structure on the Alappuzha coast, Kerala, India. The effectiveness of the geosystem in combating the impact of the short-term externalities has been brought out.
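A minimal sketch of the encounter-risk calculation follows, assuming the standard exponential (Poisson-arrival) form P = 1 − exp(−L/T) for an externality with mean return period T over a design life L; the return periods used are hypothetical placeholders, not values from the case study.

```python
import math

# Under a Poisson/exponential arrival model, the probability that an event
# with mean return period T (years) is encountered at least once within a
# design life L (years) is P = 1 - exp(-L / T).
# The return periods below are hypothetical, not project data.

def encounter_probability(return_period: float, design_life: float) -> float:
    return 1.0 - math.exp(-design_life / return_period)

design_life = 20  # years, the conventional planning horizon mentioned above
for name, T in [("seasonal storm waves", 5), ("severe cyclone", 50), ("tsunami", 500)]:
    p = encounter_probability(T, design_life)
    print(f"{name:>20s} (T = {T:>3d} yr): P(encounter in {design_life} yr) = {p:.2f}")
```

The longer the return period relative to the 20-year planning horizon, the smaller the encounter risk, which is what drives the replacement-interval reasoning above.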

Keywords: climatic externalities, exponential distribution, geosystems, planning horizon

Procedia PDF Downloads 215
224 A Profile of an Exercise Addict: The Relationship between Exercise Addiction and Personality

Authors: Klary Geisler, Dalit Lev-Arey, Yael Hacohen

Abstract:

It is a well-known fact that exercise has favorable effects on people's physical health as well as on mental well-being. However, excessive exercise is likely to have negative consequences (e.g., physical injuries, neglect of everyday responsibilities such as work and family life). Lately, there has been growing interest in exercise addiction, sometimes referred to as exercise dependence, which is defined as a craving for physical activity that results in extreme workout sessions and generates negative physiological and psychological symptoms (e.g., withdrawal symptoms, tolerance, social conflict). Exercise addiction is considered a behavioral addiction, yet it was not included in the latest editions of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) due to a lack of significant research. Specifically, there is scarce research on the relationship between exercise addiction and personality dimensions. The purpose of the current research was to examine the relationship between primary exercise addiction symptoms and the Big Five dimensions, perfectionism (high performance expectations and self-critical performance evaluations), and subjective affect. Participants were 152 trainees in a variety of aerobic sports (running, cycling, swimming) who were recruited through sports groups and trainers. Of the participants, 88% trained for at least 5 hours per week and 24% trained more than 10 hours per week. To test the predictive ability of the independent variables, a hierarchical linear regression with forced block entry was performed. Neuroticism significantly predicted exercise addiction symptoms (20% of the variance, p < 0.001), while conscientiousness was negatively correlated with exercise addiction symptoms (14% of the variance, p < 0.05); both made a unique contribution. The other Big Five dimensions (agreeableness, openness, and extraversion) did not contribute to the dependent variable. Moreover, maladaptive perfectionism (self-critical performance evaluations) also significantly predicted exercise addiction symptoms (10% of the variance, p < 0.05). The overall regression model explained 54% of the variance.
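A hierarchical regression with forced block entry can be sketched as follows, assuming simulated data and hypothetical variable names; statsmodels is used only to illustrate the block-wise R² comparison, not to reproduce the reported results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Sketch of hierarchical regression with forced block entry on simulated data:
# block 1 enters the Big Five scores, block 2 adds maladaptive perfectionism,
# and the change in R² is inspected. All names and data are hypothetical.

rng = np.random.default_rng(42)
n = 152
df = pd.DataFrame({
    "neuroticism":       rng.normal(0, 1, n),
    "conscientiousness": rng.normal(0, 1, n),
    "extraversion":      rng.normal(0, 1, n),
    "agreeableness":     rng.normal(0, 1, n),
    "openness":          rng.normal(0, 1, n),
    "maladaptive_perf":  rng.normal(0, 1, n),
})
df["exercise_addiction"] = (0.5 * df["neuroticism"] - 0.4 * df["conscientiousness"]
                            + 0.3 * df["maladaptive_perf"] + rng.normal(0, 1, n))

block1 = ["neuroticism", "conscientiousness", "extraversion", "agreeableness", "openness"]
block2 = block1 + ["maladaptive_perf"]

m1 = sm.OLS(df["exercise_addiction"], sm.add_constant(df[block1])).fit()
m2 = sm.OLS(df["exercise_addiction"], sm.add_constant(df[block2])).fit()

print(f"Block 1 R² = {m1.rsquared:.3f}")
print(f"Block 2 R² = {m2.rsquared:.3f} (ΔR² = {m2.rsquared - m1.rsquared:.3f})")
print(m2.summary().tables[1])  # coefficients, standard errors, p-values
```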

Keywords: big five, conscientiousness, excessive exercise, exercise addiction, neuroticism, perfectionism, personality

Procedia PDF Downloads 200
223 A One-Dimensional Modeling Analysis of the Influence of Swirl and Tumble Coefficient in a Single-Cylinder Research Engine

Authors: Mateus Silva Mendonça, Wender Pereira de Oliveira, Gabriel Heleno de Paula Araújo, Hiago Tenório Teixeira Santana Rocha, Augusto César Teixeira Malaquias, José Guilherme Coelho Baeta

Abstract:

Stricter legislation and greater public demand regarding gas emissions and their effects on the environment and on human health have led the automotive industry to reinforce research focused on reducing levels of contamination. This reduction can be achieved through improvements in internal combustion engines that lower both specific fuel consumption and air pollutant emissions. These improvements can be obtained through numerical simulation, a technique that works together with experimental tests. The aim of this paper is to build, with the support of the GT-Suite software, a one-dimensional model of a single-cylinder research engine to analyze the impact of varying the swirl and tumble coefficients on the performance and air pollutant emissions of the engine. Initially, the discharge coefficient is calculated with the Converge CFD 3D software, given that it is an input parameter in GT-Power. Mesh sensitivity tests are performed on a 3D geometry built for this purpose, using the mass flow rate through the valve as a reference. In the one-dimensional simulation, the non-predictive combustion model called Three Pressure Analysis (TPA) is adopted, and then data such as the mass trapped in the cylinder, the heat release rate, and the accumulated released energy are calculated, so that validation can be performed by comparing these data with those obtained experimentally. Finally, the swirl and tumble coefficients are introduced into their corresponding objects so that their influence can be observed in comparison with the results obtained previously.
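A discharge coefficient of the kind passed from a 3D CFD run into a 1D code is typically defined as the ratio of the computed mass flow rate to the ideal isentropic mass flow rate through a valve reference area; the sketch below illustrates that definition with placeholder values, not data from the paper.

```python
import math

# Sketch of the discharge-coefficient definition used when post-processing
# CFD valve-flow results: Cd = m_dot_actual / m_dot_ideal, where the ideal
# mass flow is the isentropic compressible flow through the valve reference
# area. All numerical values are placeholders.

def ideal_mass_flow(area, p0, T0, p, gamma=1.4, R=287.0):
    """Isentropic mass flow [kg/s] through `area` [m^2] for pressure ratio p/p0
    (upstream stagnation pressure p0 [Pa] and temperature T0 [K])."""
    pr = p / p0
    pr_crit = (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))
    pr = max(pr, pr_crit)  # cap at the choked-flow (critical) pressure ratio
    term = (2.0 * gamma / (gamma - 1.0)) * (pr ** (2.0 / gamma) - pr ** ((gamma + 1.0) / gamma))
    return area * p0 / math.sqrt(R * T0) * math.sqrt(term)

# Placeholder values: simulated mass flow and valve reference geometry (hypothetical).
m_dot_cfd = 0.032        # kg/s, from the 3D CFD run
valve_ref_area = 7.0e-4  # m^2, valve reference (seat) area
m_dot_ideal = ideal_mass_flow(valve_ref_area, p0=101325.0, T0=300.0, p=91325.0)

Cd = m_dot_cfd / m_dot_ideal
print(f"Ideal mass flow: {m_dot_ideal:.4f} kg/s -> Cd = {Cd:.3f}")
```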

Keywords: 1D simulation, single-cylinder research engine, swirl coefficient, three pressure analysis, tumble coefficient

Procedia PDF Downloads 81
222 The Predictive Implication of Executive Function and Language in Theory of Mind Development in Preschool Age Children

Authors: Michael Luc Andre, Célia Maintenant

Abstract:

Theory of mind is a milestone in child development that allows children to understand that others can have mental states different from their own. Understanding the developmental stages of theory of mind in children has led researchers to two connected research problems: on the one hand, the link between executive function and theory of mind, and on the other hand, the relationship between theory of mind and syntax processing. These two lines of research have produced a large literature full of important results, despite a certain level of disagreement between researchers. For a long time, these two research perspectives have continued to grow separately, despite conclusions suggesting that the three variables should involve the same developmental period. Indeed, our goal was to study the relation between theory of mind, executive function, and language through a single research question: between executive function and language, one of the two variables could play a critical role in the relationship between theory of mind and the other variable. Thus, 112 children aged between three and six years were recruited to complete a receptive and an expressive vocabulary task, a syntax comprehension task, a theory of mind task, and three executive function tasks (inhibition, cognitive flexibility, and working memory). The results showed significant correlations between performance on the theory of mind task and performance on the executive function tasks, except for the cognitive flexibility task. We also found significant correlations between success on the theory of mind task and performance on all language tasks. Multiple regression analysis identified only syntax and general language abilities as possible predictors of theory of mind performance in our sample of preschool-age children. The results are discussed from the perspective of a major role of language abilities in theory of mind development. We also discuss possible reasons that could explain the non-significance of the executive domains in predicting theory of mind performance, and the implications of our results for the literature.

Keywords: child development, executive function, general language, syntax, theory of mind

Procedia PDF Downloads 40