Search results for: predictive collision avoidance
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1438

298 Consumer Knowledge of Food Quality Assurance and Use of Food Labels in Trinidad, West Indies

Authors: Daryl Clement Knutt, Neela Badrie, Marsha Singh

Abstract:

Quality assurance and product labelling are vital in the food and drink industry as tactical tools in a competitive environment. The food label is a principal marketing tool which also serves as a regulatory mechanism in the safeguarding of consumer well-being. The objective of this study was to evaluate the level of consumers’ use and understanding of food labeling information and their knowledge of food quality assurance systems. The study population consisted of Trinidadian adults over the age of 18 (n=384). Data collection was conducted via a self-administered questionnaire containing 31 questions in four sections: I. sociodemographic information; II. food quality and quality assurance; III. use of labeling information; and IV. laws and regulations. Sampling was conducted at six supermarkets in five major regions of the country over a period of three weeks in 2014. The demographic profile of the shoppers revealed that the majority were female (63.6%). Gender, and concern about the nutrient content of food, were predictive indicators of those who read food labels. Most (93.1%) read food labels before purchase: 15.4% ‘always’, 32.5% ‘most times’ and 45.2% ‘sometimes’. Some (42%) were often satisfied with the information presented on food labels, whilst 35.7% of consumers were unsatisfied. When the respondents were questioned on their familiarity with the terms ‘food quality’ and ‘food quality assurance’, 21.3% of consumers replied positively (‘I have heard the terms and know a lot’) whilst 37% were only ‘somewhat familiar’. Consumers were mainly knowledgeable of International Organization for Standardization (ISO) standards (51.5%) and Good Agricultural Practices (GAP) (38%) as quality tools. Participants ranked ‘nutritional information’ as the number one labeling element that should be better presented, followed by ‘allergy notes’ and ‘best before date’.
Females, being the household shoppers, were more inclined to read labels. The shoppers would like better presentation of food labelling information to guide their decision to purchase a product.

Keywords: food labels, food quality, nutrition, marketing, Trinidad and Tobago

Procedia PDF Downloads 460
297 The Predictive Value of Micro Rna 451 on the Outcome of Imatinib Treatment in Chronic Myeloid Leukemia Patients

Authors: Nehal Adel Khalil, Amel Foad Ketat, Fairouz Elsayed Mohamed Ali, Nahla Abdelmoneim Hamid, Hazem Farag Manaa

Abstract:

Background: Chronic myeloid leukemia (CML) represents 15% of adult leukemias. Imatinib Mesylate (IM) is the gold standard treatment for new cases of CML. Treatment with IM results in improvement of the majority of cases; however, about 25% of cases may develop resistance. Sensitive and specific early predictors of IM resistance in CML patients have not been established to date. Aim: To investigate the value of miR-451 as an early predictor of IM resistance in Egyptian CML patients. Methods: The study employed real-time quantitative Polymerase Chain Reaction (qPCR) to investigate the leucocytic expression of miR-451 in fifteen newly diagnosed CML patients (group I), fifteen IM-responder CML patients (group II), fifteen IM-resistant CML patients (group III) and fifteen healthy subjects of matched age and sex as a control group (group IV). Response to IM was defined as a < 10% BCR-ABL transcript level after 3 months of therapy. The following parameters were assessed in all the studied groups: 1- complete blood count (CBC); 2- measurement of the plasma level of miR-451 using qPCR; 3- detection of BCR-ABL gene mutation using qPCR. Results: The present study revealed that miR-451 was significantly down-regulated in leucocytes of newly diagnosed CML patients as compared to healthy subjects. IM-responder CML patients showed an up-regulation of miR-451 compared with IM-resistant CML patients. Conclusion: According to the data from the present study, leucocytic miR-451 expression is a useful additional follow-up marker for the response to IM and a promising prognostic biomarker for CML.

Keywords: chronic myeloid leukemia, imatinib resistance, microRNA 451, Polymerase Chain Reaction

Procedia PDF Downloads 278
296 A Prediction of Cutting Forces Using Extended Kienzle Force Model Incorporating Tool Flank Wear Progression

Authors: Wu Peng, Anders Liljerehn, Martin Magnevall

Abstract:

In metal cutting, tool wear gradually changes the micro geometry of the cutting edge. Today there is a significant gap in understanding the impact these geometrical changes have on the cutting forces, which govern tool deflection and heat generation in the cutting zone. Accurate models and understanding of the interaction between the workpiece and cutting tool lead to improved accuracy in simulation of the cutting process. These simulations are useful in several application areas, e.g., optimization of insert geometry and machine tool monitoring. This study aims to develop an extended Kienzle force model that accounts for the effects of rake angle variations and tool flank wear on the cutting forces. The starting point is cutting force measurements from orthogonal turning tests of pre-machined flanges of well-defined width, using triangular coated inserts to assure orthogonal conditions. The cutting forces were measured by a dynamometer for a set of three different rake angles, and wear progression was monitored during machining by an optical measuring collaborative robot. The method combines the measured cutting forces with the insert's flank wear progression to extend the mechanistic cutting force model with flank wear as an input parameter. The adapted cutting force model is validated in a turning process with commercial cutting tools and shows a significant capability to predict cutting forces accounting for tool flank wear and inserts with different rake angles. The results of this study suggest that the nonlinear effect of tool flank wear and the interaction between the workpiece and the cutting tool can be captured by the developed cutting force model.
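As a rough illustration of the mechanistic approach described above, the basic Kienzle relation can be extended with a wear-dependent term. The coefficients and the linear form of the wear correction below are illustrative assumptions, not values or equations from the study:

```python
# Kienzle cutting force model, extended with a hypothetical linear
# flank-wear correction. All coefficients are illustrative assumptions.

def kienzle_force(b_mm, h_mm, kc11=1500.0, mc=0.25):
    """Basic Kienzle model: Fc = kc1.1 * b * h^(1 - mc).

    b_mm : undeformed chip width [mm]
    h_mm : undeformed chip thickness [mm]
    kc11 : specific cutting force for b = h = 1 mm [N/mm^2]
    mc   : chip-thickness exponent [-]
    """
    return kc11 * b_mm * h_mm ** (1.0 - mc)

def kienzle_force_worn(b_mm, h_mm, vb_mm, kc11=1500.0, mc=0.25, kw=800.0):
    """Extended model: add a force contribution proportional to the flank
    wear land VB acting over the engaged width b. kw [N/mm^2] is an
    assumed wear-sensitivity coefficient fitted from measurements."""
    return kienzle_force(b_mm, h_mm, kc11, mc) + kw * vb_mm * b_mm

sharp = kienzle_force_worn(3.0, 0.2, 0.0)   # fresh edge, VB = 0
worn = kienzle_force_worn(3.0, 0.2, 0.3)    # worn edge, VB = 0.3 mm
print(round(sharp, 1), round(worn, 1))
```

In practice the wear term would be calibrated against the dynamometer measurements rather than assumed linear.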

Keywords: cutting force, Kienzle model, predictive model, tool flank wear

Procedia PDF Downloads 83
295 Understanding and Explaining Urban Resilience and Vulnerability: A Framework for Analyzing the Complex Adaptive Nature of Cities

Authors: Richard Wolfel, Amy Richmond

Abstract:

Urban resilience and vulnerability are critical concepts in the modern city due to the increased sociocultural, political, economic, demographic, and environmental stressors that influence current urban dynamics. Explaining urban resilience and vulnerability poses several challenges for urban scholars. First, cities are dominated by people, who are challenging to model from both an explanatory and a predictive perspective. Second, urban regions are highly recursive in nature: they not only influence human action, but the structures of cities are themselves constantly changing due to human actions. As a result, explanatory frameworks must continuously evolve as humans influence, and are influenced by, the urban environment in which they operate. Finally, modern cities have populations, sociocultural characteristics, economic flows, and environmental impacts that are orders of magnitude beyond those of the cities of the past. As a result, frameworks that seek to explain the various functions of a city influencing urban resilience and vulnerability must address the complex adaptive nature of cities and the interaction of many distinct factors. This project develops a taxonomy and framework for organizing and explaining urban vulnerability. The framework is built on a well-established political development model that includes six critical classes of urban dynamics: political presence, political legitimacy, political participation, identity, production, and allocation. In addition, the framework explores how environmental security and technology influence and are influenced by the six elements of political development. The framework aims to identify key tipping points in society that act as influential agents of urban vulnerability in a region. This will help analysts and scholars predict and explain the influence of both physical and human geographical stressors in a dense urban area.

Keywords: urban resilience, vulnerability, sociocultural stressors, political stressors

Procedia PDF Downloads 89
294 Comparison of Cervical Length Using Transvaginal Ultrasonography and Bishop Score to Predict Successful Induction

Authors: Lubena Achmad, Herman Kristanto, Julian Dewantiningrum

Abstract:

Background: The Bishop score is a standard method used to predict the success of induction. The examination tends to be subjective, with high inter- and intraobserver variability, so it is presumed to have a low predictive value for the outcome of labor induction. Cervical length measurement using transvaginal ultrasound is considered more objective; it is not a complicated procedure and is less invasive than digital vaginal examination. Objective: To compare transvaginal ultrasound and the Bishop score in predicting successful induction. Methods: This was a prospective cohort study. One hundred and twenty women with singleton pregnancies undergoing induction of labor at 37–42 weeks who met the inclusion and exclusion criteria were enrolled. Cervical assessment by both transvaginal ultrasound and Bishop score was conducted prior to induction. Successful labor induction was defined as the ability to reach the active phase ≤ 12 hours after induction. To determine the best cut-off points for cervical length and Bishop score, receiver operating characteristic (ROC) curves were plotted. Logistic regression analysis was used to determine which factors best predicted induction success. Results: Age, premature rupture of the membranes, the Bishop score, cervical length and funneling were significant predictors of successful induction. The ROC curves showed that the best cut-off points for predicting successful induction were 25.45 mm for cervical length and 3 for the Bishop score. Logistic regression showed that only premature rupture of the membranes and cervical length ≤ 25.45 mm significantly predicted the success of labor induction. Excluding premature rupture of the membranes as the indication for induction, cervical length less than 25.3 mm was a better predictor of successful induction.
Conclusion: Compared to Bishop score, cervical length using transvaginal ultrasound was a better predictor of successful induction.
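The ROC-derived cut-off described above is commonly the threshold maximizing Youden's J statistic (sensitivity + specificity − 1); the abstract does not state which criterion was used, so this is a hedged sketch on synthetic data, not the study's dataset:

```python
# Finding an optimal diagnostic cutoff from an ROC-style sweep using
# Youden's J. Data are synthetic; shorter cervix -> predicted success.

def best_cutoff(values, outcomes):
    """Return (threshold, J) maximizing J = sens + spec - 1 for the
    rule 'predict success when value <= threshold'."""
    pos = [v for v, y in zip(values, outcomes) if y == 1]
    neg = [v for v, y in zip(values, outcomes) if y == 0]
    best_t, best_j = None, -1.0
    for t in sorted(set(values)):
        sens = sum(v <= t for v in pos) / len(pos)
        spec = sum(v > t for v in neg) / len(neg)
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# synthetic cervical lengths (mm) and induction outcomes (1 = success)
lengths = [18, 20, 22, 24, 25, 26, 28, 30, 33, 36]
success = [ 1,  1,  1,  1,  1,  0,  1,  0,  0,  0]
print(best_cutoff(lengths, success))
```

The same sweep over Bishop scores (with the inequality reversed) would yield the score cut-off.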

Keywords: Bishop Score, cervical length, induction, successful induction, transvaginal sonography

Procedia PDF Downloads 303
293 How to Talk about It without Talking about It: Cognitive Processing Therapy Offers Trauma Symptom Relief without Violating Cultural Norms

Authors: Anne Giles

Abstract:

Humans naturally wish they could forget traumatic experiences. To help prevent future harm, however, the human brain has evolved to retain data about experiences of threat, alarm, or violation. When given compassionate support and assistance with thinking helpfully and realistically about traumatic events, most people can adjust to experiencing hardships, albeit with residual sad, unfortunate memories. Persistent, recurrent, intrusive memories, difficulty sleeping, emotion dysregulation, and avoidance of reminders, however, may be symptoms of Post-traumatic Stress Disorder (PTSD). Brain scans show that PTSD affects brain functioning. We currently have no physical means of restoring the system of brain structures and functions involved with PTSD. Medications may ease some symptoms but not others. However, forms of "talk therapy" with cognitive components have been found by researchers to reduce, even resolve, a broad spectrum of trauma symptoms. Many cultures have taboos against talking about hardships. Individuals may present themselves to mental health care professionals with severe, disabling trauma symptoms but, because of cultural norms, be unable to speak about them. In China, for example, relationship expectations may include the belief, "Bad things happening in the family should stay in the family (jiāchǒu bùkě wàiyán 家丑不可外扬)." The concept of "family (jiā 家)" may include partnerships, close and extended families, communities, companies, and the nation itself. In contrast to many trauma therapies, Cognitive Processing Therapy (CPT) for Post-traumatic Stress Disorder asks its participants to focus not on "what" happened but on "why" they think the trauma(s) occurred. The question "why" activates and exercises cognitive functioning. Brain scans of individuals with PTSD reveal that the executive, cognitive portions of the brain are under-active while the emotion centers are over-active.
CPT conceptualizes PTSD as a network of cognitive distortions that keep an individual "stuck" in this under-functioning and over-functioning dynamic. By asking participants forms of the question "why," and by offering a protocol for examining answers and relinquishing unhelpful beliefs, CPT assists individuals in consciously reactivating the cognitive, executive functions of their brains, thus restoring normal functioning and reducing distressing trauma symptoms. The culturally sensitive components of CPT that allow people to "talk about it without talking about it" may offer the possibility for worldwide relief from symptoms of trauma.

Keywords: cognitive processing therapy (CPT), cultural norms, post-traumatic stress disorder (PTSD), trauma recovery

Procedia PDF Downloads 184
292 A Machine Learning-Based Model to Screen Antituberculosis Compound Targeted against LprG Lipoprotein of Mycobacterium tuberculosis

Authors: Syed Asif Hassan, Syed Atif Hassan

Abstract:

Multidrug-resistant tuberculosis (MDR-TB) is an infection caused by resistant strains of Mycobacterium tuberculosis that respond neither to isoniazid nor to rifampicin, the most important anti-TB drugs. The increasing occurrence of drug-resistant strains of MTB calls for an intensive search for novel target-based therapeutics. In this context, LprG (Rv1411c), a lipoprotein from MTB, plays a pivotal role in the immune evasion of MTB, leading to survival and propagation of the bacterium within the host cell. Therefore, a machine learning method will be developed to generate a computational model that can predict the potential anti-LprG activity of novel antituberculosis compounds. The present study will utilize a dataset from the PubChem database maintained by the National Center for Biotechnology Information (NCBI). The compounds in the dataset, screened against MTB, were categorized as active or inactive based upon PubChem activity score. PowerMV, a molecular descriptor generator and visualization tool, will be used to generate the 2D molecular descriptors for the active and inactive compounds in the dataset. The 2D molecular descriptors generated by PowerMV will be used as features. We feed these features into three different classifiers, namely a random forest, a deep neural network, and a recurrent neural network, to build separate predictive models, choosing the best-performing model based on its accuracy in predicting novel antituberculosis compounds with anti-LprG activity. Additionally, the predicted active compounds will be screened using a SMARTS filter to choose molecules with drug-like features.
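The descriptor-based classification step can be sketched in miniature. The study uses PowerMV descriptors with random forest and neural network classifiers; the pure-Python 1-nearest-neighbour rule, the descriptor vectors, the activity scores, and the threshold of 40 below are all illustrative stand-ins:

```python
# Toy sketch: label compounds active/inactive from an activity score
# (mirroring PubChem activity-score labeling), then classify a query
# compound by 1-nearest-neighbour on its descriptor vector.

def label(score, threshold=40):
    """Assumed labeling rule; the real cutoff depends on the assay."""
    return "active" if score >= threshold else "inactive"

def nn_predict(train, query):
    """1-NN on descriptor vectors using squared Euclidean distance."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda item: d2(item[0], query))[1]

# (descriptor vector, PubChem-style activity score) -- made-up values
compounds = [((0.9, 120.0), 72), ((0.8, 110.0), 55),
             ((0.2, 310.0), 10), ((0.1, 290.0), 5)]
train = [(desc, label(score)) for desc, score in compounds]

print(nn_predict(train, (0.85, 115.0)))   # near the active cluster
print(nn_predict(train, (0.15, 300.0)))   # near the inactive cluster
```

A real pipeline would replace the 2-element tuples with the full PowerMV descriptor vectors and the 1-NN rule with the trained ensemble or network.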

Keywords: antituberculosis drug, classifier, machine learning, molecular descriptors, prediction

Procedia PDF Downloads 364
291 Building Transparent Supply Chains through Digital Tracing

Authors: Penina Orenstein

Abstract:

In today’s world, with COVID-19 a constant worldwide threat, organizations need greater visibility over their supply chains than ever before in order to find areas for improvement and greater efficiency, reduce the chances of disruption, and stay competitive. The concept of supply chain mapping is one where every process and route is mapped in detail between each vendor and supplier. The simplest method of mapping involves sourcing publicly available data, including news and financial information concerning relationships between suppliers. An additional layer of information would be disclosed by large, direct suppliers about their production and logistics sites. While this method has the advantage of not requiring any input from suppliers, it also doesn’t allow for much transparency beyond the first supplier tier and may generate irrelevant data—noise—that must be filtered out to find the actionable data. The primary goal of this research is to build data maps of supply chains by focusing on a layered approach. Using these maps, the secondary goal is to address the question of whether the supply chain can be re-engineered to make improvements, for example, to lower the carbon footprint. Using a drill-down approach, the end result is a comprehensive map detailing the linkages between tier-one, tier-two, and tier-three suppliers superimposed on a geographical map. The driving force behind this idea is to be able to trace individual parts to the exact site where they’re manufactured. In this way, companies can ensure sustainability practices from the production of raw materials through the finished goods. The approach allows companies to identify and anticipate vulnerabilities in their supply chain. It unlocks predictive analytics capabilities and enables them to act proactively. The research is particularly compelling because it unites network science theory with empirical data and presents the results in a visual, intuitive manner.
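The layered map and drill-down trace described above can be sketched as a tier-keyed graph. All part and supplier names below are hypothetical placeholders, and the flat adjacency dict is a simplification of the geographically annotated map the research builds:

```python
# Toy layered supply-chain map: each node lists its direct suppliers.
# A depth-first trace drills down from a finished part to tier three.

supply_map = {
    "widget-A": ["tier1-assembler"],
    "tier1-assembler": ["tier2-fab"],
    "tier2-fab": ["tier3-raw-materials"],
    "tier3-raw-materials": [],          # deepest tier: no suppliers
}

def trace(part, graph):
    """Return the full supplier chain reachable from `part`."""
    chain = [part]
    for supplier in graph.get(part, []):
        chain += trace(supplier, graph)
    return chain

print(trace("widget-A", supply_map))
```

In the actual research each node would additionally carry site coordinates so the chain can be superimposed on a geographical map.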

Keywords: data mining, supply chain, empirical research, data mapping

Procedia PDF Downloads 150
290 A Framework on Data and Remote Sensing for Humanitarian Logistics

Authors: Vishnu Nagendra, Marten Van Der Veen, Stefania Giodini

Abstract:

Effective humanitarian logistics operations are a cornerstone of successful disaster relief operations. However, to be effective, they need to be demand driven and supported by adequate data for prioritization. Without this data, operations are carried out in an ad hoc manner and eventually become chaotic. The current availability of geospatial data helps in creating models for predictive damage and vulnerability assessment, which can give logisticians a great advantage in understanding the nature and extent of the disaster damage. This translates into actionable information on the demand for relief goods, the state of the transport infrastructure and, subsequently, the priority areas for relief delivery. However, due to the unpredictable nature of disasters, the accuracy of the models needs improvement, which can be achieved using remote sensing data from UAVs (Unmanned Aerial Vehicles) or satellite imagery, which again come with certain limitations. This research addresses the need for a framework that combines data from different sources to support humanitarian logistics operations and prediction models. The focus is on developing a workflow to combine data from satellites and UAVs after a disaster strikes. A three-step approach is followed: first, the data requirements for logistics activities are made explicit by carrying out semi-structured interviews with field logistics workers. Second, the limitations of current data collection tools are analyzed to develop workaround solutions, following a systems design approach. Third, the data requirements and the developed workaround solutions are fitted together into a coherent workflow. The outcome of this research will provide a new method for logisticians to have immediate access to accurate and reliable data to support data-driven decision making.

Keywords: unmanned aerial vehicles, damage prediction models, remote sensing, data driven decision making

Procedia PDF Downloads 355
289 Application of Flory-Patterson’s Theory on the Volumetric Properties of Liquid Mixtures: 1,2-Dichloroethane with Aliphatic and Cyclic Ethers

Authors: Linda Boussaid, Farid Brahim Belaribi

Abstract:

The physico-chemical properties of liquid materials in the industrial field in general, and in the chemical industries in particular, constitute a prerequisite for the design of equipment, for the resolution of specific problems (related to purification and separation techniques, risks in the transport of certain materials, etc.) and, therefore, for the production stage. Chloroalkanes and ethers are chemical families of industrial, theoretical and environmental interest. For example, these compounds are used in various applications in the chemical and pharmaceutical industries. In addition, they contribute to the particular thermodynamic behavior (deviation from ideality, association, etc.) of certain mixtures, which constitutes a severe test for predictive theoretical models. Finally, due to the degradation of the environment worldwide, a renewed interest is observed in ethers, because some of their physicochemical properties could contribute to lower pollution (ethers could be used as additives in aqueous fuels). This work is an experimental and theoretical thermodynamic study of the volumetric properties of binary liquid systems formed from compounds belonging to the chloroalkane and ether chemical families. The densities and excess volumes of the systems studied were determined experimentally at different temperatures in the interval [278.15–333.15] K and at atmospheric pressure, using an Anton Paar DMA 5000 vibrating-tube densimeter.
This contribution of experimental data on the volumetric properties of the binary liquid mixtures of 1,2-dichloroethane with an ether, supplemented by an application of the theoretical Prigogine-Flory-Patterson (PFP) model, will contribute to the enrichment of the thermodynamic database and the further development of Flory's theory in its Prigogine-Flory-Patterson version, for a better understanding of the thermodynamic behavior of these binary liquid mixtures.
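The excess molar volume reported in studies of this kind is typically obtained from the measured mixture density and the pure-component densities via V^E = (x1·M1 + x2·M2)/ρ_mix − x1·M1/ρ1 − x2·M2/ρ2. The numerical values in this sketch (diethyl ether as the second component, approximate room-temperature densities) are illustrative, not measured data from the study:

```python
# Excess molar volume from densities. M in g/mol, rho in g/cm^3,
# result in cm^3/mol. Values below are illustrative, not measured.

def excess_molar_volume(x1, M1, rho1, M2, rho2, rho_mix):
    """V^E = V_mix - (ideal mixing volume of the pure components)."""
    x2 = 1.0 - x1
    v_mix = (x1 * M1 + x2 * M2) / rho_mix
    return v_mix - x1 * M1 / rho1 - x2 * M2 / rho2

# 1,2-dichloroethane (M = 98.96) + diethyl ether (M = 74.12), equimolar
ve = excess_molar_volume(0.5, 98.96, 1.2458, 74.12, 0.7137, 0.9460)
print(round(ve, 3))   # negative V^E: the mixture packs tighter
```

A negative V^E, as here, indicates volume contraction on mixing, the kind of deviation from ideality the PFP model is tested against.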

Keywords: Prigogine-Flory-Patterson (PFP), volumetric properties, excess volume, ethers

Procedia PDF Downloads 71
288 Mechanical Testing of Composite Materials for Monocoque Design in Formula Student Car

Authors: Erik Vassøy Olsen, Hirpa G. Lemu

Abstract:

Inspired by the Formula 1 competition, IMechE (Institution of Mechanical Engineers) and Formula SAE (Society of Automotive Engineers) organize annual competitions for university and college students worldwide to compete with a single-seat race car they have designed and built. The design of the chassis or frame is a key component of the competition because the weight and stiffness properties are directly related to the performance of the car and the safety of the driver. In addition, a reduced chassis weight has a direct influence on the design of other components in the car; among others, it improves the power-to-weight ratio and the aerodynamic performance. As the power output of the engine or the battery installed in the car is limited to 80 kW, increasing the power-to-weight ratio demands reduction of the weight of the chassis, which represents the major part of the weight of the car. In order to reduce the weight of the car, the ION Racing team from the University of Stavanger, Norway, opted for a monocoque design. To fulfil the above-mentioned chassis requirements, the monocoque design should provide sufficient torsional stiffness and absorb the impact energy in case of a possible collision. The study reported in this article is based on the requirements for the Formula Student competition. As part of this study, diverse mechanical tests were conducted to determine the mechanical properties and performance of the monocoque design. Following a comprehensive theoretical study of the mechanical properties of sandwich composite materials and the requirements for monocoque design in the competition rules, diverse tests were conducted, including a 3-point bending test, a perimeter shear test and a test for absorbed energy. The test panels were made in-house with a size equivalent to the side impact zone of the monocoque, i.e. 275 mm x 500 mm, so that the obtained results can be representative.
Different layups of the test panels with identical core material and the same number of layers of carbon fibre were tested and compared, and the influence of the core material thickness was also studied. Furthermore, analytical calculations and numerical analyses were conducted to check compliance with the stated rules for structural equivalency with steel grade SAE/AISI 1010. The test results were also compared with calculated results with respect to bending and torsional stiffness, energy absorption, buckling, etc. The obtained results demonstrate that the material composition and strength of the composite material selected for the monocoque design have structural properties equivalent to those of a welded frame and thus comply with the competition requirements. The developed analytical calculation algorithms and relations will be useful for future monocoque designs with different lay-ups and compositions.

Keywords: composite material, Formula student, ION racing, monocoque design, structural equivalence

Procedia PDF Downloads 479
287 A Trend Based Forecasting Framework of the ATA Method and Its Performance on the M3-Competition Data

Authors: H. Taylan Selamlar, I. Yavuz, G. Yapar

Abstract:

It is difficult to make predictions, especially about the future, and making accurate predictions is not always easy. However, better predictions remain the foundation of all science, so the development of accurate, robust and reliable forecasting methods is very important. Numerous forecasting methods have been proposed and studied in the literature. Two major forecasting methods still dominate: Box-Jenkins ARIMA and Exponential Smoothing (ES), and new methods continue to be derived from or inspired by them. After more than 50 years of widespread use, exponential smoothing is still one of the most practically relevant forecasting methods available due to its simplicity, robustness and accuracy as an automatic forecasting procedure, especially in the famous M-Competitions. Despite their success and widespread use in many areas, ES models have some shortcomings that negatively affect the accuracy of forecasts. Therefore, this study proposes a new forecasting method, called the ATA method, to cope with these shortcomings. The new method is obtained from traditional ES models by modifying the smoothing parameters; both methods therefore have similar structural forms, and ATA can be easily adapted to each of the individual ES models, yet ATA has many advantages due to its innovative new weighting scheme. In this paper, the focus is on modeling the trend component and handling seasonality patterns by utilizing classical decomposition. The ATA method is therefore expanded to higher-order ES methods for additive, multiplicative, additive damped and multiplicative damped trend components. The proposed models, called ATA trended models, are compared in predictive performance to their counterpart ES models on the M3-Competition data set, since it is still the most recent and comprehensive time-series data collection available.
It is shown that the models outperform their counterparts in almost all settings, and when model selection is carried out amongst these trended models, ATA outperforms all of the competitors in the M3-Competition for both short-term and long-term forecasting horizons when the models' forecasting accuracies are compared based on popular error metrics.
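The modified weighting scheme can be illustrated at the level-only stage: where simple exponential smoothing uses a fixed weight α, the ATA literature describes a time-varying weight p/t. The level-only sketch below is an illustration of that weighting idea under this assumption, not the full trended or seasonal ATA forms evaluated in the paper:

```python
# Contrast simple exponential smoothing (fixed weight alpha) with an
# ATA-style level update using the time-varying weight p/t.

def ses(series, alpha=0.3):
    """Simple ES: S_t = alpha*X_t + (1 - alpha)*S_{t-1}."""
    s = series[0]
    for x in series[1:]:
        s = alpha * x + (1.0 - alpha) * s
    return s

def ata_level(series, p=1):
    """ATA-style level: S_t = (p/t)*X_t + ((t-p)/t)*S_{t-1} for t > p;
    for t <= p the observation is taken as the level."""
    s = series[0]
    for t, x in enumerate(series[1:], start=2):
        if t <= p:
            s = x
        else:
            s = (p / t) * x + ((t - p) / t) * s
    return s

data = [10.0, 12.0, 11.0, 13.0, 12.5, 14.0]
print(round(ses(data), 3), round(ata_level(data), 3))
```

Note how the ATA weight p/t shrinks as more data arrive, so early observations are not discounted as aggressively as under a fixed α.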

Keywords: accuracy, exponential smoothing, forecasting, initial value

Procedia PDF Downloads 157
286 Systematic Review of Associations between Interoception, Vagal Tone, and Emotional Regulation

Authors: Darren Edwards, Thomas Pinna

Abstract:

Background: Interoception and heart rate variability have been found to predict outcomes of mental health and well-being. However, these have usually been investigated independently of one another. Objectives: This review aimed to explore the associations of interoception and heart rate variability (HRV) with emotion regulation (ER) and ER strategies in the existing literature, utilizing systematic review methodology. Methods: Article retrieval and selection followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The PsycINFO, Web of Science, PubMed, CINAHL, and MEDLINE databases were scanned for published papers. Preliminary inclusion and exclusion criteria were specified following the patient, intervention, comparison, and outcome (PICO) framework, whilst the checklist for critical appraisal and data extraction for systematic reviews of prediction modelling studies (CHARMS) framework was used to help formulate the research question and to critically assess the identified full-length articles for bias. Results: 237 studies were identified in the initial database searches. Of these, eight studies were included in the final selection: six explored the associations between HRV and ER, whilst three investigated the associations between interoception and ER (one of which was included in the HRV selection too). Overall, the results show that greater HRV and interoception are associated with better ER. Specifically, high parasympathetic activity largely predicted the use of adaptive ER strategies such as reappraisal, and better acceptance of emotions. High interoception, instead, was predictive of effective down-regulation of negative emotions and handling of social uncertainty, although there was no association with any specific ER strategy.
Conclusions: Awareness of one's own bodily feelings and vagal activation seem to be of central importance for the effective regulation of emotional responses.

Keywords: emotional regulation, vagal tone, interoception, chronic conditions, health and well-being, psychological flexibility

Procedia PDF Downloads 90
285 Empirical Testing of Hofstede’s Measures of National Culture: A Study in Four Countries

Authors: Nebojša Janićijević

Abstract:

At the end of 1970s, Dutch researcher Geert Hofstede, had conducted an enormous empirical research on the differences between national cultures. In his huge research, he had identified four dimensions of national culture according to which national cultures differ and determined the index for every dimension of national culture for each country that took part in his research. The index showed a country’s position on the continuum between the two extreme poles of the cultural dimensions. Since more than 40 years have passed since Hofstede's research, there is a doubt that, due to the changes in national cultures during that period, they are no longer a good basis for research. The aim of this research is to check the validity of Hofstee's indices of national culture The empirical study conducted in the branches of a multinational company in Serbia, France, the Netherlands and Denmark aimed to determine whether Hofstede’s measures of national culture dimensions are still valid. The sample consisted of 155 employees of one multinational company, where 40 employees came from three countries and 35 employees were from Serbia. The questionnaire that analyzed the positions of national cultures according to the Hofstede’s four dimensions was formulated on the basis of the initial Hofstede’s questionnaire, but it was much shorter and significantly simplified comparing to the original questionnaire. Such instrument had already been used in earlier researches. A statistical analysis of the obtained questionnaire results was done by a simple calculation of the frequency of the provided answers. Due to the limitations in methodology, sample size, instrument, and applied statistical methods, the aim of the study was not to explicitly test the accuracy Hofstede’s indexes but to enlighten the general position of the four observed countries in national culture dimensions and their mutual relations. 
The study results indicated that the positions of the four observed national cultures (Serbia, France, the Netherlands and Denmark) are precisely the same in three out of four dimensions as Hofstede described in his research. Furthermore, the differences between the national cultures and the relative relations between their positions in three dimensions of national culture correspond to Hofstede's results. The only deviation from Hofstede's results concerns the masculinity–femininity dimension. In addition, the study revealed that the degree of power distance is a determinant in the choice of leadership style. It was found that national cultures with high power distance, like Serbia and France, favor one of the two authoritative leadership styles. On the other hand, countries with low power distance, such as the Netherlands and Denmark, prefer one of the forms of democratic leadership style. This confirms Hofstede's premises about the impact of power distance on leadership style. The key contribution of the study is that Hofstede's national culture indices are still a reliable tool for measuring the positions of countries in the national culture dimensions, and they can be applied in cross-cultural research in management. That was at least the case with the four observed countries: Serbia, France, the Netherlands, and Denmark.

Keywords: national culture, leadership styles, power distance, collectivism, masculinity, uncertainty avoidance

Procedia PDF Downloads 45
284 Developing a DNN Model for the Production of Biogas From a Hybrid BO-TPE System in an Anaerobic Wastewater Treatment Plant

Authors: Hadjer Sadoune, Liza Lamini, Scherazade Krim, Amel Djouadi, Rachida Rihani

Abstract:

Deep neural networks are highly regarded for their accuracy in predicting intricate fermentation processes, and their ability to learn from large datasets makes them particularly effective models. The primary obstacle in improving the performance of these models is to carefully choose suitable hyperparameters, including the neural network architecture (number of hidden layers and hidden units), activation function, optimizer, learning rate, and other relevant factors. This study predicts biogas production from real wastewater treatment plant data using hybrid Bayesian optimization with a tree-structured Parzen estimator (BO-TPE) to optimise a deep neural network (DNN) model. The plant utilizes an Upflow Anaerobic Sludge Blanket (UASB) digester that treats industrial wastewater from soft drinks and breweries. The digester has a working volume of 1574 m³ and a total volume of 1914 m³; its internal diameter and height are 19 m and 7.14 m, respectively. Data preprocessing was conducted with meticulous attention to preserving data quality while avoiding data reduction. Three normalization techniques were applied to the pre-processed data (MinMaxScaler, RobustScaler and StandardScaler) and compared with the non-normalized data. The RobustScaler approach showed strong predictive ability for estimating the volume of biogas produced. The highest predicted biogas volume was 2236.105 Nm³/d, with coefficient of determination (R²), mean absolute error (MAE), and root mean square error (RMSE) values of 0.712, 164.610, and 223.429, respectively.
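The reported R², MAE, and RMSE values summarize how closely the optimized DNN tracks the measured biogas volumes. As a minimal plain-Python illustration of those three metrics (a sketch for orientation, not the authors' code):

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean square error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot
```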

Keywords: anaerobic digestion, biogas production, deep neural network, hybrid bo-tpe, hyperparameters tuning

Procedia PDF Downloads 16
283 Factors Associated with Acute Kidney Injury in Multiple Trauma Patients with Rhabdomyolysis

Authors: Yong Hwang, Kang Yeol Suh, Yundeok Jang, Tae Hoon Kim

Abstract:

Introduction: Rhabdomyolysis is a syndrome characterized by muscle necrosis and the release of intracellular muscle constituents into the circulation. Acute kidney injury (AKI) is a potential complication of severe rhabdomyolysis, and the prognosis is substantially worse if renal failure develops. We tried to identify the factors predictive of AKI in severe trauma patients with rhabdomyolysis. Methods: This retrospective study was conducted at the emergency department of a level I trauma center. Patients were enrolled from Oct. 2012 to June 2016 if they had acute multiple trauma, were older than 18 years, and had an initial creatine phosphokinase (CPK) level higher than 1000 IU. We collected demographic data (age, gender, length of hospital stay, and patient outcome), laboratory data (ABGA, lactate, hemoglobin, hematocrit, platelets, LDH, myoglobin, liver enzymes, and BUN/Cr), and clinical data (injury mechanism, RTS, ISS, AIS, and TRISS). The data were compared and analyzed between the AKI and non-AKI groups. Statistical analyses were performed using IBM SPSS Statistics 20.0 for Windows. Results: Three hundred sixty-four patients were enrolled: ninety-six in the AKI group and two hundred sixty-eight in the non-AKI group. The base excess (HCO3), AST/ALT, LDH, and myoglobin were significantly higher in the AKI group than in the non-AKI group (p ≤ 0.05). The Injury Severity Score (ISS), Revised Trauma Score (RTS), and Abbreviated Injury Scale 3 and 4 (AIS 3 and 4) showed significant differences in the clinical data. CPK levels increased on the first and second days but slightly decreased from the third day in both groups. Seven patients in the AKI group received hemodialysis treatment despite the bleeding risk and survived. Conclusion: We recommend that HCO3, CPK, LDH, and myoglobin be checked, and that attention be paid to ISS, RTS, and AIS together with the injury mechanism, at the early stage of treatment in the emergency department.

Keywords: acute kidney injury, emergencies, multiple trauma, rhabdomyolysis

Procedia PDF Downloads 311
282 Optimization of Ultrasound Assisted Extraction of Polysaccharides from Plant Waste Materials: Selected Model Material is Hazelnut Skin

Authors: T. Yılmaz, Ş. Tavman

Abstract:

In this study, optimization of ultrasound assisted extraction (UAE) of hemicellulose-based polysaccharides from plant waste material was studied; the selected material is hazelnut skin. The extraction variables are extraction time, amplitude and application temperature. Optimum conditions were evaluated based on responses such as the amount of wet crude polysaccharide, total carbohydrate content and dried sample. Pretreated hazelnut skin powders were used for the experiments. Ten grams of sample were suspended in 100 ml of water in a jacketed vessel with additional magnetic stirring, and the mixture was sonicated with an immersed ultrasonic probe processor. After the extraction procedures, the ethanol-soluble and -insoluble fractions were separated for further examination. The experimental data were analyzed by analysis of variance (ANOVA), and second-order polynomial models were developed using multiple regression analysis. The individual and interactive effects of the applied variables were evaluated with a Box-Behnken design. The models developed from the experimental design were predictive and fitted the experimental data well, with a high correlation coefficient (R² above 0.95). The polysaccharides extracted from hazelnut skin are assumed to be pectic polysaccharides according to a literature survey of the Fourier Transform Infrared Spectroscopy (FTIR) analysis results; no further change was observed between spectra at different sonication times. Application of UAE at the optimized conditions has an important effect on the extraction of hemicellulose from plant material by providing partial hydrolysis that breaks the bonds with other components of the plant cell wall. This effect can be attributed to the varied intensity of microjets and microstreaming at different sonication conditions.
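For three factors (time, amplitude, temperature), a Box-Behnken design combines paired ±1 factor levels with the remaining factor held at its center level, plus center runs. A small illustrative generator of the coded design points (a sketch of the design's structure, not the software the authors used):

```python
from itertools import combinations, product

def box_behnken(n_factors, center_runs=1):
    """Coded (-1, 0, +1) design points of a Box-Behnken design."""
    points = []
    # For each pair of factors, take all four (+/-1, +/-1) combinations,
    # with every other factor fixed at its center level 0.
    for i, j in combinations(range(n_factors), 2):
        for a, b in product((-1, 1), repeat=2):
            point = [0] * n_factors
            point[i], point[j] = a, b
            points.append(tuple(point))
    # Replicated center runs estimate pure error.
    points.extend([(0,) * n_factors] * center_runs)
    return points
```

For three factors this yields the familiar 12 edge-midpoint runs plus the chosen number of center runs.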

Keywords: hazelnut skin, optimization, polysaccharide, ultrasound assisted extraction

Procedia PDF Downloads 308
281 Simulation of GAG-Analogue Biomimetics for Intervertebral Disc Repair

Authors: Dafna Knani, Sarit S. Sivan

Abstract:

Aggrecan, one of the main components of the intervertebral disc (IVD), belongs to the family of proteoglycans (PGs), which are composed of glycosaminoglycan (GAG) chains covalently attached to a core protein. Its primary function is to maintain tissue hydration, and hence disc height, under the high loads imposed by muscle activity and body weight. Significant PG loss is one of the first indications of disc degeneration. A possible way to recover disc function is to inject a synthetic hydrogel into the joint cavity, mimicking the role of PGs. One class of hydrogels proposed is GAG analogues, based on sulfate-containing polymers, which are responsible for hydration in disc tissue. In the present work, we used molecular dynamics (MD) to study the effect of hydrogel crosslinking (type and degree) on the swelling behavior of the suggested GAG-analogue biomimetics by calculating the cohesive energy density (CED), solubility parameter, enthalpy of mixing (ΔEmix) and the interactions between the molecules in their pure form and as a mixture with water. The simulation results showed that hydrophobicity plays an important role in the swelling of the hydrogel, as indicated by the linear correlation observed between the solubility parameter values of the copolymers and the crosslinker weight ratio (w/w); this correlation was found useful in predicting the amount of PEGDA needed for the desired hydration behavior of (CS)₄-peptide. Enthalpy of mixing calculations showed that all the GAG analogues, (CS)₄ and (CS)₄-peptide, are water-soluble; radial distribution function analysis revealed that they form interactions with water molecules, which is important for the hydration process. To conclude, our simulation results, beyond supporting the experimental data, can serve as a predictive tool in the future development of biomaterials, such as disc replacements.
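The solubility parameter and enthalpy-of-mixing quantities follow standard Hildebrand-style relations: δ = √CED, and, in one common volume-fraction form, ΔE_mix = φ₁·CED₁ + φ₂·CED₂ − CED_mix. A minimal sketch of these relations (the volume-fraction form and symbol names are our assumptions, not taken from the paper):

```python
import math

def solubility_parameter(ced):
    """Hildebrand solubility parameter delta = sqrt(CED)."""
    return math.sqrt(ced)

def delta_e_mix(phi1, ced1, phi2, ced2, ced_mix):
    """Enthalpy of mixing per unit volume from cohesive energy densities,
    weighted by the volume fractions phi1 and phi2 of the two components."""
    return phi1 * ced1 + phi2 * ced2 - ced_mix
```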

Keywords: molecular dynamics, proteoglycans, enthalpy of mixing, swelling

Procedia PDF Downloads 48
280 The Utility of Sonographic Features of Lymph Nodes during EBUS-TBNA for Predicting Malignancy

Authors: Atefeh Abedini, Fatemeh Razavi, Mihan Pourabdollah Toutkaboni, Hossein Mehravaran, Arda Kiani

Abstract:

In countries with a high prevalence of tuberculosis, such as Iran, the differentiation of malignant from non-malignant lymph nodes is very important. In this study, conducted for the first time in the Iranian population, ultrasonographic morphological characteristics observed during EBUS were used to distinguish non-malignant from malignant lymph nodes. The morphological characteristics of the lymph nodes, consisting of size, shape, vascular pattern, echogenicity, margin, coagulation necrosis sign, calcification, and central hilar structure, were obtained during Endobronchial Ultrasound-Guided Transbronchial Needle Aspiration (EBUS-TBNA) and compared with the final pathology results. During the study period, a total of 253 lymph nodes were evaluated in 93 cases. Round shape, non-hilar vascular pattern, heterogeneous echogenicity, hyperechogenicity, distinct margin, and the presence of the necrosis sign were significantly more frequent in malignant nodes. On the other hand, the presence of calcification and of central hilar structure was significantly more frequent in benign nodes (p-value ˂ 0.05). Multivariate logistic regression showed that size > 1 cm, heterogeneous echogenicity, hyperechogenicity, the presence of the necrosis sign, and the absence of central hilar structure are independent predictive factors for malignancy, with accuracies of 42.29%, 71.54%, 71.90%, 73.51%, and 65.61%, respectively. Of 74 malignant lymph nodes, 100% had at least one of these independent factors. According to our results, the morphological characteristics of lymph nodes assessed during EBUS-TBNA can play a role in the prediction of malignancy.

Keywords: EBUS-TBNA, malignancy, nodal characteristics, pathology

Procedia PDF Downloads 113
279 Nutritional Profile and Food Intake Trends amongst Hospital Dieted Diabetic Eye Disease Patients of India

Authors: Parmeet Kaur, Nighat Yaseen Sofi, Shakti Kumar Gupta, Veena Pandey, Rajvaedhan Azad

Abstract:

Nutritional status and prevailing blood glucose trends among hospitalized patients have been linked to clinical outcome. The present study was therefore undertaken to assess the anthropometric and dietary intake trends of hospitalized Diabetic Eye Disease (DED) patients. DED patients older than 20 years with type 1 or type 2 diabetes were enrolled. Actual food intake was determined by the weighed food record method. The Mifflin-St Jeor predictive equation, multiplied by a combined stress and activity factor of 1.3, was applied to estimate caloric needs. A questionnaire was further administered to obtain reasons for inadequate dietary intake. Results supported the joint analysis of body mass index and waist circumference for clinical risk prediction. Dietary data showed a significant difference (p < 0.0005) between average daily caloric and carbohydrate intake and actual daily caloric and carbohydrate needs. Mean fasting and post-prandial plasma glucose levels were 150.71 ± 72.200 mg/dL and 219.76 ± 97.365 mg/dL, respectively. Improvements in food delivery systems and nutrition education were indicated to reduce plate waste and to enable a better understanding of the dietary aspects of diabetes management. A team approach of nurses, physicians and other health care providers is required besides the expertise of dietetics professionals. To conclude, the findings of the present study will be useful in planning the nutrition care process (NCP) for optimizing glucose control as a component of quality medical nutrition therapy (MNT) in hospitalized DED patients.
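The Mifflin-St Jeor equation estimates resting energy expenditure as REE = 10·weight(kg) + 6.25·height(cm) − 5·age(y) + 5 for men (−161 for women); the study multiplies this by a combined stress and activity factor of 1.3. A small sketch of the calculation:

```python
def mifflin_st_jeor(weight_kg, height_cm, age_yr, male):
    """Resting energy expenditure (kcal/day) by the Mifflin-St Jeor equation."""
    offset = 5 if male else -161
    return 10 * weight_kg + 6.25 * height_cm - 5 * age_yr + offset

def estimated_needs(weight_kg, height_cm, age_yr, male, factor=1.3):
    """Estimated caloric needs: REE times a combined stress/activity factor."""
    return mifflin_st_jeor(weight_kg, height_cm, age_yr, male) * factor
```

For example, a 70 kg, 170 cm, 50-year-old man has an REE of 1517.5 kcal/day, or roughly 1973 kcal/day after applying the 1.3 factor.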

Keywords: nutritional status, diabetic eye disease, nutrition care process, medical nutrition therapy

Procedia PDF Downloads 334
278 An Automated Stock Investment System Using Machine Learning Techniques: An Application in Australia

Authors: Carol Anne Hargreaves

Abstract:

A key issue in stock investment is how to select representative features for stock selection. The first objective of this paper is to determine whether an automated stock investment system, using machine learning techniques, can identify a portfolio of growth stocks that are highly likely to provide returns better than the stock market index. The second objective is to identify the technical features that best characterize whether a stock's price is likely to go up, and the most important factors and their contribution to predicting the likelihood of the stock price going up. Unsupervised machine learning techniques, such as cluster analysis, were applied to the stock data to identify a cluster of stocks likely to go up in price (portfolio 1). Next, the principal component analysis technique was used to select stocks that rated highly on components one and two (portfolio 2). Thirdly, a supervised machine learning technique, logistic regression, was used to select stocks with a high probability of their price going up (portfolio 3). The predictive models were validated with metrics such as sensitivity (recall), specificity and overall accuracy; all accuracy measures were above 70%. The top three stocks were selected for each of the three stock portfolios and traded in the market for one month. After one month, the return for each stock portfolio was computed and compared with the stock market index return. The returns were 23.87% for the principal component analysis portfolio, 11.65% for the logistic regression portfolio and 8.88% for the K-means cluster portfolio, while the stock market returned 0.38%; all three portfolios outperformed the market by more than eight times. This study confirms that an automated stock investment system using machine learning techniques can identify top-performing stock portfolios that outperform the stock market.
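The supervised step (portfolio 3) scores each stock with the probability of its price going up. A toy gradient-descent logistic regression in plain Python, purely illustrative (the authors' feature set and implementation are not specified here):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit weights and bias by stochastic gradient descent on log-loss.
    X: list of feature vectors; y: 0/1 labels (1 = price went up)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss with respect to the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict_prob(w, b, x):
    """Estimated probability that the stock's price goes up."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)
```

Stocks whose predicted probability exceeds a chosen threshold would then be candidates for the portfolio.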

Keywords: machine learning, stock market trading, logistic regression, cluster analysis, factor analysis, decision trees, neural networks, automated stock investment system

Procedia PDF Downloads 134
277 Basics of Gamma Ray Burst and Its Afterglow

Authors: Swapnil Kumar Singh

Abstract:

Gamma-ray bursts (GRBs), short and intense pulses of soft γ-rays, have fascinated astronomers and astrophysicists since their unexpected discovery in the late sixties. GRBs are accompanied by long-lasting afterglows, and they are associated with core-collapse supernovae. The detection of delayed emission at X-ray, optical, and radio wavelengths, or "afterglow", following a γ-ray burst can be described as the emission of a relativistic shell decelerating upon collision with the interstellar medium. While there is strong diversity amongst the afterglow population, probably reflecting diversity in the energy, luminosity, shock efficiency, baryon loading, progenitor properties, circumstellar medium, and more, the afterglows of GRBs appear more similar than the bursts themselves, and it is possible to identify common features within afterglows that lead to some canonical expectations. After an initial flash of gamma rays, a longer-lived "afterglow" is usually emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, microwave, and radio): a slowly fading emission created by collisions between the burst ejecta and interstellar gas. At X-ray wavelengths, the GRB afterglow fades quickly at first and then transitions to a less steep decline (the later phases show additional structure that is not discussed here). During these early phases, the X-ray afterglow has a spectrum that follows a power law: flux F ∝ E^β, where E is the energy and β is the spectral index. This kind of spectrum is characteristic of synchrotron emission, which is produced when charged particles spiral around magnetic field lines at close to the speed of light. In addition to the outgoing forward shock that ploughs into the interstellar medium, there is also a so-called reverse shock, which propagates backward through the ejecta.
In many ways, "reverse shock" can be misleading: this shock is still moving outward from the rest frame of the star at relativistic velocity, but it is ploughing backward through the ejecta in their frame and is slowing the expansion. The reverse shock can be dynamically important, as it can carry energy comparable to the forward shock. The early phases of the GRB afterglow are still well described even if the GRB is highly collimated, since the individual emitting regions of the outflow are not in causal contact at large angles and so behave as though they are expanding isotropically. The majority of afterglows, at the times typically observed, fall in the slow-cooling regime, and the cooling break lies between the optical and the X-ray. Numerous observations support this broad picture of the spectral energy distribution of the afterglows of very bright GRBs. The bluer light (optical and X-ray) appears to follow the typical synchrotron forward-shock expectation (the apparent features in the X-ray and optical spectrum are due to the presence of dust within the host galaxy). Further research in GRBs and particle physics is needed to unravel the mysteries of the afterglow.
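The power-law spectrum F ∝ E^β means that the flux at one energy fixes the flux at any other; a one-line illustration (β is negative for a falling spectrum):

```python
def power_law_flux(f0, e0, e, beta):
    """Flux at energy e, given flux f0 at reference energy e0, for F ~ E**beta."""
    return f0 * (e / e0) ** beta
```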

Keywords: GRB, synchrotron, X-ray, isotropic energy

Procedia PDF Downloads 70
276 Derivatives Balance Method for Linear and Nonlinear Control Systems

Authors: Musaab Mohammed Ahmed Ali, Vladimir Vodichev

Abstract:

This work deals with a universal control technique, a single controller, for linear and nonlinear stabilization and tracking control systems. These systems may be structured as SISO or MIMO, and the parameters of the controlled plants can vary over a wide range. A novel control system design method is introduced: the construction of stable platform orbits using derivative balance, which solves the problem of preserving the transfer-function stability of a linear system under partial substitution by a rational function. The universal controller is proposed as a polar system with multiple orbits to simplify the design procedure, where each orbit represents a single order of the controller transfer function. The designed controller consists of proportional, integral and derivative terms and multiple feedback and feedforward loops. A synthesis method for the controller parameters is presented. In general, the controller parameters depend on a new polynomial equation in which all parameters are related to each other and take fixed values without any requirement for retuning. The simulation results show that the proposed universal controller can stabilize an unlimited number of linear and nonlinear plants and shape the desired prescribed performance. It is shown that sensor errors and poor performance are completely compensated and cannot affect system performance, and that the effects of disturbances and noise on the control loop are fully rejected. The technical and economic effects of using the proposed controller were investigated and compared to adaptive, predictive, and robust controllers. The economic analysis shows the advantage of a single controller with fixed parameters driving an unlimited number of plants, compared to the above-mentioned control techniques.
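The proportional-integral-derivative core of such a controller can be sketched in a few lines (a conventional discrete PID, shown only to fix terminology; the derivative-balance orbits and feedforward loops of the proposed universal controller are not reproduced here):

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        """Advance one sample period and return the control output."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```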

Keywords: derivative balance, fixed parameters, stable platform, universal control

Procedia PDF Downloads 112
275 Uterine Cervical Cancer; Early Treatment Assessment with T2- And Diffusion-Weighted MRI

Authors: Susanne Fridsten, Kristina Hellman, Anders Sundin, Lennart Blomqvist

Abstract:

Background: Patients diagnosed with locally advanced cervical carcinoma are treated with definitive concomitant chemoradiotherapy. Treatment failure occurs in 30-50% of patients, with a very poor prognosis. The treatment is standardized, with a risk of both over- and undertreatment. Consequently, there is a great need for biomarkers able to predict therapy outcome and allow for individualized treatment. Aim: To explore the role of T2- and diffusion-weighted magnetic resonance imaging (MRI) in the early prediction of therapy outcome, and to determine the optimal time point for assessment. Methods: A pilot study including 15 patients with cervical carcinoma stage IIB-IIIB (FIGO 2009) undergoing definitive chemoradiotherapy. All patients underwent MRI four times: at baseline and 3 weeks, 5 weeks, and 12 weeks after treatment started. Tumor size, size change (∆size), visibility on diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC) and change of ADC (∆ADC) at the different time points were recorded. Results: 7/15 patients relapsed during the study period, referred to as "poor prognosis" (PP); the remaining eight patients are referred to as "good prognosis" (GP). The tumor was larger at all time points in PP than in GP. The ∆size between any of the four time points was the same for PP and GP patients. The sensitivity and specificity for predicting the prognostic group from a remaining tumor on DWI were highest at 5 weeks: 83% (5/6) and 63% (5/8), respectively. The combination of tumor size at baseline and remaining tumor on DWI at 5 weeks reached an area under the curve (AUC) of 0.83 in ROC analysis. After 12 weeks, no remaining tumor was seen on DWI among patients with GP, as opposed to 2/7 PP patients. Adding ADC to the tumor size measurements did not improve the predictive value at any time point. Conclusion: A large tumor at baseline MRI combined with a remaining tumor on DWI at 5 weeks predicted a poor prognosis.
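The reported 83% (5/6) sensitivity and 63% (5/8) specificity at 5 weeks follow directly from the 2×2 counts; as a quick sketch:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)
```

With 5 of 6 PP patients DWI-positive and 5 of 8 GP patients DWI-negative, this reproduces 5/6 ≈ 0.83 and 5/8 = 0.625.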

Keywords: chemoradiotherapy, diffusion-weighted imaging, magnetic resonance imaging, uterine cervical carcinoma

Procedia PDF Downloads 123
274 A Qualitative Exploration of the Beliefs and Experiences of HIV-Related Self-Stigma Amongst Young Adults Living with HIV in Zimbabwe

Authors: Camille Rich, Nadine Ferris France, Ann Nolan, Webster Mavhu, Vongai Munatsi

Abstract:

Background and Aim: Zimbabwe has one of the highest HIV rates in the world, with a 12.7% adult prevalence rate. Young adults are a key group affected by HIV, and one-third of all new infections in Zimbabwe are among people aged 18-24 years. Stigma remains one of the main barriers to managing and reducing the HIV crisis, especially for young adults. There are several types of stigma, including enacted stigma, the outward discrimination towards someone, and self-stigma, the negative self-judgments one holds against oneself. Self-stigma can have severe consequences, including feelings of worthlessness, shame, suicidal thoughts, and avoidance of medical help, which can have detrimental effects on those living with HIV. However, the unique beliefs and impacts of self-stigma amongst key groups living with HIV have not yet been explored. Therefore, the focus of this study is on the beliefs and experiences of HIV-related self-stigma as experienced by young adults living in Harare, Zimbabwe. Research Methods: A qualitative approach was taken for this study, using sixteen semi-structured interviews with young adults (18-24 years) living with HIV in Harare. Participants were conveniently and purposefully sampled as members of Africa, an organization dedicated to young people living with HIV. Interviews were conducted over Zoom due to the COVID-19 pandemic, recorded, and then coded using the software NVivo. The data were analyzed using both inductive and deductive thematic analysis to find common themes. Results: All of the participants experienced HIV-related self-stigma, and both beliefs and experiences were explored. The negative self-perceptions included beliefs of worthlessness, hopelessness, and negative body image. The young adults described believing they were not good enough to be around HIV-negative people, or that they could never be loved because of their HIV status.
Self-stigmatizing thoughts developed from internalizing negative cultural values, stereotypes about people living with HIV, and adverse experiences. Three main themes of self-stigmatizing experiences emerged: disclosure difficulties, relationship complications, and isolation. Fear of telling someone their status, rejection in a relationship, and exclusion by others due to their HIV status contributed to their self-stigma. These experiences caused feelings of loneliness, sadness, shame, fear, and low self-worth. Conclusions: This study explored the beliefs and experiences of HIV-related self-stigma among these young adults. The emergence of negative self-perceptions demonstrated deep-rooted beliefs of HIV-related self-stigma that adversely affect the participants. The negative self-perceptions and self-stigmatizing experiences caused the participants to feel worthless, hopeless, ashamed, and alone, negatively impacting their physical and mental health, personal relationships, and sense of self-identity. These results can now be used to develop interventions that target the specific beliefs and experiences of young adults living with HIV and reduce the adverse consequences of self-stigma.

Keywords: beliefs, HIV, self-stigma, stigma, Zimbabwe

Procedia PDF Downloads 91
273 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite

Authors: F. Lazzeri, I. Reiter

Abstract:

Energy production optimization has traditionally been very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there are a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R and on web services built and deployed with different components of Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that enables users to easily build, deploy, and share predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and Power BI, a suite of business analytics tools to analyze data and share insights. Our results show that Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful for predicting hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity and dew point) can play a crucial role in improving the accuracy of the forecasting solution. Data cleaning and feature engineering methods performed in R and different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile and ARIMA) will be presented, and results and performance metrics discussed.
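Hourly load forecasts are usually judged against simple baselines. A seasonal-naive baseline (repeat the value from the same hour of the previous day) in plain Python, as an illustration of the kind of reference the Boosted Decision Tree and Fast Forest Quantile models must beat (not code from the paper):

```python
def seasonal_naive(history, horizon, season=24):
    """Forecast each future step with the value from one season earlier.
    history: past hourly loads; horizon: number of hours to forecast."""
    if len(history) < season:
        raise ValueError("need at least one full season of history")
    return [history[-season + (h % season)] for h in range(horizon)]
```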

Keywords: time-series, feature engineering methods for forecasting, energy demand forecasting, Azure Machine Learning

Procedia PDF Downloads 278
272 Project Progress Prediction in Software Development Integrating Time Prediction Algorithms and Large Language Modeling

Authors: Dong Wu, Michael Grenn

Abstract:

Managing software projects effectively is crucial for meeting deadlines, ensuring quality, and managing resources well. Traditional methods often struggle to predict project timelines accurately due to uncertain schedules and complex data. This study addresses these challenges by combining time prediction algorithms with Large Language Models (LLMs). It makes use of real-world software project data to construct and validate a model. The model takes detailed project progress data, such as task completion dynamics, team interaction and development metrics, as its input and outputs predictions of project timelines. To evaluate the effectiveness of this model, a comprehensive methodology is employed, involving simulations and practical applications in a variety of real-world software project scenarios. This multifaceted evaluation strategy is designed to validate the model's role in enhancing forecast accuracy and elevating overall management efficiency, particularly in complex software project environments. The results indicate that the integration of time prediction algorithms with LLMs has the potential to optimize software project progress management, and the quantitative results suggest the effectiveness of the method in practical applications. In conclusion, this study demonstrates that integrating time prediction algorithms with LLMs can significantly improve the predictive accuracy and efficiency of software project management. This offers an advanced project management tool for the industry, with the potential to improve operational efficiency, optimize resource allocation, and ensure timely project completion.
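As a trivial stand-in for the time-prediction component (purely illustrative; the paper's algorithms and LLM-derived features are not specified here), cumulative task-completion data can be linearly extrapolated into a total-duration estimate:

```python
import math

def estimate_total_days(progress):
    """progress: cumulative fraction of the project complete at the end of
    each elapsed day (values in (0, 1]). Extrapolates the average daily
    completion rate to an estimate of the total project duration in days."""
    days_elapsed = len(progress)
    rate = progress[-1] / days_elapsed  # average fraction completed per day
    return math.ceil(1.0 / rate)
```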

Keywords: software project management, time prediction algorithms, large language models (LLMs), forecast accuracy, project progress prediction

Procedia PDF Downloads 53
271 The Medical Student Perspective on the Role of Doubt in Medical Education

Authors: Madhavi-Priya Singh, Liam Lowe, Farouk Arnaout, Ludmilla Pillay, Giordan Perez, Luke Mischker, Steve Costa

Abstract:

Introduction: An Emergency Department consultant identified the failure of medical students to complete the task of clerking a patient in its entirety. As six medical students on our first clinical placement, we recognised our own failure and endeavored to examine why this failure was consistent among all medical students that had been given this task, despite our best motivations as adult learners. Aim: Our aim is to understand and investigate the elements which impeded our ability to learn and perform as medical students in the clinical environment, with reference to the prescribed task. We also aim to generate a discussion around the delivery of medical education with potential solutions to these barriers. Methods: Six medical students gathered together to have a comprehensive reflective discussion to identify possible factors leading to the failure of the task. First, we thoroughly analysed the delivery of the instructions with reference to the literature to identify potential flaws. We then examined personal, social, ethical, and cultural factors which may have impacted our ability to complete the task in its entirety. Results: Through collation of our shared experiences, with support from discussion in the field of medical education and ethics, we identified two major areas that impacted our ability to complete the set task. First, we experienced an ethical conflict where we believed the inconvenience and potential harm inflicted on patients did not justify the positive impact the patient interaction would have on our medical learning. Second, we identified a lack of confidence stemming from multiple factors, including the conflict between preclinical and clinical learning, perceptions of perfectionism in the culture of medicine, and the influence of upward social comparison. 
Discussion: After these discussions, we found that the various factors we identified exacerbated the fears and doubts we already had about our own abilities and those of the medical education system. This doubt led us to avoid completing certain aspects of the prescribed tasks and further reinforced our vulnerability and perceived incompetence. Exploration of philosophical theories identified the importance of the role of doubt in education. We propose the need for further discussion around incorporating both pedagogic and andragogic teaching styles in clinical medical education and around the acceptance of doubt as a driver of our learning. Conclusion: Doubt will continue to permeate our thoughts and actions no matter what. The moral or psychological distress that arises from it is the key motivating factor for our avoidance of tasks. If we accept this doubt and education embraces it, it will no longer linger in the shadows as a negative and restrictive emotion but will fuel a brighter dialogue and positive learning experience, ultimately assisting us in achieving our full potential.

Keywords: ethics, medical student, doubt, medical education, faith

Procedia PDF Downloads 87
270 Evaluation of Classification Algorithms for Diagnosis of Asthma in Iranian Patients

Authors: Taha SamadSoltani, Peyman Rezaei Hachesu, Marjan GhaziSaeedi, Maryam Zolnoori

Abstract:

Introduction: Data mining is defined as the process of finding patterns and relationships in data to build predictive models. Its applications have extended into many sectors, including healthcare services. Medical data mining aims to solve real-world problems in the diagnosis and treatment of diseases, applying various techniques and algorithms that differ in accuracy and precision. The purpose of this study was to apply knowledge discovery and data mining techniques to the diagnosis of asthma based on patient symptoms and history. Method: Data mining comprises several steps with decisions to be made by the user: it starts with developing an understanding of the scope of the application and of prior knowledge in the area and identifying the knowledge discovery (KD) goal from the stakeholders' point of view, and it finishes with acting on the discovered knowledge by deploying it, integrating it with other systems, and documenting and reporting it. In this study, a stepwise methodology was followed to achieve a logical outcome. Results: The sensitivity, specificity, and accuracy of the KNN, SVM, Naïve Bayes, neural network (NN), classification tree, and CN2 algorithms were evaluated and compared with related studies, and ROC curves were plotted to show the performance of the system. Conclusion: The results show that asthma can be diagnosed with approximately ninety percent accuracy based on demographic and clinical data. The study also showed that methods based on pattern discovery and data mining have a higher sensitivity than expert and knowledge-based systems. On the other hand, since medical guidelines and evidence-based medicine should remain the basis of diagnostic methods, it is recommended that machine learning algorithms be used in combination with knowledge-based algorithms.
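The abstract compares several classifiers on sensitivity, specificity, accuracy, and ROC performance but does not publish its dataset or code. As a minimal sketch of that evaluation loop, the same comparison can be run with scikit-learn on synthetic data (all data and parameters below are invented for illustration; CN2 has no scikit-learn implementation and is omitted):

```python
# Sketch only: synthetic stand-in for the study's asthma symptom/history data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score

# Binary "asthma / no asthma" toy dataset (not the authors' data).
X, y = make_classification(n_samples=500, n_features=10, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(probability=True, random_state=0),
    "Naive Bayes": GaussianNB(),
    "Classification tree": DecisionTreeClassifier(random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
    sensitivity = tp / (tp + fn)            # true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: sens={sensitivity:.2f} spec={specificity:.2f} "
          f"acc={accuracy:.2f} AUC={auc:.2f}")
```

The AUC values computed here are what an ROC-curve plot of each classifier would summarize; a real replication would substitute the clinical dataset and add the NN and CN2 learners.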

Keywords: asthma, data mining, classification, machine learning

Procedia PDF Downloads 426
269 Domains of Socialization Interview: Development and Psychometric Properties

Authors: Dilek Saritas Atalar, Cansu Alsancak Akbulut, İrem Metin Orta, Feyza Yön, Zeynep Yenen, Joan Grusec

Abstract:

Objective: The aim of this study was to develop a semi-structured Domains of Socialization Interview and its coding manual and to test their psychometric properties. The Domains of Socialization Interview was designed to assess maternal awareness regarding effective parenting in five socialization domains (protection, mutual reciprocity, control, guided learning, and group participation) within the framework of the domains-of-socialization approach. Method: A series of two studies was conducted to develop and validate the interview and its coding manual. The pilot study, which sampled 13 mothers of preschool-aged children, was conducted to develop the assessment tools and to test their function and clarity. Participants of the main study were 82 Turkish mothers (Mage = 34.25 years, SD = 3.53) of children aged 35-76 months (Mage = 50.75 months, SD = 11.24). Mothers filled in a questionnaire package including the Coping with Children’s Negative Emotions Questionnaire, the Social Competence and Behavior Evaluation-30, the Child Rearing Questionnaire, and the Two-Dimensional Social Desirability Questionnaire. Afterward, interviews were conducted online by a single interviewer and were rated independently by two graduate students based on the coding manual. Results: The relationships of the awareness-of-effective-parenting scores to the other measures demonstrate the convergent, discriminant, and predictive validity of the coding manual. Intra-class correlation coefficient estimates ranged between 0.82 and 0.90, showing high interrater reliability of the coding manual. Conclusion: Taken as a whole, the results of these studies demonstrate the validity and reliability of a new and useful interview for measuring maternal awareness regarding effective parenting within the framework of the domains-of-socialization approach.
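The interrater reliability statistic reported in this abstract (the intra-class correlation between two independent raters) can be computed directly from a subjects-by-raters matrix of scores. A minimal sketch of ICC(2,1) in the Shrout-Fleiss sense (two-way random effects, absolute agreement, single rater), using made-up ratings since the study's raw codes are not published:

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1) from a (n_subjects, k_raters) rating matrix."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    # Two-way ANOVA mean squares: subjects, raters, residual error.
    ms_rows = k * ((row_means - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((col_means - grand) ** 2).sum() / (k - 1)
    resid = ratings - row_means[:, None] - col_means[None, :] + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Two raters scoring six interviews on a 1-5 awareness scale (invented data).
scores = [[4, 4], [2, 3], [5, 5], [3, 3], [1, 2], [4, 5]]
print(f"ICC(2,1) = {icc2_1(scores):.2f}")
```

With raters who agree this closely, the estimate lands in roughly the 0.8-0.9 range the abstract reports; an ICC near 0 would instead indicate rater disagreement dominating subject differences.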

Keywords: domains of socialization, parenting, interview, assessment

Procedia PDF Downloads 157