Search results for: multivariate models.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7105

4705 Count Regression Modelling on Number of Migrants in Households

Authors: Tsedeke Lambore Gemecho, Ayele Taye Goshu

Abstract:

The main objective of this study is to identify the determinants of the number of international migrants in a household and to compare regression models for count responses. Data were collected from a total of 2288 household heads in 16 randomly sampled districts of the Hadiya and Kembata-Tembaro zones of Southern Ethiopia. The Poisson mixed model, a special case of the generalized linear mixed model, is explored to determine the effects of the predictors: age of household head, farm land size, and household size. Two ethnicities, Hadiya and Kembata, are included in the final model as dummy variables. Stepwise variable selection identified four predictors: age of head, farm land size, family size, and the dummy variable ethnic2 (0 = other, 1 = Kembata). These predictors are significant at the 5% level for the count response, the number of migrants. The final Poisson mixed model consists of the four predictors with districts as random effects. The area-specific random effects are significant, with a variance of about 0.5105 (standard deviation 0.7145). The results show that the number of migrants increases with the head's age, family size, and farm land size. In conclusion, there is a significantly high number of international migrants per household in the area. Age of household head, family size, and farm land size are determinants that increase the number of international migrants in a household. Community-based intervention is needed to monitor and regulate international migration for the benefit of society.
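
A minimal sketch of how a Poisson regression of this kind can be fitted by Newton-Raphson with step-halving (one illustrative predictor and no random effects; the study's final model additionally includes district-level random effects, which this sketch omits):

```python
import math

def loglik(x, y, b0, b1):
    # Poisson log-likelihood up to the constant -sum(log(y_i!))
    try:
        return sum(yi * (b0 + b1 * xi) - math.exp(b0 + b1 * xi)
                   for xi, yi in zip(x, y))
    except OverflowError:
        return float("-inf")

def fit_poisson(x, y, iters=100):
    # Newton-Raphson for the log-link model log(mu_i) = b0 + b1 * x_i
    b0 = b1 = 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        g0 = sum(yi - mi for yi, mi in zip(y, mu))                 # score wrt b0
        g1 = sum(xi * (yi - mi) for xi, yi, mi in zip(x, y, mu))   # score wrt b1
        h00 = sum(mu)                                              # Fisher information
        h01 = sum(xi * mi for xi, mi in zip(x, mu))
        h11 = sum(xi * xi * mi for xi, mi in zip(x, mu))
        det = h00 * h11 - h01 * h01
        d0 = (h11 * g0 - h01 * g1) / det                           # Newton direction
        d1 = (h00 * g1 - h01 * g0) / det
        step, base = 1.0, loglik(x, y, b0, b1)
        while step > 1e-8 and loglik(x, y, b0 + step * d0, b1 + step * d1) < base:
            step *= 0.5                                            # keep the ascent stable
        b0 += step * d0
        b1 += step * d1
    return b0, b1
```

On data that are exactly log-linear, such as y doubling with each unit of x, the fitted slope recovers log(2).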

Keywords: Poisson regression, GLM, number of migrants, Hadiya and Kembata-Tembaro zones

Procedia PDF Downloads 272
4704 Analysing “The Direction of Artificial Intelligence Legislation from a Global Perspective” from the Perspective of “AIGC Copyright Protection” Content

Authors: Xiaochen Mu

Abstract:

Due to the diversity of stakeholders and the ambiguity of ownership boundaries, the current protection models for Artificial Intelligence Generated Content (AIGC) have many disadvantages. In response to this situation, three different protection models exist worldwide. The United States Copyright Office stipulates that works autonomously generated by artificial intelligence lack the element of human creation and that non-human AI cannot create works. To protect and promote investment in the field of artificial intelligence, UK legislation, through Section 9(3) of the CDPA, designates the author of AI-generated works as ‘the person by whom the arrangements necessary for the creation of the work are undertaken.’ China neither excludes the work attributes of AI-generated content simply because a natural-person author is lacking, nor does it hold categorically that AIGC should or should not be protected. Instead, it considers the specific circumstances of each case and comprehensively evaluates the degree of originality of the AIGC and the contributions of natural persons to it. In China's first AI drawing case, the court determined that the image in question was the result of the plaintiff's design and selection through inputting prompt words and setting parameters, reflected the plaintiff's intellectual investment and personalized expression, and should be recognized as a work in the sense of copyright law. Despite opposition, the ruling established the feasibility of the AIGC copyright protection path. Recognizing the work attributes of AIGC will not lead to overprotection that hinders the overall development of the AI industry. As with the legislation and regulation of AI in various countries, a balance between protection and development is needed. 
For example, the provisional agreement reached on the EU AI Act, based on a risk classification approach, seeks a dynamic balance between copyright protection and the development of the AI industry.

Keywords: generative artificial intelligence, originality, works, copyright

Procedia PDF Downloads 21
4703 Neuropsychological Deficits in Drug-Resistant Epilepsy

Authors: Timea Harmath-Tánczos

Abstract:

Drug-resistant epilepsy (DRE) is defined as the persistence of seizures despite at least two syndrome-adapted antiseizure drugs (ASDs) used at efficacious daily doses. About a third of patients with epilepsy suffer from drug resistance. Cognitive assessment has a crucial role in the diagnosis and clinical management of epilepsy. Previous studies have addressed the clinical targets and indications for measuring neuropsychological functions; to the best of our knowledge, however, no study has examined them in a Hungarian drug-resistant population. To fill this gap, we applied the Hungarian diagnostic protocol to patients aged 18 to 65 years. This study aimed to describe and analyze neuropsychological functions in patients with drug-resistant epilepsy and to identify factors associated with neuropsychological deficits. We performed a prospective case-control study comparing neuropsychological performance in 50 adult patients and 50 healthy individuals between March 2023 and July 2023. Neuropsychological functions were examined in both patients and controls using a full set of specific tests (general performance level, motor functions, attention, executive functions, verbal and visual memory, language, and visual-spatial functions). Potential risk factors for neuropsychological deficit were assessed in the patient group using a multivariate analysis. The two groups did not differ in age, sex, dominant hand, or level of education. Compared with the control group, patients with drug-resistant epilepsy showed worse performance in motor functions, visuospatial memory, sustained attention, inhibition, and verbal memory. Neuropsychological deficits can therefore be systematically detected in patients with drug-resistant epilepsy in order to provide neuropsychological therapy and improve quality of life. 
The analysis of the classical and complex indices of the specific neuropsychological tasks presented here can help in the investigation of normal and disrupted memory and executive functions in DRE.

Keywords: drug-resistant epilepsy, Hungarian diagnostic protocol, memory, executive functions, cognitive neuropsychology

Procedia PDF Downloads 60
4702 Correlation of Serum Apelin Level with Coronary Calcium Score in Patients with Suspected Coronary Artery Disease

Authors: M. Zeitoun, K. Abdallah, M. Rashwan

Abstract:

Introduction: A growing body of evidence indicates that apelin, a relatively recent member of the adipokine family, has a potential anti-atherogenic effect. An association between a low serum apelin state and coronary artery disease (CAD) was previously reported; however, the relationship between apelin and the atherosclerotic burden was unclear. Objectives: Our aim was to explore the correlation of serum apelin level with coronary calcium score (CCS) as a quantitative marker of coronary atherosclerosis. Methods: This observational cross-sectional study enrolled 100 consecutive subjects referred for cardiac multi-detector computed tomography (MDCT) for assessment of CAD (mean age 54 ± 9.7 years, 51 males and 49 females). Clinical parameters, glycemic and lipid profile, high-sensitivity CRP (hs-CRP), homeostasis model assessment of insulin resistance (HOMA-IR), serum creatinine, and complete blood count were assessed. Serum apelin levels were determined using a commercially available Enzyme Immunoassay (EIA) kit. High-resolution non-contrast CT images were acquired by a 64-row MDCT, and CCS was calculated using the Agatston scoring method. Results: Forty-three percent of the studied subjects had positive coronary artery calcification (CAC). The mean CCS was 79 ± 196.5 Agatston units. Subjects with detectable CAC had significantly higher fasting plasma glucose, HbA1c, and WBC count than subjects without detectable CAC (p < 0.05). Most importantly, subjects with detectable CAC had a significantly lower serum apelin level than subjects without CAC (1.3 ± 0.4 ng/ml vs. 2.8 ± 0.6 ng/ml, p < 0.001). In addition, there was a statistically significant inverse correlation between serum apelin levels and CCS (r = 0.591, p < 0.001); on multivariate analysis, this correlation was found to be independent of traditional cardiovascular risk factors and hs-CRP. 
Conclusion: To the best of our knowledge, this is the first report of an independent association between apelin and CCS in patients with suspected CAD. Apelin emerges as a possible novel biomarker for CAD, but this result remains to be confirmed prospectively.
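
The reported association is a Pearson product-moment correlation; a minimal sketch of the computation on illustrative values (not the study's measurements):

```python
import math

def pearson_r(xs, ys):
    # Pearson product-moment correlation coefficient
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))  # cross deviations
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / math.sqrt(sxx * syy)
```

A perfectly inverse linear relationship yields r = -1, the extreme of the kind of inverse correlation described above.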

Keywords: HbA1c, apelin, adipokines, coronary calcium score (CCS), coronary artery disease (CAD)

Procedia PDF Downloads 328
4701 Adsorption of Pb(II) with MOF [Co2(Btec)(Bipy)(DMF)2]N in Aqueous Solution

Authors: E. Gil, A. Zepeda, J. Rivera, C. Ben-Youssef, S. Rincón

Abstract:

Water pollution has become one of the most serious environmental problems. Multiple methods have been proposed for the removal of Pb(II) from contaminated water. Among these, adsorption processes have shown to be more efficient, cheaper, and easier to handle than other treatment methods. However, research on adsorbents with high adsorption capacities is still necessary. For this purpose, we studied the metal-organic framework [Co2(btec)(bipy)(DMF)2]n (MOF-Co) as an adsorbent material for Pb(II) in aqueous media. MOF-Co was synthesized by a simple method: 4,4'-dipyridyl, 1,2,4,5-benzenetetracarboxylic acid, and cobalt(II) nitrate hexahydrate were each dissolved in N,N-dimethylformamide (DMF) and then mixed together in a reactor. The resulting solution was heated at 363 K in a muffle furnace for 68 h to complete the synthesis, then washed and dried, yielding MOF-Co as the final product. MOF-Co was characterized before and after the adsorption process by Fourier transform infrared spectroscopy (FTIR) and X-ray photoelectron spectroscopy (XPS). Pb(II) in aqueous media was detected by atomic absorption spectroscopy (AAS). The adsorption experiments were carried out in flasks with a working volume of 100 mL at 200 rpm, with different MOF-Co quantities (0.0125 and 0.025 g), pH values (2-6), contact times (0.5-6 h), and temperatures (298, 308, and 318 K). The adsorption kinetics were represented by a pseudo-second-order model, which suggests that the adsorption took place through chemisorption. The best adsorption results were obtained at pH 5. The Langmuir, Freundlich, and BET equilibrium isotherm models were used to study the adsorption of Pb(II) with 0.0125 g of MOF-Co in the presence of different concentrations of Pb(II) (20-200 mg/L, 100 mL, pH 5) with 4 h of reaction. 
The correlation coefficients (R2) of the different models show that the Langmuir model fits better than the Freundlich and BET models, with R2 = 0.97 and a maximum adsorption capacity of 833 mg/g. Therefore, the Langmuir model, which describes monolayer adsorption, best characterizes the Pb(II) uptake on MOF-Co. This capacity is the highest when compared to other materials such as graphene/activated carbon composite (217 mg/g), biomass fly ashes (96.8 mg/g), PVA/PAA gel (194.99 mg/g), and MOF with Ag12 nanoparticles (120 mg/g).
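
A Langmuir fit of this kind is commonly done through the linearization C/q = C/q_max + 1/(K·q_max) followed by ordinary least squares; a minimal sketch on synthetic data generated from the reported q_max = 833 mg/g and an assumed (hypothetical) affinity constant K:

```python
def linfit(xs, ys):
    # ordinary least squares: returns (slope, intercept)
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

def fit_langmuir(c, q):
    # Linearized Langmuir isotherm: C/q = C/q_max + 1/(K * q_max)
    slope, intercept = linfit(c, [ci / qi for ci, qi in zip(c, q)])
    q_max = 1.0 / slope            # slope = 1/q_max
    k = slope / intercept          # intercept = 1/(K * q_max), so K = slope/intercept
    return q_max, k

# synthetic isotherm from q_max = 833 mg/g (reported) and K = 0.05 L/mg (assumed)
q_max_true, k_true = 833.0, 0.05
conc = [20, 50, 100, 150, 200]                  # equilibrium concentrations, mg/L
q = [q_max_true * k_true * c / (1 + k_true * c) for c in conc]
```

On noise-free synthetic data the fit recovers both parameters exactly, which is a useful sanity check before fitting measured isotherms.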

Keywords: adsorption, heavy metals, metal-organic frameworks, Pb(II)

Procedia PDF Downloads 204
4700 Review of the Road Crash Data Availability in Iraq

Authors: Abeer K. Jameel, Harry Evdorides

Abstract:

Iraq is a middle-income country where road crashes are considered one of the leading causes of death. To control road risk, the Iraqi Ministry of Planning, General Statistical Organization, started to organise a collection system for traffic accident data, with details of their causes and severity. These data are published as an annual report. In this paper, a review of the available crash data in Iraq is presented. The available data represent accident rates at an aggregated level, classified by accident type, road user details, crash severity, vehicle type, causes, and number of casualties. The review considers the types of models used in road safety studies and research, and the road safety data required for road construction tasks. The available data are also compared with the road safety dataset published in the United Kingdom as an example of a developed country. It is concluded that the data in Iraq are suitable for descriptive and exploratory models, for aggregated-level comparison analysis, and for evaluating and monitoring the progress of overall traffic safety performance. However, important traffic safety studies require disaggregated data and details of the factors affecting the likelihood of traffic crashes. Some studies require spatial details such as the location of accidents, which is essential for ranking roads according to their level of safety and for naming the most dangerous roads in Iraq, which require a tactical plan to control this issue. Global road safety agencies interested in solving this problem in low- and middle-income countries have designed road safety assessment methodologies that are based on road attribute data only. Therefore, this research recommends using one of these methodologies.

Keywords: road safety, Iraq, crash data, road risk assessment, The International Road Assessment Program (iRAP)

Procedia PDF Downloads 244
4699 Numerical Investigation of Entropy Signatures in Fluid Turbulence: Poisson Equation for Pressure Transformation from Navier-Stokes Equation

Authors: Samuel Ahamefula Mba

Abstract:

Fluid turbulence is a complex and nonlinear phenomenon that occurs in various natural and industrial processes. Understanding turbulence remains a challenging task due to its intricate nature. One approach to gaining insight into turbulence is through the study of entropy, which quantifies the disorder or randomness of a system. This research presents a numerical investigation of entropy signatures in fluid turbulence. The aim of the work is to develop a numerical framework to describe and analyse fluid turbulence in terms of entropy. The framework decomposes the turbulent flow field into different scales, ranging from large energy-containing eddies to small dissipative structures, thus establishing a correlation between entropy and other turbulence statistics. This entropy-based framework provides a powerful tool for understanding the underlying mechanisms driving turbulence and its impact on various phenomena. The work requires deriving the Poisson equation for pressure from the Navier-Stokes equation and resolving it effectively using Chebyshev-finite difference techniques. For the mathematical analysis, we consider bounded domains with smooth solutions and non-periodic boundary conditions. A hybrid computational approach combining direct numerical simulation (DNS) and large eddy simulation with wall models (LES-WM) is utilized to perform extensive simulations of turbulent flows. The potential impact ranges from industrial process optimization to improved prediction of weather patterns.
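
The numerical core of such a pressure Poisson solve can be illustrated in one dimension with a uniform-grid second-order finite-difference discretization and the Thomas (tridiagonal) algorithm. This is a simpler stand-in for the Chebyshev-finite difference scheme named above, shown only to make the solve concrete:

```python
import math

def solve_poisson_1d(f, n):
    # Solve u'' = f on (0, 1) with u(0) = u(1) = 0, using second-order
    # central differences on n interior points and the Thomas algorithm.
    h = 1.0 / (n + 1)
    x = [(i + 1) * h for i in range(n)]
    # tridiagonal system: -u[i-1] + 2 u[i] - u[i+1] = -h^2 f(x_i)
    a, b, c = -1.0, 2.0, -1.0
    d = [-f(xi) * h * h for xi in x]
    # forward sweep
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c / b
    dp[0] = d[0] / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (d[i] - a * dp[i - 1]) / m
    # back substitution
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return x, u
```

With f(x) = -pi^2 sin(pi x), the exact solution is u(x) = sin(pi x), so the discretization error can be checked directly.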

Keywords: turbulence, Navier-Stokes equation, Poisson pressure equation, numerical investigation, Chebyshev-finite difference, hybrid computational approach, large eddy simulation with wall models, direct numerical simulation

Procedia PDF Downloads 80
4698 Comprehensive Profiling and Characterization of Untargeted Extracellular Metabolites in Fermentation Processes: Insights and Advances in Analysis and Identification

Authors: Marianna Ciaccia, Gennaro Agrimi, Isabella Pisano, Maurizio Bettiga, Silvia Rapacioli, Giulia Mensa, Monica Marzagalli

Abstract:

Objective: Untargeted metabolomic analysis of extracellular metabolites is a powerful approach that comprehensively profiles metabolites in the extracellular space. In this study, we applied extracellular metabolomic analysis to investigate the metabolism of two probiotic microorganisms with health benefits that extend far beyond the digestive tract and the immune system. Methods: The analytical techniques employed in extracellular metabolomic analysis encompass various technologies, including mass spectrometry (MS), which enables the identification of metabolites present in the fermentation media as well as the comparison of metabolic profiles under different experimental conditions. Multivariate statistical techniques such as principal component analysis (PCA) and partial least squares-discriminant analysis (PLS-DA) play a crucial role in uncovering metabolic signatures and understanding the dynamics of metabolic networks. Results: Different types of supernatants from the fermentation processes, including dairy-free and non-dairy-free media as well as cell-free and pasteurized media, were subjected to metabolite profiling. These supernatants contained a complex mixture of metabolites, including substrates, intermediates, and end-products, and the profiling provided insights into the metabolic activity of the microorganisms. The integration of advanced software tools facilitated the identification and characterization of metabolites across fermentation conditions and microorganism strains. Conclusions: Untargeted extracellular metabolomic analysis, combined with software tools, allowed the study of the metabolites consumed and produced during the fermentation processes of probiotic microorganisms. Ongoing advancements in data analysis methods will further enhance the application of extracellular metabolomic analysis in fermentation research, leading to improved bioproduction and the advancement of sustainable manufacturing processes.
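
The PCA step mentioned above can be sketched, for just two illustrative variables, as a closed-form eigendecomposition of the 2x2 sample covariance matrix (real metabolomic profiles have many more dimensions and would use a library implementation):

```python
import math

def pca_2d(xs, ys):
    # PCA of two variables via the closed-form eigendecomposition
    # of their 2x2 sample covariance matrix [[a, b], [b, c]].
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((v - mx) ** 2 for v in xs) / (n - 1)                     # var(x)
    c = sum((v - my) ** 2 for v in ys) / (n - 1)                     # var(y)
    b = sum((p - mx) * (q - my) for p, q in zip(xs, ys)) / (n - 1)   # cov(x, y)
    disc = math.sqrt((a - c) ** 2 + 4 * b * b)
    lam1 = (a + c + disc) / 2        # leading eigenvalue (largest variance)
    lam2 = (a + c - disc) / 2
    v = (b, lam1 - a)                # leading eigenvector (assumes b != 0)
    norm = math.hypot(*v)
    return lam1, lam2, (v[0] / norm, v[1] / norm)
```

For perfectly collinear data, all variance loads on the first component and the leading eigenvector recovers the line's direction.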

Keywords: biotechnology, metabolomics, lactic bacteria, probiotics, postbiotics

Procedia PDF Downloads 51
4697 Observed Changes in Constructed Precipitation at High Resolution in Southern Vietnam

Authors: Nguyen Tien Thanh, Günter Meon

Abstract:

Precipitation plays a key role in the water cycle, in defining local climatic conditions, and in the ecosystem. It is also an important input parameter for water resources management and hydrologic models. With spatially continuous data, predictions of discharge and other environmental factors are unquestionably more certain than without. Such data are, however, not always readily available for a small basin, especially in the coastal region of Vietnam, due to a sparse network of meteorological stations (30 stations) along a coastline of 3260 km. Furthermore, the available gridded precipitation datasets are not fine enough for hydrologic modelling. Under conditions of global warming, the application of spatial interpolation methods is crucial for climate change impact studies to obtain spatially continuous data. Although some methods perform better than others in particular settings, no method yields the best results in all cases. The objective of this paper, therefore, is to investigate different spatial interpolation methods for daily precipitation over a small basin (approximately 400 km2) located in the coastal region of Southern Vietnam and to find the most efficient interpolation method for this catchment. Five interpolation methods, namely Cressman, ordinary kriging, regression kriging, dual kriging, and inverse distance weighting, were applied to identify the best method for the study area on a spatio-temporal scale (daily, 10 km x 10 km). A 30-year precipitation database was created and merged with available gridded datasets. Finally, observed changes in the constructed precipitation were analysed. The results demonstrate that ordinary kriging is an effective approach for interpolating daily precipitation. Mixed trends of increasing and decreasing monthly, seasonal, and annual precipitation were documented at significant levels.
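
Of the five methods compared, inverse distance weighting is the simplest to sketch; a minimal version on illustrative station coordinates (not the study's network) follows:

```python
def idw(stations, values, query, power=2.0):
    # Inverse distance weighting: a weighted mean of station values with
    # weights 1/d^power; returns the station value exactly when the query
    # coincides with a station.
    num = den = 0.0
    for (sx, sy), v in zip(stations, values):
        d2 = (query[0] - sx) ** 2 + (query[1] - sy) ** 2
        if d2 == 0.0:
            return v
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den
```

By construction the estimate honours the observations and reduces to the arithmetic mean at points equidistant from all stations.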

Keywords: interpolation, precipitation, trend, Vietnam

Procedia PDF Downloads 269
4696 Role of P53, Ki67 and Cyclin A Immunohistochemical Assay in Predicting Wilms’ Tumor Mortality

Authors: Ahmed Atwa, Ashraf Hafez, Mohamed Abdelhameed, Adel Nabeeh, Mohamed Dawaba, Tamer Helmy

Abstract:

Introduction and Objective: Tumour staging and grading do not usually reflect the future behaviour of Wilms' tumour (WT) with regard to mortality. Therefore, in this study, p53, Ki67, and cyclin A immunohistochemistry (IHC) were used in a trial to predict WT cancer-specific survival (CSS). Methods: In this nonconcurrent cohort study, patients' archived data, including age at presentation, gender, history, clinical examination, and radiological investigations, were retrieved; the patients were then reviewed at the outpatient clinic of a tertiary care centre by history-taking, clinical examination, and radiological investigations to detect the oncological outcome. Cases that received preoperative chemotherapy or died of causes other than WT were excluded. Formalin-fixed, paraffin-embedded specimens obtained from the previously preserved blocks at the pathology laboratory were mounted on positively charged slides for IHC with p53, Ki67, and cyclin A. All specimens were examined by an experienced histopathologist devoted to urological practice and blinded to the patients' clinical findings. P53 and cyclin A staining were scored as 0 (no nuclear staining), 1 (<10% nuclear staining), 2 (10-50% nuclear staining), and 3 (>50% nuclear staining). The Ki67 proliferation index (PI) was graded as low, borderline, or high. Results: Of the 75 cases, 40 (53.3%) were males and 35 (46.7%) were females, and the median age was 36 months (range 2-216). With a mean follow-up of 78.6 ± 31 months, cancer-specific mortality (CSM) occurred in 15 (20%) and 11 (14.7%) patients, respectively. The Kaplan-Meier method was used for survival analysis, and groups were compared using the log-rank test. Multivariate logistic regression and Cox regression were not used because only one variable (cyclin A) had shown statistical significance (P = .02), whereas the other significant factor (residual tumour) had few cases. Conclusions: Cyclin A IHC should be considered as a marker for the prediction of WT CSS. 
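
The Kaplan-Meier estimate used above is a product-limit over event times; a minimal sketch on toy follow-up data (not the study's cohort):

```python
def kaplan_meier(times, events):
    # Product-limit estimator: at each event time t with d events among
    # n subjects still at risk, survival is multiplied by (1 - d/n).
    # events[i] is 1 for a death and 0 for a censored observation.
    order = sorted(range(len(times)), key=lambda i: times[i])
    s = 1.0
    at_risk = len(times)
    curve = []                      # (time, S(t)) recorded after each event time
    i = 0
    while i < len(order):
        t = times[order[i]]
        d = removed = 0
        while i < len(order) and times[order[i]] == t:
            d += events[order[i]]   # deaths at time t
            removed += 1            # deaths and censorings both leave the risk set
            i += 1
        if d:
            s *= 1.0 - d / at_risk
            curve.append((t, s))
        at_risk -= removed
    return curve
```

With times [6, 7, 10] and events [death, censored, death], survival drops to 2/3 at t = 6, is unchanged by the censoring at t = 7, and reaches 0 at t = 10.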
Prospective studies with a larger sample size are needed.

Keywords: Wilms’ tumour, nephroblastoma, urology, survival

Procedia PDF Downloads 56
4695 Effect of Goat Milk Kefir and Soy Milk Kefir on IL-6 in Diabetes Mellitus Wistar Mice Models Induced by Streptozotocin and Nicotinamide

Authors: Agatha Swasti Ayuning Tyas

Abstract:

Hyperglycemia in Diabetes Mellitus (DM) is an important factor in cellular and vascular damage, which is caused by activation of Protein Kinase C, the polyol and hexosamine pathways, and the production of Advanced Glycation End-Products (AGE). These processes cause the accumulation of Reactive Oxygen Species (ROS). Oxidative stress increases the expression of the proinflammatory factor IL-6, one of many signs of endothelial dysfunction. Genistein in soy milk has a high immunomodulatory potential. Goat milk contains amino acids with antioxidative potential. Fermented kefir has an anti-inflammatory activity that is believed to further potentiate goat milk and soy milk. This study is a quasi-experimental, posttest-only study of 30 Wistar mice. It compared the levels of IL-6 between a healthy Wistar mice group (G1) and four DM Wistar mice groups, as follows: mice without treatment (G2), mice treated with 100% goat milk kefir (G3), mice treated with a combination of 50% goat milk kefir and 50% soy milk kefir (G4), and mice treated with 100% soy milk kefir (G5). The DM animal models were induced with streptozotocin and nicotinamide to achieve a hyperglycemic condition. Goat milk kefir and soy milk kefir were given at a dose of 2 mL/kg body weight/day for four weeks to the intervention groups. Blood glucose was analyzed by the GOD-POD principle. IL-6 was analyzed by enzyme-linked sandwich ELISA. The level of IL-6 in the untreated DM control group (G2) showed a significant difference from the group treated with the combination of 50% goat milk kefir and 50% soy milk kefir (G4) (p = 0.006) and the group treated with 100% soy milk kefir (G5) (p = 0.009), whereas the difference from the group treated with 100% goat milk kefir (G3) was not significant (p = 0.131). 
There was also synergism between glucose level and IL-6 in the intervention groups treated with the combination of 50% goat milk kefir and 50% soy milk kefir (G4) and with 100% soy milk kefir (G5). The combination of 50% goat milk kefir and 50% soy milk kefir, as well as the administration of 100% soy milk kefir alone, kept the level of IL-6 low in DM Wistar mice induced with streptozotocin and nicotinamide.

Keywords: diabetes mellitus, goat milk kefir, soy milk kefir, interleukin 6

Procedia PDF Downloads 272
4694 Determinants of Cessation of Exclusive Breastfeeding in Ankesha Guagusa Woreda, Awi Zone, Northwest Ethiopia: A Cross-Sectional Study

Authors: Tebikew Yeneabat, Tefera Belachew, Muluneh Haile

Abstract:

Background: Exclusive breastfeeding (EBF) is the practice of feeding only breast milk (including expressed breast milk) during the first six months of life, with no other liquids or solid foods except medications. The time to cessation of exclusive breastfeeding, however, differs between countries depending on various factors. Studies have shown that the risk of diarrhea morbidity and mortality is higher among non-exclusively breastfed infants, commonly when other foods are introduced. However, no study has evaluated the time to cessation of exclusive breastfeeding in the study area. The aim of this study was to determine the time to cessation of EBF and its predictors among mothers of index infants less than twelve months old. Methods: We conducted a community-based cross-sectional study from February 13 to March 3, 2012, using both quantitative and qualitative methods. The study included a total of 592 mothers of index infants selected by a multi-stage sampling method. Data were collected using an interviewer-administered structured questionnaire. Bivariate and multivariate Cox regression analyses were performed. Results: Cessation of exclusive breastfeeding occurred in 392 (69.63%) cases. Of these, 224 (57.1%) happened before six months, while 145 (37.0%) and 23 (5.9%) occurred at six months and after six months of age of the index infant, respectively. The median time infants stayed on exclusive breastfeeding was 6.36 months in rural areas and 5.13 months in urban areas, and this difference was statistically significant on a log-rank (Cox-Mantel) test. Maternal and paternal occupation, place of residence, postnatal counseling on exclusive breastfeeding, mode of delivery, and birth order of the index infant were significant predictors of cessation of exclusive breastfeeding. 
Conclusion: Providing postnatal counseling on EBF, together with routine follow-up and support of mothers of infants, with particular emphasis on working mothers, can help implement the national strategy on infant and young child feeding.

Keywords: exclusive breastfeeding, cessation, median duration, Ankesha Guagusa Woreda

Procedia PDF Downloads 300
4693 Contact Phenomena in Medieval Business Texts

Authors: Carmela Perta

Abstract:

Among the studies that have flourished in the field of historical sociolinguistics, mainly in the strand devoted to the history of English during its medieval and early modern phases, multilingual texts have been analysed using theories and models from contact linguistics, thus applying synchronic models and approaches to the past. This is true also in the case of contact phenomena that transcend the writing level and involve the language systems implicated in the contact processes, to the point that a new variety can be perceived. This is the case for medieval administrative-commercial texts, in which, according to some scholars, the degree of fusion of Anglo-Norman, Latin, and Middle English is so high that a mixed code emerges, with recurrent patterns of mixed forms. Of particular interest is a collection of multilingual business writings by John Balmayn, an Englishman overseeing a large shipment in Tuscany, namely the Cantelowe accounts. These documents display various analogies with multilingual texts written in England in the same period; in fact, the writer seems to make use of the above-mentioned patterns, with Middle English, Latin, Anglo-Norman, and the newly added Italian. Applying an atomistic yet dynamic approach to the study of contact phenomena, we investigate these documents, exploring the nature of the switching forms they contain from an intra-writer variation perspective. After analysing the accounts and the type of multilingualism in them, we take stock of their assumed mixed-code nature, comparing the characteristics found in this genre with modern assumptions. The aim is to evaluate whether the switching forms can be considered core elements of a mixed code, used as a professional variety among merchant communities, or whether such texts should be analysed from a code-switching perspective.

Keywords: historical sociolinguistics, historical code-switching, letters, medieval England

Procedia PDF Downloads 63
4692 Using Time Series NDVI to Model Land Cover Change: A Case Study in the Berg River Catchment Area, Western Cape, South Africa

Authors: Adesuyi Ayodeji Steve, Zahn Munch

Abstract:

This study investigates the use of MODIS NDVI to identify areas of agricultural land cover change on an annual time step (2007-2012) and to characterize the trend in the study area. An ISODATA classification was performed on the MODIS imagery to select only the agricultural class, producing three class groups: agriculture, agriculture/semi-natural, and semi-natural. NDVI signatures were created for the time series to identify areas dominated by cereals and vineyards, with the aid of ancillary, pictometry, and field sample data. The NDVI signature curves and training samples aided in creating a decision tree model in WEKA 3.6.9. From the training samples, two classification models were built in WEKA using the J48 decision tree classifier algorithm: Model 1 included the ISODATA classification and Model 2 did not, with accuracies of 90.7% and 88.3%, respectively. The two models were used to classify the whole study area, producing two land cover maps with classification accuracies of 77% (Model 1) and 80% (Model 2). Model 2 was used to create change detection maps for all the other years. Subtle changes and areas of consistency (unchanged areas) were observed in the agricultural classes and crop practices over the years, as predicted by the land cover classification. Cereals cover 41% of the catchment, with 35% possibly following a crop rotation system. Vineyards remained largely constant over the years, with some conversion to vineyard (1%) from other land cover classes. Some of the changes might be a result of misclassification and the crop rotation system.
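
NDVI itself is a simple band ratio of near-infrared and red reflectance; a minimal sketch with illustrative reflectance values (not MODIS pixel values from the study):

```python
def ndvi(nir, red):
    # Normalized Difference Vegetation Index: ranges from -1 to 1,
    # with dense green vegetation giving high positive values.
    return (nir - red) / (nir + red)
```

Per-pixel NDVI series like this are what the signature curves above are built from: vegetated pixels score markedly higher than bare or sparsely vegetated ones.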

Keywords: change detection, land cover, MODIS, NDVI

Procedia PDF Downloads 385
4691 A Closed-Loop Design Model for Sustainable Manufacturing by Integrating Forward Design and Reverse Design

Authors: Yuan-Jye Tseng, Yi-Shiuan Chen

Abstract:

In this paper, a new concept of a closed-loop design model is presented. The closed-loop design model is developed by integrating forward design and reverse design. Based on this concept, a closed-loop design model for sustainable manufacturing is developed through the integrated evaluation of forward design, reverse design, and green manufacturing using a fuzzy analytic network process. In the design stage of a product, for a given product requirement and objective, there can be different ways to design the detailed components and specifications, and therefore different design cases that achieve the same requirement and objective. In the design evaluation stage, these different design cases must be analyzed and evaluated. The purpose of this research is to develop a model for evaluating the design cases through the integrated evaluation of the forward design, reverse design, and green manufacturing models. A fuzzy analytic network process model is presented for the integrated evaluation of the criteria in the three models. The comparison matrices for evaluating the criteria in the three groups are established, and the total relational values among the three groups represent the total relational effects. In application, a supermatrix can be created, and the total relational values can be used to evaluate the design cases for decision-making and to select the final design case. An example product is demonstrated. The results show that the model is useful for the integrated evaluation of forward design, reverse design, and green manufacturing to achieve the objective of closed-loop design for sustainable manufacturing.
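
The supermatrix step of a (non-fuzzy) ANP can be sketched as follows: given a column-stochastic supermatrix of criterion-to-criterion influences, raising it to successive powers drives every column toward the same limit priority vector. The 3x3 matrix below is a toy example, not the paper's comparison data, and the fuzzification of the pairwise judgments is omitted:

```python
def limit_supermatrix(w, iters=20):
    # w: square column-stochastic supermatrix as a list of rows.
    # Repeated squaring computes w^(2^iters), whose columns converge
    # to the limit priorities for a primitive (all-positive) matrix.
    n = len(w)
    for _ in range(iters):
        w = [[sum(w[i][k] * w[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return w
```

In the limit matrix all columns agree, and each column sums to 1, which is what makes it usable as a single priority vector for ranking design cases.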

Keywords: design evaluation, forward design, reverse design, closed-loop design, supply chain management, closed-loop supply chain, fuzzy analytic network process

Procedia PDF Downloads 659
4690 Relationships between Screen Time, Internet Addiction and Other Lifestyle Behaviors with Obesity among Secondary School Students in the Turkish Republic of Northern Cyprus

Authors: Ozen Asut, Gulifeiya Abuduxike, Imge Begendi, Mustafa O. Canatan, Merve Colak, Gizem Ozturk, Lara Tasan, Ahmed Waraiet, Songul A. Vaizoglu, Sanda Cali

Abstract:

Obesity among children and adolescents is one of the critical public health problems worldwide. Internet addiction is one of the sedentary behaviors that contribute to obesity through excessive screen time and reduced physical activity. We aimed to examine the relationships between screen time, internet addiction, and other lifestyle behaviors with obesity among high school students in the Near East College in Nicosia, Northern Cyprus. A cross-sectional study was conducted among 469 secondary school students with a mean age of 11.95 (SD 0.81) years. A self-administered questionnaire was applied to assess screen time and lifestyle behaviors, and the Turkish-adapted short form of the Internet Addiction Test was used to assess internet addiction. Height and weight were measured to calculate BMI, which was classified according to BMI percentiles for sex and age. Descriptive analysis, the chi-square test, and multivariate regression analysis were performed. Of all participants, 17.2% were overweight or obese, 18.1% had internet addiction, and 40.7% reported more than two hours of screen time. After adjusting for age and sex, eating snacks while watching television (OR, 3.04; 95% CI, 1.28-7.21), self-perceived body weight (OR, 24.9; 95% CI, 9.64-64.25), and having a PlayStation in the room (OR, 4.6; 95% CI, 1.85-11.42) were significantly associated with obesity. Screen time (OR, 4.68; 95% CI, 2.61-8.38; p<0.001) and having a computer in the bedroom (OR, 1.7; 95% CI, 1.01-2.87; p=0.046) were significantly associated with internet addiction, whereas parents' complaints about lengthy technology use (OR, 0.23; 95% CI, 0.11-0.46; p<0.001) were a protective factor against internet addiction. Prolonged screen time, internet addiction, sedentary lifestyles, and reduced physical and social activities are interrelated, multi-dimensional factors that lead to obesity among children and adolescents. A family- and school-based integrated approach should be implemented to tackle obesity problems.
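The adjusted odds ratios above come from multivariate logistic regression; the unadjusted counterpart of such an estimate reduces to the cross-product ratio of a 2x2 exposure-outcome table. The sketch below computes an odds ratio with a Woolf 95% confidence interval from hypothetical counts (the numbers are invented, not the study's data).

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Odds ratio and Woolf confidence interval for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Hypothetical counts: obesity by snacking-while-watching-TV status.
or_, lo, hi = odds_ratio(30, 70, 10, 90)
```

An interval that excludes 1 (as the study's adjusted intervals do) indicates a statistically significant association.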

Keywords: adolescents, internet addiction, lifestyle, Northern Cyprus, obesity, screen time

Procedia PDF Downloads 130
4689 Comparison Approach for Wind Resource Assessment to Determine Most Precise Approach

Authors: Tasir Khan, Ishfaq Ahmad, Yejuan Wang, Muhammad Salam

Abstract:

Distribution models of wind speed data are essential for assessing potential wind energy because they reduce the uncertainty in estimating wind energy output. Therefore, before performing a detailed potential energy analysis, the distribution model that most precisely fits the wind speed data must be found. In this research, several goodness-of-fit criteria, such as the Kolmogorov-Smirnov and Anderson-Darling statistics, chi-square, root mean square error (RMSE), AIC, and BIC, were combined to determine the best-fitted distribution for the wind speed data. The suggested method aggregates all of these criteria collectively. It was applied to fit 14 distribution models statistically to wind speed data at four sites in Pakistan. The results show that this method provides a sound basis for selecting the most suitable wind speed distribution, and the graphical representation is consistent with the analytical results. This research also presents three estimation methods for the parameters of the fitted distributions: the linear-moments method (MLM), the method of moments (MOM), and maximum likelihood estimation (MLE). The third-order moment used in the wind energy formula is a key quantity because it makes an important contribution to the precise estimation of wind energy. To assess the suggested MOM, it was compared with the well-known estimation methods, namely the linear-moments method and maximum likelihood estimation. In the comparative analysis, based on the several goodness-of-fit criteria, the performance of the considered techniques was evaluated on actual wind speeds measured in different time periods. The results show that MOM provides a more precise estimate of wind energy than the other familiar approaches across the fourteen distributions. Therefore, MOM can be used as a better technique for assessing wind energy.
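The distribution-selection step described above can be sketched with SciPy: fit candidate distributions by maximum likelihood, then compare them with the Kolmogorov-Smirnov statistic and AIC. The example below uses synthetic Weibull wind speeds and just two candidates (Weibull vs. normal) rather than the paper's fourteen; note that with the location fixed at zero the Weibull AIC's parameter count here slightly overcounts the free parameters, which does not affect the comparison.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic wind speeds (m/s) from a Weibull(shape k=2, scale=8) distribution.
speeds = stats.weibull_min.rvs(2.0, scale=8.0, size=2000, random_state=rng)

def fit_aic(dist, data, **fit_kw):
    """Fit by maximum likelihood and return (params, AIC = 2k - 2 log L)."""
    params = dist.fit(data, **fit_kw)
    loglik = np.sum(dist.logpdf(data, *params))
    return params, 2 * len(params) - 2 * loglik

wb_params, wb_aic = fit_aic(stats.weibull_min, speeds, floc=0)
nm_params, nm_aic = fit_aic(stats.norm, speeds)

# Goodness of fit of the winning candidate.
ks_stat, ks_p = stats.kstest(speeds, "weibull_min", args=wb_params)
best = "weibull" if wb_aic < nm_aic else "normal"
```

In a full analysis, the same loop would run over all fourteen candidate distributions and the rankings under each criterion would be combined, as the abstract describes.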

Keywords: wind-speed modeling, goodness of fit, maximum likelihood method, linear moment

Procedia PDF Downloads 74
4688 Logical-Probabilistic Modeling of the Reliability of Complex Systems

Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia

Abstract:

The paper presents logical-probabilistic methods, models, and algorithms for the reliability assessment of complex systems, on the basis of which a web application for the structural analysis and reliability assessment of systems was created. It is important to design systems based on structural analysis, research, and the evaluation of efficiency indicators. One of the important efficiency criteria is the reliability of the system, which depends on the components of its structure. Quantifying the reliability of large-scale systems is a computationally complex process, and it is advisable to perform it with the help of a computer. Logical-probabilistic modeling is one of the effective means of describing the structure of a complex system and quantitatively evaluating its reliability, and it forms the basis of our application. The reliability assessment process includes the following stages, which are reflected in the application: 1) construction of a graphical scheme of the structural reliability of the system; 2) transformation of the graphical scheme into a logical representation and modeling of the shortest paths of successful functioning of the system; 3) description of the system operability condition with a logical function in disjunctive normal form (DNF); 4) transformation of the DNF into orthogonal disjunctive normal form (ODNF) using the orthogonalization algorithm; 5) replacement of the logical elements with probabilistic elements in the ODNF, yielding a reliability estimation polynomial and a quantitative reliability value; 6) calculation of the “weights” of the elements of the system. Using the logical-probabilistic methods, models, and algorithms discussed in the paper, special software was created by means of which a quantitative assessment of the reliability of systems with a complex structure is produced. As a result, structural analysis of systems, research, and the design of systems with optimal structure are carried out.
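Stages 2-5 above amount to turning minimal path sets into a probability polynomial. The sketch below reaches the same result via inclusion-exclusion over the path sets (the paper instead orthogonalizes the DNF, which yields the same polynomial), and computes a Birnbaum-style element “weight” as the partial derivative of system reliability with respect to a component's reliability. The tiny two-component examples are illustrative, not the paper's web application.

```python
from itertools import combinations
from math import prod

def reliability(paths, p):
    """System reliability from minimal path sets via inclusion-exclusion.
    paths: list of sets of component indices; p: dict index -> component reliability."""
    r = 0.0
    for k in range(1, len(paths) + 1):
        for combo in combinations(paths, k):
            union = set().union(*combo)
            r += (-1) ** (k + 1) * prod(p[i] for i in union)
    return r

def birnbaum_weight(paths, p, i):
    """Element 'weight': dR/dp_i = R(p_i = 1) - R(p_i = 0)."""
    return reliability(paths, {**p, i: 1.0}) - reliability(paths, {**p, i: 0.0})

p = {0: 0.9, 1: 0.9}
r_parallel = reliability([{0}, {1}], p)  # redundant pair: either path suffices
r_series = reliability([{0, 1}], p)      # single path: both must work
w0 = birnbaum_weight([{0}, {1}], p, 0)
```

For p = 0.9 the parallel system gives 0.99 and the series system 0.81, matching the familiar closed forms 1 - (1 - p)^2 and p^2.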

Keywords: complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability of systems, “weights” of elements

Procedia PDF Downloads 54
4687 Evaluating the Terrace Benefits on Erosion in a Terraced Agricultural Watershed for Sustainable Soil and Water Conservation

Authors: Sitarrine Thongpussawal, Hui Shao, Clark Gantzer

Abstract:

Terracing is a conservation practice used throughout the world to reduce erosion for soil and water conservation, but it is relatively expensive. A modification of the Soil and Water Assessment Tool (called SWAT-Terrace, or SWAT-T) explicitly aims to improve the simulation of the hydrological processes of erosion from terraces. SWAT-T simulates erosion from terraces by separating each terrace into three segments instead of evaluating the entire terrace. The objective of this work is to evaluate the terrace benefits on erosion from the Goodwater Creek Experimental Watershed (GCEW) at the watershed and Hydrologic Response Unit (HRU) scales using SWAT-T. The HRU is the smallest spatial unit of the model, which lumps all similar land uses, soils, and slopes within a sub-basin. The SWAT-T model was parameterized for slope length, slope steepness, and the empirical Universal Soil Loss Equation support practice factor for the three terrace segments. Data measured at the watershed outlet from 1993-2010 were used for model calibration and validation. The SWAT-T calibration showed good agreement between measured and simulated erosion at the monthly time step, but the validation performed poorly, probably because large storms in spring 2002 prevented planting and caused the scheduling of actual field operations to be poorly simulated. To estimate the terrace benefits on erosion, the model was run with and without terraces. SWAT-T indicated a significant ~3% reduction in erosion (Pr < 0.01) at the watershed scale and a ~12% reduction at the HRU scale. These results indicate that terraces have advantages in reducing erosion from terraced agricultural watersheds, and SWAT-T can be used in the evaluation of erosion to sustainably conserve soil and water.

Keywords: erosion, modeling, terraces, SWAT

Procedia PDF Downloads 188
4686 Age-Associated Seroprevalence of Toxoplasma gondii in 10892 Pregnant Women in Senegal between 2016 and 2019

Authors: Ndiaye Mouhamadou, Seck Abdoulaye, Ndiaye Babacar, Diallo Thierno Abdoulaye, Diop Abdou, Seck Mame Cheikh, Diongue Khadim, Badiane Aida Sadikh, Diallo Mamadou Alpha, Kouedvidjin Ekoué, Ndiaye Daouda

Abstract:

Background: Toxoplasmosis is a parasitic disease with high rates of gestational and congenital infection worldwide and is therefore considered a public health problem and a neglected disease. The aim of this study was to determine the seroprevalence of toxoplasmosis in pregnant women referred to the medical biology laboratory of the Pasteur Institute of Dakar (Senegal) between January 2014 and December 2019. Methodology: This was a cross-sectional, descriptive, retrospective study of 10892 blood samples from pregnant women aged 16 to 46 years. The Architect Toxo IgG/IgM assay from Abbott Laboratories, a chemiluminescent microparticle immunoassay (CMIA), was used for the quantitative determination of antibodies against Toxoplasma gondii in human serum. Results: In total, over the period from January 2014 to December 2019, 10892 requests for toxoplasmosis serology in pregnant women were included. The mean age of the patients was 31.2 ± 5.72 years. The overall seroprevalence of T. gondii in pregnant women was estimated to be 28.9% [28.0-29.7]. In a multivariate logistic regression analysis, after adjustment for covariates such as study period, pregnant women aged 36-46 years were more likely to carry IgG antibodies to T. gondii than pregnant women younger than 36 years. Conclusion: T. gondii seroprevalence was significantly higher in pregnant women older than 36 years, leaving younger women more susceptible to primary T. gondii infection and their babies to congenital toxoplasmosis. There is a need to increase awareness of the risk factors for toxoplasmosis and its different modes of transmission in these high-risk groups, supported by epidemiologic studies of the distribution of risk factors among pregnant women and women of childbearing age.

Keywords: toxoplasmosis, pregnancy, seroprevalence, Senegal

Procedia PDF Downloads 118
4685 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation

Authors: Miguel Contreras, David Long, Will Bachman

Abstract:

Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, these imaging experiments face challenges that can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and a limitation on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline that predicts the morphology of cellular components for virtual-cell generation from fluorescence cell-membrane confocal z-stacks. Methods: Registered confocal z-stacks of the nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained by fluorescence confocal microscopy and normalized through a software pipeline so that each image had a mean pixel intensity of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels in unlabeled transmitted-light microscopy images, was trained on this set of normalized z-stacks on a single-CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict nuclear morphology using normalized cell-membrane fluorescence images as input, and predictions were compared to the ground-truth fluorescence nuclei images. Results: After one week of training using one cell-membrane z-stack (20 images) and the corresponding nuclei label, the algorithm produced qualitatively good predictions on the training set, accurately predicting nuclei locations and shapes when fed only fluorescence membrane images. Training sessions with improved membrane image quality, in which the lining and shape of the membrane clearly showed the boundaries of each cell, proportionally improved the nuclei predictions, reducing errors relative to the ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need for multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict other labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models.
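The normalization step described in the Methods (rescaling each image in a z-stack to a mean pixel intensity of 0.5) can be sketched as below. A simple multiplicative rescaling is assumed, with no clipping; the study's exact normalization procedure is not specified, and the stack here is random data standing in for confocal slices.

```python
import numpy as np

def normalize_stack(zstack, target_mean=0.5):
    """Rescale every image in a z-stack so its mean intensity equals target_mean.
    zstack: array of shape (n_slices, height, width)."""
    zstack = zstack.astype(np.float64)
    means = zstack.mean(axis=(1, 2), keepdims=True)
    return zstack * (target_mean / means)

rng = np.random.default_rng(7)
stack = rng.uniform(0, 255, size=(20, 64, 64))  # hypothetical 20-slice stack
norm = normalize_stack(stack)
```

Per-slice normalization like this removes illumination differences between slices before the stacks are fed to the label-prediction network.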

Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models

Procedia PDF Downloads 193
4684 Effect of Cumulative Dissipated Energy on Short-Term and Long-Term Outcomes after Uncomplicated Cataract Surgery

Authors: Palaniraj Rama Raj, Himeesh Kumar, Paul Adler

Abstract:

Purpose: To investigate the effect of ultrasound energy, expressed as cumulative dissipated energy (CDE), on short- and long-term outcomes after uncomplicated cataract surgery by phacoemulsification. Methods: In this single-surgeon, two-center retrospective study, non-glaucomatous participants who underwent uncomplicated cataract surgery were investigated. Best-corrected visual acuity (BCVA) and intraocular pressure (IOP) were measured at three time points: pre-operative, Day 1, and ≥1 month. Anterior chamber (AC) inflammation and corneal oedema (CO) were assessed at two time points: pre-operative and Day 1. Short-term changes (Day 1) in BCVA, IOP, AC inflammation, and CO, and long-term changes (≥1 month) in BCVA and IOP, were evaluated as a function of CDE using a multivariate multiple linear regression model, adjusting for age, gender, cataract type and grade, preoperative IOP, preoperative BCVA, and duration of long-term follow-up. Results: 110 eyes from 97 non-glaucomatous participants were analysed; 60 (54.55%) participants were female and 50 (45.45%) were male, with a mean (±SD) age of 73.40 (±10.96) years. Higher CDE counts were strongly associated with higher grades of nuclear sclerotic cataract (p < 0.001) and posterior subcapsular cataract (p < 0.036). There was no significant association between CDE counts and cortical cataracts. CDE counts also correlated positively with Day 1 CO (p < 0.001), but not with Day 1 AC inflammation. Short-term and long-term changes in post-operative IOP showed no significant associations with CDE counts (all p > 0.05). Although there was no significant correlation between CDE counts and short-term changes in BCVA, higher CDE counts were strongly associated with greater improvements in long-term BCVA (p = 0.011). Conclusion: Although higher CDE counts were strongly associated with higher grades of Day 1 postoperative CO, there appeared to be no detriment to long-term BCVA. Correspondingly, the strong positive correlation between CDE counts and long-term BCVA likely reflects the greater severity of the underlying cataract type and grade. CDE counts were not associated with short-term or long-term postoperative changes in IOP.

Keywords: cataract surgery, phacoemulsification, cumulative dissipated energy, CDE, surgical outcomes

Procedia PDF Downloads 171
4683 Multiphase Equilibrium Characterization Model for Hydrate-Containing Systems Based on a Trust-Region Method Non-Iterative Solving Approach

Authors: Zhuoran Li, Guan Qin

Abstract:

A robust and efficient compositional equilibrium characterization model for hydrate-containing systems is required, especially for time-critical simulations such as subsea pipeline flow assurance analysis and compositional simulation in hydrate reservoirs. A multiphase flash calculation framework, which combines a Gibbs energy minimization function with the cubic-plus-association (CPA) equation of state (EoS), is developed to describe the highly non-ideal phase behavior of hydrate-containing systems. A non-iterative eigenvalue-problem approach for the trust-region sub-problem is selected to guarantee efficiency. The developed flash model is based on the state-of-the-art objective function proposed by Michelsen to minimize the Gibbs energy of the multiphase system. A hydrate-containing system always contains polar components (such as water and hydrate inhibitors), which introduce hydrogen bonds that influence phase behavior; thus, the CPA EoS is utilized to compute the thermodynamic parameters. The solid solution theory proposed by van der Waals and Platteeuw is applied to represent the hydrate phase parameters. The trust-region method, combined with the non-iterative eigenvalue-problem approach to the trust-region sub-problem, is utilized to ensure fast convergence. The accuracy of the developed multiphase flash model is validated against three available models (one published and two commercial). Hundreds of published equilibrium measurements for hydrate-containing systems were collected to serve as the reference set for the accuracy test. The comparison shows that our model outperforms two of the models and has calculation accuracy comparable to CSMGem. An efficiency test has also been carried out. Because the trust-region method determines the direction and size of the optimization step simultaneously, fast solution progress is obtained. The comparison results show that fewer iterations are needed to optimize the objective function with trust-region methods than with line-search methods, and the non-iterative eigenvalue-problem approach computes faster than the conventional iterative algorithm for the trust-region sub-problem, further improving calculation efficiency. A new thermodynamic framework for the multiphase flash of hydrate-containing systems has thus been constructed. Sensitivity analysis and numerical experiments have been carried out to demonstrate the accuracy and efficiency of the model, and it is simple to implement on top of the thermodynamic frameworks currently used in the oil and gas industry.
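For orientation, the trust-region sub-problem referenced above is: minimize gᵀp + ½ pᵀHp subject to ||p|| <= Δ. The sketch below solves it via eigen-decomposition of H plus a one-dimensional bisection on the Lagrange multiplier, ignoring the so-called hard case; this is a classical textbook construction, not the authors' non-iterative eigenvalue formulation, which is more involved.

```python
import numpy as np

def trust_region_step(H, g, delta, tol=1e-12):
    """Minimize g.p + 0.5 p.H.p subject to ||p|| <= delta (H symmetric).
    Classical eigen-decomposition approach; the 'hard case' is not handled."""
    evals, Q = np.linalg.eigh(H)
    gq = Q.T @ g
    # If H is positive definite and the Newton step fits inside, take it.
    if evals[0] > 0:
        p = Q @ (-gq / evals)
        if np.linalg.norm(p) <= delta:
            return p
    # Otherwise the minimizer lies on the boundary: (H + lam*I) p = -g,
    # with lam chosen so that ||p(lam)|| = delta.
    def step_norm(lam):
        return np.linalg.norm(gq / (evals + lam))
    lo = max(0.0, -evals[0]) + 1e-12
    hi = lo + 1.0
    while step_norm(hi) > delta:   # bracket the root
        hi *= 2.0
    while hi - lo > tol:           # bisection on lam
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if step_norm(mid) > delta else (lo, mid)
    lam = 0.5 * (lo + hi)
    return Q @ (-gq / (evals + lam))

# Example: Newton step [2, 0] lies outside delta = 1, so the constrained
# solution is pulled back to the boundary at [1, 0].
H = np.array([[2.0, 0.0], [0.0, 2.0]])
g = np.array([-4.0, 0.0])
p = trust_region_step(H, g, delta=1.0)
```

A non-iterative variant replaces the bisection with a single (generalized) eigenvalue computation, which is the efficiency gain the abstract refers to.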

Keywords: equation of state, hydrates, multiphase equilibrium, trust-region method

Procedia PDF Downloads 160
4682 The Impact of Cryptocurrency Classification on Money Laundering: Analyzing the Preferences of Criminals for Stable Coins, Utility Coins, and Privacy Tokens

Authors: Mohamed Saad, Huda Ismail

Abstract:

The purpose of this research is to examine the impact of cryptocurrency classification on money laundering crimes and to analyze how the preferences of criminals differ according to the type of digital currency used. Specifically, we aim to explore the roles of stablecoins, utility coins, and privacy tokens in facilitating or hindering money laundering activities and to identify the key factors that influence criminals' choices among these cryptocurrencies. To achieve our research objectives, we used a dataset of the most highly traded cryptocurrencies (32 currencies) published on CoinMarketCap for 2022. In addition to conducting a comprehensive review of the existing literature on cryptocurrency and money laundering, with a focus on stablecoins, utility coins, and privacy tokens, we conducted several multivariate analyses. Our study reveals that the classification of a cryptocurrency plays a significant role in money laundering activities, as criminals tend to prefer certain types of digital currencies over others, depending on their specific needs and goals. Specifically, we found that stablecoins are more commonly used in money laundering due to their relatively stable value and low volatility, which makes them less risky to hold and transfer. Utility coins, on the other hand, are less frequently used in money laundering due to their lack of anonymity and limited liquidity. Finally, privacy tokens, such as Monero and Zcash, are increasingly becoming a preferred choice among criminals due to their high degree of privacy and untraceability. In summary, our study highlights the importance of understanding the nuances of cryptocurrency classification in the context of money laundering and provides insights into the preferences of criminals in using digital currencies for illegal activities. Based on our findings, we recommend that policymakers address the potential misuse of cryptocurrencies for money laundering. By implementing measures to regulate stablecoins, strengthening cross-border cooperation, and fostering public-private partnerships, policymakers can help prevent and detect money laundering activities involving digital currencies.

Keywords: crime, cryptocurrency, money laundering, tokens

Procedia PDF Downloads 76
4681 Belief-Based Games: An Appropriate Tool for Uncertain Strategic Situations

Authors: Saied Farham-Nia, Alireza Ghaffari-Hadigheh

Abstract:

Game theory is a mathematical tool for studying the behavior of rational and strategic decision-makers; it analyzes the equilibria that exist in conflict-of-interest situations and provides appropriate mechanisms for cooperation between two or more players. Game theory is applicable to any strategic, conflict-of-interest situation in politics, management, economics, sociology, and other fields. Real-world decisions are usually made under indeterminacy, and the players often lack information about the other players' payoffs, or even their own, which leads to games in uncertain environments. When historical data for estimating the distributions of decision parameters are unavailable, we may have no choice but to use expert belief degrees, which represent the strength with which we believe an event will happen. To deal with belief degrees, we use uncertainty theory, introduced and developed by Liu on the basis of the normality, duality, subadditivity, and product axioms, to model personal belief degrees. The personal belief degree depends heavily on the personal knowledge concerning an event, and when that knowledge changes, the belief degree changes too. Uncertainty theory is not only theoretically self-consistent but also, among competing theories, well suited for modeling belief degrees in practical problems. In this work, we first reintroduce the expected utility function in an uncertain environment according to the axioms of uncertainty theory in order to extract payoffs, and then employ the Nash equilibrium to investigate solutions. For more practical issues, the Stackelberg leader-follower game and the Bertrand game are discussed as benchmark models. Compared to existing articles on similar topics, the game models and solution concepts introduced in this article can serve as a framework for problems in uncertain competitive situations based on experienced experts' belief degrees.
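The Nash-equilibrium solution concept invoked above can be illustrated for a finite two-player game by exhaustive best-response checking. The sketch below uses deterministic payoffs (the prisoner's dilemma) rather than the paper's uncertain expected utilities, where each payoff entry would instead be an uncertain expected value.

```python
from itertools import product

def pure_nash(payoff_a, payoff_b):
    """Pure-strategy Nash equilibria of a bimatrix game.
    payoff_a[i][j], payoff_b[i][j]: payoffs when row plays i and column plays j."""
    rows, cols = len(payoff_a), len(payoff_a[0])
    eq = []
    for i, j in product(range(rows), range(cols)):
        a_best = all(payoff_a[i][j] >= payoff_a[k][j] for k in range(rows))
        b_best = all(payoff_b[i][j] >= payoff_b[i][l] for l in range(cols))
        if a_best and b_best:  # neither player gains by deviating unilaterally
            eq.append((i, j))
    return eq

# Prisoner's dilemma: strategy 0 = cooperate, 1 = defect.
A = [[3, 0], [5, 1]]
B = [[3, 5], [0, 1]]
equilibria = pure_nash(A, B)
```

The unique equilibrium is mutual defection, (1, 1), even though mutual cooperation would pay both players more; in the belief-based setting the same check is applied to uncertain expected utilities.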

Keywords: game theory, uncertainty theory, belief degree, uncertain expected value, Nash equilibrium

Procedia PDF Downloads 403
4680 Investigation of the Shear Strength and Dilative Behavior of Coarse-Grained Samples Using Laboratory Tests and a Machine Learning Technique

Authors: Ehsan Mehryaar, Seyed Armin Motahari Tabari

Abstract:

Coarse-grained soils are well known and commonly used in a wide range of geotechnical projects, including high earth dams and embankments, for their high shear strength. The most important engineering property of these soils is the friction angle, which represents the interlocking between soil particles and is applied widely in designing and constructing such earth structures. The friction angle and dilative behavior of coarse-grained soils can be estimated from empirical correlations with in-situ tests and physical properties of the soil, or measured directly in the laboratory by performing direct shear or triaxial tests. Unfortunately, large-scale testing is difficult, challenging, and expensive, and is not possible in most soil mechanics laboratories. It is therefore common to remove the large particles before testing, which cannot be counted as an exact estimation of the parameters and behavior of the original soil. This paper describes a new methodology that scales the particle size distribution of a well-graded gravel sample down to a smaller-scale sample that can be tested in an ordinary direct shear apparatus, in order to estimate the stress-strain behavior, friction angle, and dilative behavior of the original coarse-grained soil, considering its confining pressure and relative density, using a machine learning method. A total of 72 direct shear tests were performed at 6 different sizes, 3 different confining pressures, and 4 different relative densities. The Multivariate Adaptive Regression Splines (MARS) technique was used to develop an equation to predict shear strength and dilative behavior based on the size distribution of coarse-grained soil particles. An uncertainty analysis was also performed to examine the reliability of the proposed equation.
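MARS builds piecewise-linear models from hinge functions max(0, x - t) and max(0, t - x). The stripped-down sketch below uses a fixed set of knots with ordinary least squares on synthetic data; the real MARS algorithm selects knots and interaction terms adaptively, and the "friction response" here is invented purely for illustration.

```python
import numpy as np

def hinge_features(x, knots):
    """Design matrix of MARS-style hinge basis functions plus an intercept."""
    cols = [np.ones_like(x)]
    for t in knots:
        cols.append(np.maximum(0.0, x - t))  # right hinge
        cols.append(np.maximum(0.0, t - x))  # left hinge
    return np.column_stack(cols)

# Hypothetical piecewise-linear response in a normalized predictor x
# (e.g., relative density), with a kink at x = 0.5.
x = np.linspace(0.0, 1.0, 200)
y = 2.0 * np.maximum(0.0, x - 0.5) + 3.0 * np.maximum(0.0, 0.5 - x)

X = hinge_features(x, knots=[0.25, 0.5, 0.75])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
residual = np.max(np.abs(X @ coef - y))
```

Because the true function lies exactly in the span of the basis (the knot at 0.5 is included), the fit is exact up to floating-point error; with laboratory data, MARS would instead search for the knots that minimize a generalized cross-validation score.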

Keywords: MARS, coarse-grained soil, shear strength, uncertainty analysis

Procedia PDF Downloads 153
4679 Day-Ahead and Intraday Electricity Demand Forecasting in the Himachal Region Using Machine Learning

Authors: Milan Joshi, Harsh Agrawal, Pallaw Mishra, Sanand Sule

Abstract:

Predicting electricity usage is a crucial aspect of organizing and controlling sustainable energy systems. The task of forecasting electricity load is intricate and requires significant effort due to the combined impact of social, economic, technical, environmental, and cultural factors on power consumption in communities. As a result, it is important to create strong models that can handle the significantly non-linear and complex nature of the task. The objective of this study is to create and compare machine learning techniques for predicting electricity load both day ahead and intraday, taking into account factors such as meteorological data and social events, including holidays and festivals. The proposed methods are LightGBM, FBProphet, and a combination of FBProphet and LightGBM for day-ahead forecasting, and matrix-profile motifs (via Stumpy), based on Mueen's algorithm for similarity search, for intraday forecasting. We utilize these techniques to predict electricity usage during normal days and social events in the Himachal region, and assess their performance by measuring MSE, RMSE, and MAPE values. The outcomes demonstrate that the combination of FBProphet and LightGBM is the most accurate for day-ahead forecasting and the motif-based method for intraday forecasting, surpassing the other models in terms of MAPE, RMSE, and MSE. Moreover, the FBProphet-LightGBM approach proves highly effective in forecasting electricity load during social events, exhibiting precise day-ahead predictions. In summary, the proposed electricity forecasting techniques display excellent performance in predicting electricity usage during normal days and special events in the Himachal region.
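A day-ahead load forecast of the kind described above can be sketched with calendar and lag features feeding a gradient-boosted tree model. The example below uses scikit-learn's GradientBoostingRegressor as a stand-in for LightGBM on synthetic daily load data with weekly seasonality (the data, features, and thresholds are all illustrative assumptions, not the study's setup).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_days = 500
day = np.arange(n_days)
# Synthetic daily load (MW): trend + weekly seasonality + noise.
load = 100 + 0.02 * day + 15 * np.sin(2 * np.pi * day / 7) + rng.normal(0, 2, n_days)

# Day-ahead features for day t: day of week, lag-1 load, lag-7 load.
X = np.column_stack([day[7:] % 7, load[6:-1], load[:-7]])
y = load[7:]

split = len(y) - 60  # hold out the last 60 days as a test window
model = GradientBoostingRegressor(random_state=0).fit(X[:split], y[:split])
pred = model.predict(X[split:])
mape = np.mean(np.abs((y[split:] - pred) / y[split:]))
```

In practice the feature set would be extended with meteorological variables and holiday/festival flags, which is where the combined FBProphet-LightGBM approach earns its advantage.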

Keywords: feature engineering, FBProphet, LightGBM, MASS, Motifs, MAPE

Procedia PDF Downloads 59
4678 The Display of Environmental Information to Promote Energy Saving Practices: Evidence from a Massive Behavioral Platform

Authors: T. Lazzarini, M. Imbiki, P. E. Sutter, G. Borragan

Abstract:

While several strategies, such as the development of more efficient appliances, the financing of insulation programs, or the rolling out of smart meters, represent promising tools to reduce future energy consumption, their implementation relies on people’s decisions and actions. Likewise, engaging with consumers to reshape their behavior has been shown to be another important way to reduce energy usage. For these reasons, integrating the human factor into the energy transition has become a major objective for researchers and policymakers. Digital education programs based on tangible and gamified user interfaces have become a new tool with the potential to reduce energy consumption. The B2020 program, developed by the firm “Économie d’Énergie SAS”, offers a digital platform to encourage pro-environmental behavior change among employees and citizens. The platform integrates 160 eco-behaviors for saving energy and water and reducing waste and CO2 emissions. A total of 13,146 citizens have used the tool so far to declare the range of eco-behaviors they adopt in their daily lives. The present work builds on this database to identify the potential impact of the adopted energy-saving behaviors (n=62) on reducing energy use in buildings. To this end, behaviors were classified into three categories according to the nature of their implementation: Eco-habits (e.g., turning off the lights), Eco-actions (e.g., installing low-carbon technology such as LED light bulbs), and Home-Refurbishments (e.g., wall insulation or double-glazed, energy-efficient windows). General linear models (GLM) disclosed a significantly higher frequency of Eco-habits compared with the number of Home-Refurbishments realized by the platform users. While this might be explained in part by the high financial costs associated with home renovation works, it contrasts with the up to three times larger energy savings that such refurbishments can accomplish. Furthermore, multiple regression models failed to disclose the expected relationship between energy savings and the frequency of adopted eco-behaviors, suggesting that energy-related practices are not necessarily driven by the corresponding energy savings. Finally, our results also suggested that people adopting more Eco-habits and Eco-actions were more likely to engage in Home-Refurbishments. Altogether, these results fit well with a growing body of scientific research showing that energy-related practices do not necessarily maximize utility, as postulated by traditional economic models, and suggest that other variables might be triggering them. Promoting home refurbishments could benefit from the adoption of complementary energy-saving habits and actions.

Keywords: energy-saving behavior, human performance, behavioral change, energy efficiency

Procedia PDF Downloads 181
4677 Sustainable Project Management: Driving the Construction Industry Towards Sustainable Developmental Goals

Authors: Francis Kwesi Bondinuba, Seidu Abdullah, Mewomo Cecilia, Opoku Alex

Abstract:

Purpose: The purpose of this research is to develop a framework for understanding how sustainable project management contributes to the construction industry's pursuit of sustainable development goals. Study design/methodology/approach: The study employed a theoretical methodology to review existing theories and models that support Sustainable Project Management (SPM) in the construction industry. In addition, a comprehensive review of the current literature on SPM was conducted to provide a thorough grounding for the study. Findings: Sustainable Project Management (SPM) practices, including stakeholder engagement and collaboration, resource efficiency, waste management, risk management, and resilience, play a crucial role in achieving the Sustainable Development Goals (SDGs) within the construction industry. Conclusion: Adopting Sustainable Project Management (SPM) practices in the Ghanaian construction industry enhances social inclusivity by engaging communities and creating job opportunities. The adoption of these practices, however, faces significant challenges, including a lack of awareness and understanding, insufficient regulatory frameworks, financial constraints, and a shortage of skilled professionals. Recommendation: There should be a comprehensive approach to project planning and execution that includes stakeholders such as local communities, government bodies, and environmental organisations, the use of green building materials and technologies, and the implementation of effective waste management strategies, all of which will support the achievement of the SDGs in Ghana's construction industry. Originality/value: This paper adds to the current literature by presenting the various theories and models in Sustainable Project Management (SPM) and a detailed review of how SPM contributes to the achievement of the Sustainable Development Goals (SDGs) in the Ghanaian construction industry.

Keywords: sustainable development, sustainable development goals, construction industry, Ghana, sustainable project management

Procedia PDF Downloads 5
4676 Vulnerability Assessment of Vertically Irregular Structures during Earthquake

Authors: Pranab Kumar Das

Abstract:

Vulnerability assessment of buildings with irregularity in the vertical direction has been carried out in this study. The construction of vertically irregular buildings is increasing amid fast urbanization in developing countries, including India. During two reconnaissance-based surveys performed after the Nepal earthquake of 2015 and the Imphal (India) earthquake of 2016, it was observed that many structures were damaged due to their vertically irregular configuration. Such irregular buildings must nevertheless perform safely under seismic excitation. There is therefore an urgent need to establish the actual vulnerability of irregular structures, so that remedial measures can be taken to protect them during natural hazards such as earthquakes. This assessment will be very helpful for India as well as for other developing countries. A substantial body of research has addressed the vulnerability of plan-asymmetric buildings; for vertically irregular buildings, however, much less effort has been devoted to determining their vulnerability during an earthquake. Irregularity in the vertical direction may be caused by an irregular distribution of mass or stiffness, or by a geometrically irregular configuration. Detailed analysis of such structures, particularly non-linear/pushover analysis for performance-based design, is a challenging task. The present paper considers a number of models of irregular structures. Building models made of both reinforced concrete and brick masonry are considered for the sake of generality. The analyses are performed with the help of both finite element and computational methods. The study, as a whole, may help to arrive at reasonably good estimates of, and insight into, the fundamental and other natural periods of such vertically irregular structures. Studies of ductility demand, storey drift, and seismic response help to identify the locations of critical stress concentration.
In summary, this paper is a humble step towards understanding the vulnerability of vertically irregular structures and framing guidelines for them.
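The link between vertical stiffness irregularity and the natural periods discussed above can be illustrated with a minimal sketch (hypothetical masses and stiffnesses, not values from the paper): a two-storey shear frame in which a soft ground storey lengthens the fundamental period.

```python
import math

def fundamental_period(m, k1, k2):
    """Fundamental natural period of a 2-DOF shear frame.

    Lumped mass m at each storey; k1 and k2 are the ground- and
    upper-storey lateral stiffnesses, giving the stiffness matrix
    [[k1 + k2, -k2], [-k2, k2]] with M = m * I.
    """
    # Characteristic equation in x = omega^2:
    #   m^2 x^2 - m (k1 + 2 k2) x + k1 k2 = 0
    a, b, c = m * m, -m * (k1 + 2 * k2), k1 * k2
    x = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)  # lower root
    return 2 * math.pi / math.sqrt(x)

# Hypothetical regular frame vs. a soft-storey (vertically irregular) frame
regular = fundamental_period(m=1.0e4, k1=2.0e6, k2=2.0e6)
soft_storey = fundamental_period(m=1.0e4, k1=0.5e6, k2=2.0e6)
print(f"regular T1 = {regular:.3f} s, soft-storey T1 = {soft_storey:.3f} s")
```

Reducing the ground-storey stiffness lengthens the fundamental period and concentrates drift demand in the soft storey, which is one reason such configurations warrant the vulnerability assessment described in the abstract.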

Keywords: ductility, stress concentration, vertically irregular structure, vulnerability

Procedia PDF Downloads 220