Search results for: drying models
4801 Cross-Sectional Study of Critical Parameters on RSET and Decision-Making of At-Risk Groups in Fire Evacuation
Authors: Naser Kazemi Eilaki, Ilona Heldal, Carolyn Ahmer, Bjarne Christian Hagen
Abstract:
Elderly people and people with disabilities are recognized as at-risk groups when it comes to egress and travel from a hazard zone to a safe place. A disability can negatively influence a person's escape time, and this becomes even more important when members of this target group live alone. While earlier studies have frequently addressed quantitative measurements of at-risk groups' physical characteristics (e.g., their travel speed), this paper considers the influence of at-risk groups' characteristics on their decision-making and on determining better escape routes. Most evacuation models are based on mapping people's movement and behaviour onto a timeline and summing the times for common activity types. Timeline models usually estimate the required safe egress time (RSET) as the sum of four timespans: detection, alarm, premovement, and movement time, and compare it with the available safe egress time (ASET) to determine what influences the margin of safety. This paper presents a cross-sectional study for identifying the most critical items affecting RSET and people's decision-making, with the possibility of including safety knowledge regarding people with physical or cognitive functional impairments. The results will contribute to increased knowledge on considering at-risk groups and disabilities when designing and developing safe escape routes. The expected results can be an asset in predicting the probabilistic behavioural patterns of at-risk groups and in defining a framework for understanding how stakeholders can consider various disabilities when determining the margin of safety for a safe escape route.
Keywords: fire safety, evacuation, decision-making, at-risk groups
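The timeline decomposition described in this abstract (RSET as a sum of four phases, compared against ASET) can be sketched in a few lines; the example times below are hypothetical, not values from the study.

```python
def required_safe_egress_time(detection, alarm, premovement, movement):
    """RSET as the sum of the four timeline phases (seconds)."""
    return detection + alarm + premovement + movement

def margin_of_safety(aset, rset):
    """Positive margin means occupants escape before conditions become untenable."""
    return aset - rset

# Hypothetical example: an at-risk occupant with longer premovement
# and movement times than an able-bodied occupant.
rset = required_safe_egress_time(detection=30, alarm=15, premovement=120, movement=90)
margin = margin_of_safety(aset=300, rset=rset)
```

For at-risk groups the premovement and movement terms typically dominate, which is why the paper focuses on decision-making rather than travel speed alone.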
Procedia PDF Downloads 105

4800 Nuclear Fuel Safety Threshold Determined by Logistic Regression Plus Uncertainty
Authors: D. S. Gomes, A. T. Silva
Abstract:
Analysis of the uncertainty quantification related to nuclear safety margins in nuclear reactors is important for preventing future radioactive accidents. Nuclear fuel performance codes typically determine tolerance levels with traditional deterministic models, which produce acceptable results for burnup cycles under 62 GWd/MTU. The behavior of nuclear fuel can be simulated by applying a series of material properties under irradiation together with physics models to calculate the safety limits. In this study, theoretical predictions of nuclear fuel failure under transient conditions investigate extended radiation cycles at 75 GWd/MTU, considering the behavior of fuel rods in light-water reactors under reactivity accident conditions. The fuel pellet can melt due to the rapid increase of reactivity during a transient. Large power excursions in the reactor are the subject of interest, leading to a treatment known as the Fuchs-Hansen model. The point kinetics neutron equations show the characteristics of non-linear differential equations. In this investigation, multivariate logistic regression is employed for probabilistic forecasting of fuel failure. A comparison of computational simulations and experimental results showed acceptable agreement. The experiments used pre-irradiated fuel rods subjected to a rapid energy pulse, which reproduces fuel behavior during a nuclear accident. The propagation of uncertainty uses Wilks' formulation. The variables chosen as essential to failure prediction were the fuel burnup, the applied peak power, the pulse width, the oxidation layer thickness, and the cladding type.
Keywords: logistic regression, reactivity-initiated accident, safety margins, uncertainty propagation
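As a rough illustration of the approach described above, the sketch below shows how a fitted multivariate logistic model would map the five named predictors to a failure probability. The coefficients are placeholders invented for the example, not the study's estimates.

```python
import math

def failure_probability(burnup, peak_power, pulse_width, oxide_thickness,
                        cladding, coef):
    """Logistic model: P(failure) = 1 / (1 + exp(-(b0 + sum(b_i * x_i)))).
    `cladding` is a 0/1 dummy variable; all coefficients are illustrative
    placeholders, not fitted values from the study."""
    z = (coef["intercept"]
         + coef["burnup"] * burnup
         + coef["peak_power"] * peak_power
         + coef["pulse_width"] * pulse_width
         + coef["oxide"] * oxide_thickness
         + coef["cladding"] * cladding)
    return 1.0 / (1.0 + math.exp(-z))

# Placeholder coefficients, chosen only so the example is well-behaved.
coef = {"intercept": -8.0, "burnup": 0.06, "peak_power": 0.02,
        "pulse_width": -0.01, "oxide": 0.05, "cladding": 0.3}
p = failure_probability(burnup=75, peak_power=120, pulse_width=30,
                        oxide_thickness=10, cladding=1, coef=coef)
```

In the actual study the coefficients would be fitted to the pulse-test data, and Wilks' formulation would bound the uncertainty of the predicted probability.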
Procedia PDF Downloads 292

4799 Designing a Model to Increase the Flow of Circular Economy Startups Using a Systemic and Multi-Generational Approach
Authors: Luís Marques, João Rocha, Andreia Fernandes, Maria Moura, Cláudia Caseiro, Filipa Figueiredo, João Nunes
Abstract:
The implementation of circularity strategies other than recycling, such as reducing the amount of raw material and reusing or sharing existing products, remains marginal. The European Commission announced that the transition towards a more circular economy could lead to the net creation of about 700,000 jobs in Europe by 2030, through additional labour demand from recycling plants, repair services and other circular activities. Efforts to create new circular business models based on completely circular processes, as opposed to linear ones, have increased considerably in recent years. To create a societal Circular Economy transition model, it is necessary to include innovative solutions, in which startups play a key role. Early-stage startups with new business models based on circular processes often struggle to create enough impact. The StartUp Zero Program designs a model and approach to increase the flow of startups in the Circular Economy field, focusing on systemic decision analysis and a multi-generational approach. It uses Multi-Criteria Decision Analysis to support a decision-making tool built on a combination of the Analytic Hierarchy Process and Multi-Attribute Value Theory. We define principles, criteria and indicators for evaluating startup prerogatives, quantifying the evaluation process in a single result. Additionally, this 16-month entrepreneurship program involved more than 2400 young people, aged 14 to 23, in more than 200 interaction activities.
Keywords: circular economy, entrepreneurship, startups, multi-criteria decision analysis
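The Analytic Hierarchy Process step mentioned above can be sketched with the common row-geometric-mean approximation to the priority weights; the pairwise comparison matrix here is invented for illustration, not taken from the program's criteria.

```python
from math import prod

def ahp_weights(pairwise):
    """Approximate AHP priority weights: the geometric mean of each row
    of the pairwise comparison matrix, normalized to sum to 1."""
    n = len(pairwise)
    geo = [prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(geo)
    return [g / total for g in geo]

# Hypothetical 3-criterion comparison (e.g. circularity impact,
# team maturity, market potential): entry [i][j] says how much more
# important criterion i is than criterion j.
matrix = [
    [1.0,   3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 0.5, 1.0],
]
weights = ahp_weights(matrix)
```

The resulting weights can then feed a Multi-Attribute Value aggregation, multiplying each startup's criterion scores by these weights and summing into the single evaluation result the abstract describes.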
Procedia PDF Downloads 105

4798 Time-Dependent Modulation on Depressive Responses and Circadian Rhythms of Corticosterone in Models of Melatonin Deficit
Authors: Jana Tchekalarova, Milena Atanasova, Katerina Georgieva
Abstract:
Melatonin deficit can cause a disturbance in emotional status and in the circadian rhythms of the endocrine system. Both pharmacological and alternative approaches are applied for correcting dysfunctions driven by changes in the circadian dynamics of many physiological indicators. In the present study, we tested and compared the beneficial effects of agomelatine (40 mg/kg, i.p., for 3 weeks) and endurance training on depressive behavior in two rat models of melatonin deficit. The role of disturbed circadian rhythms of plasma melatonin and corticosterone secretion in the mechanism of these treatments was also explored. Depressive responses, associated with a disrupted diurnal rhythm of home-cage motor activity, anhedonia in the sucrose preference test, and despair-like behavior in the forced swimming test, were attenuated by agomelatine in rats exposed to chronic constant light (CCL) and by long-term exercise in pinealectomized rats. Parallel to the observed positive effect on emotional status, agomelatine restored the CCL-induced impairment of the circadian pattern of plasma melatonin but not that of corticosterone. In contrast, exercise training diminished total plasma corticosterone levels and corrected its flattened pattern, while it was unable to correct the melatonin deficit after pinealectomy. These results suggest that the antidepressant-like effects of the pharmacological and the alternative approach might be mediated via two different mechanisms: correction of the disturbed circadian rhythm of melatonin and of corticosterone, respectively. Therefore, these treatment approaches might have potential therapeutic application in different subpopulations of people characterized by melatonin deficiency. This work was supported by the National Science Fund of Bulgaria (research grants DN 03/10 and DN 12/6).
Keywords: agomelatine, exercise training, melatonin deficit, corticosterone
Procedia PDF Downloads 132

4797 Count Regression Modelling on Number of Migrants in Households
Authors: Tsedeke Lambore Gemecho, Ayele Taye Goshu
Abstract:
The main objective of this study is to identify the determinants of the number of international migrants in a household and to compare regression models for count responses. Data were collected from a total of 2288 household heads in 16 randomly sampled districts in the Hadiya and Kembata-Tembaro zones of Southern Ethiopia. The Poisson mixed model, a special case of the generalized linear mixed model, is explored to determine the effects of the predictors: age of household head, farm land size, and household size. The two ethnicities, Hadiya and Kembata, are included in the final model as dummy variables. Stepwise variable selection identified four predictors: age of head, farm land size, family size and the dummy variable ethnic2 (0 = other, 1 = Kembata). These predictors are significant at the 5% significance level for the count response, number of migrants. The final Poisson mixed model consists of the four predictors with district-level random effects. The area-specific random effects are significant, with a variance of about 0.5105 and a standard deviation of 0.7145. The results show that the number of migrants increases with the head's age, family size, and farm land size. In conclusion, there is a high number of international migrants per household in the area. Age of household head, family size, and farm land size are determinants that increase the number of international migrants in a household. Community-based intervention is needed to monitor and regulate international migration for the benefit of society.
Keywords: Poisson regression, GLM, number of migrants, Hadiya and Kembata-Tembaro zones
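As a sketch of how a fitted Poisson mixed model with a log link turns these predictors into an expected migrant count, the code below uses placeholder coefficients (not the study's estimates) and a district random effect of zero.

```python
import math

def expected_migrants(age, farm_land, family_size, kembata,
                      district_effect, coef):
    """Poisson GLM with log link: E[y] = exp(eta), where eta is the
    linear predictor plus a district-level random effect.
    `kembata` is the 0/1 ethnicity dummy (ethnic2 in the abstract).
    All coefficients are illustrative placeholders."""
    eta = (coef["intercept"]
           + coef["age"] * age
           + coef["farm"] * farm_land
           + coef["family"] * family_size
           + coef["kembata"] * kembata
           + district_effect)
    return math.exp(eta)

# Placeholder coefficients for illustration only.
coef = {"intercept": -2.0, "age": 0.01, "farm": 0.15,
        "family": 0.10, "kembata": 0.3}
lam = expected_migrants(age=45, farm_land=2.0, family_size=6, kembata=1,
                        district_effect=0.0, coef=coef)
```

Because the link is logarithmic, each positive coefficient multiplies the expected count by a fixed factor per unit of the predictor, which matches the abstract's finding that migrant counts increase with head's age, family size and farm land size.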
Procedia PDF Downloads 283

4796 Analysing “The Direction of Artificial Intelligence Legislation from a Global Perspective” from the Perspective of “AIGC Copyright Protection” Content
Authors: Xiaochen Mu
Abstract:
Due to the diversity of stakeholders and the ambiguity of ownership boundaries, the current protection models for Artificial Intelligence Generated Content (AIGC) have many disadvantages. In response to this situation, there are three different protection models worldwide. The United States Copyright Office stipulates that works autonomously generated by artificial intelligence ‘lack’ the element of human creation, and non-human AI cannot create works. To protect and promote investment in the field of artificial intelligence, UK legislation, through Section 9(3) of the CDPA, designates the author of AI-generated works as ‘the person by whom the arrangements necessary for the creation of the work are undertaken.’ China neither simply excludes the work attributes of AI-generated content based on the lack of a natural person subject as the sole reason, nor does it generalize that AIGC should or should not be protected. Instead, it combines specific case circumstances and comprehensively evaluates the degree of originality of AIGC and the contributions of natural persons to AIGC. In China's first AI drawing case, the court determined that the image in question was the result of the plaintiff's design and selection through inputting prompt words and setting parameters, reflecting the plaintiff's intellectual investment and personalized expression, and should be recognized as a work in the sense of copyright law. Despite opposition, the ruling also established the feasibility of the AIGC copyright protection path. The recognition of the work attributes of AIGC will not lead to overprotection that hinders the overall development of the AI industry. Just as with the legislation and regulation of AI by various countries, there is a need for a balance between protection and development. 
For example, the provisional agreement reached on the EU AI Act, based on a risk classification approach, seeks a dynamic balance between copyright protection and the development of the AI industry.
Keywords: generative artificial intelligence, originality, works, copyright
Procedia PDF Downloads 42

4795 Application of Geotube® Method for Sludge Handling in Adaro Coal Mine
Authors: Ezman Fitriansyah, Lestari Diah Restu, Wawan
Abstract:
Adaro coal mine in South Kalimantan, Indonesia, maintains a catchment area of approximately 15,000 ha for its mine operation. As an open-pit coal mine with a high erosion rate, the mine water in Adaro contains high TSS that must be treated before being released to rivers. For the treatment process, Adaro operates 21 settling ponds equipped with a combination of physical and chemical systems to separate solids and water, ensuring the discharged water complies with regional environmental quality standards. However, the sludge created by sedimentation gradually reduces the settling ponds' capacity, so regular maintenance is required to recover and maintain it. Trucking and direct dredging had been the most common sludge-handling methods in Adaro, but their main problem is the excessive area required for drying pond construction. To solve this problem, Adaro implements an alternative method called Geotube®. Its principle is that the sludge contained in the settling ponds is pumped into Geotube® containers designed to release water and retain mud flocks. During pumping, flocculant chemicals are injected into the sludge to form larger mud flocks. Due to the difference in particle size, the mud flocks settle in the container while the water flows out through the container's pores. Compared with trucking and direct dredging, this method provides three advantages: less space required to operate, increased overburden waste dump volume, and increased water treatment speed and quality. Based on the evaluation results, the Geotube® method needs only one-eighth of the space required by the other methods. According to Adaro's geotechnical assessment, the potential loss of waste dump volume capacity prior to implementing the Geotube® method was 26.7%.
TSS treatment in well-maintained ponds is 16% more efficient.
Keywords: geotube, mine water, settling pond, sludge handling, wastewater treatment
Procedia PDF Downloads 200

4794 Adsorption of Pb(II) with MOF [Co2(Btec)(Bipy)(DMF)2]N in Aqueous Solution
Authors: E. Gil, A. Zepeda, J. Rivera, C. Ben-Youssef, S. Rincón
Abstract:
Water pollution has become one of the most serious environmental problems, and multiple methods have been proposed for the removal of Pb(II) from contaminated water. Among these, adsorption processes have proven more efficient, cheaper and easier to handle than other treatment methods. However, research into adsorbents with high adsorption capacities is still necessary. For this purpose, we propose in this work the study of the metal-organic framework [Co2(btec)(bipy)(DMF)2]n (MOF-Co) as an adsorbent material for Pb(II) in aqueous media. MOF-Co was synthesized by a simple method: 4,4'-dipyridyl, 1,2,4,5-benzenetetracarboxylic acid and cobalt(II) nitrate hexahydrate were each dissolved in N,N-dimethylformamide (DMF) and then mixed together in a reactor. The resulting solution was heated at 363 K in a muffle furnace for 68 h to complete the synthesis, then washed and dried, yielding MOF-Co as the final product. MOF-Co was characterized before and after the adsorption process by Fourier-transform infrared spectroscopy (FTIR) and X-ray photoelectron spectroscopy (XPS). Pb(II) in aqueous media was quantified by atomic absorption spectroscopy (AA). The adsorption experiments were carried out in flasks with a 100 mL working volume at 200 rpm, with different MOF-Co quantities (0.0125 and 0.025 g), pH values (2-6), contact times (0.5-6 h) and temperatures (298, 308 and 318 K). The adsorption kinetics followed a pseudo-second-order model, which suggests that adsorption took place through chemisorption. The best adsorption results were obtained at pH 5. The Langmuir, Freundlich and BET equilibrium isotherm models were used to study the adsorption of Pb(II) with 0.0125 g of MOF-Co at different Pb(II) concentrations (20-200 mg/L, 100 mL, pH 5) with 4 h of reaction.
The correlation coefficients (R²) of the different models show that the Langmuir model fits better than the Freundlich and BET models, with R² = 0.97 and a maximum adsorption capacity of 833 mg/g. Therefore, the Langmuir model best describes the Pb(II) adsorption as monolayer behavior on MOF-Co. This capacity is the highest when compared to other materials such as graphene/activated carbon composite (217 mg/g), biomass fly ashes (96.8 mg/g), PVA/PAA gel (194.99 mg/g) and MOF with Ag12 nanoparticles (120 mg/g).
Keywords: adsorption, heavy metals, metal-organic frameworks, Pb(II)
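The Langmuir fit reported above can be illustrated with the isotherm equation q_e = q_max K_L C_e / (1 + K_L C_e). The q_max of 833 mg/g is taken from the abstract, while the affinity constant K_L is a placeholder, since its value is not reported here.

```python
def langmuir_q(c_e, q_max, k_l):
    """Langmuir monolayer adsorption isotherm:
    q_e = q_max * K_L * C_e / (1 + K_L * C_e), in mg adsorbed per g adsorbent."""
    return q_max * k_l * c_e / (1.0 + k_l * c_e)

Q_MAX = 833.0   # mg/g, maximum capacity reported in the abstract
K_L = 0.05      # L/mg, placeholder affinity constant (not reported)

# Equilibrium loading at the low and high ends of the tested range (20-200 mg/L)
q_low = langmuir_q(20.0, Q_MAX, K_L)
q_high = langmuir_q(200.0, Q_MAX, K_L)
```

The loading rises steeply at low concentration and saturates toward q_max, the plateau characteristic of the monolayer coverage the authors infer from the R² comparison.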
Procedia PDF Downloads 214

4793 Review of the Road Crash Data Availability in Iraq
Authors: Abeer K. Jameel, Harry Evdorides
Abstract:
Iraq is a middle-income country where road crashes are considered one of the leading causes of death. To control road risk, the General Statistical Organization of the Iraqi Ministry of Planning has organised a collection system for traffic accident data, with details of causes and severity, published as an annual report. This paper presents a review of the crash data available in Iraq. The available data represent accident rates at an aggregated level, classified by accident type, road user details, crash severity, vehicle type, causes and number of casualties. The review considers the types of models used in road safety studies and research, and the road safety data required for road construction tasks. The available data are also compared with the road safety dataset published in the United Kingdom, as an example of a developed country. It is concluded that the Iraqi data are suitable for descriptive and exploratory models, aggregated-level comparative analysis, and evaluating and monitoring the progress of overall traffic safety performance. However, important traffic safety studies require disaggregated data and details of the factors affecting the likelihood of traffic crashes. Some studies require spatial details such as accident locations, which are essential for ranking roads by safety level and identifying the most dangerous roads in Iraq, an issue that requires a tactical control plan. Global road safety agencies interested in solving this problem in low- and middle-income countries have designed road safety assessment methodologies based on road attribute data only; therefore, this research recommends using one of these methodologies.
Keywords: road safety, Iraq, crash data, road risk assessment, the International Road Assessment Programme (iRAP)
Procedia PDF Downloads 256

4792 Enhanced Kinetic Solubility Profile of Epiisopiloturine Solid Solution in Hipromellose Phthalate
Authors: Amanda C. Q. M. Vieira, Cybelly M. Melo, Camila B. M. Figueirêdo, Giovanna C. R. M. Schver, Salvana P. M. Costa, Magaly A. M. de Lyra, Ping I. Lee, José L. Soares-Sobrinho, Pedro J. Rolim-Neto, Mônica F. R. Soares
Abstract:
Epiisopiloturine (EPI) is a drug candidate extracted from Pilocarpus microphyllus and isolated from the waste of pilocarpine production. EPI has demonstrated promising schistosomicidal, leishmanicidal, anti-inflammatory and antinociceptive activities in in vitro studies carried out since 2009. However, the molecule shows poor aqueous solubility, which hinders the release of the drug candidate and its absorption by the organism. The purpose of the present study is to investigate the extent of the kinetic solubility enhancement of a solid solution (SS) of EPI in hipromellose phthalate HP-55 (HPMCP), an enteric polymer carrier. The SS was obtained by the solvent evaporation method, using acetone/methanol (60:40) as the solvent system. Both EPI and the polymer (drug loading 10%) were dissolved in this solvent until a clear solution was obtained, then dried in an oven at 60 °C for 12 h, followed by drying in a vacuum oven for 4 h. The results show a considerable modification of the crystalline structure of the drug candidate. For instance, X-ray diffraction (XRD) shows crystalline behavior for EPI, which becomes amorphous in the SS. Polarized light microscopy, a technique more sensitive than XRD, also shows a complete absence of crystals in the SS sample. Differential scanning calorimetry (DSC) curves show no EPI melting signal in the SS curve, indicating, once more, no crystals in this system. Interactions between the drug candidate and the polymer were found by infrared spectroscopy, which shows a 43.3 cm⁻¹ shift of the carbonyl band, indicating a moderate-to-strong interaction between them, probably one of the reasons for the SS formation. Under sink conditions (pH 6.8), the dissolution performance of the EPI SS increased 2.8-fold compared with the isolated drug candidate.
The EPI SS sample released more than 95% of the drug candidate in 15 min, whereas only 45% of EPI alone dissolved in 15 min and 70% in 90 min. Thus, HPMCP shows good potential to enhance the kinetic solubility profile of EPI. Future studies evaluating the stability of the SS are required to confirm the benefits of this system.
Keywords: epiisopiloturine, hipromellose phthalate HP-55, pharmaceutical technology, solubility
Procedia PDF Downloads 607

4791 Numerical Investigation of Entropy Signatures in Fluid Turbulence: Poisson Equation for Pressure Transformation from Navier-Stokes Equation
Authors: Samuel Ahamefula Mba
Abstract:
Fluid turbulence is a complex and nonlinear phenomenon that occurs in various natural and industrial processes. Understanding turbulence remains a challenging task due to its intricate nature. One approach to gaining insight into turbulence is the study of entropy, which quantifies the disorder or randomness of a system. This research presents a numerical investigation of entropy signatures in fluid turbulence. The aim is to develop a numerical framework that describes and analyses fluid turbulence in terms of entropy. The framework decomposes the turbulent flow field into different scales, ranging from large energy-containing eddies to small dissipative structures, thus establishing a correlation between entropy and other turbulence statistics. This entropy-based framework provides a powerful tool for understanding the mechanisms driving turbulence and its impact on various phenomena. The work requires deriving the Poisson equation for pressure from the Navier-Stokes equations and resolving it effectively with Chebyshev finite-difference techniques. The mathematical analysis considers bounded domains with smooth solutions and non-periodic boundary conditions. A hybrid computational approach combining direct numerical simulation (DNS) and large eddy simulation with wall models (LES-WM) is used to perform extensive simulations of turbulent flows. The potential impact ranges from industrial process optimization to improved prediction of weather patterns.
Keywords: turbulence, Navier-Stokes equation, Poisson pressure equation, numerical investigation, Chebyshev finite difference, hybrid computational approach, large eddy simulation with wall models, direct numerical simulation
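For reference, the pressure Poisson equation mentioned above follows from taking the divergence of the incompressible momentum equation and using the continuity constraint; a standard constant-density form (in index notation) is:

```latex
% Incompressible Navier-Stokes momentum equation, with continuity constraint
\frac{\partial u_i}{\partial t} + u_j \frac{\partial u_i}{\partial x_j}
  = -\frac{1}{\rho}\frac{\partial p}{\partial x_i} + \nu \nabla^2 u_i ,
\qquad \frac{\partial u_i}{\partial x_i} = 0
% Taking the divergence: the time-derivative and viscous terms vanish
% by continuity, leaving a Poisson equation for the pressure
\nabla^2 p = -\rho \, \frac{\partial u_i}{\partial x_j}\,\frac{\partial u_j}{\partial x_i}
```

This is the equation the Chebyshev finite-difference solver would resolve at each step on the bounded, non-periodic domains described in the abstract.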
Procedia PDF Downloads 94

4790 Observed Changes in Constructed Precipitation at High Resolution in Southern Vietnam
Authors: Nguyen Tien Thanh, Günter Meon
Abstract:
Precipitation plays a key role in the water cycle, in defining local climatic conditions and in ecosystems. It is also an important input for water resources management and hydrologic models. With spatially continuous data, the certainty of discharge predictions and other environmental estimates is unquestionably better than without. Such data are, however, not always readily available for a small basin, especially in the coastal region of Vietnam, due to the sparse network of meteorological stations (30 stations) along a coastline of 3,260 km. Furthermore, the available gridded precipitation datasets are not fine enough for hydrologic modelling. Under global warming, the application of spatial interpolation methods is crucial for climate change impact studies to obtain spatially continuous data. Although recent research projects show that some methods perform better than others, no single method gives the best results in all cases. The objective of this paper, therefore, is to investigate different spatial interpolation methods for daily precipitation over a small basin (approximately 400 km²) in the coastal region of Southern Vietnam and to find the most efficient interpolation method for this catchment. Five interpolation methods, namely Cressman, ordinary kriging, regression kriging, dual kriging and inverse distance weighting, were applied to identify the best method for the study area on a spatio-temporal scale (daily, 10 km x 10 km). A 30-year precipitation database was created and merged with available gridded datasets. Finally, observed changes in the constructed precipitation were analysed. The results demonstrate that ordinary kriging is an effective approach for analysing daily precipitation. Mixed trends of increasing and decreasing monthly, seasonal and annual precipitation were documented at significant levels.
Keywords: interpolation, precipitation, trend, Vietnam
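Of the five methods compared, inverse distance weighting is the simplest to sketch; the gauge coordinates and rainfall values below are hypothetical, not from the study's 30-year database.

```python
def idw(stations, target, power=2.0):
    """Inverse distance weighting: a weighted mean of station values,
    with weights 1 / d^power. `stations` is a list of ((x, y), value)
    pairs; `target` is the (x, y) grid point to estimate."""
    num = den = 0.0
    for (x, y), value in stations:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0.0:
            return value  # target coincides with a station
        w = d2 ** (-power / 2.0)  # equals 1 / d^power
        num += w * value
        den += w
    return num / den

# Hypothetical daily rainfall (mm) at three gauges, coordinates in km
gauges = [((0.0, 0.0), 12.0), ((10.0, 0.0), 8.0), ((0.0, 10.0), 20.0)]
estimate = idw(gauges, target=(2.0, 2.0))
```

IDW is exact at the gauges and bounded by the observed values, whereas kriging additionally models spatial correlation, which is presumably why ordinary kriging came out best here.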
Procedia PDF Downloads 275

4789 Effect of Goat Milk Kefir and Soy Milk Kefir on IL-6 in Diabetes Mellitus Wistar Mice Models Induced by Streptozotocin and Nicotinamide
Authors: Agatha Swasti Ayuning Tyas
Abstract:
Hyperglycemia in diabetes mellitus (DM) is an important factor in cellular and vascular damage, caused by activation of protein kinase C, the polyol and hexosamine pathways, and production of advanced glycation end-products (AGE). These processes cause the accumulation of reactive oxygen species (ROS). Oxidative stress increases the expression of the proinflammatory factor IL-6, one of many signs of endothelial dysfunction. Genistein in soy milk has high immunomodulatory potential, and goat milk contains amino acids with antioxidative potential. Fermented kefir has anti-inflammatory activity, which is believed to further potentiate goat milk and soy milk. This study is quasi-experimental posttest-only research on 30 Wistar mice. It compared the levels of IL-6 between a healthy Wistar mice group (G1) and four DM Wistar mice groups: mice without treatment (G2), mice treated with 100% goat milk kefir (G3), mice treated with a combination of 50% goat milk kefir and 50% soy milk kefir (G4), and mice treated with 100% soy milk kefir (G5). The DM animal models were induced with streptozotocin and nicotinamide to achieve hyperglycemia. Goat milk kefir and soy milk kefir were given at a dose of 2 mL/kg body weight/day for four weeks to the intervention groups. Blood glucose was analyzed by the GOD-POD principle, and IL-6 was analyzed by enzyme-linked sandwich ELISA. The level of IL-6 in the untreated DM control group (G2) differed significantly from the group treated with the combination of 50% goat milk kefir and 50% soy milk kefir (G4) (p=0.006) and the group treated with 100% soy milk kefir (G5) (p=0.009), whereas the difference from the group treated with 100% goat milk kefir (G3) was not significant (p=0.131).
There was also synergism between glucose level and IL-6 in the groups treated with the combination of 50% goat milk kefir and 50% soy milk kefir (G4) and with 100% soy milk kefir (G5). The combination of 50% goat milk kefir and 50% soy milk kefir, as well as 100% soy milk kefir alone, kept the level of IL-6 low in DM Wistar mice induced with streptozotocin and nicotinamide.
Keywords: diabetes mellitus, goat milk kefir, soy milk kefir, interleukin 6
Procedia PDF Downloads 285

4788 Dogmatic Analysis of Legal Risks of Using Artificial Intelligence: The European Union and Polish Perspective
Authors: Marianna Iaroslavska
Abstract:
ChatGPT is becoming commonplace, yet few people think about the legal risks of using large language models in their daily work. The main dilemmas concern the following areas: who owns the copyright to what somebody creates through ChatGPT; what OpenAI can do with the prompt you enter; whether you can accidentally infringe another creator's rights through ChatGPT; and how the data somebody enters into the chat is protected. This paper presents these and other legal risks of using large language models at work, using dogmatic methods and case studies. It offers a legal analysis of AI risks against the background of European Union law and Polish law, answering questions about how to protect data, how to avoid violating copyright, and what is at stake with the AI Act, which recently came into force in the EU. If your work is related to the EU area and you use AI in your work, this paper will be highly relevant to you. Copyright law in force in Poland does not protect your rights to a work created with the help of AI. So if you start selling such a work, you may face two main problems. First, someone may steal your work, and you will not be entitled to any protection, because a work created with AI has no legal protection. Second, the AI may have created the work by infringing another person's copyright, so that person may claim damages from you. In addition, the EU's AI Act imposes a number of further obligations related to the use of large language models. The AI Act divides artificial intelligence into four risk levels and imposes different requirements depending on the level of risk. The EU regulation is aimed primarily at those developing and marketing artificial intelligence systems in the EU market. Beyond these obstacles, personal data protection comes into play, which is very strictly regulated in the EU.
If you disclose personal data by entering information into ChatGPT, you will be liable for the violation. When using AI within the EU, or in cooperation with entities located in the EU, you have to take many risks into account. This paper highlights such risks and explains how they can be avoided.
Keywords: EU, AI Act, copyright, Polish law, LLM
Procedia PDF Downloads 21

4787 Fabric Softener Deposition on Cellulose Nanocrystals and Cotton Fibers
Authors: Evdokia K. Oikonomou, Nikolay Christov, Galder Cristobal, Graziana Messina, Giovani Marletta, Laurent Heux, Jean-Francois Berret
Abstract:
Fabric softeners are aqueous formulations that contain ~10 wt.% double-tailed cationic surfactants. Here, a formulation was developed in which 50% of the surfactant was replaced with small quantities of natural guar polymers. Thanks to the reduced surfactant quantity, this product has less environmental impact, while the presence of the guars was found to maintain the product's performance. The objective of this work is to elucidate the effect of the guar polymers on softener deposition and on the adsorption mechanism at the cotton surface. The surfactants in these formulations assemble into large, broadly distributed (0.1-1 µm) vesicles that are stable in the presence of guars and upon dilution. The effect of guars on vesicle adsorption on cotton was first estimated using cellulose nanocrystals (CNC) as a stand-in for cotton. Dispersing CNC in water makes it possible to follow the interaction between vesicles, guars and CNC in the bulk. It was found that guars enhance deposition on CNC and that the vesicles are deposited intact on the fibers, driven by electrostatics. The mechanism of vesicle/guar adsorption on cellulose fibers was identified by quartz crystal microbalance with dissipation monitoring. The guars were found to increase the deposited surfactant quantity, in agreement with the bulk results, and also to influence the structure of the adsorbed surfactant on the fiber surfaces (vesicle or bilayer). Deposition studies on cotton fabrics were also conducted, using attenuated total reflection and scanning electron microscopy to study the effect of the polymers on this deposition. Finally, fluorescence microscopy was used to follow the adsorption of surfactant vesicles, labeled with a fluorescent dye, on cotton fabrics in water. It was found that, with or without polymers, the surfactant vesicles adsorb on the fiber while maintaining their vesicular structure in water (supported vesicular bilayer structure).
The guars influence this process. However, upon drying the vesicles are transformed into bilayers and eventually wrap the fibers (supported lipid bilayer structure). This mechanism is proposed for the adsorption of vesicular conditioner on cotton fiber and can be affected by the presence of polymers.Keywords: cellulose nanocrystals, cotton fibers, fabric softeners, guar polymers, surfactant vesicles
Procedia PDF Downloads 1804786 Contact Phenomena in Medieval Business Texts
Authors: Carmela Perta
Abstract:
Among the studies that have flourished in the field of historical sociolinguistics, mainly in the strand devoted to English history during its medieval and early modern phases, multilingual texts have been analysed using theories and models from contact linguistics, thus applying synchronic models and approaches to the past. This is also true of contact phenomena that transcend the level of writing and involve the language systems implicated in contact processes, to the point that a new variety is perceived. This is the case for medieval administrative-commercial texts in which, according to some scholars, the degree of fusion of Anglo-Norman, Latin and Middle English is so high that a mixed code emerges, with recurrent patterns of mixed forms. Of particular interest is a collection of multilingual business writings by John Balmayn, an Englishman overseeing a large shipment in Tuscany, namely the Cantelowe accounts. These documents display various analogies with multilingual texts written in England in the same period; in fact, the writer seems to make use of the above-mentioned patterns, with Middle English, Latin, Anglo-Norman, and the newly added Italian. Applying an atomistic yet dynamic approach to the study of contact phenomena, we will investigate these documents, exploring the nature of the switching forms they contain from an intra-writer variation perspective. After analysing the accounts and the type of multilingualism in them, we will take stock of their assumed mixed-code nature, comparing the characteristics found in this genre with modern assumptions. The aim is to evaluate whether the switching forms can be considered core elements of a mixed code used as a professional variety among merchant communities, or whether such texts should instead be analysed from a code-switching perspective.Keywords: historical sociolinguistics, historical code switching, letters, medieval England
Procedia PDF Downloads 754785 Using Time Series NDVI to Model Land Cover Change: A Case Study in the Berg River Catchment Area, Western Cape, South Africa
Authors: Adesuyi Ayodeji Steve, Zahn Munch
Abstract:
This study investigates the use of MODIS NDVI to identify agricultural land cover change areas on an annual time step (2007–2012) and to characterize the trend in the study area. An ISODATA classification was performed on the MODIS imagery to select only the agricultural class, producing three class groups, namely agriculture, agriculture/semi-natural, and semi-natural. NDVI signatures were created for the time series to identify areas dominated by cereals and vineyards with the aid of ancillary, pictometry and field sample data. The NDVI signature curves and training samples aided in creating a decision tree model in WEKA 3.6.9. From the training samples, two classification models were built in WEKA using the decision tree classifier (J48) algorithm: Model 1 included the ISODATA classification and Model 2 excluded it, with accuracies of 90.7% and 88.3%, respectively. The two models were used to classify the whole study area, producing two land cover maps with classification accuracies of 77% and 80% for Models 1 and 2, respectively. Model 2 was used to create change detection maps for all the other years. Subtle changes and areas of consistency (unchanged) were observed in the agricultural classes and crop practices over the years, as predicted by the land cover classification. 41% of the catchment comprises cereals, with 35% possibly following a crop rotation system. Vineyards largely remained constant over the years, with some conversion to vineyard (1%) from other land cover classes. Some of the changes might be a result of misclassification and the crop rotation system.Keywords: change detection, land cover, MODIS, NDVI
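As an illustration of the kind of classification described in this abstract, a minimal sketch is given below: it computes NDVI from red and near-infrared reflectance and applies simple decision-tree-style rules to an annual NDVI signature. The thresholds and class rules are hypothetical stand-ins, not the study's calibrated J48 model.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

def classify_signature(ndvi_series, peak_thresh=0.6, amp_thresh=0.3):
    """Toy decision-tree-style rules on an annual NDVI time series.

    Thresholds are illustrative only (not the study's trained values):
    cereals show a strong seasonal peak and large amplitude, while
    vineyards show a moderate, relatively flat signature.
    """
    s = np.asarray(ndvi_series, dtype=float)
    amplitude = s.max() - s.min()
    if s.max() >= peak_thresh and amplitude >= amp_thresh:
        return "cereals"
    if s.mean() >= 0.3:
        return "vineyard"
    return "semi-natural"
```

In practice the rules and split points would come from the trained J48 tree rather than being hand-coded.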
Procedia PDF Downloads 4024784 A Closed-Loop Design Model for Sustainable Manufacturing by Integrating Forward Design and Reverse Design
Authors: Yuan-Jye Tseng, Yi-Shiuan Chen
Abstract:
In this paper, a new concept of a closed-loop design model is presented. The closed-loop design model is developed by integrating forward design and reverse design. Based on this new concept, a closed-loop design model for sustainable manufacturing is developed through integrated evaluation of forward design, reverse design, and green manufacturing using a fuzzy analytic network process. In the design stage of a product, for a given product requirement and objective, there can be different ways to design the detailed components and specifications, and therefore different design cases that achieve the same requirement and objective. Thus, in the design evaluation stage, the different design cases must be analyzed and evaluated. The purpose of this research is to develop a model for evaluating the design cases by integrated evaluation of the forward design, reverse design, and green manufacturing models. A fuzzy analytic network process model is presented for integrated evaluation of the criteria in the three models. The comparison matrices for evaluating the criteria in the three groups are established. The total relational values among the three groups represent the total relational effects. In application, a supermatrix can be created, and the total relational values can be used to evaluate the design cases for decision-making to select the final design case. An example product is demonstrated in this presentation. It shows that the model is useful for integrated evaluation of forward design, reverse design, and green manufacturing to achieve a closed-loop design for the sustainable manufacturing objective.Keywords: design evaluation, forward design, reverse design, closed-loop design, supply chain management, closed-loop supply chain, fuzzy analytic network process
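The supermatrix step of an analytic network process can be sketched numerically: a column-stochastic supermatrix is raised to a high power, and its columns converge to the global priority vector. The 3x3 matrix below (forward design, reverse design, green manufacturing) uses made-up interdependency weights purely for illustration; the paper's fuzzy pairwise comparisons would first be defuzzified into such crisp entries.

```python
import numpy as np

def limit_supermatrix(W, power=200):
    """Raise a column-stochastic supermatrix to a high power.

    For a primitive (positive) stochastic matrix, the columns of the
    limit matrix converge to the global priority vector. This sketches
    only the ANP limit step, not the fuzzy comparison stage.
    """
    W = np.asarray(W, dtype=float)
    assert np.allclose(W.sum(axis=0), 1.0), "columns must sum to 1"
    L = np.linalg.matrix_power(W, power)
    return L[:, 0]  # any column approximates the priority vector

# Hypothetical interdependencies among the three criteria groups:
W = np.array([
    [0.2, 0.5, 0.3],
    [0.5, 0.2, 0.4],
    [0.3, 0.3, 0.3],
])
priorities = limit_supermatrix(W)
```

The resulting priorities could then weight each design case's scores when selecting the final design.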
Procedia PDF Downloads 6764783 Comparison Approach for Wind Resource Assessment to Determine Most Precise Approach
Authors: Tasir Khan, Ishfaq Ahmad, Yejuan Wang, Muhammad Salam
Abstract:
Distribution models of wind speed data are essential for assessing potential wind energy because they decrease the uncertainty in estimating wind energy output. Therefore, before performing a detailed potential energy analysis, the distribution model that best fits the wind speed data must be found. In this research, several goodness-of-fit criteria, such as the Kolmogorov–Smirnov and Anderson–Darling statistics, Chi-square, root mean square error (RMSE), AIC and BIC, were combined to determine the best-fitted distribution for the wind speed data. The suggested method considers all criteria collectively. It was applied to statistically fit 14 distribution models to wind speed data at four sites in Pakistan. The results show that this method provides a sound basis for selecting the most suitable wind speed distribution, and the graphical representation is consistent with the analytical results. This research presents three estimation methods that can be used to calibrate the distributions used to estimate the wind resource. In the suggested method of moments (MOM), the third-order moment used in the wind energy formula is a key quantity because it makes an important contribution to the precise estimation of wind energy. To demonstrate the performance of the suggested MOM, it was compared with well-known estimation methods, such as the method of linear moments and maximum likelihood estimation. In the comparative analysis, based on several goodness-of-fit measures, the performance of the considered techniques was assessed on actual wind speeds measured over different time periods. The results obtained show that MOM provides a more precise estimation than the other familiar approaches in terms of estimating wind energy based on the fourteen distributions. Therefore, MOM can be used as a better technique for assessing wind energy.Keywords: wind-speed modeling, goodness of fit, maximum likelihood method, linear moment
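A minimal sketch of the workflow described here, under simplifying assumptions, fits one candidate distribution (the two-parameter Weibull, a common wind speed model) by the method of moments and scores the fit with a Kolmogorov–Smirnov statistic. The paper's 14-distribution, multi-criteria procedure is far broader; this only illustrates the MOM-plus-goodness-of-fit idea.

```python
import math
import numpy as np

def fit_weibull_mom(x):
    """Method-of-moments fit of a 2-parameter Weibull distribution.

    Solves for the shape k from the sample coefficient of variation by
    bisection (the model CV is monotonically decreasing in k), then
    recovers the scale lam from the sample mean.
    """
    x = np.asarray(x, dtype=float)
    mean, cv2 = x.mean(), (x.std() / x.mean()) ** 2

    def model_cv2(k):
        g1 = math.gamma(1 + 1 / k)
        return math.gamma(1 + 2 / k) / g1 ** 2 - 1

    lo, hi = 0.1, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if model_cv2(mid) > cv2:
            lo = mid  # model CV too high -> need larger k
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    lam = mean / math.gamma(1 + 1 / k)
    return k, lam

def ks_statistic(x, k, lam):
    """Kolmogorov-Smirnov distance between empirical and fitted Weibull CDF."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    cdf = 1 - np.exp(-(x / lam) ** k)
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    return max(np.max(np.abs(cdf - ecdf_hi)), np.max(np.abs(cdf - ecdf_lo)))
```

Repeating this for each candidate distribution and comparing the KS (and other) statistics is the essence of the multi-criteria selection.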
Procedia PDF Downloads 844782 Logical-Probabilistic Modeling of the Reliability of Complex Systems
Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia
Abstract:
The paper presents logical-probabilistic methods, models, and algorithms for the reliability assessment of complex systems, on the basis of which a web application for structural analysis and reliability assessment of systems was created. It is important to design systems based on structural analysis, research, and evaluation of efficiency indicators. One of the important efficiency criteria is the reliability of the system, which depends on the components of the structure. Quantifying the reliability of large-scale systems is a computationally complex process, and it is advisable to perform it with the help of a computer. Logical-probabilistic modeling is one of the effective means of describing the structure of a complex system and quantitatively evaluating its reliability, and it formed the basis of our application. The reliability assessment process included the following stages, which were reflected in the application: 1) construction of a graphical scheme of the structural reliability of the system; 2) transformation of the graphical scheme into a logical representation and modeling of the shortest paths of successful functioning of the system; 3) description of the system operability condition with a logical function in the form of a disjunctive normal form (DNF); 4) transformation of the DNF into an orthogonal disjunctive normal form (ODNF) using the orthogonalization algorithm; 5) replacement of the logical elements with probabilistic elements in the ODNF, obtaining a reliability estimation polynomial and quantifying the reliability; 6) calculation of the “weights” of the elements of the system. Using the logical-probabilistic methods, models and algorithms discussed in the paper, special software was created, by means of which a quantitative assessment of the reliability of systems with a complex structure is produced. As a result, structural analysis of systems, research, and the design of systems with optimal structure are carried out.Keywords: complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability of systems, “weights” of elements
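The end result of stages 3–6 can be illustrated with a small sketch: given the minimal path sets of a system (the "shortest paths of successful functioning"), inclusion-exclusion yields the same reliability value that the orthogonalized DNF polynomial would, and element "weights" can be computed as Birnbaum importance. This brute-force version is only illustrative; the orthogonalization algorithm exists precisely because inclusion-exclusion scales poorly.

```python
from itertools import combinations

def system_reliability(paths, p):
    """System reliability from minimal path sets by inclusion-exclusion.

    `paths` is a list of sets of element ids; `p` maps element id to its
    reliability. Result matches evaluating the orthogonalized DNF of the
    operability function, though it is far less efficient at scale.
    """
    r, n = 0.0, len(paths)
    for size in range(1, n + 1):
        for combo in combinations(paths, size):
            union = set().union(*combo)
            term = 1.0
            for e in union:
                term *= p[e]
            r += (-1) ** (size + 1) * term
    return r

def birnbaum_weight(paths, p, element):
    """Birnbaum importance ("weight"): dR/dp_i = R(p_i=1) - R(p_i=0)."""
    hi = dict(p); hi[element] = 1.0
    lo = dict(p); lo[element] = 0.0
    return system_reliability(paths, hi) - system_reliability(paths, lo)
```

For example, a system with path sets {1,2} and {1,3} and all element reliabilities 0.9 has reliability 0.891, and element 1 (the common element) carries the largest weight.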
Procedia PDF Downloads 664781 Evaluating the Terrace Benefits of Erosion in a Terraced-Agricultural Watershed for Sustainable Soil and Water Conservation
Authors: Sitarrine Thongpussawal, Hui Shao, Clark Gantzer
Abstract:
Terracing is a conservation practice used to reduce erosion; it is widely applied for soil and water conservation throughout the world but is relatively expensive. A modification of the Soil and Water Assessment Tool (called SWAT-Terrace or SWAT-T) explicitly aims to improve the simulation of the hydrological processes of erosion from terraces. SWAT-T simulates erosion from the terraces by separating each terrace into three segments instead of evaluating the entire terrace. The objective of this work is to evaluate the terrace benefits on erosion in the Goodwater Creek Experimental Watershed (GCEW) at the watershed and Hydrologic Response Unit (HRU) scales using SWAT-T. The HRU is the smallest spatial unit of the model, which lumps all similar land uses, soils, and slopes within a sub-basin. The SWAT-T model was parameterized for slope length, steepness, and the empirical Universal Soil Loss Equation support practice factor for the three terrace segments. Data from 1993-2010 measured at the watershed outlet were used for model calibration and validation. Results of SWAT-T calibration showed good agreement between measured and simulated erosion for the monthly time step, but poor performance for SWAT-T validation. This is probably because of large storms in spring 2002 that prevented planting, causing poorly simulated scheduling of actual field operations. To estimate the terrace benefits on erosion, the model was run with and without terraces. Results showed that SWAT-T gave a significant ~3% reduction in erosion (Pr <0.01) at the watershed scale and a ~12% reduction in erosion at the HRU scale. Studies using the SWAT-T model indicated that terraces have advantages for reducing erosion from terraced agricultural watersheds. SWAT-T can be used in the evaluation of erosion to sustainably conserve soil and water.Keywords: erosion, modeling, terraces, SWAT
Procedia PDF Downloads 2074780 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation
Authors: Miguel Contreras, David Long, Will Bachman
Abstract:
Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, these imaging experiments face challenges that can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and a limited number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline to predict cellular component morphology for virtual-cell generation based on fluorescence cell membrane confocal z-stacks. Methods: Registered confocal z-stacks of the nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained from fluorescence confocal microscopy and normalized through a software pipeline so that each image had a mean pixel intensity value of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels on unlabeled transmitted light microscopy cell images, was trained using this set of normalized z-stacks on a single-CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell membrane fluorescence images as input. Predictions were compared to the ground truth fluorescence nuclei images. Results: After one week of training, using one cell membrane z-stack (20 images) and the corresponding nuclei label, results showed qualitatively good predictions on the training set. The algorithm was able to accurately predict nuclei locations as well as shape when fed only fluorescence membrane images. Similar training sessions with improved membrane image quality, including clear outlining and shape of the membrane showing the boundaries of each cell, proportionally improved nuclei predictions, reducing errors relative to ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need for multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict different labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models.Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models
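The normalization step described in the Methods (each image rescaled to a mean pixel intensity of 0.5) can be sketched in a few lines. This is a minimal sketch of that one step only; the original pipeline may additionally clip, denoise, or register slices.

```python
import numpy as np

def normalize_zstack(zstack, target_mean=0.5):
    """Scale each slice of a confocal z-stack to a fixed mean intensity.

    zstack: array of shape (n_slices, height, width) with positive
    intensities. Each slice is multiplied by a scalar so that its mean
    pixel value equals `target_mean`.
    """
    zstack = np.asarray(zstack, dtype=float)
    out = np.empty_like(zstack)
    for i, img in enumerate(zstack):
        out[i] = img * (target_mean / img.mean())
    return out
```

After this step, every slice in the stack has the same mean brightness, removing per-slice intensity drift before training.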
Procedia PDF Downloads 2054779 Multiphase Equilibrium Characterization Model For Hydrate-Containing Systems Based On Trust-Region Method Non-Iterative Solving Approach
Authors: Zhuoran Li, Guan Qin
Abstract:
A robust and efficient compositional equilibrium characterization model for hydrate-containing systems is required, especially for time-critical simulations such as subsea pipeline flow assurance analysis, compositional simulation in hydrate reservoirs, etc. A multiphase flash calculation framework, which combines a Gibbs energy minimization function and the cubic-plus-association (CPA) EoS, is developed to describe the highly non-ideal phase behavior of hydrate-containing systems. A non-iterative eigenvalue-problem approach to the trust-region sub-problem is selected to guarantee efficiency. The developed flash model is based on the state-of-the-art objective function proposed by Michelsen to minimize the Gibbs energy of the multiphase system. A hydrate-containing system typically contains polar components (such as water and hydrate inhibitors), which introduce hydrogen bonds that influence phase behavior. Thus, the CPA EoS is utilized to compute the thermodynamic parameters. The solid solution theory proposed by van der Waals and Platteeuw is applied to represent the hydrate phase parameters. The trust-region method, combined with the non-iterative eigenvalue approach for the trust-region sub-problem, is utilized to ensure fast convergence. The accuracy of the developed multiphase flash model is validated against three available models (one published and two commercial models). Hundreds of published equilibrium experimental data points for hydrate-containing systems were collected to serve as the reference set for the accuracy test. The comparison shows that our model has superior performance over two of the models and comparable calculation accuracy to CSMGem. An efficiency test has also been carried out. Because the trust-region method determines the optimization step's direction and size simultaneously, fast solution progress can be obtained. The comparison results show that fewer iterations are needed to optimize the objective function using trust-region methods than line search methods. The non-iterative eigenvalue-problem approach also achieves faster computation than the conventional iterative solving algorithm for the trust-region sub-problem, further improving the calculation efficiency. A new thermodynamic framework of the multiphase flash model for hydrate-containing systems has been constructed in this work. Sensitivity analysis and numerical experiments have been carried out to prove the accuracy and efficiency of this model. Furthermore, this model is simple to implement on top of the thermodynamic frameworks currently used in the oil and gas industry.Keywords: equation of state, hydrates, multiphase equilibrium, trust-region method
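The trust-region sub-problem at the heart of this abstract is: minimize gᵀp + ½pᵀBp subject to ‖p‖ ≤ Δ. The sketch below solves it by eigendecomposition plus bisection on the Lagrange multiplier — a deliberately simple illustration of the same sub-problem, not the paper's non-iterative eigenvalue scheme, and it assumes the "easy" (non-degenerate) case.

```python
import numpy as np

def trust_region_subproblem(g, B, delta):
    """Solve min g.T p + 0.5 p.T B p subject to ||p|| <= delta.

    Eigendecompose B and bisect on the multiplier lam so that
    ||p(lam)|| = delta on the boundary. Assumes the easy case
    (g not orthogonal to the lowest eigenvector when B is indefinite).
    """
    w, Q = np.linalg.eigh(B)
    gt = Q.T @ g

    def p_norm(lam):
        return np.linalg.norm(gt / (w + lam))

    # Unconstrained Newton step when B is positive definite and inside.
    if w.min() > 0 and p_norm(0.0) <= delta:
        return Q @ (-gt / w)

    # Otherwise the minimizer lies on the boundary.
    lo = max(0.0, -w.min()) + 1e-12
    hi = lo + 1.0
    while p_norm(hi) > delta:   # p_norm(lam) -> 0 as lam grows
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if p_norm(mid) > delta:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return Q @ (-gt / (w + lam))
```

In the flash calculation, g and B would be the gradient and Hessian of the Gibbs energy objective; the paper replaces the bisection above with a single eigenvalue solve.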
Procedia PDF Downloads 1724778 Studies on the Effect of Dehydration Techniques, Treatments, Packaging Material and Methods on the Quality of Buffalo Meat during Ambient Temperature Storage
Authors: Tariq Ahmad Safapuri, Saghir Ahmad, Farhana Allai
Abstract:
The present study was conducted to evaluate the effect of dehydration techniques (polyhouse and tray drying), different treatments (SHMP, SHMP + salt, salt + turmeric), different packaging materials (HDPE, combination film), and different packaging methods (air, vacuum, CO2 flush) on the quality of dehydrated buffalo meat during ambient temperature storage. The quality parameters measured included physico-chemical characteristics, i.e., pH, rehydration ratio and moisture content, and microbiological characteristics, viz. total plate count. It was found that the treatments (SHMP, SHMP + salt, salt + turmeric) increased the pH. The moisture content of the dehydrated meat samples was found to be between 5.54% and 7.20%. The rehydration ratio was highest for the salt + turmeric treated sample and lowest for the control sample. The bacterial count (log TPC/g) of the salt + turmeric treated, tray-dried sample was lowest, i.e., 1.80. During ambient temperature storage, there was no considerable change in the pH of the dehydrated samples up to 150 days. However, the moisture content of the samples increased to different extents in the different packaging systems. The highest moisture rise was found for the control sample packed in HDPE with air, while the lowest increase was reported for the SHMP + salt treated sample vacuum-packed in combination film. The rehydration ratio was considerably affected for the polyhouse-dried samples packed in HDPE with air after 150 days of ambient storage, while there was very little change in the rehydration ratio of meat samples packed in combination film with CO2 flush. The TPC was found to be within the safe limit even after 150 days of storage. The microbial count was lowest for the salt + turmeric treated samples after 150 days of storage.Keywords: ambient temperature, dehydration technique, rehydration ratio, SHMP (sodium hexametaphosphate), HDPE (high-density polyethylene)
Procedia PDF Downloads 4184777 Belief-Based Games: An Appropriate Tool for Uncertain Strategic Situation
Authors: Saied Farham-Nia, Alireza Ghaffari-Hadigheh
Abstract:
Game theory is a mathematical tool for studying the behavior of rational and strategic decision-makers; it analyzes the equilibria existing in conflict-of-interest situations and provides appropriate mechanisms for cooperation between two or more players. Game theory is applicable to any strategic, conflict-of-interest situation in politics, management, economics, sociology, etc. Real-world decisions are usually made under indeterminacy, and the players often lack information about the other players' payoffs, or even their own, which leads to games in uncertain environments. When historical data for estimating the distribution of decision parameters are unavailable, we may have no choice but to use expert belief degrees, which represent the strength with which we believe an event will happen. To deal with belief degrees, we use uncertainty theory, introduced and developed by Liu on the basis of the normality, duality, subadditivity and product axioms, for modeling personal belief degrees. As is well known, the personal belief degree heavily depends on personal knowledge concerning the event; when personal knowledge changes, the belief degree changes too. Uncertainty theory is not only theoretically self-consistent but also the best among existing theories for modeling belief degrees in practical problems. In this work, we first reintroduce the expected utility function in an uncertain environment according to the axioms of uncertainty theory in order to extract payoffs. Then, we employ the Nash equilibrium to investigate the solutions. For more practical issues, the Stackelberg leader-follower game and the Bertrand game are discussed as benchmark models. Compared to existing articles on similar topics, the game models and solution concepts introduced in this article can serve as a framework for problems in uncertain competitive situations based on experienced experts' belief degrees.Keywords: game theory, uncertainty theory, belief degree, uncertain expected value, Nash equilibrium
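The Nash equilibrium solution concept invoked above can be illustrated with a classical (crisp-payoff) sketch: enumerate strategy profiles of a bimatrix game and keep those where each player is best-responding. Under the paper's framework, the payoff entries would instead be uncertain expected utilities derived from belief degrees; the example game below is the standard prisoner's dilemma, used only as an illustration.

```python
from itertools import product

def pure_nash_equilibria(payoff_a, payoff_b):
    """Find pure-strategy Nash equilibria of a two-player bimatrix game.

    payoff_a[i][j] / payoff_b[i][j]: payoffs when player A plays row i
    and player B plays column j. A profile (i, j) is an equilibrium if
    neither player can gain by deviating unilaterally.
    """
    rows = range(len(payoff_a))
    cols = range(len(payoff_a[0]))
    eq = []
    for i, j in product(rows, cols):
        best_a = all(payoff_a[i][j] >= payoff_a[k][j] for k in rows)
        best_b = all(payoff_b[i][j] >= payoff_b[i][l] for l in cols)
        if best_a and best_b:
            eq.append((i, j))
    return eq

# Prisoner's dilemma: strategy 0 = cooperate, 1 = defect.
A = [[-1, -3], [0, -2]]
B = [[-1, 0], [-3, -2]]
```

Running the solver on this game returns mutual defection as the unique pure equilibrium, matching the textbook result.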
Procedia PDF Downloads 4154776 Day Ahead and Intraday Electricity Demand Forecasting in Himachal Region using Machine Learning
Authors: Milan Joshi, Harsh Agrawal, Pallaw Mishra, Sanand Sule
Abstract:
Predicting electricity usage is a crucial aspect of organizing and controlling sustainable energy systems. The task of forecasting electricity load is intricate and requires a lot of effort due to the combined impact of social, economic, technical, environmental, and cultural factors on power consumption in communities. As a result, it is important to create strong models that can handle the significantly non-linear and complex nature of the task. The objective of this study is to create and compare machine learning techniques for predicting electricity load both day-ahead and intraday, taking into account various factors such as meteorological data and social events, including holidays and festivals. The proposed methods include LightGBM, FBProphet, and a combination of FBProphet and LightGBM for day-ahead forecasting, and motifs (Stumpy), based on Mueen's algorithm for similarity search (MASS), for intraday forecasting. We utilize these techniques to predict electricity usage during normal days and social events in the Himachal region. We then assess their performance by measuring the MSE, RMSE, and MAPE values. The outcomes demonstrate that the combination of FBProphet and LightGBM is the most accurate for day-ahead forecasting, and motifs for intraday forecasting of electricity usage, surpassing the other models in terms of MAPE, RMSE, and MSE. Moreover, the FBProphet-LightGBM approach proves to be highly effective in forecasting electricity load during social events, exhibiting precise day-ahead predictions. In summary, our proposed electricity forecasting techniques display excellent performance in predicting electricity usage during normal days and special events in the Himachal region.Keywords: feature engineering, FBProphet, LightGBM, MASS, motifs, MAPE
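Two ingredients of this study can be sketched compactly: the evaluation metrics (MAPE, RMSE) and the core of the MASS idea, a z-normalized distance profile between a query subsequence and every window of a longer load series. The brute-force version below computes the same profile that MASS obtains faster via the FFT; it is illustrative, not the Stumpy implementation.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error (assumes y_true has no zeros)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * float(np.mean(np.abs((y_true - y_pred) / y_true)))

def rmse(y_true, y_pred):
    """Root mean squared error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def distance_profile(query, series):
    """z-normalized Euclidean distance from `query` to every window of
    `series` (brute force; MASS computes the same profile via the FFT).
    Assumes no window is constant (nonzero std)."""
    q = np.asarray(query, float)
    q = (q - q.mean()) / q.std()
    m, out = len(q), []
    for i in range(len(series) - m + 1):
        w = np.asarray(series[i:i + m], float)
        w = (w - w.mean()) / w.std()
        out.append(float(np.linalg.norm(q - w)))
    return np.array(out)
```

For intraday forecasting, the minima of the distance profile locate the historical days whose load shape best matches the current partial day, and their continuations form the forecast.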
Procedia PDF Downloads 724775 The Display of Environmental Information to Promote Energy Saving Practices: Evidence from a Massive Behavioral Platform
Authors: T. Lazzarini, M. Imbiki, P. E. Sutter, G. Borragan
Abstract:
While several strategies, such as the development of more efficient appliances, the financing of insulation programs or the rollout of smart meters, represent promising tools to reduce future energy consumption, their implementation relies on people’s decisions and actions. Likewise, engaging with consumers to reshape their behavior has been shown to be another important way to reduce energy usage. For these reasons, integrating the human factor into the energy transition has become a major objective for researchers and policymakers. Digital education programs based on tangible and gamified user interfaces have become a new tool with the potential to reduce energy consumption. The B2020 program, developed by the firm “Économie d’Énergie SAS”, offers a digital platform to encourage pro-environmental behavior change among employees and citizens. The platform integrates 160 eco-behaviors to help save energy and water and reduce waste and CO2 emissions. A total of 13,146 citizens have used the tool so far to declare the range of eco-behaviors they adopt in their daily lives. The present work builds on this database to identify the potential impact of the adopted energy-saving behaviors (n=62) on reducing energy use in buildings. To this end, behaviors were classified into three categories regarding the nature of their implementation (Eco-habits, e.g., turning off the lights; Eco-actions, e.g., installing low-carbon technology such as LED light bulbs; and Home-refurbishments, e.g., wall insulation or double-glazed, energy-efficient windows). General linear models (GLM) disclosed a significantly higher frequency of Eco-habits when compared to the number of Home-refurbishments realized by the platform users. While this might be explained in part by the high financial costs associated with home renovation works, it also contrasts with the up to three times larger energy savings that can be accomplished by these means. Furthermore, multiple regression models failed to disclose the expected relationship between energy savings and the frequency of adopted eco-behaviors, suggesting that energy-related practices are not necessarily driven by the corresponding energy savings. Finally, our results also suggested that people adopting more Eco-habits and Eco-actions were more likely to engage in Home-refurbishments. Altogether, these results fit well with a growing body of scientific research showing that energy-related practices do not necessarily maximize utility, as postulated by traditional economic models, and suggest that other variables might be triggering them. Promoting home refurbishments could benefit from the adoption of complementary energy-saving habits and actions.Keywords: energy-saving behavior, human performance, behavioral change, energy efficiency
Procedia PDF Downloads 2004774 Sustainable Project Management: Driving the Construction Industry Towards Sustainable Developmental Goals
Authors: Francis Kwesi Bondinuba, Seidu Abdullah, Mewomo Cecilia, Opoku Alex
Abstract:
Purpose: The purpose of this research is to develop a framework for understanding how sustainable project management contributes to the construction industry's pursuit of the Sustainable Development Goals. Study design/methodology/approach: The study employed a theoretical methodology to review existing theories and models that support Sustainable Project Management (SPM) in the construction industry. Additionally, a comprehensive review of the current literature on SPM was conducted to provide a thorough grounding for the study. Findings: Sustainable Project Management (SPM) practices, including stakeholder engagement and collaboration, resource efficiency, waste management, risk management, and resilience, play a crucial role in achieving the Sustainable Development Goals (SDGs) within the construction industry. Conclusion: Adopting Sustainable Project Management (SPM) practices in the Ghanaian construction industry enhances social inclusivity by engaging communities and creating job opportunities. The adoption of these practices, however, faces significant challenges, including a lack of awareness and understanding, insufficient regulatory frameworks, financial constraints, and a shortage of skilled professionals. Recommendation: There should be a comprehensive approach to project planning and execution that includes stakeholders such as local communities, government bodies, and environmental organisations, the use of green building materials and technologies, and the implementation of effective waste management strategies, all of which will support the achievement of the SDGs in Ghana's construction industry. Originality/value: This paper adds to the current literature by setting out the various theories and models in Sustainable Project Management (SPM) and offering a detailed review of how SPM contributes to the achievement of the Sustainable Development Goals (SDGs) in the Ghanaian construction industry.Keywords: sustainable development, sustainable development goals, construction industry, Ghana, sustainable project management
Procedia PDF Downloads 244773 Vulnerability Assessment of Vertically Irregular Structures during Earthquake
Authors: Pranab Kumar Das
Abstract:
Vulnerability assessment of buildings with irregularity in the vertical direction has been carried out in this study. The constructions of vertically irregular buildings are increasing in the context of fast urbanization in the developing countries including India. During two reconnaissance based survey performed after Nepal earthquake 2015 and Imphal (India) earthquake 2016, it has been observed that so many structures are damaged due to the vertically irregular configuration. These irregular buildings are necessary to perform safely during seismic excitation. Therefore, it is very urgent demand to point out the actual vulnerability of the irregular structure. So that remedial measures can be taken for protecting those structures during natural hazard as like earthquake. This assessment will be very helpful for India and as well as for the other developing countries. A sufficient number of research has been contributed to the vulnerability of plan asymmetric buildings. In the field of vertically irregular buildings, the effort has not been forwarded much to find out their vulnerability during an earthquake. Irregularity in vertical direction may be caused due to irregular distribution of mass, stiffness and geometrically irregular configuration. Detailed analysis of such structures, particularly non-linear/ push over analysis for performance based design seems to be challenging one. The present paper considered a number of models of irregular structures. Building models made of both reinforced concrete and brick masonry are considered for the sake of generality. The analyses are performed with both help of finite element method and computational method.The study, as a whole, may help to arrive at a reasonably good estimate, insight for fundamental and other natural periods of such vertically irregular structures. The ductility demand, storey drift, and seismic response study help to identify the location of critical stress concentration. 
In summary, this paper is a step toward understanding the vulnerability of vertically irregular structures and framing guidelines for them.
Keywords: ductility, stress concentration, vertically irregular structure, vulnerability
Procedia PDF Downloads 229
4772 Programmatic Actions of Social Welfare State in Service to Justice: Law, Society and the Third Sector
Authors: Bruno Valverde Chahaira, Matheus Jeronimo Low Lopes, Marta Beatriz Tanaka Ferdinandi
Abstract:
This paper dissects the meanings and/or directions of the State in order to present State models and build a conceptual framework of the State's function in the legal sphere. To do so, it points out the possible contracts established between the State and Society, since the general principles immanent in them can guide the models of society in force. From this orientation arise the contracts whose purpose is to modify the status (the being and/or the opinion) of each of the subjects present: State and Society. In this logic, the paper introduces fiduciary contracts and contracts of "veredicção" (veridiction) from the perspective of discourse semiotics (Greimasian semiotics). The study therefore focuses on the language manifested in unilateral and bilateral, or reciprocal, relations between the State and Society. Thus, through the lenses of the model of the communicative situation and of discourse, the guidelines of these contractual relations are analyzed to determine whether a pragmatic sanction exists: positive when the contract between the subjects is fulfilled (reward), or negative when the contract between them is broken (punishment). In this way a third path emerges which, in this specific case, passes through the subject of the third sector. In other words, the proposal, which is systemic in nature, is to analyze what happens when the welfare-state contract is not fulfilled within the constitutional program of fundamental rights: education, health, housing, and others. In the structure of the exchange demanded by society according to its contractual obligations, the third way (the Third Sector) advances into the empty space left by the State. Along this line, the paper presents the modalities of action of the third sector in the social sphere.
Finally, the normative communicative organization of these three subjects, namely the State, Society, and the Third Sector, is sought in the pragmatic model of discourse, in an attempt to understand the constant dynamics in the Law and in the language of the relations established between them.
Keywords: access to justice, state, social rights, third sector
Procedia PDF Downloads 145