Search results for: generalized extreme values
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8421


1761 Improved Regression Relations Between Different Magnitude Types and the Moment Magnitude in the Western Balkan Earthquake Catalogue

Authors: Anila Xhahysa, Migena Ceyhan, Neki Kuka, Klajdi Qoshi, Damiano Koxhaj

Abstract:

The seismic event catalogue has been updated in the framework of a bilateral project supported by the Central European Investment Fund and with the extensive support of the Global Earthquake Model Foundation to update Albania's national seismic hazard model. The earthquake catalogue prepared within this project covers the Western Balkan area limited by 38.0°-48.0°N, 12.5°-24.5°E and includes 41,806 earthquakes that occurred in the region between 510 BC and 2022. Since the moment magnitude characterizes earthquake size accurately and the ground motion prediction equations selected for the seismic hazard assessment employ this scale, it was chosen as the uniform magnitude scale for the catalogue. Therefore, proxy values of moment magnitude had to be obtained by using new magnitude conversion equations from the local and other magnitude types to this unified scale. The Global Centroid Moment Tensor Catalogue was considered the most authoritative source of moment magnitude reports for moderate to large earthquakes; hence, it was used as a reference for calibrating other sources. The best fit was observed when compared to some regional agencies, whereas differences were observed in all magnitude ranges with the moment magnitudes reported by Italy, Greece, and Turkey. For teleseismic magnitudes, to account for the non-linearity of the relationships, we used the exponential model for the derivation of the regression equations. The obtained regressions for the surface-wave magnitude and the short-period body-wave magnitude show considerable differences with the Global Earthquake Model regression curves, especially for low magnitude ranges. Moreover, a conversion relation was obtained between the local magnitude of Albania and the corresponding moment magnitude as reported by the global and regional agencies. As errors were present in both variables, the Deming regression was used.
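As a rough illustration of the final regression step described above, the sketch below fits a Deming regression (errors in both variables) between a local magnitude and a reference moment magnitude; the magnitude pairs and the error-variance ratio are placeholder assumptions, not the catalogue data.

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Deming regression slope/intercept with error-variance ratio delta = var(err_y)/var(err_x)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sxx = np.sum((x - mx) ** 2)
    syy = np.sum((y - my) ** 2)
    sxy = np.sum((x - mx) * (y - my))
    slope = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical (ML, Mw) pairs, purely illustrative
ml = np.array([3.1, 3.8, 4.2, 4.9, 5.5, 6.0])
mw = np.array([3.3, 3.9, 4.4, 5.0, 5.4, 6.1])
b, a = deming_fit(ml, mw)
print(f"Mw_proxy = {a:.2f} + {b:.2f} * ML")
```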

Keywords: regression, seismic catalogue, local magnitude, tele-seismic magnitude, moment magnitude

Procedia PDF Downloads 68
1760 Class Control Management Issues and Solutions in Interactive Learning Theories’ Efficiency and the Application Case Study: 3rd Year Primary School

Authors: Mohammed Belalia Douma

Abstract:

Interactive learning is considered the most effective learning strategy. It is an educational philosophy based on the learner's contribution and involvement, mainly in the classroom: how the learner interacts with his small society, the "classroom", and the level of his collaboration in challenge, discovery, games, and participation. Interactive learning aims to activate the learner's role in the learning process, focusing on research, experimentation, and the learner's self-reliance in obtaining information, acquiring skills, and forming values and attitudes. It is not based on memorization alone, but rather on developing thinking and the ability to solve problems, and on teamwork and collaborative learning. With the exchange of roles between teacher and student, when the student becomes more active and performs more operations, several issues may arise concerning class control and management, noise, and the stability of learning. This research paper observes the application of interactive learning in a real classroom, tests several assumptions, and analyzes the issues arising from these strategies, mainly noise and class control. The research sample was about 150 students of the 3rd year of primary school in the Chlef district, Algeria; the learners were beginners aged 08 to 10 years. We administered a confidential questionnaire of fifteen questions and analyzed the attitudes of the learners over three months. As teachers, we witnessed a variety of strategies for applying interactive learning, but with different issues: time management, noise, uncontrolled classes, and overcrowded classes. Finally, it is concluded that although active education is an inevitably effective method of teaching, it has drawbacks, and not all theoretical strategies can be applied; we close with solutions drawn from this case study.

Keywords: interactive learning, student, learners, strategies

Procedia PDF Downloads 57
1759 Study of the Physicochemical Characteristics of Liquid Effluents from the El Jadida Wastewater Treatment Plant

Authors: Aicha Assal, El Mostapha Lotfi

Abstract:

Rapid industrialization and population growth are currently the main causes of the energy and environmental problems associated with wastewater treatment. Wastewater treatment plants (WWTPs) aim to treat wastewater before discharging it into the environment, but they are not yet capable of treating non-biodegradable contaminants such as heavy metals. Toxic heavy metals can disrupt biological processes in WWTPs. Consequently, it is crucial to combine additional physico-chemical treatments with WWTPs to ensure effective wastewater treatment. In this study, the authors examined the pretreatment process for urban wastewater generated by the El Jadida WWTP in order to assess its treatment efficiency. Various physicochemical and spatiotemporal parameters of the WWTP's raw and treated water were studied, including temperature, pH, conductivity, biochemical oxygen demand (BOD5), chemical oxygen demand (COD), suspended solids (SS), total nitrogen, and total phosphorus. The results showed an improvement in treatment yields, with measured performance values of 77% for BOD5, 63% for COD, and 66% for SS. However, spectroscopic analyses revealed persistent coloration in wastewater samples leaving the WWTP, as well as the presence of heavy metals such as zinc, cadmium, chromium, and cobalt, detected by inductively coupled plasma optical emission spectroscopy (ICP-OES). To remedy these coloration problems and reduce the presence of heavy metals, a new low-cost, environmentally friendly eggshell-based solution was proposed. This method eliminated most heavy metals, such as cobalt, beryllium, silver, and copper, and significantly reduced the amounts of cadmium, lead, chromium, manganese, aluminium, and zinc. In addition, the bioadsorbent was able to decolorize wastewater by up to 84%. This adsorption process is, therefore, of great interest for ensuring the quality of wastewater and promoting its reuse in irrigation.
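As a minimal sketch of how the reported treatment yields are computed, the snippet below applies the standard removal-efficiency formula R = (C_in − C_out)/C_in × 100; the influent/effluent concentrations are placeholders chosen only to reproduce percentages of the same order, not the plant's measurements.

```python
def removal_efficiency(c_in, c_out):
    """Percent removal between influent and effluent concentrations (same units)."""
    return 100.0 * (c_in - c_out) / c_in

# Hypothetical influent/effluent concentrations in mg/L, for illustration only
samples = {"BOD5": (300.0, 69.0), "COD": (600.0, 222.0), "SS": (350.0, 119.0)}
for name, (cin, cout) in samples.items():
    print(f"{name}: {removal_efficiency(cin, cout):.0f}% removal")
```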

Keywords: WWTP, wastewater, heavy metals, decoloration, depollution, COD, BOD5

Procedia PDF Downloads 62
1758 Decarboxylation of Waste Coconut Oil and Comparison of Acid Values

Authors: Pabasara H. Gamage, Sisira K. Weliwegamage, Sameera R. Gunatilake, Hondamuni I. C De Silva, Parakrama Karunaratne

Abstract:

Green diesel is an upcoming category of biofuels, which has more practical advantages than biodiesel. Production of green diesel involves the production of hydrocarbons from various fatty acid sources. Though green diesel is chemically similar to fossil fuel hydrocarbons, it is more environmentally friendly. Decarboxylation of fatty acid sources is one of the green diesel production methods and is less expensive and more energy efficient compared to hydrodeoxygenation. Free fatty acids (FFAs) undergo decarboxylation more readily than triglycerides. Waste coconut oil, which is a rich source of FFAs, can be decarboxylated more easily than other oils with lower FFA contents. These free fatty acids can be converted to hydrocarbons by decarboxylation. Experiments were conducted to carry out the decarboxylation of waste coconut oil in a high-pressure Hastelloy reactor (Toption Group Ltd.) in the presence of soda lime and mixtures of soda lime and alumina. The acid value (AV) correlates with the amount of FFA available in a sample of oil, so a decrease in AV indicates that FFAs have been converted to hydrocarbons. First, waste coconut oil was reacted with soda lime alone at 150 °C, 200 °C, and 250 °C and 1.2 MPa pressure for 2 hours, and the AVs of the products at different temperatures were compared; the AV of the products decreased with increasing temperature. Thereafter, different mixtures of soda lime and alumina (100% soda lime, 1:1 soda lime and alumina, and 100% alumina) were employed at 150 °C, 200 °C, and 250 °C and 1.2 MPa pressure. The lowest AV of 2.99±0.03 was obtained when 1:1 soda lime and alumina was employed at 250 °C. It can be concluded with respect to the AV that the amount of FFA decreased when the decarboxylation temperature was increased. The 1:1 soda lime:alumina mixture showed the lowest AV among the compositions studied. These findings lead to a method to successfully synthesize hydrocarbons by decarboxylating waste coconut oil in the presence of soda lime and alumina (1:1) at elevated temperatures such as 250 °C.
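For context, the acid value referred to above is conventionally obtained by KOH titration as AV = 56.1·V·N/m; a minimal sketch with placeholder titration figures (not the study's measurements) follows.

```python
def acid_value(v_koh_ml, normality, sample_mass_g):
    """Acid value in mg KOH per g of oil: AV = 56.1 * V * N / m."""
    return 56.1 * v_koh_ml * normality / sample_mass_g

# Placeholder titration data, for illustration only
print(round(acid_value(v_koh_ml=1.07, normality=0.1, sample_mass_g=2.0), 2))  # about 3.0 mg KOH/g
```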

Keywords: acid value, free fatty acids, green diesel, high pressure reactor, waste coconut oil

Procedia PDF Downloads 298
1757 Neuropsychology of Dyslexia and Rehabilitation Approaches: A Research Study Applied to School Aged Children with Reading Disorders in Greece

Authors: Rozi Laskaraki, Argyris Karapetsas, Aikaterini Karapetsa

Abstract:

This paper focuses on the efficacy of a rehabilitation program based on musical activities applied to a group of school-aged dyslexic children. Objective: The purpose of this study was to investigate the efficacy of auditory training including musical exercises in children with developmental dyslexia (DD). Participants and Methods: 45 third- and fourth-grade students with DD and a matched control group (n=45) were involved in this study. In the beginning, students participated in a clinical assessment, including both electrophysiological (i.e., event-related potentials (ERPs), especially the P300 waveform) and neuropsychological tests, conducted at the Laboratory of Neuropsychology, University of Thessaly, Volos, Greece. The initial assessment's results confirmed statistically significantly lower performance for children with DD compared to that of the typical readers. After the clinical assessment, a subgroup of children with dyslexia was submitted to a music auditory training program, conducted in 45-minute training sessions, once a week, for twenty weeks. The program included structured and digitized musical activities involving pitch, rhythm, melody, and tempo perception and discrimination, as well as auditory sequencing. After the intervention period, children underwent a new recording of ERPs. Results: The electrophysiological results revealed that children had P300 latency values similar to those of the controls after the remediation program; thus, the children overcame their deficits. Conclusion: The outcomes of the current study suggest that ERPs are a valid clinical tool in neuropsychological assessment settings and that dyslexia can be ameliorated through music auditory training.

Keywords: dyslexia, event related potentials, learning disabilities, music, rehabilitation

Procedia PDF Downloads 145
1756 Architectural Approaches to a Sustainable Community with Floating Housing Units Adapting to Climate Change and Sea Level Rise in Vietnam

Authors: Nguyen Thi Thu Trang

Abstract:

Climate change and sea level rise are among the greatest challenges facing human beings in the 21st century. Because of sea level rise, several low-lying coastal areas around the globe are at risk of being completely submerged and disappearing under water. In Viet Nam in particular, the rise in sea level is predicted to result in more frequently, and even permanently, inundated coastal plains. As a result, the land reserve of coastal cities is going to be narrowed in the near future, while construction ground is becoming increasingly limited due to rapid population growth. Faced with this reality, solutions are being discussed not only in the traditional view, in which accommodation is raised or moved to higher areas, or “living with the water”, but also looking forward to “living on the water”. Therefore, the concept of a sustainable floating community with floating houses, based on the precious value of the long historical tradition of water dwellings in Viet Nam, would be a sustainable solution for adapting to climate change and sea level rise in the coastal areas. The sustainable floating community comprises sustainability in four components: architecture, environment, socio-economics, and living quality. This research paper focuses on sustainability in the architectural component of the floating community. Through a detailed architectural analysis of current floating houses and floating communities in Viet Nam, this research not only accumulates the precious values of traditional architecture that need to be preserved and developed in the proposed concept, but also illustrates its weaknesses that need to be addressed for the optimal design of future sustainable floating communities. Based on these studies, the research provides guidelines with appropriate architectural solutions for the concept of a sustainable floating community with floating housing units adapted to climate change and sea level rise in Viet Nam.

Keywords: guidelines, sustainable floating community, floating houses, Vietnam

Procedia PDF Downloads 516
1755 Future Design and Innovative Economic Models for Futuristic Markets in Developing Countries

Authors: Nessreen Y. Ibrahim

Abstract:

Designing the future according to a realistic analytical study of future market needs can be a milestone strategy for achieving major improvements in developing countries' economies. In developing countries, access to high technology and to the latest scientific approaches is very limited. The financial problems of low- and medium-income countries have negative effects on the kind and quality of new technologies imported and applied in their markets. Thus, there is a strong need for a paradigm shift in design-process thinking to improve and evolve their development strategy. This paper discusses future possibilities in developing countries and how they can design their own future according to specific Future Design Models (FDM), which are established to solve certain economic problems as well as political and cultural conflicts. FDM is a strategic thinking framework that provides improvement in both content and process. The content includes beliefs, values, mission, purpose, conceptual frameworks, research, and practice, while the process includes design methodology, design systems, and design management tools. The main objective of this paper was to build an innovative economic model to design a chosen possible future scenario: by understanding future market needs, analyzing the real-world setting, solving the model questions through future-driven design, and finally interpreting the results to discuss to what extent they can be transferred to the real world. The paper discusses Egypt as a potential case study. Since Egypt has highly complex economic problems, extremely dynamic political factors, and very rich cultural aspects, we consider it a very challenging example for applying FDM. The results recommend using FDM numerical modeling as a starting point to design the future.

Keywords: developing countries, economic models, future design, possible futures

Procedia PDF Downloads 265
1754 Gamipulation: Exploring Covert Manipulation Through Gamification in the Context of Education

Authors: Aguiar-Castillo Lidia, Perez-Jimenez Rafael

Abstract:

The integration of gamification in educational settings aims to enhance student engagement and motivation through game design elements in learning activities. This paper introduces "Gamipulation," the subtle manipulation of students via gamification techniques serving hidden agendas without explicit consent. It highlights the need to distinguish between beneficial and exploitative uses of gamification in education, focusing on its potential to psychologically manipulate students for purposes misaligned with their best interests. Through a literature review and expert interviews, this study presents a conceptual framework outlining gamipulation's features. It examines ethical concerns such as gradually introducing desired behaviors, using distraction to divert attention from significant learning objectives, the immediacy of rewards fostering short-term engagement over long-term learning, the infantilization of students, and the exploitation of emotional responses over reflective thinking. Additionally, it discusses ethical issues in collecting and utilizing student data within gamified environments. Key findings suggest that while gamification can enhance motivation and engagement, there is a fine line between ethical motivation and unethical manipulation. The study emphasizes the importance of transparency, respect for student autonomy, and alignment with educational values in gamified systems. It calls for educators and designers to be aware of gamification's manipulative potential and to strive for ethical implementation that benefits students. In conclusion, this paper provides a framework for educators and researchers to understand and address gamipulation's ethical challenges. It encourages developing ethical guidelines and practices to ensure gamification in education remains a tool for positive engagement and learning rather than covert manipulation.

Keywords: gradualness, distraction, immediacy, infantilization, emotion

Procedia PDF Downloads 26
1753 Body Types of Softball Players in the 39th National Games of Thailand

Authors: Nopadol Nimsuwan, Sumet Prom-in

Abstract:

The purpose of this study was to investigate the body types, size, and body composition of softball players in the 39th National Games of Thailand. The population of this study was 352 softball players who participated in the 39th National Games of Thailand, from which a sample size of 291 was determined using the Taro Yamane formula, with selection made using a stratified sampling method. The data collected were weight, height, arm length, leg length, chest circumference, mid-upper arm circumference, calf circumference, and subcutaneous fat in the upper arm area, the scapula area, the area above the pelvis, and the mid-calf area. The Keys and Brozek formula was used to calculate fat quantity, the Kitagawa formula to calculate muscle quantity, and the Heath and Carter method to determine the values of body dimensions. The results of the study can be concluded as follows. The average body dimensions of the male softball players were of the endo-mesomorph body type, while the average body dimensions of the female softball players were of the meso-endomorph body type. When considered according to softball positions, it was found that the male softball players in every position had the endo-mesomorph body type, while the female softball players in every position had the meso-endomorph body type, except for the center fielder, who had the endo-ectomorph body type. The endo-mesomorph body type is suitable for male softball players, and the meso-endomorph body type is suitable for female softball players, because these body types are suitable for the five basic softball skills: gripping, throwing, catching, hitting, and base running. Thus, people involved in selecting softball players for competitions at different levels should consider factors such as the body type, size, and body components of the players.
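A minimal sketch of the Taro Yamane sample-size formula mentioned above, n = N/(1 + N·e²), is given below; the margin of error e is an assumed value for illustration, since the abstract does not state the one used.

```python
def yamane_sample_size(population, margin_of_error):
    """Taro Yamane formula: n = N / (1 + N * e^2)."""
    return population / (1 + population * margin_of_error ** 2)

# N = 352 softball players; e = 0.025 is an assumed margin of error for illustration
print(round(yamane_sample_size(352, 0.025)))  # about 289
```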

Keywords: body types, softball players, national games of Thailand, social sustainability

Procedia PDF Downloads 484
1752 Duration of the Disease in Systemic Sclerosis and Efficiency of Rituximab Therapy

Authors: Liudmila Garzanova, Lidia Ananyeva, Olga Koneva, Olga Ovsyannikova, Oxana Desinova, Mayya Starovoytova, Rushana Shayahmetova, Anna Khelkovskaya-Sergeeva

Abstract:

Objectives: The duration of the disease could be one of the leading factors in the effectiveness of therapy in systemic sclerosis (SSc). The aim of the study was to assess how the duration of the disease affects the changes in lung function in patients (pts) with interstitial lung disease (ILD) associated with SSc during long-term RTX therapy. Methods: We prospectively included 113 pts with SSc in this study. 85% of pts were female. Mean age was 48.1±13 years. The diffuse cutaneous subset of the disease was present in 62 pts, limited in 40, and overlap in 11. The mean disease duration was 6.1±5.4 years. Pts were divided into 2 groups depending on the disease duration: group 1 (less than 5 years, 63 pts) and group 2 (more than 5 years, 50 pts). All pts received prednisolone at a mean dose of 11.5±4.6 mg/day, and 53 of them received immunosuppressants at inclusion. The parameters were evaluated over the following periods: at baseline (point 0), 13±2.3 months (point 1), 42±14 months (point 2), and 79±6.5 months (point 3) after initiation of RTX therapy. The cumulative mean dose of RTX in group 1 at point 1 was 1.7±0.6 g, at point 2 = 3.3±1.5 g, at point 3 = 3.9±2.3 g; in group 2 at point 1 = 1.6±0.6 g, at point 2 = 2.7±1.5 g, at point 3 = 3.7±2.6 g. The results are presented in the form of mean values, delta (Δ), median (me), and upper and lower quartiles. Results: There was a significant increase in forced vital capacity % predicted (FVC) in both groups, but at points 1 and 2 the improvement was more significant in group 1. In group 2, an improvement in FVC was noted with longer follow-up. Diffusion capacity for carbon monoxide % predicted (DLCO) remained stable at point 1 and then significantly improved by the 3rd year of RTX therapy in both groups. In group 1 at point 1: ΔFVC = 4.7 (me=4; [-1.8;12.3])%, ΔDLCO = -1.2 (me=-0.3; [-5.3;3.6])%; at point 2: ΔFVC = 9.4 (me=7.1; [1;16])%, ΔDLCO = 3.7 (me=4.6; [-4.8;10])%; at point 3: ΔFVC = 13 (me=13.4; [2.3;25.8])%, ΔDLCO = 2.3 (me=1.6; [-5.6;11.5])%. In group 2 at point 1: ΔFVC = 3.4 (me=2.3; [-0.8;7.9])%, ΔDLCO = 1.5 (me=1.5; [-1.9;4.9])%; at point 2: ΔFVC = 7.6 (me=8.2; [0;12.6])%, ΔDLCO = 3.5 (me=0.7; [-1.6;10.7])%; at point 3: ΔFVC = 13.2 (me=10.4; [2.8;15.4])%, ΔDLCO = 3.6 (me=1.7; [-2.4;9.2])%. Conclusion: Patients with early SSc show a quicker response to RTX therapy, already within 1 year of follow-up. Patients with a disease duration of more than 5 years also respond to therapy, but with longer treatment. RTX is an effective option for the treatment of ILD-SSc, regardless of the duration of the disease.

Keywords: interstitial lung disease, systemic sclerosis, rituximab, disease duration

Procedia PDF Downloads 22
1751 Multi-source Question Answering Framework Using Transformers for Attribute Extraction

Authors: Prashanth Pillai, Purnaprajna Mangsuli

Abstract:

Oil exploration and production companies invest considerable time and effort to extract essential well attributes (like well status, surface and target coordinates, wellbore depths, event timelines, etc.) from unstructured data sources like technical reports, which are often non-standardized, multimodal, and highly domain-specific by nature. It is also important to consider the context when extracting attribute values from reports that contain information on multiple wells/wellbores. Moreover, semantically similar information may often be depicted in different data syntax representations across multiple pages and document sources. We propose a hierarchical multi-source fact extraction workflow based on a deep learning framework to extract essential well attributes at scale. An information retrieval module based on the transformer architecture was used to rank relevant pages in a document source utilizing the page image embeddings and semantic text embeddings. A question answering framework utilizing the LayoutLM transformer was used to extract attribute-value pairs, incorporating the text semantics and layout information from the top relevant pages in a document. To better handle context while dealing with multi-well reports, we incorporate a dynamic query generation module to resolve ambiguities. The extracted attribute information from various pages and documents is standardized to a common representation using a parser module to facilitate information comparison and aggregation. Finally, we use a probabilistic approach to fuse information extracted from multiple sources into a coherent well record. The applicability of the proposed approach and the related performance were studied on several real-life well technical reports.
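A highly simplified sketch of the retrieve-then-extract idea is given below using generic Hugging Face components (a sentence-embedding retriever and an extractive QA reader); the actual workflow described above relies on page-image embeddings, a LayoutLM-based reader, and dynamic query generation, which are not reproduced here, and the model names and report snippets are illustrative assumptions.

```python
# pip install sentence-transformers transformers torch
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

# Illustrative page texts standing in for parsed report pages
pages = [
    "Well A-12 status: producing. Target coordinates 58.44 N, 1.92 E.",
    "Drilling summary for wellbore A-12/T2, total depth 3,450 m MD.",
    "HSE section: no incidents reported during the campaign.",
]
question = "What is the total depth of the wellbore?"

# Step 1: rank pages by semantic similarity between the question and each page
retriever = SentenceTransformer("all-MiniLM-L6-v2")
scores = util.cos_sim(retriever.encode(question), retriever.encode(pages))[0]
best_page = pages[int(scores.argmax())]

# Step 2: run an extractive QA reader over the top-ranked page
reader = pipeline("question-answering", model="deepset/roberta-base-squad2")
answer = reader(question=question, context=best_page)
print(answer["answer"], answer["score"])
```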

Keywords: natural language processing, deep learning, transformers, information retrieval

Procedia PDF Downloads 192
1750 Clinical and Epidemiological Profile of Patients with Chronic Obstructive Pulmonary Disease in a Medical Institution from the City of Medellin, Colombia

Authors: Camilo Andres Agudelo-Velez, Lina María Martinez-Sanchez, Natalia Perilla-Hernandez, Maria De Los Angeles Rodriguez-Gazquez, Felipe Hernandez-Restrepo, Dayana Andrea Quintero-Moreno, Camilo Ruiz-Mejia, Isabel Cristina Ortiz-Trujillo, Monica Maria Zuluaga-Quintero

Abstract:

Chronic obstructive pulmonary disease is a common condition, characterized by a persistent, partially reversible, and progressive blockage of airflow, that accounts for 5% of total deaths around the world and is expected to become the third leading cause of death by 2030. Objective: To establish the clinical and epidemiological profile of patients with chronic obstructive pulmonary disease in a medical institution in the city of Medellin, Colombia. Methods: A cross-sectional study was performed with a sample of 50 patients with a diagnosis of chronic obstructive pulmonary disease in a private institution in Medellin during 2015. The software SPSS vr. 20 was used for the statistical analysis. For the quantitative variables, averages, standard deviations, and maximum and minimum values were calculated, while for ordinal and nominal qualitative variables, proportions were estimated. Results: The average age was 73.5±9.3 years, 52% of the patients were women, 50% of them were retired, 46% were married, and 80% lived in the city of Medellín. The mean time since diagnosis was 7.8±1.3 years, and 100% of the patients were treated at the internal medicine service. The most common clinical features were: 36% were classified as class D for the disease, 34% had a FEV1 <30%, 88% had a history of smoking, and 52% had oxygen therapy at home. Conclusion: It was found that class D was the most common, and the majority of the patients had a history of smoking, indicating the need to strengthen promotion and prevention strategies in this regard.

Keywords: pulmonary disease, chronic obstructive, pulmonary medicine, oxygen inhalation therapy

Procedia PDF Downloads 441
1749 Econophysical Approach on Predictability of Financial Crisis: The 2001 Crisis of Turkey and Argentina Case

Authors: Arzu K. Kamberli, Tolga Ulusoy

Abstract:

Technological developments and the resulting global communication have made the 21st century one in which large amounts of capital can be moved from one end of the world to the other at the push of a button. As a result, capital inflows have accelerated and have brought crisis contagion with them. Given irrational human behavior, financial crises have become a fundamental problem for countries around the world and have increased researchers' interest in their causes and the periods in which they occur. Therefore, the complex nature of financial crises and their structure, which cannot be explained linearly, have also been taken up by the new discipline of econophysics. As is known, although prediction mechanisms for financial crises exist, there is no definite information. In this context, this study develops an early econophysical approach to global financial crises using the concept of the electric field from electrostatics. The aim is to define a model that can act before financial crises occur, identify financial fragility at an earlier stage, and help public and private sector members, policy makers, and economists with an econophysical approach. The 2001 Turkey crisis was assessed with data from the Turkish Central Bank covering 1992 to 2007, and for the 2001 Argentina crisis, data were taken from the IMF and the Central Bank of Argentina from 1997 to 2007. As an econophysical method, an analogy is used between Gauss's law, which is used in the calculation of the electric field, and the forecasting of the financial crisis. Taking advantage of this analogy, the concept of Φ (Financial Flux), based on currency movements and money mobility, has been adopted for the pre-warning of the crisis. The Φ (Financial Flux) values obtained by the formula, used for the first time in this study, were analyzed with Matlab software, and in this context the Φ (Financial Flux) values were confirmed to give a pre-warning of the 2001 Turkey and Argentina crises.
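For reference, the electrostatic relation on which the analogy rests is Gauss's law, shown below; the precise mapping from field quantities to currency movements that defines Φ (Financial Flux) is not given in the abstract and is therefore not reconstructed here.

```latex
% Gauss's law: the electric flux through a closed surface S equals the
% enclosed charge divided by the vacuum permittivity.
\Phi_E \;=\; \oint_{S} \mathbf{E} \cdot d\mathbf{A} \;=\; \frac{Q_{\mathrm{enc}}}{\varepsilon_0}
```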

Keywords: econophysics, financial crisis, Gauss's Law, physics

Procedia PDF Downloads 153
1748 Effect of Mixture of Flaxseed and Pumpkin Seeds Powder on Hypercholesterolemia

Authors: Zahra Ashraf

Abstract:

Flax and pumpkin seeds are a rich source of unsaturated fatty acids, antioxidants, and fiber, known to have anti-atherogenic properties. Hypercholesterolemia is a state characterized by an elevated level of cholesterol in the blood. This research was designed to study the effect of a flax and pumpkin seed powder mixture on hypercholesterolemia and body weight. Albino rats were selected as a representative model for humans. Thirty male albino rats were divided into three groups: a control group, a CD-chol group (control diet + cholesterol) fed 1.5% cholesterol, and an FP-chol group (flaxseed and pumpkin seed powder + cholesterol) fed 1.5% cholesterol. Flax and pumpkin seed powders were mixed at a proportion of 5:1 (omega-3 to omega-6). Blood samples were collected to examine the lipid profile, and body weight was also measured. The data were subjected to analysis of variance. In the CD-chol group, body weight, plasma total cholesterol (TC), plasma triacylglycerides (TG), plasma LDL-C, and the LDL/HDL ratio increased significantly, with a decrease in plasma HDL (good cholesterol). In the FP-chol group, lipid parameters and body weight decreased significantly, with an increase in HDL and a decrease in LDL (bad cholesterol). The mean values of body weight, total cholesterol, triglycerides, low-density lipoprotein, and high-density lipoprotein in the FP-chol group were 240.66±11.35 g, 59.60±2.20 mg/dl, 50.20±1.79 mg/dl, 36.20±1.62 mg/dl, and 36.40±2.20 mg/dl, respectively. The flaxseed and pumpkin seed powder mixture reduced body weight, serum cholesterol, low-density lipoprotein, and triglycerides, while a significant increase in high-density lipoprotein was observed when it was given to hypercholesterolemic rats. Our results suggest that the flax and pumpkin seed mixture has hypocholesterolemic effects, which were probably mediated by the polyunsaturated fatty acids (omega-3 and omega-6) present in the seed mixture.

Keywords: hypercholesterolemia, omega 3 and omega 6 fatty acids, cardiovascular diseases

Procedia PDF Downloads 416
1747 Analysis of Rural Roads in Developing Countries Using Principal Component Analysis and Simple Average Technique in the Development of a Road Safety Performance Index

Authors: Muhammad Tufail, Jawad Hussain, Hammad Hussain, Imran Hafeez, Naveed Ahmad

Abstract:

A road safety performance index is a composite index that combines various indicators of road safety into a single number. The development of a road safety performance index using appropriate safety performance indicators is essential to enhance road safety. However, road safety performance indices in developing countries have not been given as much priority as needed. The primary objective of this research is to develop a general Road Safety Performance Index (RSPI) for developing countries based on road facilities as well as road user behavior. The secondary objectives include finding the critical inputs in the RSPI and finding the better method of constructing the index. In this study, the RSPI is developed by selecting four main safety performance indicators, i.e., protective system (seat belt, helmet, etc.), road (road width, signalized intersections, number of lanes, speed limit), number of pedestrians, and number of vehicles. Data on these four safety performance indicators were collected using an observational survey on a 20 km section of the National Highway N-125, Taxila, Pakistan. For the development of this composite index, two methods are used: a) Principal Component Analysis (PCA) and b) the Equal Weighting (EW) method. PCA is used for extraction, weighting, and linear aggregation of indicators to obtain a single value, and an individual index score was calculated for each road section by multiplying the weights and standardized values of each safety performance indicator. In the Equal Weighting method, the Simple Average technique was used for weighting and linear aggregation of the indicators to develop the RSPI. The road sections are ranked according to RSPI scores using both methods. The two weighting methods are compared, and the PCA method is found to be much more reliable than the Simple Average technique.
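A minimal sketch of the two index-construction methods compared above (PCA-based weighting versus equal weighting) is shown below on a small synthetic indicator matrix; the indicator values are placeholders, not the N-125 survey data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows = road sections, columns = safety performance indicators
# (protective-system use, road features, pedestrians, vehicles) -- synthetic values
X = np.array([[0.8, 0.6, 120, 900],
              [0.5, 0.4, 300, 1500],
              [0.9, 0.7,  80, 700],
              [0.6, 0.5, 200, 1200]], dtype=float)
Z = StandardScaler().fit_transform(X)

# Equal-weighting index: simple average of standardized indicators
rspi_equal = Z.mean(axis=1)

# PCA-based index: weight indicators by loadings of the first principal component
pca = PCA(n_components=1).fit(Z)
weights = np.abs(pca.components_[0]) / np.abs(pca.components_[0]).sum()
rspi_pca = Z @ weights

print("Equal-weight RSPI:", np.round(rspi_equal, 2))
print("PCA-weighted RSPI:", np.round(rspi_pca, 2))
print("Section ranking (PCA):", np.argsort(-rspi_pca))
```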

Keywords: indicators, aggregation, principal component analysis, weighting, index score

Procedia PDF Downloads 155
1746 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model

Authors: Donatella Giuliani

Abstract:

In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. Firstly, the Firefly Algorithm has been applied in a histogram-based search for cluster means. The Firefly Algorithm is a stochastic global optimization technique, centered on the flashing characteristics of fireflies; in this context, it was applied to determine the number of clusters and the related cluster means in a histogram-based segmentation approach. Subsequently, these means are used in the initialization step for the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is represented as a weighted sum of Gaussian component densities, whose parameters are evaluated applying the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as prior probabilities of each component. Applying the Bayes rule, the posterior probabilities of the grayscale intensities have been evaluated; their maxima are therefore used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears fairly solid and reliable when applied even to complex grayscale images. The validation has been performed by using different standard measures, more precisely: the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK), and the Davies-Bouldin (DB) index. The achieved results have strongly confirmed the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is the use of the maxima of the responsibilities for the pixel assignment, which implies a consistent reduction of the computational costs.
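As a simplified illustration of the second stage of the method (the Gaussian Mixture Model fit and the pixel assignment by maximum posterior responsibility), the sketch below uses scikit-learn; the firefly-based histogram search for the number of clusters and the initial means is not reproduced and is replaced by fixed assumed values.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# img: 2D array of gray-level values; a synthetic two-mode image is used here
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(170, 15, 5000)]).reshape(100, 100)

# Assumed number of clusters and initial means (in the paper these come from
# the firefly-based histogram analysis)
k, init_means = 2, np.array([[60.0], [170.0]])

gmm = GaussianMixture(n_components=k, means_init=init_means, max_iter=200, random_state=0)
gmm.fit(img.reshape(-1, 1))

# Each pixel is assigned to the component with the maximum posterior responsibility
labels = gmm.predict(img.reshape(-1, 1)).reshape(img.shape)
print("Cluster means:", gmm.means_.ravel(), "mixing weights:", gmm.weights_)
```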

Keywords: clustering images, firefly algorithm, Gaussian mixture model, metaheuristic algorithm, image segmentation

Procedia PDF Downloads 215
1745 Proposed Algorithms to Assess Concussion Potential in Rear-End Motor Vehicle Collisions: A Meta-Analysis

Authors: Rami Hashish, Manon Limousis-Gayda, Caitlin McCleery

Abstract:

Introduction: Mild traumatic brain injuries, also referred to as concussions, represent an increasing burden to society. Due to limited objective diagnostic measures, concussions are diagnosed by assessing subjective symptoms, often leading to disputes over their presence. Common biomechanical measures associated with concussion are high linear and/or angular acceleration of the head. With regard to linear acceleration, approximately 80 g has previously been shown to equate to a 50% probability of concussion. Motor vehicle collisions (MVCs) are a leading cause of concussion due to the high head accelerations experienced. The change in velocity (delta-V) of a vehicle in an MVC is an established metric for impact severity. As acceleration is the rate of change of velocity with respect to time, the purpose of this paper is to determine the relation between delta-V (and occupant parameters) and linear head acceleration. Methods: A meta-analysis was conducted for manuscripts collected using the following keywords: head acceleration, concussion, brain injury, head kinematics, delta-V, change in velocity, motor vehicle collision, and rear-end. Ultimately, 280 studies were surveyed, 14 of which fulfilled the inclusion criteria as studies investigating the human response to impacts and reporting the head acceleration and the delta-V of the occupant's vehicle. Statistical analysis was conducted with SPSS and R. A best-fit line analysis allowed for an initial understanding of the relation between head acceleration and delta-V. To further investigate the effect of occupant parameters on head acceleration, a quadratic model and a full linear mixed model were developed. Results: From the 14 selected studies, 139 crashes were analyzed, with head accelerations and delta-V values ranging from 0.6 to 17.2 g and 1.3 to 11.1 km/h, respectively. Initial analysis indicated that the best line of fit (Model 1) was defined as Head Acceleration = 0.465
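A hedged sketch of how such models can be fitted is shown below using statsmodels (a quadratic fixed effect of delta-V plus a mixed model with a random intercept per source study); the data frame contents are fabricated placeholders, not the 139 analyzed crashes, and the truncated coefficient above is not reconstructed.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data: delta-V (km/h), occupant mass (kg), head acceleration (g), source study ID
df = pd.DataFrame({
    "delta_v":  [2.0, 4.5, 6.0, 8.0, 10.5, 3.0, 5.5, 7.5, 9.0, 11.0],
    "mass":     [70, 85, 60, 75, 90, 65, 80, 72, 68, 95],
    "head_acc": [1.1, 3.0, 5.2, 7.9, 11.5, 1.8, 4.4, 7.0, 9.3, 12.8],
    "study":    ["A", "A", "A", "B", "B", "B", "C", "C", "C", "C"],
})

# Quadratic model of head acceleration versus delta-V
quad = smf.ols("head_acc ~ delta_v + I(delta_v ** 2)", data=df).fit()

# Linear mixed model with occupant mass as a covariate and a random intercept per study
mixed = smf.mixedlm("head_acc ~ delta_v + mass", data=df, groups=df["study"]).fit()
print(quad.params)
print(mixed.params)
```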

Keywords: acceleration, brain injury, change in velocity, Delta-V, TBI

Procedia PDF Downloads 231
1744 Balanced Scorecard (BSC) Project: A Methodological Proposal for Decision Support in a Corporate Scenario

Authors: David de Oliveira Costa, Miguel Ângelo Lellis Moreira, Carlos Francisco Simões Gomes, Daniel Augusto de Moura Pereira, Marcos dos Santos

Abstract:

Strategic management is a fundamental process for global companies that intend to remain competitive in an increasingly dynamic and complex market. To do so, it is necessary to maintain alignment with their principles and values. The Balanced Scorecard (BSC) proposes to ensure that the overall business performance is based on different perspectives (financial, customer, internal processes, and learning and growth). However, relying solely on the BSC may not be enough to ensure the success of strategic management. It is essential that companies also evaluate and prioritize strategic projects that need to be implemented to ensure they are aligned with the business vision and contribute to achieving established goals and objectives. In this context, the proposition involves the incorporation of the SAPEVO-M multicriteria method to indicate the degree of relevance between different perspectives. Thus, the strategic objectives linked to these perspectives have greater weight in the classification of structural projects. Additionally, it is proposed to apply the concept of the Impact & Probability Matrix (I&PM) to structure and ensure that strategic projects are evaluated according to their relevance and impact on the business. By structuring the business's strategic management in this way, alignment and prioritization of projects and actions related to strategic planning are ensured. This ensures that resources are directed towards the most relevant and impactful initiatives. Therefore, the objective of this article is to present the proposal for integrating the BSC methodology, the SAPEVO-M multicriteria method, and the prioritization matrix to establish a concrete weighting of strategic planning and obtain coherence in defining strategic projects aligned with the business vision. This ensures a robust decision-making support process.

Keywords: MCDA process, prioritization problematic, corporate strategy, multicriteria method

Procedia PDF Downloads 80
1743 Roundabout Implementation Analyses Based on Traffic Microsimulation Model

Authors: Sanja Šurdonja, Aleksandra Deluka-Tibljaš, Mirna Klobučar, Irena Ištoka Otković

Abstract:

Roundabouts are a common choice in the case of reconstruction of an intersection, whether it is to improve the capacity of the intersection or traffic safety, especially in urban conditions. The regulation for the design of roundabouts is often related to driving culture, the tradition of using this type of intersection, etc. Individual values in the regulation are usually recommended in a wide range (this is the case in Croatian regulation), and the final design of a roundabout largely depends on the designer's experience and his/her choice of design elements. Therefore, before-after analyses are a good way to monitor the performance of roundabouts and possibly improve the recommendations of the regulation. This paper presents a comprehensive before-after analysis of a roundabout on the country road network near Rijeka, Croatia. The analysis is based on a thorough collection of traffic data (operating speeds and traffic load) and design elements data, both before and after the reconstruction into a roundabout. At the chosen location, the roundabout solution aimed to improve capacity and traffic safety. Therefore, the paper analyzed the collected data to see if the roundabout achieved the expected effect. A traffic microsimulation model (VISSIM) of the roundabout was created based on the real collected data, and the influence of the increase of traffic load and different traffic structures, as well as of the selected design elements on the capacity of the roundabout, were analyzed. Also, through the analysis of operating speeds and potential conflicts by application of the Surrogate Safety Assessment Model (SSAM), the traffic safety effect of the roundabout was analyzed. The results of this research show the practical value of before-after analysis as an indicator of roundabout effectiveness at a specific location. The application of a microsimulation model provides a practical method for analyzing intersection functionality from a capacity and safety perspective in present and changed traffic and design conditions.

Keywords: before-after analysis, operating speed, capacity, design

Procedia PDF Downloads 21
1742 Iranian Processed Cheese under Effect of Emulsifier Salts and Cooking Time in Process

Authors: M. Dezyani, R. Ezzati bbelvirdi, M. Shakerian, H. Mirzaei

Abstract:

Sodium hexametaphosphate (SHMP) is commonly used as an emulsifying salt (ES) in process cheese, although rarely as the sole ES. It appears that no published studies exist on the effect of SHMP concentration on the properties of process cheese when pH is kept constant; pH is well known to affect process cheese functionality. The detailed interactions between the added phosphate, casein (CN), and indigenous Ca phosphate are poorly understood. We studied the effect of the concentration of SHMP (0.25-2.75%) and holding time (0-20 min) on the textural and rheological properties of pasteurized process Cheddar cheese using a central composite rotatable design. All cheeses were adjusted to pH 5.6. The meltability of process cheese (as indicated by the decrease in the loss tangent parameter from small-amplitude oscillatory rheology, the degree of flow, and the melt area from the Schreiber test) decreased with an increase in the concentration of SHMP. Holding time also led to a slight reduction in meltability. The hardness of process cheese increased as the concentration of SHMP increased. Acid-base titration curves indicated that the buffering peak at pH 4.8, which is attributable to residual colloidal Ca phosphate, was shifted to lower pH values with increasing concentration of SHMP. The insoluble Ca and the total and insoluble P contents increased as the concentration of SHMP increased. The proportion of insoluble P as a percentage of total (indigenous and added) P decreased with an increase in ES concentration because some of the (added) SHMP formed soluble salts. The results of this study suggest that SHMP chelated the residual colloidal Ca phosphate and dispersed CN; the newly formed Ca-phosphate complex remained trapped within the process cheese matrix, probably by cross-linking CN. Increasing the concentration of SHMP helped to improve fat emulsification and CN dispersion during cooking, both of which probably helped to reinforce the structure of process cheese.

Keywords: Iranian processed cheese, emulsifying salt, rheology, texture

Procedia PDF Downloads 430
1741 Predicting Low Birth Weight Using Machine Learning: A Study on 53,637 Ethiopian Birth Data

Authors: Kehabtimer Shiferaw Kotiso, Getachew Hailemariam, Abiy Seifu Estifanos

Abstract:

Introduction: Although low birth weight (LBW) accounts for the highest share of neonatal mortality and morbidity, predicting births with LBW for better intervention preparation is challenging. This study aims to predict LBW using a dataset encompassing 53,637 birth cohorts collected from 36 primary hospitals across seven regions in Ethiopia from February 2022 to June 2024. Methods: We identified ten explanatory variables related to maternal and neonatal characteristics, including maternal education, age, residence, history of miscarriage or abortion, history of preterm birth, type of pregnancy, number of livebirths, number of stillbirths, antenatal care frequency, and sex of the fetus, to predict LBW. Using WEKA 3.8.2, we developed and compared seven machine learning algorithms. Data preprocessing included handling missing values, outlier detection, and ensuring data integrity in birth weight records. Model performance was evaluated through metrics such as accuracy, precision, recall, F1-score, and area under the Receiver Operating Characteristic curve (ROC AUC) using 10-fold cross-validation. Results: The results demonstrated that the decision tree, J48, logistic regression, and gradient boosted trees models achieved the highest accuracy (94.5% to 94.6%), with a precision of 93.1% to 93.3%, F1-score of 92.7% to 93.1%, and ROC AUC of 71.8% to 76.6%. Conclusion: This study demonstrates the effectiveness of machine learning models in predicting LBW. The high accuracy and recall rates achieved indicate that these models can serve as valuable tools for healthcare policymakers and providers in identifying at-risk newborns and implementing timely interventions to achieve the Sustainable Development Goal (SDG) related to neonatal mortality.
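The reported workflow (10-fold cross-validation over tree, logistic regression, and gradient boosting classifiers with accuracy, precision, recall, F1, and ROC AUC) can be sketched in Python as follows; the feature names mirror the ten explanatory variables listed above, but the file name, encoding, and hyperparameters are assumptions, and the study itself used WEKA 3.8.2 rather than scikit-learn.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["maternal_education", "maternal_age", "residence", "miscarriage_history",
            "preterm_history", "pregnancy_type", "n_livebirths", "n_stillbirths",
            "anc_visits", "fetal_sex"]

# df is assumed to be the cleaned, numerically encoded birth cohort with a binary 'lbw' label
df = pd.read_csv("births.csv")  # hypothetical file name
X, y = df[FEATURES], df["lbw"]

models = {
    "decision_tree": DecisionTreeClassifier(max_depth=6, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    cv = cross_validate(model, X, y, cv=10,
                        scoring=["accuracy", "precision", "recall", "f1", "roc_auc"])
    print(name, {m: round(cv[f"test_{m}"].mean(), 3)
                 for m in ["accuracy", "precision", "recall", "f1", "roc_auc"]})
```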

Keywords: low birth weight, machine learning, classification, neonatal mortality, Ethiopia

Procedia PDF Downloads 20
1740 Groundwater Investigation Using Resistivity Method and Drilling for Irrigation during the Dry Season in Lwantonde District, Uganda

Authors: Tamale Vincent

Abstract:

Groundwater investigation is the investigation of underground formations to understand the hydrologic cycle and known groundwater occurrences and to identify the nature and types of aquifers. There are different groundwater investigation methods, and the surface geophysical method is one of them, more especially the geoelectrical resistivity Schlumberger configuration method, which provides valuable information regarding the lateral and vertical successions of subsurface geomaterials in terms of their individual thicknesses and corresponding resistivity values. Besides the surface geophysical method, hydrogeological and geological investigation methods are also incorporated to aid the preliminary groundwater investigation. An investigation for groundwater in Lwantonde district has been implemented. The project area is located in the cattle corridor, and the dry season troubles the communities in Lwantonde district, where 99% of the people living there are farmers, making agriculture difficult and hindering the local government's provision of social services to its people. The investigation was done using the geoelectrical resistivity Schlumberger configuration method. The measurement points are located in three sub-counties, with a total of 17 measurement points. The study location is at 0025S, 3110E, and covers an area of 160 square kilometers. Based on the geoelectrical data, two types of aquifers were found: open aquifers at depths ranging from six meters to twenty-two meters and a confined aquifer at depths ranging from forty-five meters to eighty meters. In addition to the geoelectrical data, drilling was done at an accessible point by heavy equipment in Lwakagura village, Kabura sub-county. At the drilling point, an artesian well was obtained at a depth of eighty meters, and the water can rise to two meters above the soil surface. The artesian well is now used by residents to meet their clean water needs and for irrigation, considering that in this area most wells have a high iron content.
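For reference, in a Schlumberger vertical electrical sounding the apparent resistivity is commonly computed as ρa = K·ΔV/I with the geometric factor K = π[(AB/2)² − (MN/2)²]/MN; a minimal sketch with placeholder field readings (not the Lwantonde measurements) follows.

```python
import math

def schlumberger_apparent_resistivity(ab_half, mn, delta_v, current):
    """Apparent resistivity (ohm-m) for a Schlumberger array.

    ab_half : AB/2 current-electrode half spacing (m)
    mn      : MN potential-electrode spacing (m)
    delta_v : measured potential difference (V)
    current : injected current (A)
    """
    k = math.pi * (ab_half ** 2 - (mn / 2.0) ** 2) / mn  # geometric factor
    return k * delta_v / current

# Placeholder field readings, for illustration only
print(round(schlumberger_apparent_resistivity(ab_half=40.0, mn=2.0, delta_v=0.012, current=0.5), 1))
```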

Keywords: artesian well, geoelectrical, Lwantonde, Schlumberger

Procedia PDF Downloads 124
1739 Energy Options and Environmental Impacts of Carbon Dioxide Utilization Pathways

Authors: Evar C. Umeozor, Experience I. Nduagu, Ian D. Gates

Abstract:

The energy requirements of carbon dioxide utilization (CDU) technologies/processes are diverse, and so are their environmental footprints. This paper explores the energy and environmental impacts of systems for CO₂ conversion to fuels, chemicals, and materials. The energy needs of the technologies and processes deployable in CO₂ conversion systems are met by one or a combination of hydrogen (chemical), electricity, heat, and light. Likewise, the environmental footprint of any CO₂ utilization pathway depends on the systems involved. So far, evaluation of CDU systems has been constrained to a particular energy source/type or a subset of the overall system needed to make CDU possible. This introduces limitations to the general understanding of the energy and environmental implications of CDU, which has led to various pitfalls in past studies. A CDU system has an energy source, a CO₂ supply, and conversion units. We apply a holistic approach to consider the impacts of all components in the process, including the various sources of energy, the CO₂ feedstock, and the conversion technologies. The electricity sources include nuclear power, renewables (wind and solar PV), gas turbines, and coal. Heat is supplied from either electricity or natural gas, and hydrogen is produced from either steam methane reforming or electrolysis. The CO₂ capture unit uses either direct air capture or post-combustion capture via amine scrubbing; where applicable, integrated configurations of the CDU system are explored. We demonstrate how the overall energy and environmental impacts of each utilization pathway are obtained by aggregating the values for all components involved. Proper accounting of the energy and emission intensities of CDU must incorporate total balances for the utilization process and differences in timescales between alternative conversion pathways. Our results highlight opportunities for the use of clean energy sources, direct air capture, and a number of promising CO₂ conversion pathways for producing methanol, ethanol, synfuel, urea, and polymer materials.
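A minimal sketch of the aggregation idea (summing the energy and emission intensities of all components in a CDU pathway and crediting the CO₂ converted) is shown below; the component names and intensity figures are illustrative assumptions, not results of the study.

```python
# Energy (MJ per kg product) and emissions (kg CO2e per kg product) per system component;
# the numbers are placeholders for illustration, not study results.
pathway = {
    "electricity_supply": {"energy": 12.0, "emissions": 0.40},
    "co2_capture":        {"energy": 5.0,  "emissions": 0.15},
    "conversion_reactor": {"energy": 8.0,  "emissions": 0.25},
}
co2_converted_per_kg_product = 1.37  # assumed CO2 uptake per kg of product, kg/kg

total_energy = sum(c["energy"] for c in pathway.values())
total_emissions = sum(c["emissions"] for c in pathway.values())
net_emissions = total_emissions - co2_converted_per_kg_product  # credit for CO2 utilized

print(f"Total energy: {total_energy} MJ/kg, net emissions: {net_emissions:.2f} kg CO2e/kg")
```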

Keywords: carbon dioxide utilization, processes, energy options, environmental impacts

Procedia PDF Downloads 145
1738 Synthetic Bis(2-Pyridylmethyl)Amino-Chloroacetyl Chloride- Ethylenediamine-Grafted Graphene Oxide Sheets Combined with Magnetic Nanoparticles: Remove Metal Ions and Catalytic Application

Authors: Laroussi Chaabane, Amel El Ghali, Emmanuel Beyou, Mohamed Hassen V. Baouab

Abstract:

In this research, the functionalization of graphene oxide sheets by ethylenediamine (EDA) was accomplished, followed by the grafting of the bis(2-pyridylmethyl)amino group (BPED) onto the activated graphene oxide sheets in the presence of chloroacetyl chloride (CAC), and then combination with magnetic nanoparticles (Fe₃O₄NPs) to produce a magnetic graphene-based composite [(Go-EDA-CAC)@Fe₃O₄NPs-BPED]. The physicochemical properties of the [(Go-EDA-CAC)@Fe₃O₄NPs-BPED] composite were investigated by Fourier transform infrared spectroscopy (FT-IR), scanning electron microscopy (SEM), X-ray diffraction (XRD), and thermogravimetric analysis (TGA). Additionally, the catalysts can be easily recycled within ten seconds by using an external magnetic field. Moreover, [(Go-EDA-CAC)@Fe₃O₄NPs-BPED] was used for removing Cu(II) ions from aqueous solutions using a batch process. The effects of pH, contact time, and temperature on metal ion adsorption were investigated; adsorption was only weakly dependent on ionic strength. The maximum adsorption capacity of Cu(II) on [(Go-EDA-CAC)@Fe₃O₄NPs-BPED] at pH 6 is 3.46 mmol.g⁻¹. To examine the underlying mechanism of the adsorption process, pseudo-first-order, pseudo-second-order, and intraparticle diffusion models were fitted to the experimental kinetic data. Results showed that the pseudo-second-order equation was appropriate to describe Cu(II) adsorption by [(Go-EDA-CAC)@Fe₃O₄NPs-BPED]. Adsorption data were further analyzed by the Langmuir, Freundlich, and Jossens adsorption approaches. Additionally, the adsorption properties of [(Go-EDA-CAC)@Fe₃O₄NPs-BPED], its reusability (more than 6 cycles), and its durability in aqueous solutions open the path to the removal of Cu(II) from water. Based on the results obtained, we report the activity of Cu(II) supported on [(Go-EDA-CAC)@Fe₃O₄NPs-BPED] as a catalyst for the cross-coupling of symmetric alkynes.
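A minimal sketch of fitting the pseudo-second-order kinetic model referenced above, q_t = k₂q_e²t/(1 + k₂q_e t), with SciPy is given below; the time/uptake values are placeholders, not the measured Cu(II) data.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """Pseudo-second-order uptake: q_t = k2 * qe^2 * t / (1 + k2 * qe * t)."""
    return k2 * qe ** 2 * t / (1 + k2 * qe * t)

# Placeholder kinetic data (t in min, q_t in mmol/g), for illustration only
t = np.array([5, 10, 20, 40, 60, 120, 180], dtype=float)
qt = np.array([1.2, 1.9, 2.5, 3.0, 3.2, 3.4, 3.45])

(qe_fit, k2_fit), _ = curve_fit(pseudo_second_order, t, qt, p0=[3.5, 0.01])
print(f"q_e = {qe_fit:.2f} mmol/g, k2 = {k2_fit:.4f} g/(mmol*min)")
```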

Keywords: graphene, magnetic nanoparticles, adsorption kinetics/isotherms, cross coupling

Procedia PDF Downloads 138
1737 Fatty Acid Structure and Composition Effects of Biodiesel on Its Oxidative Stability

Authors: Gelu Varghese, Khizer Saeed

Abstract:

Biodiesel is a mixture of mono-alkyl esters of long-chain fatty acids derived from vegetable oils or animal fats. Recent studies in the literature suggest that end properties of biodiesel, such as its oxidative stability (OS), are influenced more strongly by the structure and composition of its alkyl esters than by environmental conditions. The structure and composition of these long-chain fatty acid components have also been associated with trends in cetane number, heat of combustion, cold flow properties, viscosity, and lubricity. In the present work, a detailed investigation was carried out to decouple and correlate the fatty acid structure indices of biodiesel, such as degree of unsaturation, chain length, double bond orientation, and composition, with its oxidative stability. Measurements were taken using the Rancimat oxidative stability test method (EN 14112) established in EN 14214. Firstly, the effects of the degree of unsaturation, chain length, and bond orientation were tested for the pure fatty acids to establish their oxidative stability. Results for pure fatty acids show that saturated FAs are more stable to oxidation than unsaturated ones; superior oxidative stability can be achieved by blending biodiesel fuels with relatively high saturated fatty acid contents. Lower oxidative stability is noticed when a greater number of double bonds is present in the methyl ester. A strong inverse relationship between the number of double bonds and the Rancimat induction period (IP) values can be identified. The trans isomer methyl elaidate shows superior stability to oxidation compared with its cis isomer methyl oleate (7.2 vs. 2.3). Secondly, the effects of variation in the composition of the biodiesel were investigated and established. Finally, biodiesels with varying structure and composition were investigated and correlated.

Keywords: biodiesel, FAME, oxidative stability, fatty acid structure, acid composition

Procedia PDF Downloads 285
1736 3D Biomechanical Analysis in Shot Put Techniques of International Throwers

Authors: Satpal Yadav, Ashish Phulkar, Krishna K. Sahu

Abstract:

Aim: The research aims to perform a 3-dimensional (3D) biomechanical analysis of the shot put techniques of international throwers to evaluate their performance. Research Method: The researcher adopted the descriptive method, and Pearson's product-moment correlation was used to correlate the biomechanical parameters with shot put performance. In all the analyses, the 5% critical level (p ≤ 0.05) was considered to indicate statistical significance. Research Sample: Eight (N=08) international shot putters using the rotational/glide technique in the male category were selected as subjects for the study. The researcher used the following instruments to obtain reliable measurements: a Tesscorn slow-motion camera, specialized motion-analyzer software, a 7.260 kg shot (for male shot-putters), and a steel tape. All measurements pertaining to the biomechanical variables were taken by the principal investigator so that the data collected for the present study could be considered reliable. Results: The findings of the study showed significant negative relationships between performance and the angular velocity of the right shoulder and the acceleration distance at pre-flight (-0.70 and -0.72, respectively); significant relationships with the angular displacement of the knee, the angular velocity of the right shoulder, and the acceleration distance at flight (0.81, 0.75, and 0.71, respectively); with the angular velocity of the right shoulder and the acceleration distance at the transition phase (0.77 and 0.79, respectively); and with the angular displacement of the knee, the angular velocity of the right shoulder, the release velocity of the shot, the angle of release, the height of release, the projected distance, and the measured distance (0.76, 0.77, -0.83, -0.79, -0.77, 0.99, and 1.00, respectively), all values being higher than the tabulated value at the 0.05 level of significance. On the other hand, there was an insignificant relationship between shot put performance and the acceleration distance [m], the angular displacement of the shot, the C.G. at release, and the horizontal release distance in the shot put technique.

Keywords: biomechanics, analysis, shot put, international throwers

Procedia PDF Downloads 186
1735 Fragility Analysis of a Soft First-Story Building in Mexico City

Authors: Rene Jimenez, Sonia E. Ruiz, Miguel A. Orellana

Abstract:

On 09/19/2017, a Mw = 7.1 intraslab earthquake occurred in Mexico, causing the collapse of about 40 buildings. Many of these were 5- or 6-story buildings with a soft first story, so it is desirable to perform a structural fragility analysis of typical structures representative of those buildings and to propose a reliable structural solution. Here, a typical 5-story building, constituted by regular R/C moment-resisting frames in the first story and confined masonry walls in the upper levels, similar to the structures that collapsed in the 09/19/2017 Mexico earthquake, is analyzed. Three different structural solutions of the 5-story building are considered: S1) the building designed in accordance with the Mexico City Building Code-2004; S2) the same building with the first-story column dimensions of S1 reduced; and S3) solution S2 with viscous dampers added at the first story. A number of incremental dynamic analyses are performed for each structural solution, using a 3D structural model. The hysteretic behavior model of the masonry was calibrated with experiments performed at the Laboratory of Structures at UNAM. Ten seismic ground motions are used to excite the structures; they were recorded on the intermediate soil of Mexico City, with a dominant period of around 1 s, where the structures are located. The fragility curves of the buildings are obtained for different values of the maximum inter-story drift demand. Results show that solutions S1 and S3 lead to similar probabilities of exceedance of a given inter-story drift value for the same seismic intensity, whereas solution S2 presents a higher probability of exceedance for the same seismic intensity and inter-story drift demand. Therefore, it is concluded that solution S3 (the building with a soft first story and energy dissipation devices) can be a reliable solution from the structural point of view.
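As a hedged sketch of how such fragility curves can be obtained, the example below fits a lognormal fragility function, P(drift demand exceeds a threshold | intensity measure), to incremental-dynamic-analysis outcomes by maximum likelihood. The intensity values and exceedance flags are hypothetical, not results from this study.

```python
# Minimal sketch (hypothetical IDA outcomes): fitting a lognormal fragility curve
# P(drift > threshold | IM) by maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Intensity measures (e.g., Sa in g, illustrative) and whether the drift threshold
# was exceeded in each incremental dynamic analysis run (1 = exceeded).
im = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1])
exceeded = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

def neg_log_likelihood(params):
    theta, beta = params                      # median capacity and dispersion
    p = norm.cdf(np.log(im / theta) / beta)   # lognormal fragility function
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(exceeded * np.log(p) + (1 - exceeded) * np.log(1 - p))

res = minimize(neg_log_likelihood, x0=[0.6, 0.4], bounds=[(1e-3, None), (1e-3, None)])
theta_hat, beta_hat = res.x
print(f"median capacity ~ {theta_hat:.2f} g, dispersion ~ {beta_hat:.2f}")
# Probability of exceedance at, say, Sa = 0.8 g:
print(f"P(exceedance | 0.8 g) = {norm.cdf(np.log(0.8 / theta_hat) / beta_hat):.2f}")
```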

Keywords: demand hazard analysis, fragility curves, incremental dynamic analyses, soft first story, structural capacity

Procedia PDF Downloads 177
1734 Simulation and Analysis of Passive Parameters of Building in eQuest: A Case Study in Istanbul, Turkey

Authors: Mahdiyeh Zafaranchi

Abstract:

With the rapid development of urbanization and the improvement of living standards around the world, the energy consumption and carbon emissions of the building sector are expected to increase in the near future; consequently, energy saving has become an increasingly important issue for engineers. The building sector is already a major contributor to energy consumption and carbon emissions. The concept of the efficient building appeared in response to the need to reduce energy demand in this sector, with the main purpose of shifting from standard buildings to low-energy buildings. Although energy saving should take place at all stages of a building's life cycle (material production, construction, demolition), the core idea of the energy-efficient building is to save energy during the building's service life by using passive and active systems, without sacrificing comfort and quality. The main aim of this study is to investigate passive strategies (those that require no energy consumption or rely on renewable energy) to achieve energy-efficient buildings. Energy retrofit measures were explored with the eQuest software, using a case study as a base model. The study investigates the influence of major factors such as the thermal transmittance (U-value) of the materials, windows, shading devices, thermal insulation, the exposed envelope ratio, the window-to-wall ratio, and the lighting system on the energy consumption of the building. The base model is located in Istanbul, Turkey. The impact of eight passive parameters on energy consumption was quantified. After analyzing the base model in eQuest, a final scenario with good energy performance was proposed. The results showed that reducing the U-values of the materials, the exposed envelope ratio, and the windows had a significant effect on energy consumption. Finally, annual savings of about 10.5% in electricity consumption and about 8.37% in gas consumption were achieved in the suggested model.
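As a simplified, hedged illustration of why lower U-values reduce consumption, the sketch below compares steady-state transmission heat loss, Q = U·A·ΔT, for a baseline and an improved envelope. All U-values, areas, and the temperature difference are illustrative assumptions, not inputs of the eQuest model used in the study.

```python
# Minimal sketch (illustrative values, not from the eQuest model): steady-state
# transmission heat loss Q = U * A * dT for a baseline and an improved envelope,
# showing how lower U-values translate into percentage savings.

def transmission_loss(components, delta_t):
    """Sum of U*A*dT over envelope components, in watts."""
    return sum(u * area for u, area in components) * delta_t

delta_t = 20.0  # indoor-outdoor temperature difference, K (illustrative)

baseline = [    # (U-value W/m2K, area m2) -- illustrative
    (1.8, 200.0),   # external walls
    (2.8, 40.0),    # windows
    (1.5, 120.0),   # roof
]
improved = [
    (0.8, 200.0),   # insulated walls
    (1.6, 40.0),    # double glazing
    (0.6, 120.0),   # insulated roof
]

q_base = transmission_loss(baseline, delta_t)
q_impr = transmission_loss(improved, delta_t)
print(f"baseline: {q_base/1000:.1f} kW, improved: {q_impr/1000:.1f} kW, "
      f"saving: {100 * (q_base - q_impr) / q_base:.1f}%")
```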

Keywords: efficient building, electric and gas consumption, eQuest, passive parameters

Procedia PDF Downloads 110
1733 Bi- and Tri-Metallic Catalysts for Hydrogen Production from Hydrogen Iodide Decomposition

Authors: Sony, Ashok N. Bhaskarwar

Abstract:

Production of hydrogen from a renewable raw material, without any co-synthesis of harmful greenhouse gases, is a pressing need for sustainable energy solutions. The sulfur-iodine (SI) thermochemical cycle, which uses intermediate chemicals, is an efficient process for producing hydrogen at a much lower temperature than that required for the direct splitting of water, and no net byproduct is formed in the cycle. Hydrogen iodide (HI) decomposition is a crucial reaction in this cycle, as the product, hydrogen, is formed only in this step. It is an endothermic, reversible, and equilibrium-limited reaction; the theoretical equilibrium conversion at 550°C is a meagre 24%. There is therefore growing interest in enhancing HI conversion to near-equilibrium values at lower reaction temperatures, while possibly also improving the reaction rate. The reaction is relatively slow without a catalyst, and hence the catalytic decomposition of HI has gained much significance. Bi-metallic Ni-Co, Ni-Mn, and Co-Mn catalysts and a tri-metallic Ni-Co-Mn catalyst, all on a zirconia support, were tested for the HI decomposition reaction. The catalysts were synthesized via a sol-gel process in which Ni was 3 wt% in all the samples, and Co and Mn had equal weight ratios in the Co-Mn catalyst. Powder X-ray diffraction and Brunauer-Emmett-Teller surface area characterizations indicated the polycrystalline nature and well-developed mesoporous structure of all the samples. The experiments were performed in a vertical laboratory-scale packed-bed quartz reactor, and HI (55 wt%) was fed along with nitrogen at a WHSV of 12.9 hr⁻¹. Blank experiments for HI decomposition at 500°C suggested a conversion of less than 5%. The activities of all the catalysts were measured at 550°C, and the highest conversion of 23.9% was obtained with the tri-metallic 3Ni-Co-Mn-ZrO₂ catalyst. The decreasing order of catalyst performance was: 3Ni-Co-Mn-ZrO₂ > 3Ni-2Co-ZrO₂ > 3Ni-2Mn-ZrO₂ > 2.5Co-2.5Mn-ZrO₂. The tri-metallic catalyst remained active for 360 min at 550°C without any observable drop in activity or stability. Among the explored catalyst compositions, the tri-metallic catalyst clearly performs better for HI conversion than the bi-metallic ones. Owing to their low cost and ease of preparation, these tri-metallic catalysts could be used for large-scale hydrogen production.
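As a hedged worked example of the equilibrium ceiling mentioned above, the sketch below uses the stoichiometry of 2HI ⇌ H₂ + I₂ (no change in total moles) to relate the equilibrium constant to conversion. The constant is back-calculated from the quoted ~24% at 550°C purely for illustration; it is not a value reported in the paper.

```python
# Minimal sketch: equilibrium conversion for 2HI <=> H2 + I2 (total moles unchanged).
# At conversion x the mole fractions are (1 - x), x/2, x/2, so
#   K = x^2 / (4 * (1 - x)^2)  and  x = 2*sqrt(K) / (1 + 2*sqrt(K)).
# K below is back-calculated from the ~24% equilibrium conversion quoted for 550 C;
# it is an illustrative value, not one reported in the paper.
import math

def conversion_from_K(K):
    s = math.sqrt(K)
    return 2 * s / (1 + 2 * s)

def K_from_conversion(x):
    return (x / (2 * (1 - x))) ** 2

K_550 = K_from_conversion(0.24)
print(f"implied K at 550 C ~ {K_550:.3f}")
print(f"equilibrium conversion ~ {conversion_from_K(K_550):.1%}")
# The best catalytic conversion reported (23.9%) is thus close to the equilibrium ceiling.
print(f"approach to equilibrium: {0.239 / 0.24:.0%}")
```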

Keywords: sulfur-iodine cycle, hydrogen production, hydrogen iodide decomposition, bi- and tri-metallic catalysts

Procedia PDF Downloads 186
1732 Leadership in the Era of AI: Growing Organizational Intelligence

Authors: Mark Salisbury

Abstract:

The arrival of artificially intelligent avatars and the automation they bring is worrying many of us, not only for our own livelihoods but also for the jobs our children may lose. We worry about what our place as human beings will be in this new economy, much of which will be conducted online in the metaverse, a network of 3D virtual worlds, working with intelligent machines. The Future of Leadership was written to address these fears and to show what our place, the right place, will be in this new economy of AI avatars, automation, and 3D virtual worlds. To be successful in this new economy, our job will be to bring wisdom to the workplace and the marketplace, and we will use AI avatars and 3D virtual worlds to do it. However, this book is about more than AI and the avatars we will work with in the metaverse. It is about building organizational intelligence (OI): the capability of an organization to comprehend and create knowledge relevant to its purpose; in other words, the intellectual capacity of the entire organization. Increasing organizational intelligence requires a new kind of knowledge worker, the wisdom worker, which in turn requires a new kind of leadership. This book begins the story of how to become a leader of wisdom workers and succeed in the emerging wisdom economy. After this presentation, conference participants will be able to do the following: recognize the characteristics of the new generation of wisdom workers and how they differ from their predecessors; recognize that new leadership methods and techniques are needed to lead this new generation of wisdom workers; apply personal and professional values (personal integrity, belief in something larger than yourself, and keeping the best interest of others in mind) to improve their work performance and lead others; exhibit an attitude of confidence, courage, and reciprocity in sharing knowledge to increase their productivity and influence others; leverage artificial intelligence to accelerate their ability to learn, augment their decision-making, and influence others; and utilize new technologies to communicate with human colleagues and intelligent machines to develop better solutions more quickly.

Keywords: metaverse, generative artificial intelligence, automation, leadership, organizational intelligence, wisdom worker

Procedia PDF Downloads 41