Search results for: management skills and values
6302 Seeking Safe Haven: An Analysis of Gold Performance during Periods of High Volatility
Authors: Gerald Abdesaken, Thomas O. Miller
Abstract:
This paper analyzes the performance of gold as a safe-haven investment. Assuming high market volatility as an impetus to seek a safe haven in gold, the return of gold relative to the stock market, as measured by the S&P 500, is tracked. Using the Chicago Board Options Exchange (CBOE) volatility index (VIX) as a measure of stock market volatility, various criteria are established for when an investor would seek a safe haven to avoid high levels of risk. The results show that in the vast majority of cases, the S&P 500 outperforms gold during these periods of high volatility, suggesting that investors who seek a safe haven underperform the market.
Keywords: gold, portfolio management, safe haven, VIX
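A minimal sketch of the screening logic described in this abstract, assuming daily closes for gold, the S&P 500, and the VIX in a pandas DataFrame; the column names, the VIX threshold of 30, and the one-month (21-trading-day) holding horizon are illustrative assumptions, not values taken from the paper:

```python
# Flag high-volatility days via the VIX, then compare forward returns of
# gold vs. the S&P 500 from those days. Thresholds and horizons are assumed.
import pandas as pd

def compare_safe_haven(df: pd.DataFrame, vix_threshold: float = 30.0) -> pd.DataFrame:
    """Compare forward returns of gold vs. the S&P 500 on high-volatility days."""
    # Days on which an investor would plausibly seek a safe haven
    stress = df["vix"] > vix_threshold
    # Forward 21-trading-day (~1 month) returns measured from each stress day
    fwd_gold = df["gold"].shift(-21) / df["gold"] - 1.0
    fwd_spx = df["spx"].shift(-21) / df["spx"] - 1.0
    out = pd.DataFrame({
        "fwd_gold": fwd_gold[stress],
        "fwd_spx": fwd_spx[stress],
    }).dropna()
    out["spx_outperforms"] = out["fwd_spx"] > out["fwd_gold"]
    return out
```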
6301 To Compare Norepinephrine and Norepinephrine with Methylene Blue for the Management of Septic Shock
Authors: K. Rajarajeswaran, Krishna Prasad
Abstract:
Introduction: Refractory shock is a typical consequence of sepsis that does not improve with standard vasopressor therapy. Methylene blue is a possible adjuvant therapeutic option for treating refractory shock in sepsis. This study looked at the effects of intravenous methylene blue plus norepinephrine, given as a single bolus infusion, on mortality and hemodynamic improvement in patients suffering from refractory shock. Methodology: This six-month prospective observational study was carried out in the intensive care unit of a teaching hospital and medical college. It involved 112 patients who had been diagnosed with refractory septic shock and needed vasopressor medication. Group B received a norepinephrine infusion of 0.01 µg/kg/min alone, while Group A received a single fixed-dose intravenous bolus of methylene blue 2 mg/kg in addition to the norepinephrine infusion of 0.01 µg/kg/min. In both groups, the norepinephrine dose was titrated to reach the target MAP of 60–75 mm Hg. The data gathered included the amount of norepinephrine needed to sustain a MAP of more than 60 mm Hg. Serum lactate, procalcitonin, C-reactive protein, length of stay in the intensive care unit (ICU), sequential organ failure assessment (SOFA) score, duration of mechanical ventilation, incidence of acute kidney injury (AKI), and mortality were compared. Results: A total of 112 patients with refractory shock were included in the study. With the use of IV methylene blue, 36 (59.3%) patients showed significant improvement in MAP within 2 hours (77.12 ± 8.90 vs 74.28 ± 21.84, p = 0.005). Responders were 4.009 times more likely to have vasopressor-free time within 24 hours (19.5% vs 6.1%, p = 0.022; odds ratio 5.017, 95% confidence interval 1.110–14.283). Serum lactate was lower, and urine output was higher, in Group A than in Group B (p < 0.05). Group A had a significantly greater reduction in SOFA score at 12 hours than Group B. However, there was no significant difference in mortality, length of ICU stay, ventilator-free days, or incidence of AKI. In the responder group, there was a significant increase in MAP and a decrease in vasopressor requirement pre- and post-infusion of methylene blue (p < 0.05). Responders had shorter vasopressor-free days than non-responders (5.44 vs 6.99, p = 0.007). Conclusion: When administered as adjuvant therapy, a single-dose bolus infusion of methylene blue plus norepinephrine may aid in meeting early resuscitation goals in the management of patients with septic shock, but mortality, length of ICU stay, ventilator-free days, and incidence of AKI were unchanged.
Keywords: norepinephrine, methylene blue, shock, vasopressor
6300 Clinical and Epidemiological Profile of Patients with Chronic Obstructive Pulmonary Disease in a Medical Institution from the City of Medellin, Colombia
Authors: Camilo Andres Agudelo-Velez, Lina María Martinez-Sanchez, Natalia Perilla-Hernandez, Maria De Los Angeles Rodriguez-Gazquez, Felipe Hernandez-Restrepo, Dayana Andrea Quintero-Moreno, Camilo Ruiz-Mejia, Isabel Cristina Ortiz-Trujillo, Monica Maria Zuluaga-Quintero
Abstract:
Chronic obstructive pulmonary disease is a common condition, characterized by a persistent, partially reversible, and progressive blockage of airflow, that accounts for 5% of total deaths around the world, and it is expected to become the third leading cause of death by 2030. Objective: To establish the clinical and epidemiological profile of patients with chronic obstructive pulmonary disease in a medical institution in the city of Medellin, Colombia. Methods: A cross-sectional study was performed, with a sample of 50 patients with a diagnosis of chronic obstructive pulmonary disease in a private institution in Medellin, during 2015. The software SPSS v. 20 was used for the statistical analysis. For the quantitative variables, averages, standard deviations, and maximum and minimum values were calculated, while for ordinal and nominal qualitative variables, proportions were estimated. Results: The average age was 73.5±9.3 years, 52% of the patients were women, 50% were retired, 46% were married, and 80% lived in the city of Medellín. The mean time since diagnosis was 7.8±1.3 years, and 100% of the patients were treated at the internal medicine service. The most common clinical features were: 36% were classified as class D for the disease, 34% had a FEV1 <30%, 88% had a history of smoking, and 52% had oxygen therapy at home. Conclusion: Class D was the most common, and the majority of the patients had a history of smoking, indicating the need to strengthen promotion and prevention strategies in this regard.
Keywords: pulmonary disease, chronic obstructive, pulmonary medicine, oxygen inhalation therapy
6299 Econophysical Approach on Predictability of Financial Crisis: The 2001 Crisis of Turkey and Argentina Case
Authors: Arzu K. Kamberli, Tolga Ulusoy
Abstract:
Technological developments and the resulting global communication have made the 21st century an era in which large amounts of capital can be moved from one end of the world to the other at the push of a button. As a result, capital flows have accelerated, bringing with them crisis contagion. Together with irrational human behavior, financial crises have become a basic problem for countries and have increased researchers' interest in their causes and in the periods in which they occur. The complex nature of financial crises and their structure, which cannot be explained linearly, have therefore been taken up by the new discipline of econophysics. As is known, although prediction mechanisms for financial crises exist, they provide no definite information. In this context, this study uses the concept of the electric field from electrostatics to develop an early econophysical approach to global financial crises. The aim is to define a model that can act before financial crises occur, identify financial fragility at an earlier stage, and help public and private sector members, policy makers, and economists with an econophysical approach. The 2001 Turkey crisis was assessed with data from the Turkish Central Bank covering 1992 to 2007, and for the 2001 Argentina crisis, data were taken from the IMF and the Central Bank of Argentina from 1997 to 2007. As an econophysical method, an analogy is drawn between Gauss's law, used in the calculation of the electric field, and the forecasting of financial crises. Taking advantage of this analogy, which is based on currency movements and money mobility, the concept of Φ (Financial Flux) has been adopted for pre-warning of a crisis. The Φ (Financial Flux) values obtained from the formula, used for the first time in this study, were analyzed with Matlab software, and in this context it was confirmed that the Φ (Financial Flux) values gave pre-warning of the 2001 Turkey and Argentina crises.
Keywords: econophysics, financial crisis, Gauss's law, physics
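The abstract does not reproduce the Φ (Financial Flux) formula itself; for reference, the electrostatic Gauss's law on which the analogy rests is given below, with the paper's financial flux constructed by substituting currency movements and money mobility for the field (the exact financial definition is an assumption left to the full paper):

```latex
% Electrostatic Gauss's law: the flux of the electric field E through a
% closed surface S equals the enclosed charge over the vacuum permittivity.
\Phi_E = \oint_{S} \vec{E} \cdot d\vec{A} = \frac{Q_{\text{enc}}}{\varepsilon_0}
```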
6298 Effect of Mixture of Flaxseed and Pumpkin Seeds Powder on Hypercholesterolemia
Authors: Zahra Ashraf
Abstract:
Flax and pumpkin seeds are a rich source of unsaturated fatty acids, antioxidants, and fiber, known to have anti-atherogenic properties. Hypercholesterolemia is a state characterized by an elevated level of cholesterol in the blood. This research was designed to study the effect of a flax and pumpkin seed powder mixture on hypercholesterolemia and body weight. Rats were selected as a representative model for humans. Thirty male albino rats were divided into three groups: a control group, a CD-chol group (control diet + cholesterol) fed with 1.5% cholesterol, and an FP-chol group (flaxseed and pumpkin seed powder + cholesterol) fed with 1.5% cholesterol. Flax and pumpkin seed powder were mixed at a proportion of 5/1 (omega-3 to omega-6). Blood samples were collected to examine the lipid profile, and body weight was also measured. The data were subjected to analysis of variance. In the CD-chol group, body weight, plasma total cholesterol (TC), plasma triacylglycerides (TG), plasma LDL-C, and their ratio significantly increased, with a decrease in plasma HDL (good cholesterol). In the FP-chol group, lipid parameters and body weights decreased significantly, with an increase in HDL and a decrease in LDL (bad cholesterol). The mean values of body weight, total cholesterol, triglycerides, low-density lipoprotein, and high-density lipoprotein in the FP-chol group were 240.66±11.35 g, 59.60±2.20 mg/dl, 50.20±1.79 mg/dl, 36.20±1.62 mg/dl, and 36.40±2.20 mg/dl, respectively. The flaxseed and pumpkin seed powder mixture reduced body weight, serum cholesterol, low-density lipoprotein, and triglycerides, while a significant increase in high-density lipoprotein was shown when it was given to hypercholesterolemic rats. Our results suggest that the flax and pumpkin seed mixture has hypocholesterolemic effects, probably mediated by the polyunsaturated fatty acids (omega-3 and omega-6) present in the seed mixture.
Keywords: hypercholesterolemia, omega-3 and omega-6 fatty acids, cardiovascular diseases
6297 Analysis of Rural Roads in Developing Countries Using Principal Component Analysis and Simple Average Technique in the Development of a Road Safety Performance Index
Authors: Muhammad Tufail, Jawad Hussain, Hammad Hussain, Imran Hafeez, Naveed Ahmad
Abstract:
A road safety performance index is a composite index which combines various indicators of road safety into a single number. Development of a road safety performance index using appropriate safety performance indicators is essential to enhance road safety. However, road safety performance indices in developing countries have not been given as much priority as needed. The primary objective of this research is to develop a general Road Safety Performance Index (RSPI) for developing countries based on road facilities as well as the behavior of road users. The secondary objectives include finding the critical inputs to the RSPI and finding the better method of constructing the index. In this study, the RSPI is developed by selecting four main safety performance indicators, i.e., protective systems (seat belt, helmet, etc.), road characteristics (road width, signalized intersections, number of lanes, speed limit), number of pedestrians, and number of vehicles. Data on these four safety performance indicators were collected using an observation survey on a 20 km road section of the National Highway N-125, Taxila, Pakistan. For the development of this composite index, two methods are used: a) Principal Component Analysis (PCA) and b) the Equal Weighting (EW) method. PCA is used for extraction, weighting, and linear aggregation of indicators to obtain a single value. An individual index score was calculated for each road section by multiplying the weights with the standardized values of each safety performance indicator. The Simple Average technique, in turn, was used for weighting and linear aggregation of indicators to develop an RSPI. The road sections are ranked according to RSPI scores using both methods. The two weighting methods are compared, and the PCA method is found to be much more reliable than the Simple Average technique.
Keywords: indicators, aggregation, principal component analysis, weighting, index score
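The two index constructions can be sketched as follows, assuming a matrix with one row per road section and one column per standardized safety performance indicator; using the normalized absolute loadings of the first principal component as weights is one common PCA-based construction and is an assumption here, since the abstract does not spell out its exact aggregation:

```python
# PCA-derived weights vs. equal weighting over standardized indicators.
# The indicator matrix is synthetic; real data would hold one row per road
# section and one column per safety performance indicator.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 4))  # 40 road sections x 4 indicators (synthetic)

Z = StandardScaler().fit_transform(X)  # standardize each indicator

# PCA method: weight indicators by the first component's loadings
pca = PCA(n_components=1).fit(Z)
w_pca = np.abs(pca.components_[0])
w_pca /= w_pca.sum()
rspi_pca = Z @ w_pca  # weighted linear aggregation per road section

# Equal-weighting method: simple average of standardized indicators
rspi_ew = Z.mean(axis=1)

# Rank road sections under each method for comparison
rank_pca = np.argsort(-rspi_pca)
rank_ew = np.argsort(-rspi_ew)
```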
6296 Case Report of a Secretory Carcinoma of the Salivary Gland: Clinical Management Following High-Grade Transformation
Authors: Wissam Saliba, Mandy Nicholson
Abstract:
Secretory carcinoma (SC) is a rare type of salivary gland cancer. It was first recognised as a distinct type of malignancy in 2010 and was initially termed “mammary analogue secretory carcinoma” because of similarities with secretory breast cancer. The name was later changed to SC. Most SCs originate in the parotid glands, and most harbour a rare gene fusion: ETV6-NTRK3. This fusion is rare in common cancers and common in rare cancers; it is present in most secretory carcinomas. Disease outcomes for SC are usually described as favourable, as many cases of SC are low-grade (LG) and cancer growth is slow. In early stages, localized therapy is usually indicated (surgery and/or radiation). Despite a favourable prognosis, a subset of cases can be much more aggressive. These cases tend to be high-grade (HG). HG cases are associated with a poorer prognosis. Management of such cases can be challenging due to limited evidence for effective systemic therapy options. This case report describes the clinical management of a 46-year-old male patient with a unique case of SC. He was initially diagnosed with a low/intermediate grade carcinoma of the left parotid gland in 2009; he was treated with surgery and adjuvant radiation. Surgical pathology favoured primary salivary adenocarcinoma, and 2 lymph nodes were positive for malignancy. SC was not yet recognised as a distinct type of cancer at the time of diagnosis, and the pathology report validated this gap by stating that the specimen lacked features of the defined types of salivary carcinoma. Slow-growing pulmonary nodules were identified in 2017. In 2020, approximately 11 years after the initial diagnosis, the patient presented with malignant pleural effusion. Pathology from a pleural biopsy was consistent with metastatic poorly differentiated cancer of likely parotid origin, likely mammary analogue secretory carcinoma. The specimen was sent for Next Generation Sequencing (NGS); ETV6-NTRK3 gene fusion was confirmed, and systemic therapy was initiated. One cycle of carboplatin/paclitaxel was given in June 2020. He was switched to larotrectinib (an NTRK inhibitor (NTRKi)) later that month. Larotrectinib continued for approximately 9 months, with discontinuation in March 2021 due to disease progression. A second-generation NTRKi (selitrectinib) was accessed and prescribed through a single patient study. Selitrectinib was well tolerated. The patient experienced a complete radiological response within ~4 months. Disease progression occurred once again in October 2021. Progression was slow, and selitrectinib continued while the medical team performed a thorough search for additional treatment options. In January 2022, a liver lesion biopsy was performed, and NGS showed an NTRK G623R solvent-front resistance mutation. Various treatment pathways were considered. The patient pursued another investigational NTRKi through a clinical trial, and selitrectinib was discontinued in July 2022. Excellent performance status was maintained throughout the entire course of treatment. It can be concluded that NTRK inhibitors provided satisfactory treatment efficacy and tolerance for this patient with high-grade transformation and NTRK gene fusion cancer. In the future, more clinical research is needed on systemic treatment options for high-grade transformations in NTRK gene fusion SCs.
Keywords: secretory carcinoma, high-grade transformations, NTRK gene fusion, NTRK inhibitor
6295 Theorizing Optimal Use of Numbers and Anecdotes: The Science of Storytelling in Newsrooms
Authors: Hai L. Tran
Abstract:
When covering events and issues, the news media often employ both personal accounts and facts and figures. However, the process of using numbers and narratives in the newsroom mostly operates through trial and error. There is a demonstrated need for the news industry to better understand the specific effects of storytelling and data-driven reporting on the audience, as well as the explanatory factors driving such effects. In the academic world, anecdotal evidence and statistical evidence have been studied in a mutually exclusive manner. Existing research tends to treat pertinent effects as though the use of one form precludes the other and as if a tradeoff is required. Meanwhile, narratives and statistical facts are often combined in various communication contexts, especially in news presentations. There is value in reconceptualizing and theorizing about both the relative and the collective impacts of numbers and narratives, as well as the mechanism underlying such effects. The current undertaking seeks to link theory to practice by providing a complete picture of how and why people are influenced by information conveyed through quantitative and qualitative accounts. Specifically, cognitive-experiential theory is invoked to argue that humans employ two distinct systems to process information. The rational system requires the processing of logical evidence through effortful analytical cognitions, which are affect-free. Meanwhile, the experiential system is intuitive, rapid, automatic, and holistic, thereby demanding minimal cognitive resources and relating to the experience of affect. In certain situations, one system might dominate the other, but the rational and experiential modes of processing operate in parallel and at the same time. As such, anecdotes and quantified facts impact audience response differently, and a combination of data and narratives is more effective than either form of evidence alone. In addition, the present study identifies several media variables and human factors driving the effects of statistics and anecdotes. An integrative model is proposed to explain how message characteristics (modality, vividness, salience, congruency, position) and individual differences (involvement, numeracy skills, cognitive resources, cultural orientation) impact selective exposure, which in turn activates pertinent modes of processing and thereby induces corresponding responses. The present study represents a step toward bridging theoretical frameworks from various disciplines to better understand the specific effects, and the conditions under which the use of anecdotal evidence and/or statistical evidence enhances or undermines information processing. In addition to theoretical contributions, this research helps inform news professionals about the benefits and pitfalls of incorporating quantitative and qualitative accounts in reporting. It proposes a typology of possible scenarios and appropriate strategies for journalists to use when presenting news with anecdotes and numbers.
Keywords: data, narrative, number, anecdote, storytelling, news
6294 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model
Authors: Donatella Giuliani
Abstract:
In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. Firstly, the Firefly Algorithm is applied in a histogram-based search for cluster means. The Firefly Algorithm is a stochastic global optimization technique, centered on the flashing characteristics of fireflies. In this context, it is used to determine the number of clusters and the related cluster means in a histogram-based segmentation approach. Subsequently, these means are used in the initialization step for the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is represented as a weighted sum of Gaussian component densities, whose parameters are evaluated by applying the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as prior probabilities of each component. Applying Bayes' rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears fairly solid and reliable even when applied to complex grayscale images. The validation has been performed using different standard measures, more precisely: the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK), and the Davies-Bouldin (DB) index. The achieved results strongly confirm the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is the use of the maxima of the responsibilities for pixel assignment, which implies a consistent reduction in computational costs.
Keywords: clustering images, firefly algorithm, Gaussian mixture model, metaheuristic algorithm, image segmentation
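A minimal sketch of the second stage described above: a Gaussian Mixture Model fitted to the grayscale intensities by Expectation-Maximization, initialized with previously found cluster means (hard-coded below; in the paper they come from the Firefly Algorithm), with each pixel assigned to the component of maximum posterior responsibility:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_grayscale(img: np.ndarray, init_means: np.ndarray) -> np.ndarray:
    x = img.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(
        n_components=len(init_means),
        means_init=init_means.reshape(-1, 1),  # Firefly-derived means in the paper
        covariance_type="full",
    ).fit(x)  # EM estimation of weights, means, and variances
    labels = gmm.predict(x)  # argmax of posterior responsibilities (Bayes' rule)
    return labels.reshape(img.shape)

# Example with assumed cluster means for a 3-region image:
# seg = segment_grayscale(img, np.array([40.0, 128.0, 210.0]))
```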
6293 Proposed Algorithms to Assess Concussion Potential in Rear-End Motor Vehicle Collisions: A Meta-Analysis
Authors: Rami Hashish, Manon Limousis-Gayda, Caitlin McCleery
Abstract:
Introduction: Mild traumatic brain injuries, also referred to as concussions, represent an increasing burden to society. Due to limited objective diagnostic measures, concussions are diagnosed by assessing subjective symptoms, often leading to disputes over their presence. Common biomechanical measures associated with concussion are high linear and/or angular acceleration of the head. With regard to linear acceleration, approximately 80 g has previously been shown to equate with a 50% probability of concussion. Motor vehicle collisions (MVCs) are a leading cause of concussion, due to the high head accelerations experienced. The change in velocity (delta-V) of a vehicle in an MVC is an established metric for impact severity. As acceleration is the rate of change of delta-V with respect to time, the purpose of this paper is to determine the relation between delta-V (and occupant parameters) and linear head acceleration. Methods: A meta-analysis was conducted on manuscripts collected using the following keywords: head acceleration, concussion, brain injury, head kinematics, delta-V, change in velocity, motor vehicle collision, and rear-end. Ultimately, 280 studies were surveyed, 14 of which fulfilled the inclusion criteria as studies investigating the human response to impacts and reporting both head acceleration and the delta-V of the occupant's vehicle. Statistical analysis was conducted with SPSS and R. A best-fit line analysis allowed for an initial understanding of the relation between head acceleration and delta-V. To further investigate the effect of occupant parameters on head acceleration, a quadratic model and a full linear mixed model were developed. Results: From the 14 selected studies, 139 crashes were analyzed, with head accelerations and delta-V values ranging from 0.6 to 17.2 g and 1.3 to 11.1 km/h, respectively. Initial analysis indicated that the best line of fit (Model 1) was defined as Head Acceleration = 0.465
Keywords: acceleration, brain injury, change in velocity, Delta-V, TBI
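The model-fitting steps (a best-fit line, then a quadratic refinement) can be illustrated on synthetic data as below; since the reported Model 1 coefficient is truncated in this abstract, no attempt is made to reproduce the published model, and all values here are placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
delta_v = rng.uniform(1.3, 11.1, size=139)        # km/h, range as reported
head_acc = 0.5 * delta_v + rng.normal(0, 1, 139)  # synthetic g values

lin = np.polyfit(delta_v, head_acc, deg=1)   # Model 1: best-fit line
quad = np.polyfit(delta_v, head_acc, deg=2)  # quadratic refinement

print("linear coefficients:", lin)
print("quadratic coefficients:", quad)
```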
6292 Identification of Service Quality Determinants in the Hotel Sector - A Conceptual Review
Authors: Asem M. Othman
Abstract:
The expansion of the hospitality industry is unmistakable. Services, by nature, are intangible. Hence, service quality, in general, is a complicated process to measure and evaluate. Hotels, as a service sector and part of the hospitality industry, are growing rapidly. This research paper was carried out to identify the quality determinants that may affect hotel guests' perception of service quality. In this paper, each quality determinant is discussed, illustrated, and justified thoroughly via a systematic literature review. The purpose of this paper is to set the stage for measuring the significant influence of the service quality determinants on guest satisfaction. The knowledge produced from this study will assist practitioners and/or hotel service providers in incorporating these determinants into their policies.
Keywords: service quality, hotel service, quality management, quality determinants
6291 Examination of the South African Fire Legislative Framework
Authors: Mokgadi Julia Ngoepe-Ntsoane
Abstract:
The article aims to make a case for a legislative framework for the fire sector in South Africa. A robust legislative framework is essential for empowering those with an obligatory mandate within the sector. This article contributes to the body of knowledge in the field of policy reviews, particularly with regard to the legal framework. It has been observed over time that scholarly contributions in this field are limited. Document analysis was the methodology selected for the investigation of the various legal frameworks existing in the country. It was established that national legislation on the fire industry does indeed not exist in South Africa. The documents analysed revealed that the sector is dominated by cartels who exploit new entrants to the market, particularly SMEs. It is evident that these cartels are monopolising the system, as they have long been operating in it and have turned it into self-owned entities. Commitment to addressing the challenges faced by fire services, and to creating a framework for the evolving role that fire brigade services are expected to play in building safer and more sustainable communities, is vital. Legislation for the fire sector ought to be concluded with immediate effect. The outdated national fire legislation has enabled the monopolisation and manipulation of the system by dominant organisations, causing painful discrimination against and exploitation of smaller service providers entering the market to trade in that occupation. The barriers to entry have long-term negative effects on national priority areas such as employment creation and poverty. These monopolisation and marginalisation practices by cartels in the sector call for urgent attention by government because, if left unattended, they will leave many people, particularly women and youth, disadvantaged and frustrated. The downcast syndrome exercised within the fire sector has wreaked havoc and is devastating. It is caused by cartels that have been within the sector for some time, who know the strengths and weaknesses of processes, shortcuts, and the advantages and consequences of various actions. These people take advantage of new entrants to the sector, who in turn find it difficult to manoeuvre, find the market dissonant, and end up giving up their good ideas and intentions. There are many pieces of legislation which are industry specific, such as for housing, forestry, agriculture, health, security, and the environment, which are used to regulate systems within the institutions involved. Other regulations exist as by-laws guiding management within the municipalities.
Keywords: sustainable job creation, growth and development, transformation, risk management
6290 Opportunities for Reducing Post-Harvest Losses of Cactus Pear (Opuntia Ficus-Indica) to Improve Small-Holder Farmers Income in Eastern Tigray, Northern Ethiopia: Value Chain Approach
Authors: Meron Zenaselase Rata, Euridice Leyequien Abarca
Abstract:
The production of major crops in Northern Ethiopia, especially the Tigray Region, is at subsistence level due to drought, erratic rainfall, and poor soil fertility. Since the cactus pear is a drought-resistant plant, it is considered a lifesaver fruit and a strategy for poverty reduction in drought-affected areas of the region. Despite its contribution to household income and food security in the area, the cactus pear sub-sector is experiencing many constraints, with limited attention given to its post-harvest loss management. Therefore, this research was carried out to identify opportunities for reducing post-harvest losses and to recommend possible strategies to reduce them, thereby improving production and smallholders' income. Both probability and non-probability sampling techniques were employed to collect the data. Ganta Afeshum district was selected from Eastern Tigray, and two peasant associations (Buket and Golea) were purposively selected from the district for their potential in cactus pear production. Simple random sampling was employed to survey 30 households from each of the two peasant associations, with a semi-structured questionnaire used as the data collection tool. Moreover, in this research, 2 collectors, 2 wholesalers, 1 processor, 3 retailers, and 2 consumers were interviewed; two focus group discussions were also held with 14 key farmers using a semi-structured checklist, and key informant interviews were conducted with governmental and non-governmental organizations to gather more information about cactus pear production, post-harvest losses, the strategies used to reduce these losses, and suggestions to improve post-harvest management. SPSS version 20 was used to enter and analyze the quantitative data, whereas MS Word was used to transcribe the qualitative data. The data were presented using frequency and descriptive tables and graphs. The data analysis also employed a chain map, correlations, a stakeholder matrix, and gross margins. Mean comparisons between variables, such as ANOVA and t-tests, were used. The analysis shows that the present cactus pear value chain involves main actors and supporters. However, there is inadequate information flow, and market linkages among actors in the cactus pear value chain are informal. The farmers' gross margin is higher when they sell to the processor than when they sell to collectors. The most significant post-harvest loss in the cactus pear value chain occurs at the producer level, followed by wholesalers and retailers. The maximum and minimum volumes of post-harvest losses at the producer level are 4212 and 240 kg per season. The post-harvest losses were caused by farmers' limited skills in farm management and harvesting, low market prices, limited market information, the absence of producer organizations, poor post-harvest handling, the absence of cold storage, the absence of collection centers, poor infrastructure, inadequate credit access, a traditional transportation system, the absence of quality control, illegal traders, inadequate research and extension services, and the use of inappropriate packaging material. Therefore, among the recommendations are providing adequate practical training, forming producer organizations, and constructing collection centers.
Keywords: cactus pear, post-harvest losses, profit margin, value-chain
6289 Culturally Diverse Working Teams in Finnish and Italian Oil and Gas Industry: Intersecting Differences in Organizational and Employee Interactions
Authors: Elisa Bertagna
Abstract:
The aim of the research is to study diversity issues and gender equality in Finnish and Italian oil and gas companies. Particular attention is given to the effects on organizational and employee interactions resulting from intersecting social categories. The study is set in companies where social inequalities and diversity management problems are present. Accordingly, ten semi-structured interviews with key managers from the companies, and four focus groups composed of culturally diverse employees, aim to depict and analyze the situation from both points of view. Social discourse analysis and intersectionality are employed as the analysis methods. Trainings, workshops, and suggestions are to be offered where the situation requires them.
Keywords: diversity, gender, intersectionality, oil and gas companies, social constructionism
6288 Roundabout Implementation Analyses Based on Traffic Microsimulation Model
Authors: Sanja Šurdonja, Aleksandra Deluka-Tibljaš, Mirna Klobučar, Irena Ištoka Otković
Abstract:
Roundabouts are a common choice in the case of reconstruction of an intersection, whether it is to improve the capacity of the intersection or traffic safety, especially in urban conditions. The regulation for the design of roundabouts is often related to driving culture, the tradition of using this type of intersection, etc. Individual values in the regulation are usually recommended in a wide range (this is the case in Croatian regulation), and the final design of a roundabout largely depends on the designer's experience and his/her choice of design elements. Therefore, before-after analyses are a good way to monitor the performance of roundabouts and possibly improve the recommendations of the regulation. This paper presents a comprehensive before-after analysis of a roundabout on the country road network near Rijeka, Croatia. The analysis is based on a thorough collection of traffic data (operating speeds and traffic load) and design elements data, both before and after the reconstruction into a roundabout. At the chosen location, the roundabout solution aimed to improve capacity and traffic safety. Therefore, the paper analyzed the collected data to see if the roundabout achieved the expected effect. A traffic microsimulation model (VISSIM) of the roundabout was created based on the real collected data, and the influence of the increase of traffic load and different traffic structures, as well as of the selected design elements, on the capacity of the roundabout was analyzed. Also, through the analysis of operating speeds and potential conflicts by application of the Surrogate Safety Assessment Model (SSAM), the traffic safety effect of the roundabout was analyzed. The results of this research show the practical value of before-after analysis as an indicator of roundabout effectiveness at a specific location. The application of a microsimulation model provides a practical method for analyzing intersection functionality from a capacity and safety perspective in present and changed traffic and design conditions.
Keywords: before-after analysis, operating speed, capacity, design
6287 Iranian Processed Cheese under Effect of Emulsifier Salts and Cooking Time in Process
Authors: M. Dezyani, R. Ezzati bbelvirdi, M. Shakerian, H. Mirzaei
Abstract:
Sodium hexametaphosphate (SHMP) is commonly used as an emulsifying salt (ES) in process cheese, although rarely as the sole ES. It appears that no published studies exist on the effect of SHMP concentration on the properties of process cheese when pH is kept constant; pH is well known to affect process cheese functionality. The detailed interactions between the added phosphate, casein (CN), and indigenous Ca phosphate are poorly understood. We studied the effect of the concentration of SHMP (0.25-2.75%) and holding time (0-20 min) on the textural and rheological properties of pasteurized process Cheddar cheese using a central composite rotatable design. All cheeses were adjusted to pH 5.6. The meltability of the process cheese (as indicated by the decrease in the loss tangent parameter from small amplitude oscillatory rheology, the degree of flow, and the melt area from the Schreiber test) decreased with an increase in the concentration of SHMP. Holding time also led to a slight reduction in meltability. The hardness of the process cheese increased as the concentration of SHMP increased. Acid-base titration curves indicated that the buffering peak at pH 4.8, which is attributable to residual colloidal Ca phosphate, was shifted to lower pH values with increasing concentration of SHMP. The insoluble Ca and total and insoluble P contents increased as the concentration of SHMP increased. The proportion of insoluble P as a percentage of total (indigenous and added) P decreased with an increase in ES concentration because some of the (added) SHMP formed soluble salts. The results of this study suggest that SHMP chelated the residual colloidal Ca phosphate content and dispersed CN; the newly formed Ca-phosphate complex remained trapped within the process cheese matrix, probably by cross-linking CN. Increasing the concentration of SHMP helped to improve fat emulsification and CN dispersion during cooking, both of which probably helped to reinforce the structure of the process cheese.
Keywords: Iranian processed cheese, emulsifying salt, rheology, texture
6286 Predicting Low Birth Weight Using Machine Learning: A Study on 53,637 Ethiopian Birth Data
Authors: Kehabtimer Shiferaw Kotiso, Getachew Hailemariam, Abiy Seifu Estifanos
Abstract:
Introduction: Despite low birth weight (LBW) accounting for the highest share of neonatal mortality and morbidity, predicting births with LBW for better intervention preparation is challenging. This study aims to predict LBW using a dataset of 53,637 births collected from 36 primary hospitals across seven regions in Ethiopia from February 2022 to June 2024. Methods: We identified ten explanatory variables related to maternal and neonatal characteristics, including maternal education, age, residence, history of miscarriage or abortion, history of preterm birth, type of pregnancy, number of live births, number of stillbirths, antenatal care frequency, and sex of the fetus, to predict LBW. Using WEKA 3.8.2, we developed and compared seven machine learning algorithms. Data preprocessing included handling missing values, outlier detection, and ensuring data integrity in birth weight records. Model performance was evaluated through metrics such as accuracy, precision, recall, F1-score, and area under the Receiver Operating Characteristic curve (ROC AUC), using 10-fold cross-validation. Results: The results demonstrated that the decision tree, J48, logistic regression, and gradient boosted trees models achieved the highest accuracy (94.5% to 94.6%), with a precision of 93.1% to 93.3%, an F1-score of 92.7% to 93.1%, and a ROC AUC of 71.8% to 76.6%. Conclusion: This study demonstrates the effectiveness of machine learning models in predicting LBW. The high accuracy and recall rates achieved indicate that these models can serve as valuable tools for healthcare policymakers and providers in identifying at-risk newborns and implementing timely interventions to achieve the Sustainable Development Goal (SDG) targets related to neonatal mortality.
Keywords: low birth weight, machine learning, classification, neonatal mortality, Ethiopia
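A sketch of the evaluation protocol described above, using scikit-learn in place of WEKA 3.8.2 with three of the reported model families; the feature matrix X and label vector y are assumed to be prepared from the ten maternal and neonatal variables listed in the abstract, and the hyperparameters are illustrative:

```python
# 10-fold cross-validation over several classifiers, scored with the same
# metrics reported in the study: accuracy, precision, recall, F1, ROC AUC.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.tree import DecisionTreeClassifier

models = {
    "decision_tree": DecisionTreeClassifier(max_depth=5),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "gradient_boosting": GradientBoostingClassifier(),
}
scoring = ["accuracy", "precision", "recall", "f1", "roc_auc"]

def evaluate(X, y):
    for name, model in models.items():
        cv = cross_validate(model, X, y, cv=10, scoring=scoring)
        print(name, {m: cv[f"test_{m}"].mean() for m in scoring})
```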
6285 A Method for Quantitative Assessment of the Dependencies between Input Signals and Output Indicators in Production Systems
Authors: Maciej Zaręba, Sławomir Lasota
Abstract:
Knowing the degree of dependency between sets of input signals and selected sets of indicators that measure a production system's effectiveness is of great importance in industry. This paper introduces the SELM method, which enables the selection of the sets of input signals that most affect a selected subset of indicators measuring the effectiveness of a production system. For a defined set of output indicators, the method quantifies the impact of the input signals gathered by the production system's continuous monitoring.
Keywords: manufacturing operation management, signal relationship, continuous monitoring, production systems
6284 Groundwater Investigation Using Resistivity Method and Drilling for Irrigation during the Dry Season in Lwantonde District, Uganda
Authors: Tamale Vincent
Abstract:
Groundwater investigation is the investigation of underground formations to understand the hydrologic cycle and known groundwater occurrences, and to identify the nature and types of aquifers. There are different groundwater investigation methods, and the surface geophysical method is one of them, more especially the geoelectrical resistivity Schlumberger configuration method, which provides valuable information regarding the lateral and vertical successions of subsurface geomaterials in terms of their individual thicknesses and corresponding resistivity values. Besides the surface geophysical method, hydrogeological and geological investigation methods are also incorporated to aid the preliminary groundwater investigation. An investigation for groundwater in Lwantonde district has been implemented. The project area is located in the cattle corridor, and the dry season troubles the communities in Lwantonde district, 99% of whose people are farmers, thus making agriculture difficult and hampering the local government's provision of social services. The investigation was done using the geoelectrical resistivity Schlumberger configuration method. The measurement points are located in three sub-counties, with a total of 17 measurement points. The study location is at 0025S, 3110E, and covers an area of 160 square kilometers. Based on the geoelectrical data, two types of aquifers were found: open aquifers at depths ranging from six meters to twenty-two meters, and a confined aquifer at depths ranging from forty-five meters to eighty meters. In addition to the geoelectrical data, drilling was done with heavy equipment at an accessible point in Lwakagura village, Kabura sub-county. At the drilling point, an artesian well was obtained at a depth of eighty meters, and the water can rise to two meters above the soil surface. The artesian well is now used by residents to meet clean water needs and for irrigation, considering that most wells in this area have high iron content.
Keywords: artesian well, geoelectrical, Lwantonde, Schlumberger
6283 Energy Options and Environmental Impacts of Carbon Dioxide Utilization Pathways
Authors: Evar C. Umeozor, Experience I. Nduagu, Ian D. Gates
Abstract:
The energy requirements of carbon dioxide utilization (CDU) technologies/processes are diverse, and so are their environmental footprints. This paper explores the energy and environmental impacts of systems for CO₂ conversion to fuels, chemicals, and materials. The energy needs of the technologies and processes deployable in CO₂ conversion systems are met by one or a combination of hydrogen (chemical), electricity, heat, and light. Likewise, the environmental footprint of any CO₂ utilization pathway depends on the systems involved. So far, evaluation of CDU systems has been constrained to a particular energy source/type or a subset of the overall system needed to make CDU possible. This introduces limitations to the general understanding of the energy and environmental implications of CDU, which has led to various pitfalls in past studies. A CDU system has an energy source, a CO₂ supply, and conversion units. We apply a holistic approach to consider the impacts of all components in the process, including various sources of energy, CO₂ feedstock, and conversion technologies. The electricity sources include nuclear power, renewables (wind and solar PV), gas turbines, and coal. Heat is supplied from either electricity or natural gas, and hydrogen is produced from either steam methane reforming or electrolysis. The CO₂ capture unit uses either direct air capture or post-combustion capture via amine scrubbing; where applicable, integrated configurations of the CDU system are explored. We demonstrate how the overall energy and environmental impacts of each utilization pathway are obtained by aggregating the values for all components involved. Proper accounting of the energy and emission intensities of CDU must incorporate total balances for the utilization process and differences in timescales between alternative conversion pathways. Our results highlight opportunities for the use of clean energy sources, direct air capture, and a number of promising CO₂ conversion pathways for producing methanol, ethanol, synfuel, urea, and polymer materials.
Keywords: carbon dioxide utilization, processes, energy options, environmental impacts
6282 Multiple Version of Roman Domination in Graphs
Authors: J. C. Valenzuela-Tripodoro, P. Álvarez-Ruíz, M. A. Mateos-Camacho, M. Cera
Abstract:
The concept of Roman domination in graphs was introduced in 2004. It was initially inspired by, and related to, the defensive strategy of the Roman Empire. An undefended place is a city in which no legions are stationed, whereas a strong place is a city in which two legions are deployed. This situation may be modeled by labeling the vertices of a finite simple graph with labels {0, 1, 2}, satisfying the condition that any 0-vertex must be adjacent to at least one 2-vertex. Roman domination in graphs is a variant of classic domination. Clearly, the main aim is to obtain such a labeling of the vertices of the graph with minimum cost, that is to say, with minimum weight (sum of all vertex labels). Formally, a function f: V(G) → {0, 1, 2} is a Roman dominating function (RDF) in the graph G = (V, E) if f(u) = 0 implies that f(v) = 2 for at least one vertex v adjacent to u. The weight of an RDF is the positive integer w(f) = Σ_{v∈V} f(v). The Roman domination number, γ_R(G), is the minimum weight among all Roman dominating functions. Obviously, the set of vertices with a positive label under an RDF f is a dominating set in the graph, and hence γ(G) ≤ γ_R(G). In this work, we begin the study of a generalization of RDFs in which we consider that any undefended place should be defended from a sudden attack by at least k legions. These legions can be deployed in the city itself or in any of its neighbours. A function f: V → {0, 1, . . . , k + 1} such that f(N[u]) ≥ k + |AN(u)| for every vertex u with f(u) < k, where AN(u) represents the set of active neighbours (i.e., those with a positive label) of vertex u, is called a [k]-multiple Roman dominating function and is denoted by [k]-MRDF. The minimum weight of a [k]-MRDF in the graph G is the [k]-multiple Roman domination number ([k]-MRDN) of G, denoted by γ_[kR](G). First, we prove that the [k]-multiple Roman domination decision problem is NP-complete, even when restricted to bipartite and chordal graphs, a problem that had been resolved for other variants and that we wished to generalize. We know the difficulty of calculating the exact value of the [k]-MRD number, even for particular families of graphs. Here, we present several upper and lower bounds for the [k]-MRD number that permit us to estimate it with as much precision as possible. Finally, some graphs for which the exact value of this parameter is known are characterized.
Keywords: multiple Roman domination function, decision problem NP-complete, bounds, exact values
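The [k]-MRDF condition defined above can be checked directly. Below is a brute-force verifier over an adjacency-list graph (illustrative only, since the decision problem is shown to be NP-complete):

```python
# Verify the condition: every vertex u with f(u) < k must satisfy
# f(N[u]) >= k + |AN(u)|, where AN(u) is the set of neighbours of u
# with a positive label; the weight of f is the sum of all labels.
def is_k_mrdf(adj: dict, f: dict, k: int) -> bool:
    for u in adj:
        if f[u] < k:
            closed = [u] + list(adj[u])               # closed neighbourhood N[u]
            an = [v for v in adj[u] if f[v] > 0]      # active neighbours AN(u)
            if sum(f[v] for v in closed) < k + len(an):
                return False
    return True

def weight(f: dict) -> int:
    return sum(f.values())

# Example: path a-b-c with k = 2 (labels drawn from {0, 1, 2, 3})
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
f = {"a": 0, "b": 3, "c": 1}
print(is_k_mrdf(adj, f, k=2), weight(f))  # True 4
```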
6281 Predicting the Areal Development of the City of Mashhad with the Automaton Fuzzy Cell Method
Authors: Mehran Dizbadi, Daniyal Safarzadeh, Behrooz Arastoo, Ansgar Brunn
Abstract:
Rapid and uncontrolled expansion of cities has led to unplanned areal development. Modeling and predicting the urban growth of a city therefore helps decision-makers. In this study, the aspect of sustainable urban development has been studied for the city of Mashhad. In general, the prediction of urban areal development is one of the most important topics of modern town management. In this research, a simulation of complex urban processes has been carried out using a Cellular Automaton (CA) model developed for geodata from Geographic Information Systems (GIS), presenting a simple and powerful model.
Keywords: urban modeling, sustainable development, fuzzy cellular automaton, geo-information system
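As a rough illustration of the model family used above, a single cellular-automaton growth step might look like the sketch below; the fuzzy transition rule (a membership degree rising with the share of built-up neighbours, scaled by a GIS-derived suitability layer) is purely an assumption, as the paper's actual rules and data layers are not specified in the abstract:

```python
import numpy as np

def ca_step(urban: np.ndarray, suitability: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """urban: 0/1 grid of built-up cells; suitability: [0, 1] grid from GIS layers."""
    padded = np.pad(urban, 1)
    # Count built-up cells in the Moore neighbourhood of each cell
    neigh = sum(
        padded[1 + dy : padded.shape[0] - 1 + dy, 1 + dx : padded.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    membership = (neigh / 8.0) * suitability  # fuzzy degree of "develops next"
    return np.where(membership > threshold, 1, urban)
```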
6280 Synthetic Bis(2-Pyridylmethyl)Amino-Chloroacetyl Chloride- Ethylenediamine-Grafted Graphene Oxide Sheets Combined with Magnetic Nanoparticles: Remove Metal Ions and Catalytic Application
Authors: Laroussi Chaabane, Amel El Ghali, Emmanuel Beyou, Mohamed Hassen V. Baouab
Abstract:
In this research, the functionalization of graphene oxide sheets with ethylenediamine (EDA) was accomplished, followed by the grafting of a bis(2-pyridylmethyl)amino group (BPED) onto the activated graphene oxide sheets in the presence of chloroacetyl chloride (CAC), and then combined with magnetic nanoparticles (Fe₃O₄NPs) to produce a magnetic graphene-based composite [(Go-EDA-CAC)@Fe₃O₄NPs-BPED]. The physicochemical properties of the [(Go-EDA-CAC)@Fe₃O₄NPs-BPED] composite were investigated by Fourier transform infrared spectroscopy (FT-IR), scanning electron microscopy (SEM), X-ray diffraction (XRD), and thermogravimetric analysis (TGA). Additionally, the catalyst can be easily recycled within ten seconds by using an external magnetic field. Moreover, [(Go-EDA-CAC)@Fe₃O₄NPs-BPED] was used for removing Cu(II) ions from aqueous solutions in a batch process. The effects of pH, contact time, and temperature on metal ion adsorption were investigated; adsorption was only weakly dependent on ionic strength. The maximum adsorption capacity of Cu(II) on [(Go-EDA-CAC)@Fe₃O₄NPs-BPED] at pH 6 is 3.46 mmol·g⁻¹. To examine the underlying mechanism of the adsorption process, pseudo-first-order, pseudo-second-order, and intraparticle diffusion models were fitted to the experimental kinetic data. Results showed that the pseudo-second-order equation was appropriate to describe Cu(II) adsorption by [(Go-EDA-CAC)@Fe₃O₄NPs-BPED]. Adsorption data were further analyzed using the Langmuir, Freundlich, and Jossens adsorption approaches. Additionally, the adsorption properties of [(Go-EDA-CAC)@Fe₃O₄NPs-BPED], its reusability (more than 6 cycles), and its durability in aqueous solutions open the path to the removal of Cu(II) from aqueous solution. Based on the results obtained, we report the activity of Cu(II) supported on [(Go-EDA-CAC)@Fe₃O₄NPs-BPED] as a catalyst for the cross-coupling of symmetric alkynes.
Keywords: graphene, magnetic nanoparticles, adsorption kinetics/isotherms, cross coupling
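A sketch of fitting the pseudo-second-order kinetic model reported above, q_t = k₂q_e²t / (1 + k₂q_e t), to batch adsorption data; the time and uptake values are synthetic placeholders, not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    # Integrated pseudo-second-order rate law for adsorbed amount q_t at time t
    return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)  # min (synthetic)
qt = np.array([1.1, 1.8, 2.4, 2.9, 3.1, 3.3, 3.4])       # mmol/g (synthetic)

(qe_fit, k2_fit), _ = curve_fit(pseudo_second_order, t, qt, p0=[3.5, 0.01])
print(f"qe = {qe_fit:.2f} mmol/g, k2 = {k2_fit:.4f} g/(mmol*min)")
```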
6279 Change of Education Business in the Age of 5G
Authors: Heikki Ruohomaa, Vesa Salminen
Abstract:
Regions are facing huge competition to attract companies, businesses, inhabitants, students, etc., and in this way to improve the living and business environment, which is rapidly changing due to digitalization. On the other hand, from the industry's point of view, the availability of a skilled labor force and an innovative environment are crucial factors. In this context, qualified staff are expected to utilize the opportunities of digitalization and respond to the needs of future skills. The World Manufacturing Forum stated in its 2019 report that in the next five years, 40% of workers will have to change their core competencies. Through digital transformation, with new technologies like cloud, mobile, big data, 5G infrastructure, platform technology, data analysis, and social networks with increasing intelligence and automation, enterprises can capitalize on new opportunities and optimize existing operations to achieve significant business improvement. Digitalization will be an important part of the everyday life of citizens and present in the working day of the average citizen and employee in the future. For that reason, the education system and education programs on all levels of education, from diaper age to doctorate, have been directed to fulfill this ecosystem strategy. Goal: The Fourth Industrial Revolution will bring unprecedented change to societies, education organizations, and business environments. This article aims to identify how education, education content, the way education proceeds, and the education business as a whole are changing. Most important is how we should respond to this inevitable co-evolution. Methodology: The study aims to verify how the learning process is boosted by new digital content, new learning software and tools, and customer-oriented learning environments. The change of education programs and individual education modules can be supported by applied research projects. They can be used for making proofs of concept of new technology and of new ways to teach and train, and, through the experiences gathered, for changing education content, the way we educate, and finally the education business as a whole. Major findings: Applied research projects can run proof-of-concept phases in real-environment field labs to test technology opportunities and new tools for training purposes. Customer-oriented applied research projects are also excellent for students, who can carry out assignments and use new knowledge and content, and for teachers, who can test new tools and create new ways to educate. New content and problem-based learning are used in future education modules. This article introduces some case study experiences from customer-oriented digital transformation projects and shows how the knowledge gathered on new digital content and new ways to educate has influenced education. The case study is related to experiences from research projects, customer-oriented field labs/learning environments, and education programs of Häme University of Applied Sciences.
Keywords: education process, digitalization content, digital tools for education, learning environments, transdisciplinary co-operation
6278 Clinical Features, Diagnosis and Treatment Outcomes in Necrotising Autoimmune Myopathy: A Rare Entity in the Spectrum of Inflammatory Myopathies
Authors: Tamphasana Wairokpam
Abstract:
Inflammatory myopathies (IMs) have long been recognised as a heterogeneous family of myopathies with acute, subacute, and sometimes chronic presentation, and they are potentially treatable. Necrotizing autoimmune myopathies (NAM) are a relatively new subset of myopathies. Patients generally present with a subacute onset of proximal myopathy and significantly elevated creatine kinase (CK) levels. It is being increasingly recognised that there are limitations to the independent diagnostic utility of muscle biopsy. Immunohistochemistry tests may reveal important information in these cases. The traditional classification of IMs failed to recognise NAM as a separate entity and did not adequately emphasize the diversity of IMs. This review and case report on NAM aims to highlight the heterogeneity of this entity and to focus on its distinct clinical presentation, biopsy findings, the specific auto-antibodies implicated, and the available treatment options with prognosis. This article is a meta-analysis of the literature on NAM and a case report illustrating the clinical course, investigation and biopsy findings, the antibodies implicated, and the management of a patient with NAM. The main databases used for the search were PubMed, Google Scholar, and the Cochrane Library. Altogether, 67 publications have been taken as references. Two biomarkers, anti-signal recognition protein (anti-SRP) and anti-hydroxymethylglutaryl-coenzyme A reductase (anti-HMGCR) antibodies, have been found to be associated with NAM in about two-thirds of cases. Interestingly, anti-SRP-associated NAM appears to be more aggressive in its clinical course than its anti-HMGCR-associated counterpart. Biopsy shows muscle fibre necrosis without inflammation. There are reports of statin-induced NAM in which the myopathy progressed even after discontinuation of statins, pointing towards an underlying immune mechanism. Diagnosing NAM is essential, as it requires more aggressive immunotherapy than other types of IMs. Most cases are refractory to corticosteroid monotherapy. Immunosuppressive therapy with other immunotherapeutic agents such as IVIg, rituximab, mycophenolate mofetil, and azathioprine has been explored and found to have a role in the treatment of NAM. In conclusion, given the heterogeneity of NAM, it appears that NAM is not just a single entity but consists of many different forms, despite the similarities in presentation, and its classification remains an evolving field. A thorough understanding of the underlying mechanism and of the clinical correlation with the antibodies associated with NAM is essential for efficacious management and disease prognostication.
Keywords: inflammatory myopathies, necrotising autoimmune myopathies, anti-SRP antibody, anti-HMGCR antibody, statin induced myopathy
6277 Fatty Acid Structure and Composition Effects of Biodiesel on Its Oxidative Stability
Authors: Gelu Varghese, Khizer Saeed
Abstract:
Biodiesel is a mixture of mono-alkyl esters of long-chain fatty acids derived from vegetable oils or animal fats. Recent studies in the literature suggest that an end property of biodiesel such as its oxidative stability (OS) is influenced more by the structure and composition of its alkyl esters than by environmental conditions. The structure and composition of these long-chain fatty acid components have also been associated with trends in cetane number, heat of combustion, cold flow properties, viscosity, and lubricity. In the present work, a detailed investigation has been carried out to decouple and correlate biodiesel's fatty acid structure indices, such as degree of unsaturation, chain length, double bond orientation, and composition, with its oxidative stability. Measurements were taken using the Rancimat oxidative stability test method established by EN 14214 (EN 14112). Firstly, the effects of the degree of unsaturation, chain length, and bond orientation were tested on pure fatty acids to establish their oxidative stability. Results for pure fatty acids show that saturated FAs are more stable to oxidation than unsaturated ones; superior oxidative stability can be achieved by blending biodiesel fuels with relatively high saturated fatty acid contents. Lower oxidative stability is observed when a greater number of double bonds is present in the methyl ester; a strong inverse relationship can be identified between the number of double bonds and the Rancimat IP values. The trans isomer methyl elaidate shows superior stability to oxidation compared with its cis isomer methyl oleate (7.2 vs. 2.3). Secondly, the effects of variation in the composition of the biodiesel were investigated and established. Finally, biodiesels with varying structure and composition were investigated and correlated.
Keywords: biodiesel, fame, oxidative stability, fatty acid structure, acid composition
Procedia PDF Downloads 290
6276 3D Biomechanical Analysis in Shot Put Techniques of International Throwers
Authors: Satpal Yadav, Ashish Phulkar, Krishna K. Sahu
Abstract:
Aim: The research aims to carry out a three-dimensional (3D) biomechanical analysis of the shot put techniques of international throwers in order to evaluate performance. Research Method: The researcher adopted the descriptive method, and the data were analysed using Pearson's product-moment correlation to relate the biomechanical parameters to shot put performance. In all analyses, the 5% critical level (p ≤ 0.05) was taken to indicate statistical significance. Research Sample: Eight (N = 08) international male shot putters using the rotational/glide technique were selected as subjects. To obtain reliable measurements, the following instruments were used: a Tesscorn slow-motion camera, specialised motion-analysis software, a 7.260 kg shot (the men's implement), and a steel tape. All measurements pertaining to the biomechanical variables were taken by the principal investigator so that the data collected for the study could be considered reliable. Results: The findings showed significant negative relationships between performance and the angular velocity of the right shoulder and the acceleration distance at pre-flight (-0.70 and -0.72, respectively). Significant relationships were also obtained for the angular displacement of the knee, the angular velocity of the right shoulder, and the acceleration distance at flight (0.81, 0.75, and 0.71, respectively); for the angular velocity of the right shoulder and the acceleration distance at the transition phase (0.77 and 0.79, respectively); and for the angular displacement of the knee, angular velocity of the right shoulder, release velocity of the shot, angle of release, height of release, projected distance, and measured distance (0.76, 0.77, -0.83, -0.79, -0.77, 0.99, and 1.00, respectively), all of which exceeded the tabulated value at the 0.05 level of significance. On the other hand, no significant relationship was found between shot put performance and the acceleration distance [m], the angular displacement of the shot, the C.G. at release, and the horizontal release distance.
Keywords: biomechanics, analysis, shot put, international throwers
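For readers unfamiliar with the statistic used above, the following minimal Python sketch computes Pearson's product-moment correlation and checks it against the 5% critical level. The eight pairs of release velocities and measured distances are hypothetical values for illustration, not the study's data.

```python
# Minimal sketch: Pearson's product-moment correlation with a p <= 0.05
# significance check. Both data series below are hypothetical.
from scipy import stats

release_velocity = [12.8, 13.1, 13.4, 12.9, 13.6, 13.2, 13.8, 13.0]  # m/s
measured_distance = [19.2, 19.8, 20.5, 19.4, 21.0, 20.1, 21.4, 19.6]  # m

r, p_value = stats.pearsonr(release_velocity, measured_distance)
print(f"r = {r:.2f}, p = {p_value:.4f}")
if p_value <= 0.05:
    print("Significant at the 5% critical level")
```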
Procedia PDF Downloads 192
6275 Fragility Analysis of a Soft First-Story Building in Mexico City
Authors: Rene Jimenez, Sonia E. Ruiz, Miguel A. Orellana
Abstract:
On 09/19/2017, a Mw = 7.1 intraslab earthquake occurred in Mexico, causing the collapse of about 40 buildings. Many of these were 5- or 6-story buildings with a soft first story, so it is desirable to perform a structural fragility analysis of typical structures representative of those buildings and to propose a reliable structural solution. Here, a typical 5-story building constituted by regular R/C moment-resisting frames in the first story and confined masonry walls in the upper levels, similar to the structures that collapsed in the 09/19/2017 Mexico earthquake, is analyzed. Three structural solutions of the 5-story building are considered: S1) the building is designed in accordance with the Mexico City Building Code-2004; S2) the first-story column dimensions of S1 are reduced; and S3) viscous dampers are added at the first story of solution S2. A number of incremental dynamic analyses are performed for each structural solution, using a 3D structural model. The hysteretic behavior model of the masonry was calibrated with experiments performed at the Laboratory of Structures at UNAM. Ten seismic ground motions are used to excite the structures; they were recorded on intermediate soil of Mexico City with a dominant period of around 1 s, where the structures are located. The fragility curves of the buildings are obtained for different values of the maximum inter-story drift demand. Results show that solutions S1 and S3 yield similar probabilities of exceeding a given inter-story drift at the same seismic intensity, whereas solution S2 presents a higher probability of exceedance for the same seismic intensity and inter-story drift demand. It is therefore concluded that solution S3 (the building with a soft first story and energy dissipation devices) is a reliable solution from the structural point of view.
Keywords: demand hazard analysis, fragility curves, incremental dynamic analyses, soft first story, structural capacity
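The following Python sketch illustrates, under assumed inputs, how fragility curves of the kind described above can be fitted: empirical exceedance fractions from IDA-style drift demands are fitted with a lognormal CDF. All intensity levels, drift values, and the drift threshold are hypothetical placeholders, not results from this study.

```python
# Minimal sketch: a lognormal fragility curve fitted to IDA-style
# results. All numerical inputs below are hypothetical.
import numpy as np
from scipy import stats, optimize

# Peak inter-story drifts [%] for 10 records at each intensity Sa [g]
sa_levels = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
drifts = np.array([
    [0.2, 0.3, 0.2, 0.4, 0.3, 0.2, 0.3, 0.4, 0.3, 0.2],
    [0.5, 0.7, 0.6, 0.9, 0.8, 0.5, 0.6, 1.0, 0.7, 0.6],
    [0.9, 1.3, 1.1, 1.6, 1.4, 1.0, 1.2, 1.8, 1.3, 1.1],
    [1.4, 2.0, 1.7, 2.6, 2.2, 1.5, 1.9, 2.9, 2.1, 1.6],
    [2.0, 2.9, 2.5, 3.8, 3.2, 2.2, 2.7, 4.1, 3.0, 2.4],
])

threshold = 1.5  # % inter-story drift capacity of interest, hypothetical
frac_exceed = (drifts > threshold).mean(axis=1)  # empirical fractions

# Fit P(exceed | Sa) = Phi(ln(Sa / theta) / beta) by least squares,
# where theta is the median capacity and beta the dispersion.
def lognormal_cdf(sa, theta, beta):
    return stats.norm.cdf(np.log(sa / theta) / beta)

(theta, beta), _ = optimize.curve_fit(
    lognormal_cdf, sa_levels, frac_exceed, p0=(0.6, 0.4))
print(f"median capacity theta = {theta:.2f} g, dispersion beta = {beta:.2f}")
```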
Procedia PDF Downloads 180
6274 Simulation and Analysis of Passive Parameters of Building in eQuest: A Case Study in Istanbul, Turkey
Authors: Mahdiyeh Zafaranchi
Abstract:
With the rapid development of urbanization and the improvement of living standards around the world, the energy consumption and carbon emissions of the building sector are expected to increase in the near future; consequently, energy-saving issues have become increasingly important to engineers. The building sector is a major contributor to energy consumption and carbon emissions. The concept of the efficient building appeared as a response to the need to reduce energy demand in this sector, with the main purpose of shifting from standard buildings to low-energy buildings. Although energy saving should occur at every stage of a building's life cycle (material production, construction, demolition), the main concept of the energy-efficient building is to save energy during the building's service life by using passive and active systems, without sacrificing comfort and quality. The main aim of this study is to investigate passive strategies (those that require no energy consumption or use renewable energy) for achieving energy-efficient buildings. Energy retrofit measures were explored in the eQuest software using a case study as a base model. The study investigates the influence on the building's energy consumption of major factors such as the thermal transmittance (U-value) of materials, windows, shading devices, thermal insulation, the ratio of exposed envelope, the window-to-wall ratio, and the lighting system. The base model is located in Istanbul, Turkey. The impact of eight passive parameters on energy consumption was evaluated. After analysing the base model in eQuest, a final scenario with good energy performance was proposed. The results showed that decreasing the U-values of the materials and windows, and the ratio of exposed envelope, had a significant effect on energy consumption. Finally, annual savings of about 10.5% in electricity consumption and about 8.37% in gas consumption were achieved in the suggested model.
Keywords: efficient building, electric and gas consumption, eQuest, passive parameters
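As a back-of-the-envelope companion to the U-value discussion above, the sketch below evaluates steady-state envelope transmission losses (Q = U · A · ΔT, aggregated over annual degree-hours) for a baseline and a retrofitted envelope. All areas, U-values, and degree-hours are hypothetical and are not taken from the Istanbul case study.

```python
# Minimal sketch: annual envelope transmission loss for baseline vs.
# retrofitted U-values. All inputs are hypothetical placeholders.

# (component, area m^2, baseline U, retrofitted U)  [W/(m^2.K)]
envelope = [
    ("external walls", 420.0, 1.80, 0.45),
    ("roof",           180.0, 2.10, 0.30),
    ("windows",         95.0, 5.60, 1.80),
]

heating_degree_hours = 48_000  # K.h per year, hypothetical

def annual_loss_kwh(use_retrofit: bool) -> float:
    """Sum of U * A * degree-hours over components, converted to kWh."""
    total = 0.0
    for name, area, u_base, u_retro in envelope:
        u = u_retro if use_retrofit else u_base
        total += u * area * heating_degree_hours / 1000.0  # Wh -> kWh
    return total

base = annual_loss_kwh(False)
retro = annual_loss_kwh(True)
print(f"baseline: {base:,.0f} kWh, retrofit: {retro:,.0f} kWh")
print(f"saving: {100 * (1 - retro / base):.1f}%")
```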
Procedia PDF Downloads 115
6273 Study of Biofouling Wastewater Treatment Technology
Authors: Sangho Park, Mansoo Kim, Kyujung Chae, Junhyuk Yang
Abstract:
The International Maritime Organization (IMO) recognized the problem of invasive species and adopted the "International Convention for the Control and Management of Ships' Ballast Water and Sediments" in 2004, which came into force on September 8, 2017. In 2011, the IMO approved the "Guidelines for the Control and Management of Ships' Biofouling to Minimize the Transfer of Invasive Aquatic Species" to minimize the movement of invasive species by hull-attached organisms, and it required ships to manage the organisms attached to their hulls. Invasive species enter new environments through ships' ballast water and hull attachment. However, several obstacles to implementing these guidelines have been identified, including a lack of underwater cleaning equipment, regulations restricting underwater cleaning activities in ports, and difficulty accessing crevices in underwater areas. The shipping industry, which is responsible for implementing the guidelines, is motivated to do so by the fuel cost savings that result from removing organisms attached to the hull, but it anticipates significant difficulties because of the obstacles mentioned above. Robots or divers remove the organisms attached to the hull underwater, and the resulting wastewater contains various species of organisms as well as particles of paint and other pollutants. Currently, there is no technology available to sterilize the organisms in this wastewater or to stabilize the heavy metals in the paint particles. In this study, we aim to analyze the characteristics of the wastewater generated by the removal of hull-attached organisms and to select the optimal treatment technology. The organisms in the wastewater are sterilized to meet the biological treatment standard (D-2) using the sterilization technology applied in ships' ballast water treatment systems, while the heavy metals and other pollutants in the paint particles generated during removal are treated using stabilization technologies such as thermal decomposition. The wastewater is treated in a two-step process: 1) development of sterilization technology combining pretreatment filtration equipment with electrolytic sterilization, and 2) development of technology for removing pollutants such as heavy-metal particles and dissolved inorganic substances. Through this study, we will develop a biological removal technology and an environmentally friendly processing system for the waste generated after removal that meets the requirements of the government and the shipping industry and lays the groundwork for future treatment standards.
Keywords: biofouling, ballast water treatment system, filtration, sterilization, wastewater
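To make the D-2 acceptance criterion concrete, the following Python sketch checks hypothetical post-treatment samples against the IMO D-2 viable-organism limits (fewer than 10 viable organisms ≥ 50 µm per m³, and fewer than 10 viable organisms of 10–50 µm per mL). The sample counts and stage names are invented for illustration, not measurements from this study.

```python
# Minimal sketch: a D-2-style discharge compliance check for treated
# cleaning wastewater. Sample counts below are hypothetical.
from dataclasses import dataclass

@dataclass
class Sample:
    large_per_m3: float  # viable organisms >= 50 um, per m^3
    small_per_ml: float  # viable organisms 10-50 um, per mL

D2_LARGE_LIMIT_PER_M3 = 10.0
D2_SMALL_LIMIT_PER_ML = 10.0

def complies_with_d2(sample: Sample) -> bool:
    """True if both viable-organism size classes are under D-2 limits."""
    return (sample.large_per_m3 < D2_LARGE_LIMIT_PER_M3
            and sample.small_per_ml < D2_SMALL_LIMIT_PER_ML)

after_filtration = Sample(large_per_m3=120.0, small_per_ml=45.0)
after_electrolysis = Sample(large_per_m3=3.0, small_per_ml=6.0)

for stage, s in [("filtration only", after_filtration),
                 ("filtration + electrolytic sterilization",
                  after_electrolysis)]:
    print(f"{stage}: {'PASS' if complies_with_d2(s) else 'FAIL'}")
```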
Procedia PDF Downloads 115