Search results for: multiple group analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 36145

865 Lifespan Assessment of the Fish Crossing System of Itaipu Power Plant (Brazil/Paraguay) Based on the Reaching of Its Sedimentological Equilibrium Computed by 3D Modeling and Churchill Trapping Efficiency

Authors: Anderson Braga Mendes, Wallington Felipe de Almeida, Cicero Medeiros da Silva

Abstract:

This study aimed to assess the lifespan of the fish transposition system of the Itaipu Power Plant (Brazil/Paraguay) by using 3D hydrodynamic modeling and the Churchill trapping efficiency in order to identify the sedimentological equilibrium configuration in the main pond of the Piracema Channel, which is part of a 10 km hydraulic circuit that enables fish migration from downstream to upstream of the Itaipu Dam (and vice versa), overcoming a 120 m water drop. To that end, bottom data from 2002 (its opening year) and 2015 were collected and analyzed, along with bed material from 12 stations, in order to identify their granulometric profiles. The Shields and the Yalin-Karahan diagrams for initiation of motion of bed material were used to determine the critical bed shear stress for the sedimentological equilibrium state, based on the type of sediment (grain size) expected at the bottom once the balance is reached. Such granulometry was inferred by analyzing the coarser material (fine and medium sands) that flows into the pond and deposits in its backwater zone, adopting a range of diameters within the upper and lower limits of that sand stratification. The software Delft3D was used to compute the bed shear stress at every station under analysis. By modifying the input bathymetry of the main pond of the Piracema Channel so that the computed bed shear stresses at all stations fell simultaneously within the intervals of acceptable critical stresses, it was possible to foresee the bed configuration of the main pond when the sedimentological equilibrium is reached. Under such conditions, 97% of the whole pond capacity will be silted, and a shallow water course with depths ranging from 0.2 m to 1.5 m will be formed; in 2002, depths ranged from 2 m to 10 m. Outside that water path, the new bottom will be practically flat and covered by a layer of water 0.05 m thick. Thus, in the future, the main pond of the Piracema Channel will no longer serve its purpose of providing a resting place for migrating fish species, added to the fact that it may become an insurmountable barrier for medium and large sized specimens. Everything considered, it was estimated that its lifespan, from the year of its opening to the moment of the sedimentological equilibrium configuration, will be approximately 95 years, almost half of the computed lifespan of the Itaipu Power Plant itself. However, it is worth mentioning that drawbacks concerning the silting in the main pond will start being noticed much earlier than that, owing to the reasons previously mentioned.
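
To make the two criteria concrete, the sketch below computes a Shields-type critical bed shear stress for a range of sand diameters and a Churchill-style trap efficiency. It is a minimal illustration, not the study's code: the Shields parameter, densities, grain diameters, and sedimentation index are assumed values, and the Churchill relation is given as one commonly cited curve fit rather than the exact form used by the authors.

```python
# Minimal sketch, not the study's code. Shields-type critical shear stress and a
# commonly cited fit of Churchill's (1948) trap-efficiency curve; all constants
# below (Shields parameter, densities, d50 range, sedimentation index) are assumed.

def shields_critical_shear_stress(d50, theta_c=0.045, rho_s=2650.0, rho=1000.0, g=9.81):
    """Critical bed shear stress (Pa) for initiation of motion of grains with
    median diameter d50 (m), for a given Shields parameter theta_c."""
    return theta_c * (rho_s - rho) * g * d50

def churchill_percent_passing(si):
    """Percent of incoming sediment passing the pond, using one common fit of
    Churchill's curve: P = 800 * SI**-0.2 - 12, with SI the sedimentation index
    (retention period / mean velocity, in imperial units)."""
    return 800.0 * si ** -0.2 - 12.0

# Fine-to-medium sand stratification, roughly 0.125-0.5 mm as in the abstract.
for d50_mm in (0.125, 0.25, 0.5):
    tau_c = shields_critical_shear_stress(d50_mm / 1000.0)
    print(f"d50 = {d50_mm} mm -> critical shear stress = {tau_c:.3f} Pa")

si = 5.0e7  # hypothetical sedimentation index for the main pond
print(f"Churchill trap efficiency ~ {100.0 - churchill_percent_passing(si):.1f} %")
```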

Keywords: 3D hydrodynamic modeling, Churchill trapping efficiency, fish crossing system, Itaipu power plant, lifespan, sedimentological equilibrium

Procedia PDF Downloads 234
864 Detection of Egg Proteins in Food Matrices (2011-2021)

Authors: Daniela Manila Bianchi, Samantha Lupi, Elisa Barcucci, Sandra Fragassi, Clara Tramuta, Lucia Decastelli

Abstract:

Introduction: The detection of undeclared allergens in food products plays a fundamental role in the safety of allergic consumers. In Europe, the protection of allergic consumers is guaranteed by Regulation (EU) No 1169/2011 of the European Parliament, which governs the consumer's right to information and identifies 14 food allergens that must be indicated on food labels: among these is egg. Egg can be present as an ingredient or as a contaminant in raw and cooked products. The main allergenic egg proteins are ovomucoid, ovalbumin, lysozyme, and ovotransferrin. This study presents the results of a survey conducted in Northern Italy aimed at detecting the presence of undeclared egg proteins in food matrices over the last ten years (2011-2021). Method: In the period January 2011 - October 2021, a total of 1205 different types of food matrices (ready-to-eat, meats and meat products, bakery and pastry products, baby foods, food supplements, pasta, fish and fish products, preparations for soups and broths) were delivered to the Food Control Laboratory of Istituto Zooprofilattico Sperimentale of Piemonte, Liguria and Valle d'Aosta to be analyzed as official samples in the frame of the Regional Monitoring Plan of Food Safety or in the context of food poisoning investigations. The laboratory is ISO 17025 accredited and, since 2019, has served as the National Reference Centre for the detection in foods of substances causing food allergies or intolerances (CreNaRiA). All samples were stored in the laboratory according to food business operator instructions and analyzed within the expiry date for the detection of undeclared egg proteins. Analyses were performed with the RIDASCREEN®FAST Ei/Egg (R-Biopharm® Italia srl) kit: the method was internally validated and accredited with a Limit of Detection (LOD) equal to 2 ppm (mg/kg). It is a sandwich enzyme immunoassay for the quantitative analysis of whole egg powder in foods. Results: The results obtained through this study showed that egg proteins were found in 2% (n. 28) of food matrices, including meats and meat products (n. 16), fish and fish products (n. 4), bakery and pastry products (n. 4), pasta (n. 2), preparations for soups and broths (n. 1) and ready-to-eat (n. 1). In particular, egg proteins were detected in 5% of samples in 2011, 4% in 2012, 2% in 2013, 2016 and 2018, and 3% in 2014, 2015 and 2019. No egg protein traces were detected in 2017, 2020, and 2021. Discussion: Food allergies occur in the Western world in 2% of adults and up to 8% of children. Egg allergy is one of the most common food allergies in children. The percentage of positivity obtained from this study is, however, low. The trend over the ten years has been slightly variable, with comparable data.
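
The headline positivity rate follows directly from the counts quoted above; the minimal sketch below, using only figures reported in the abstract, reproduces it.

```python
# Counts taken from the abstract; this just reproduces the reported rates.
positives_by_matrix = {
    "meats and meat products": 16,
    "fish and fish products": 4,
    "bakery and pastry products": 4,
    "pasta": 2,
    "preparations for soups and broths": 1,
    "ready-to-eat": 1,
}
total_samples = 1205
total_positives = sum(positives_by_matrix.values())        # 28
print(f"overall positivity: {100 * total_positives / total_samples:.1f} %")  # ~2.3 %
```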

Keywords: allergens, food, egg proteins, immunoassay

Procedia PDF Downloads 138
863 Determination of Physical Properties of Crude Oil Distillates by Near-Infrared Spectroscopy and Multivariate Calibration

Authors: Ayten Ekin Meşe, Selahattin Şentürk, Melike Duvanoğlu

Abstract:

Petroleum refineries are a highly complex process industry with continuous production and high operating costs. Physical separation of crude oil starts with the crude oil distillation unit, continues with various conversion and purification units, and passes through many stages until the final product is obtained. To meet the desired product specifications, process parameters are strictly followed. To ensure the quality of distillates, routine analyses are performed in quality control laboratories based on appropriate international standards, such as American Society for Testing and Materials (ASTM) standard methods and European Standard (EN) methods. The cut point of distillates in the crude distillation unit is crucial for the efficiency of the downstream processes. In order to maximize process efficiency, the determination of distillate quality should be as fast as possible, reliable, and cost-effective. In this sense, an alternative study was carried out on the crude oil distillation unit that serves the entire refinery process. In this work, studies were conducted with three different crude oil distillates: Light Straight Run Naphtha (LSRN), Heavy Straight Run Naphtha (HSRN), and Kerosene. These products are named according to the number of carbon atoms their hydrocarbons contain. LSRN consists of hydrocarbons containing five to six carbons, HSRN of six to ten, and kerosene of sixteen to twenty-two. Physical properties of the three crude distillation unit products (LSRN, HSRN, and Kerosene) were determined using near-infrared spectroscopy with multivariate calibration. The absorbance spectra of the petroleum samples were obtained in the range from 10000 cm⁻¹ to 4000 cm⁻¹, employing a quartz transmittance flow-through cell with a 2 mm light path and a resolution of 2 cm⁻¹. A total of 400 samples were collected for each petroleum product over almost four years. Several different crude oil grades were processed during the sample collection period. Extended Multiplicative Signal Correction (EMSC) and Savitzky-Golay (SG) preprocessing techniques were applied to the FT-NIR spectra of the samples to eliminate baseline shifts and suppress unwanted variation. Two different multivariate calibration approaches (Partial Least Squares Regression, PLS, and Genetic Inverse Least Squares, GILS) and an ensemble model were applied to the preprocessed FT-NIR spectra. The predictive performance of each multivariate calibration and preprocessing technique was compared, and the best models were chosen according to the reproducibility of the ASTM reference methods. This work demonstrates that the developed models can be used for routine analysis instead of conventional analytical methods, with over 90% accuracy.
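
A minimal sketch of this kind of calibration pipeline is given below, with random placeholder arrays standing in for the real spectra and ASTM reference values. It applies a Savitzky-Golay derivative and cross-validates a PLS model with scikit-learn; EMSC and the GILS/ensemble models are omitted, and the window length and component count are assumptions.

```python
# Minimal sketch (not the authors' pipeline): SG preprocessing + PLS calibration.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# X: FT-NIR absorbance spectra (n_samples x n_wavenumbers); y: a reference
# property measured by an ASTM method. Both are random placeholders here.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3000))   # stand-in for 400 spectra
y = rng.normal(size=400)           # stand-in reference values

# First-derivative Savitzky-Golay filter to suppress baseline shifts.
X_sg = savgol_filter(X, window_length=15, polyorder=2, deriv=1, axis=1)

pls = PLSRegression(n_components=10)
y_cv = cross_val_predict(pls, X_sg, y, cv=10).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))   # cross-validated error
print(f"RMSECV = {rmsecv:.3f}")
```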

Keywords: crude distillation unit, multivariate calibration, near infrared spectroscopy, data preprocessing, refinery

Procedia PDF Downloads 132
862 Characterization of Anisotropic Deformation in Sandstones Using Micro-Computed Tomography Technique

Authors: Seyed Mehdi Seyed Alizadeh, Christoph Arns, Shane Latham

Abstract:

Geomechanical characterization of rocks in detail, and its possible implications for flow properties, is an important aspect of the reservoir characterization workflow. In order to gain more understanding of the microstructure evolution of reservoir rocks under stress, a series of axisymmetric triaxial tests was performed on two different analogue rock samples. In-situ compression tests were coupled with high-resolution micro-computed tomography to elucidate the changes in the pore/grain network of the rocks under pressurized conditions. Two outcrop sandstones were chosen for the current study, representing two different cementation states: a well-consolidated and a weakly-consolidated granular system, respectively. High-resolution images were acquired while the rocks deformed in a purpose-built compression cell. A detailed analysis of the 3D images in each series of step-wise compression tests (up to the failure point) was conducted, which included the registration of the deformed specimen images with the reference pristine dry rock image. A Digital Image Correlation (DIC) technique based on the intensity of the registered 3D subsets, together with particle tracking, was utilized to map the displacement fields in each sample. The results reveal the complex architecture of the localized shear zone in the well-cemented Bentheimer sandstone, whereas for the weakly-consolidated Castlegate sandstone no discernible shear band could be observed even after macroscopic failure. Post-mortem imaging of a sister plug from the friable rock, after continuous compression, reveals signs of a shear band pattern. This suggests that for friable sandstones at small scales, the loading mode may affect the pattern of deformation. Prior to mechanical failure, the continuum digital image correlation approach can reasonably capture the kinematics of deformation. As failure occurs, however, discrete image correlation (i.e., particle tracking) proves superior both in tracking the grains and in quantifying their kinematics (in terms of translations/rotations) with respect to any stage of compaction. An attempt was made to quantify the displacement field in compression using continuum Digital Image Correlation, which is based on the correlation of reference and secondary image intensities. Such an approach has previously been applied only to unconsolidated granular systems under pressure; we apply this technique to sandstones with various degrees of consolidation. This element of novelty sets the results of this study apart from previous attempts to characterize the deformation pattern in consolidated sands.
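
The continuum DIC step can be illustrated by a short sketch: a local displacement vector is estimated for each 3D subset by cross-correlating the registered images. This is a minimal sketch under assumed inputs (two co-registered 3D arrays), using scikit-image's phase correlation in place of the authors' actual implementation; the subset and step sizes are illustrative.

```python
# Minimal DIC-style sketch with assumed inputs: two co-registered 3D images.
# Each subset's displacement is estimated by phase correlation (scikit-image),
# standing in for the authors' subset-intensity correlation.
import numpy as np
from skimage.registration import phase_cross_correlation

def dic_displacement_field(reference, deformed, subset=64, step=32):
    """Coarse displacement field: one (dz, dy, dx) shift per subset window."""
    nz, ny, nx = reference.shape
    field = {}
    for z in range(0, nz - subset, step):
        for y in range(0, ny - subset, step):
            for x in range(0, nx - subset, step):
                sl = (slice(z, z + subset), slice(y, y + subset), slice(x, x + subset))
                shift, _, _ = phase_cross_correlation(reference[sl], deformed[sl],
                                                      upsample_factor=10)
                field[(z, y, x)] = shift
    return field

rng = np.random.default_rng(0)
ref = rng.normal(size=(128, 128, 128))
defo = np.roll(ref, shift=2, axis=0)   # rigid 2-voxel shift as a toy "deformation"
field = dic_displacement_field(ref, defo)
print(field[(0, 0, 0)])                # ~2-voxel displacement along the first axis
```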

Keywords: deformation mechanism, displacement field, shear behavior, triaxial compression, X-ray micro-CT

Procedia PDF Downloads 190
861 Hampering the 'Right to Know': Consequences of the Excessive Interpretation of the Notion of Exemption from the Right to Information

Authors: Tomasz Lewinski

Abstract:

The right to know is gradually being recognised as an increasing number of states adopt national legislation regarding access to state-held information. Laws differ from each other in the scope of the right to information (hereinafter: RTI). In all RTI regimes, there are exceptions to the general notion of the right. States' authorities too often use exceptions to justify refusals of requests for state-held information. This paper sets out how states hamper RTI by relying on the notion of exception and by not providing an effective procedure that could redress unlawful denials. The paper draws on two selected examples of RTI incorporation into national legal regimes: the United Kingdom and South Africa. It succinctly outlines the international standard given in Article 19 of the International Covenant on Civil and Political Rights (hereinafter: ICCPR) and its influence on RTI in the selected countries. As background to further analysis, it briefly presents the Human Rights Committee's jurisprudence and the standards articulated by successive Special Rapporteurs on freedom of opinion and expression. Subsequently, it presents a brief comparison of these standards with the regional standards, namely the African Charter on Human and Peoples' Rights and the European Convention on Human Rights. It critically discusses the regimes of exceptions in the RTI legislation of the respective national laws, showing how excessive these regimes are and what implications they have for transparency in general. A further objective is to classify the exceptions enumerated in the legislation of the selected states against the exceptions provided in Article 19 of the ICCPR. Based on the established classification of exceptions by their nature, the paper compares both regimes of exceptions related to the principle of national security; that is, it compares the jurisprudence of domestic courts and reviews the practices that states' authorities apply to RTI requests. The paper evaluates the remedies available in the legislation, including the length and costs of the subsequent proceedings. This provides a general assessment of the given mechanisms and presents the potential risks of their ineffectiveness. The paper relies on an examination of the national legislation, commentary from credible non-governmental organisations (e.g., The Public's Right to Know Principles on Freedom of Information Legislation by Article 19, The Tshwane Principles on National Security and the Right to Information), academic commentary, and research on the relevant judgments delivered by domestic and international courts. The conclusion assesses whether the selected countries' legislation is in line with international law and trends, and whether the jurisprudence of the regional courts provides appropriate benchmarks for national courts to address RTI issues effectively. Furthermore, it identifies the main shortcomings of current legislation and the outcomes they lead to in domestic courts' jurisprudence. Finally, it provides recommendations and policy arguments for states to improve transparency and to support local organisations in their endeavours to establish more transparent states and societies.

Keywords: access to information, freedom of information, national security, right to know, transparency

Procedia PDF Downloads 215
860 Teaching Academic Writing for Publication: A Liminal Threshold Experience Towards Development of Scholarly Identity

Authors: Belinda du Plooy, Ruth Albertyn, Christel Troskie-De Bruin, Ella Belcher

Abstract:

In the academy, scholarliness or intellectual craftsmanship is considered the highest level of achievement, culminating in consistent, successful publication in impactful, peer-reviewed journals and books. Scholarliness implies rigorous methods, systematic exposition, in-depth analysis and evaluation, and the highest level of critical engagement and reflexivity. However, being a scholar does not happen automatically when one becomes an academic or completes graduate studies. A graduate qualification is an indication of one's level of research competence but does not necessarily prepare one for the type of scholarly writing for publication required after a postgraduate qualification has been conferred. Scholarly writing for publication requires a high-level skillset and a specific mindset, which must be intentionally developed. The rite of passage to becoming a scholar is an iterative process with liminal spaces, thresholds, transitions, and transformations. The journey from researcher to published author is often fraught with rejection, insecurity, and disappointment, and requires resilience and tenacity from those who eventually triumph. It cannot be achieved without support, guidance, and mentorship. In this article, the authors use collective auto-ethnography (CAE) to describe the phases and types of liminality encountered during the journey toward scholarship. The authors speak as long-time facilitators of Writing for Academic Publication (WfAP) capacity development events (training workshops and writing retreats) presented at South African universities. Their WfAP facilitation practice is structured around experiential learning principles that allow them to act as critical reading partners and reflective witnesses for the writer-participants of their WfAP events. They identify three essential facilitation features for effectively holding a generative, liminal, and transformational writing space for novice academic writers, enabling their safe passage through the various liminal spaces they encounter during their scholarly development journey. These features are: that facilitators should be agents of disruption and liminality while also guiding writers through these liminal spaces; that there should be a sense of mutual trust and respect, shared responsibility, and accountability in order for writers to produce publication-worthy scholarly work; and that this can only be accomplished with the continued application of high levels of sensitivity and discernment by WfAP facilitators. These are key features of successful WfAP scholarship training events, where focused, individual input triggers personal and professional transformational experiences, which in turn translate into high-quality scholarly outputs.

Keywords: academic writing, liminality, scholarship, scholarliness, threshold experience, writing for publication

Procedia PDF Downloads 44
859 Digital Advance Care Planning and Directives: Early Observations of Adoption Statistics and Responses from an All-Digital Consumer-Driven Approach

Authors: Robert L. Fine, Zhiyong Yang, Christy Spivey, Bonnie Boardman, Maureen Courtney

Abstract:

Importance: Barriers to traditional advance care planning (ACP) and advance directive (AD) creation have limited the promise of ACP/AD for individuals and families, the healthcare team, and society. Reengineering ACP as a web-based, consumer-driven process has recently been suggested. We report early experience with such a process. Objective: To begin analyzing the potential of ACP/ADs generated by a consumer-friendly digital process by 1) assessing the likelihood that consumers would create ACP/ADs without structured intervention by medical or legal professionals, and 2) analyzing the responses to determine whether the plans can help doctors better understand a person's goals, preferences, and priorities for their medical treatments and the naming of healthcare agents. Design: The authors chose 900 users of MyDirectives.com, a digital ACP/AD tool, solely on the basis of their state of residence in order to achieve proportional representation of all 50 states by population size, and then reviewed their responses, summarizing these through descriptive statistics, including treatment preferences, demographics, and revision of preferences. Setting: General United States population. Participants: The 900 participants had an average age of 50.8 years (SD = 16.6); 84.3% of the men and 91% of the women were in self-reported good health when signing their ADs. Main measures: Preferences regarding the use of life-sustaining treatments, where to spend final days, consulting a supportive and palliative care team, attempted cardiopulmonary resuscitation (CPR), autopsy, and organ and tissue donation. Results: Nearly 85% of respondents prefer cessation of life-sustaining treatments during their final days, whenever those may be; 76% prefer to spend their final days at home or in a hospice facility; and 94% want their future doctors to consult a supportive and palliative care team. 70% would accept attempted CPR in certain limited circumstances. Most respondents would want an autopsy under certain conditions, and 62% would like to donate their organs. Conclusions and relevance: Analysis of early experience with an all-digital, web-based ACP/AD platform demonstrates that individuals of a wide range of ages and conditions can engage in an interrogatory process about values, goals, preferences, and priorities for their medical treatments, developing advance directives and easily making changes to the ADs created. Online creation, storage, and retrieval of advance directives have the potential to remove barriers to ACP/AD and, thus, to further improve patient-centered end-of-life care.

Keywords: advance care plan, advance decisions, advance directives, consumer, digital, end of life care, goals, living wills, preferences, universal advance directive, statements

Procedia PDF Downloads 327
858 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability

Authors: Chin-Chia Jane

Abstract:

In a transportation network, travel time refers to the transmission time from source node to destination node, whereas reliability refers to the probability of a successful connection from source node to destination node. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight, which is the travel time per unit of commodity passing through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the demand requirement to the destination while the total transmission time stays within the travel time limitation. This work is pioneering: whereas the existing literature evaluates travel time reliability via a single optimal path, the proposed QoS measure focuses on the performance of the whole network system. To compute the QoS of transportation networks, we first transform the extended network model into an equivalent min-cost max-flow network model. In the transformed network, each original arc is given a travel time weight of 0. Each intermediate node is replaced by two nodes u and v, and an arc directed from u to v. The newly generated nodes u and v are perfect nodes. The new direct arc has three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left. The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, the QoS can be obtained directly by summing the probabilities of these reliable subsets. Computational experiments are conducted on a benchmark network with 11 nodes and 21 arcs. Five travel time limitations and five demand requirements are set to compute the QoS value. For comparison, we also test the exhaustive complete enumeration method. Computational results reveal that the proposed algorithm is much more efficient than the complete enumeration method. In this work, a transportation network is analyzed by an extended flow network model where each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network is an integration of customer demands, travel time, and the probability of connection. We present a decomposition algorithm to compute the QoS efficiently. Computational experiments conducted on a prototype network show that the proposed algorithm is superior to existing complete enumeration methods.
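
The node-splitting transformation described above is easy to express with a standard graph library. The sketch below (toy data rather than the paper's benchmark, and omitting the probabilistic state-vector decomposition) splits each intermediate node into an in/out pair carrying the node's travel time as an arc cost, then runs an off-the-shelf min-cost max-flow routine from networkx.

```python
# Minimal sketch of the node-splitting transformation, with assumed toy data.
import networkx as nx

def split_nodes(G, source, sink):
    """Copy of G in which every intermediate node i becomes an arc i_in -> i_out
    whose cost is the node's travel time; original arcs keep travel time 0."""
    H = nx.DiGraph()
    for i, data in G.nodes(data=True):
        if i not in (source, sink):
            H.add_edge(f"{i}_in", f"{i}_out", weight=data.get("time", 0))
    for a, b, data in G.edges(data=True):
        u = a if a in (source, sink) else f"{a}_out"
        v = b if b in (source, sink) else f"{b}_in"
        H.add_edge(u, v, weight=0, capacity=data["capacity"])
    return H

# Toy instance: s -> m -> t, node m takes 3 time units per unit of commodity.
G = nx.DiGraph()
G.add_node("m", time=3)
G.add_edge("s", "m", capacity=4)
G.add_edge("m", "t", capacity=2)

H = split_nodes(G, "s", "t")
flow = nx.max_flow_min_cost(H, "s", "t")  # flow dict on the transformed network
print(nx.cost_of_flow(H, flow))           # total travel time of the max flow: 6
```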

Keywords: quality of service, reliability, transportation network, travel time

Procedia PDF Downloads 222
857 Treatment of Onshore Petroleum Drill Cuttings via Soil Washing Process: Characterization and Optimal Conditions

Authors: T. Poyai, P. Painmanakul, N. Chawaloesphonsiya, P. Dhanasin, C. Getwech, P. Wattana

Abstract:

Drilling is a key activity in oil and gas exploration and production. Drilling always requires the use of drilling mud for lubricating the drill bit and controlling the subsurface pressure. As drilling proceeds, a considerable amount of cuttings, or rock fragments, is generated. In general, water or Water Based Mud (WBM) serves as the drilling fluid for the top hole section. The cuttings generated from this section are non-hazardous and normally used as fill material. On the other hand, drilling the bottom hole to reservoir section uses Synthetic Based Mud (SBM), which is composed of synthetic oils. The bottom-hole cuttings, SBM cuttings, are regarded as hazardous waste, in accordance with government regulations, due to the presence of hydrocarbons. Currently, the SBM cuttings are disposed of as an alternative fuel and raw material in cement kilns. Instead of burning, this work aims to propose an alternative for drill cuttings management with two ultimate goals: (1) reduction of hazardous waste volume; and (2) making use of the cleaned cuttings. Soil washing was selected as the major treatment process. The physicochemical properties of the drill cuttings were analyzed, such as size fraction, pH, moisture content, and hydrocarbons. The particle size of the cuttings was analyzed via the light scattering method. Oil present in the cuttings was quantified in terms of total petroleum hydrocarbon (TPH) through gas chromatography equipped with a flame ionization detector (GC-FID). Other components were measured by the standard methods for soil analysis. The effects of different washing agents, liquid-to-solid (L/S) ratio, washing time, mixing speed, rinse-to-solid (R/S) ratio, and rinsing time were also evaluated. It was found that the drill cuttings had an electrical conductivity of 3.84 dS/m, a pH of 9.1, and a moisture content of 7.5%. The TPH in the cuttings was in the diesel range, with concentrations ranging from 20,000 to 30,000 mg/kg dry cuttings. The majority of the cuttings particles had a mean diameter of 50 µm, representing the silt fraction. The results also suggested that a green solvent was the most promising for cuttings treatment regarding occupational health, safety, and environmental benefits. The optimal washing conditions were obtained at an L/S ratio of 5, a washing time of 15 min, a mixing speed of 60 rpm, an R/S ratio of 10, and a rinsing time of 1 min. After the washing process, three fractions, comprising clean cuttings, spent solvent, and wastewater, were considered and provided with recommendations. Residual TPH of less than 5,000 mg/kg was detected in the clean cuttings, which can then be used for various purposes. The spent solvent had a calorific value higher than 3,000 cal/g, so it can be used as an alternative fuel; otherwise, the used solvent can be recovered using distillation or chromatography techniques. Finally, the generated wastewater can be combined with the produced water and simultaneously managed by re-injection into the reservoir.

Keywords: drill cuttings, green solvent, soil washing, total petroleum hydrocarbon (TPH)

Procedia PDF Downloads 155
856 The Properties of Risk-based Approaches to Asset Allocation Using Combined Metrics of Portfolio Volatility and Kurtosis: Theoretical and Empirical Analysis

Authors: Maria Debora Braga, Luigi Riso, Maria Grazia Zoia

Abstract:

Risk-based approaches to asset allocation are portfolio construction methods that do not rely on the input of expected returns for the asset classes in the investment universe and only use risk information. They include the Minimum Variance strategy (MV strategy), the traditional (volatility-based) Risk Parity strategy (SRP strategy), the Most Diversified Portfolio strategy (MDP strategy) and, for many, the Equally Weighted strategy (EW strategy). All the mentioned approaches were based on portfolio volatility as a reference risk measure, but in 2023, the Kurtosis-based Risk Parity strategy (KRP strategy) and the Minimum Kurtosis strategy (MK strategy) were introduced. Understandably, they used the fourth root of the portfolio fourth moment as a proxy for portfolio kurtosis in order to work with a homogeneous function of degree one. This paper contributes mainly theoretically and methodologically to the framework of risk-based asset allocation approaches, with two steps forward. First, a new and more flexible objective function considering a linear combination (with positive coefficients that sum to one) of portfolio volatility and portfolio kurtosis is used to serve either a risk minimization goal or a homogeneous risk distribution goal. Hence, the new basic idea consists in extending the achievement of the typical goals of risk-based approaches to a combined risk measure. To give the rationale behind operating with such a risk measure, it is worth remembering that volatility and kurtosis are both expressions of uncertainty, to be read as dispersion of returns around the mean, and that both preserve adherence to a symmetric framework and consideration of the entire returns distribution; they differ from each other in that the former captures the "normal"/"ordinary" dispersion of returns, while the latter is able to catch the huge dispersion. Therefore, the combined risk metric, which uses two individual metrics focused on the same phenomenon but differently sensitive to its intensity, allows the asset manager to express, by varying the "relevance coefficient" associated with the individual metrics in the objective function, a wide set of plausible investment goals for the portfolio construction process, serving investors differently concerned with tail risk and traditional risk. Since this is the first study that implements risk-based approaches using a combined risk measure, it becomes of fundamental importance to investigate the portfolio effects triggered by this innovation. The paper also offers a second contribution. Until the recent advent of the MK strategy and the KRP strategy, efforts to highlight interesting properties of risk-based approaches were inevitably directed towards the traditional MV strategy and SRP strategy. Previous literature established an increasing order in terms of portfolio volatility, starting from the MV strategy, through the SRP strategy, and arriving at the EW strategy, and provided the mathematical proof of the "equalization effect" concerning marginal risks when the MV strategy is considered, and concerning risk contributions when the SRP strategy is considered. Regarding the validity of similar conclusions when referring to the MK strategy and KRP strategy, a theoretical demonstration is still pending. This paper fills this gap.
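
A minimal sketch of the combined objective, under assumed returns data, is given below: risk(w) = lam * volatility + (1 - lam) * (fourth central moment)^(1/4), both estimated empirically and minimised under long-only, fully-invested constraints. The sample size, asset count, and lam = 0.5 are illustrative assumptions, not values from the paper.

```python
# Minimal sketch with simulated fat-tailed returns: minimise a homogeneous
# (degree-one) combination of volatility and a kurtosis proxy.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
R = rng.standard_t(df=5, size=(1000, 4)) / 100   # assumed asset returns

def combined_risk(w, lam=0.5):
    rp = R @ w                                    # portfolio returns
    centered = rp - rp.mean()
    vol = np.sqrt(np.mean(centered ** 2))         # portfolio volatility
    kurt_proxy = np.mean(centered ** 4) ** 0.25   # fourth root of 4th moment
    return lam * vol + (1 - lam) * kurt_proxy

n = R.shape[1]
res = minimize(combined_risk, x0=np.full(n, 1.0 / n),
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print(res.x.round(3))   # weights of the minimum combined-risk portfolio
```

Varying lam between 0 and 1 sweeps the objective from the pure kurtosis-based goal to the pure volatility-based one, which is exactly the flexibility the "relevance coefficient" is meant to provide.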

Keywords: risk parity, portfolio kurtosis, risk diversification, asset allocation

Procedia PDF Downloads 65
855 The Location-Routing Problem with Pickup Facilities and Heterogeneous Demand: Formulation and Heuristics Approach

Authors: Mao Zhaofang, Xu Yida, Fang Kan, Fu Enyuan, Zhao Zhao

Abstract:

Nowadays, last-mile distribution plays an increasingly important role in the overall industrial delivery chain and accounts for a large proportion of total distribution costs. Promoting the upgrading of logistics networks and improving the layout of final distribution points has become one of the trends in the development of modern logistics. Due to the discrete and heterogeneous needs and spatial distribution of customer demand, which lead to a higher delivery failure rate and lower vehicle utilization, last-mile delivery has become a time-consuming and uncertain process. As a result, courier companies have introduced a range of innovative parcel storage facilities, including pick-up points and lockers. The introduction of pick-up points and lockers has not only improved the users' experience but has also helped logistics and courier companies achieve economies of scale. Against the backdrop of the COVID-19 pandemic, contactless delivery became a new hotspot, which also created new opportunities for the development of collection services. Therefore, a key issue for logistics companies is how to design/redesign their last-mile distribution network systems to create integrated logistics and distribution networks that consider pick-up points and lockers. This paper focuses on the introduction of self-pickup facilities in new logistics and distribution scenarios and on the heterogeneous demands of customers. We consider two types of demand, ordinary products and refrigerated products, as well as the corresponding transportation vehicles. We consider the constraints associated with self-pickup points and lockers and then address the location-routing problem with pickup facilities and heterogeneous demands (LRP-PFHD). To solve this challenging problem, we propose a mixed integer linear programming (MILP) model that aims to minimize the total cost, which includes the facility opening cost, the variable transport cost, and the fixed transport cost. Due to the NP-hardness of the problem, we propose a hybrid adaptive large-neighbourhood search algorithm to solve LRP-PFHD. We evaluate the effectiveness and efficiency of the proposed algorithm using instances generated from benchmark instances. The results demonstrate that the hybrid adaptive large-neighbourhood search algorithm is more efficient than MILP solvers such as Gurobi for LRP-PFHD, especially for large-scale instances. In addition, we conducted a comprehensive analysis of some important parameters (e.g., facility opening cost and transportation cost) to explore their impact on the results, and we suggest helpful managerial insights for courier companies.
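
To give a flavour of the MILP formulation, the sketch below (PuLP with hypothetical data, rather than the authors' Gurobi model) covers only the facility-opening and assignment slice of the problem: it chooses which pickup points/lockers to open and assigns customers to them, minimising opening plus distance-based transport costs. Routing and the ordinary/refrigerated demand classes are omitted.

```python
# Highly simplified, hypothetical slice of an LRP-type MILP (not the paper's model).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

facilities = {"locker_A": 50, "locker_B": 60, "pickup_C": 40}   # opening costs
customers = ["c1", "c2", "c3", "c4"]
dist = {(f, c): d for f, row in {
    "locker_A": [2, 4, 9, 7],
    "locker_B": [8, 3, 2, 6],
    "pickup_C": [5, 5, 4, 3]}.items() for c, d in zip(customers, row)}
capacity = {"locker_A": 2, "locker_B": 3, "pickup_C": 2}

prob = LpProblem("simplified_LRP", LpMinimize)
y = LpVariable.dicts("open", facilities, cat=LpBinary)     # open facility f?
x = LpVariable.dicts("assign", dist, cat=LpBinary)         # serve c from f?

# Objective: opening cost + distance-based transport cost.
prob += lpSum(facilities[f] * y[f] for f in facilities) \
      + lpSum(dist[f, c] * x[f, c] for f, c in dist)
for c in customers:                       # each customer served exactly once
    prob += lpSum(x[f, c] for f in facilities) == 1
for f in facilities:                      # capacity applies only if f is open
    prob += lpSum(x[f, c] for c in customers) <= capacity[f] * y[f]

prob.solve()
print([f for f in facilities if value(y[f]) > 0.5])   # opened facilities
```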

Keywords: city logistics, last-mile delivery, location-routing, adaptive large neighborhood search

Procedia PDF Downloads 80
854 Fire Safe Medical Oxygen Delivery for Aerospace Environments

Authors: M. A. Rahman, A. T. Ohta, H. V. Trinh, J. Hyvl

Abstract:

Atmospheric pressure and oxygen (O2) concentration are critical life support parameters for human-occupied aerospace vehicles and habitats. Various medical conditions may require medical O2; for example, the American Medical Association has determined that commercial air travel exposes passengers to altitude-related hypoxia and gas expansion, which may cause some passengers to experience significant symptoms and medical complications during the flight, requiring supplemental medical-grade O2 to maintain adequate tissue oxygenation and prevent hypoxemic complications. Although supplemental medical-grade O2 is a successful lifesaver for respiratory and cardiac failure, O2-enriched exhaled air can contain more than 95% O2, increasing the likelihood of a fire. In an aerospace environment, a localized high-concentration O2 bubble forms around a patient being treated for hypoxia, increasing the cabin O2 beyond the safe limit. To address this problem, this work describes a medical O2 delivery system that can reduce the O2 concentration of patient-exhaled, O2-rich air to safe levels while maintaining the prescribed O2 administration to the patient. The O2 delivery system is designed to be part of the medical O2 kit. The system uses cationic multimetallic cobalt complexes to reversibly, selectively, and stoichiometrically chemisorb O2 from the exhaled air. An air-release sub-system monitors the exhaled air, and as soon as the O2 percentage falls below 21%, the air is released to the room air. The O2-enriched exhaled air is channeled through a layer of porous, thin-film heaters coated with the cobalt complex. The complex absorbs O2, and when saturated, it is heated to 100°C using the thin-film heater. Upon heating, the complex desorbs O2 and is once again ready to absorb, i.e., remove, the excess O2 from exhaled air. The O2 absorption is a sub-second process, and desorption is a multi-second process. While heating at 0.685°C/sec, the complex desorbs ~90% of its O2 in 110 sec. These fast reaction times mean that a simultaneous absorb/desorb process in the O2 delivery system creates continuous absorption of O2. Moreover, the complex can concentrate O2 by a factor of 160 times that in air and desorb over 90% of the O2 at 100°C. Over 12 cycles of thermogravimetric measurement, a less than 0.1% decrease in the reversibility of O2 uptake was observed. One kilogram of the complex can desorb over 20 L of O2, so simultaneous O2 desorption by 0.5 kg of complex and absorption by another 0.5 kg can potentially remove 9 L/min of O2 (~90% desorbed at 100°C) from exhaled air on a continuous basis. The complex was synthesized and characterized for reversible O2 absorption and efficacy. The complex changes its color from dark brown to light gray after O2 desorption. In addition to thermogravimetric analysis, the O2 absorption/desorption cycle was characterized using optical imaging, showing stable color changes over ten cycles. The complex was also tested at room temperature in a low-O2 environment in its O2-desorbed state and was observed to hold the deoxygenated state under these conditions. The results show the feasibility of using the complex for reversible O2 absorption in the proposed fire-safe medical O2 delivery system.

Keywords: fire risk, medical oxygen, oxygen removal, reversible absorption

Procedia PDF Downloads 104
853 The Rise of a Blue Water Navy and Its Implications for the Region

Authors: Riddhi Chopra

Abstract:

Alfred Thayer Mahan described the sea as a 'great common,' which would serve as a medium for communication, trade, and transport. The seas of Asia are witnessing an intriguing historical anomaly: the rise of an indigenous maritime power against the backdrop of US domination over the region. As China transforms from an inward-leaning economy to an outward-leaning one, it has become increasingly dependent on the global sea; as a result, we witness an evolution in its maritime strategy from near-seas defense to far-seas deployment. It is not only patrolling international waters but has also built a network of civilian and military infrastructure across the disputed oceanic expanse. The paper analyses the reorientation of China from a naval power to a blue water navy in an era of extensive globalisation. The actions of the Chinese have created a zone of high alert amongst its neighbors, such as Japan, the Philippines, Vietnam, and North Korea. These nations are trying to align themselves so as to counter China's growing brinkmanship, but China has been pursuing its claims through a carefully calibrated strategy in the region, shunning any coercive measures taken by other forces. If China continues to expand its maritime boundaries, its neighbors, all smaller and weaker Asian nations, would be limited to a narrow band of the sea along their coastlines. Hence, it is essential for the US to intervene and support its allies to offset Chinese supremacy. The paper provides a profound analysis of the disputes in the South China Sea and the East China Sea, focusing on the Philippines and Japan, respectively. Moreover, the paper gives an account of US involvement in the region and its alignment with its Asian allies. The geographic dynamics are said to breed a natural coalition constraining the strategic ambitions of China as well as those of the weak littoral states. China has conducted behind-the-scenes diplomacy, trying to persuade its neighbors to support its position on the territorial disputes. These efforts have succeeded in creating fault lines in ASEAN, thereby undermining the region's ability to reach a consensus on the issue. Chinese diplomatic efforts have also forced the US to revisit its foreign policy and engage with players like Cambodia and Laos. The current scenario in the SCS points to a strong Chinese hold, outpacing all others with no regard for international law. Chinese activities contrast with US principles like freedom of navigation, thereby signaling the US to take bold action to prevent Chinese hegemony in the region. The paper ultimately seeks to explore the changing power dynamics among the various claimants, in which a rival superpower like the US, by pursuing the traditional policy of alliance formation, can play a decisive role in changing the status quo in the arena and consequently determine the future trajectory.

Keywords: China, East China Sea, South China Sea, USA

Procedia PDF Downloads 241
852 Experimental Investigation of Absorbent Regeneration Techniques to Lower the Cost of Combined CO₂ and SO₂ Capture Process

Authors: Bharti Garg, Ashleigh Cousins, Pauline Pearson, Vincent Verheyen, Paul Feron

Abstract:

The presence of SO₂ in power plant flue gases makes flue gas desulfurization (FGD) an essential requirement prior to post-combustion CO₂ capture (PCC) facilities. Although most power plants worldwide deploy FGD in order to comply with environmental regulations, the achieved SO₂ levels are generally not sufficiently low for the flue gases to enter the PCC unit. The SO₂ level in the flue gases needs to be less than 10 ppm to operate the PCC installation effectively. The existing FGD units alone cannot bring the SO₂ down to or below the 10 ppm required for CO₂ capture; an additional scrubber alongside the existing FGD unit might be required to bring the SO₂ to the desired levels. The absence of FGD units in Australian power plants brings an additional challenge. SO₂ concentrations in Australian power station flue gas emissions are in the range of 100-600 ppm. This imposes a serious barrier to the implementation of standard PCC technologies in Australia. CSIRO's CS-Cap process is a unique solution that captures SO₂ and CO₂ in a single column with a single absorbent, which can potentially make the commercial deployment of carbon capture in Australia cost-effective by removing the need for FGD. The estimated savings from removing SO₂ through a process similar to CS-Cap are around 200 MMUSD for a 500 MW Australian power plant. Pilot plant trials conducted to generate the proof of concept resulted in 100% removal of SO₂ from flue gas without utilising standard limestone-based FGD. In this work, the removal of absorbed sulfur from the aqueous amine absorbents generated in the pilot plant trials has been investigated by reactive crystallisation and thermal reclamation. More than 95% of the aqueous amines can be reclaimed from the sulfur-loaded absorbent via reactive crystallisation. However, the recovery of amines through thermal reclamation is limited and depends on the sulfur loading of the spent absorbent. The initial experimental work revealed that reactive crystallisation is a better fit for CS-Cap's sulfur-rich absorbent, especially as it is also capable of generating K₂SO₄ crystals of highly saleable quality (~99%). Initial cost estimations carried out for both technologies resulted in almost similar capital expenditure; however, the operating cost is considerably higher for the thermal reclaimer than for the crystalliser. The experimental data generated in the laboratory from both regeneration techniques have been used to build a simulation model in Aspen Plus. The simulation model illustrates the economic benefits that could be gained by removing flue gas desulfurization prior to the standard PCC unit and replacing it with a CS-Cap absorber column co-capturing CO₂ and SO₂, together with its absorbent regeneration system, which would be either reactive crystallisation or thermal reclamation.

Keywords: combined capture, cost analysis, crystallisation, CS-Cap, flue gas desulfurisation, regeneration, sulfur, thermal reclamation

Procedia PDF Downloads 128
851 Advancing the Analysis of Physical Activity Behaviour in Diverse, Rapidly Evolving Populations: Using Unsupervised Machine Learning to Segment and Cluster Accelerometer Data

Authors: Christopher Thornton, Niina Kolehmainen, Kianoush Nazarpour

Abstract:

Background: Accelerometers are widely used to measure physical activity behaviour, including in children. The traditional method for processing acceleration data uses cut points, relying on calibration studies that relate the quantity of acceleration to energy expenditure. As these relationships do not generalise across diverse populations, they must be parametrised for each subpopulation, including different age groups, which is costly and makes studies across diverse populations difficult. A data-driven approach that allows physical activity intensity states to emerge from the data under study, without relying on parameters derived from external populations, offers a new perspective on this problem and potentially improved results. We evaluated the data-driven approach in a diverse population with a range of rapidly evolving physical and mental capabilities, namely very young children (9-38 months old), for whom this new approach may be particularly appropriate. Methods: We applied an unsupervised machine learning approach (a hidden semi-Markov model, HSMM) to segment and cluster the accelerometer data recorded from 275 children with a diverse range of physical and cognitive abilities. The HSMM was configured to identify a maximum of six physical activity intensity states, and the output of the model was the time spent by each child in each of the states. For comparison, we also processed the accelerometer data using published cut points with available thresholds for the population. This provided us with estimates of each child's time in sedentary behaviour (SED), light physical activity (LPA), and moderate-to-vigorous physical activity (MVPA). Data on the children's physical and cognitive abilities were collected using the Paediatric Evaluation of Disability Inventory (PEDI-CAT). Results: The HSMM identified two inactive states (INS, comparable to SED), two lightly active, long-duration states (LAS, comparable to LPA), and two short-duration, high-intensity states (HIS, comparable to MVPA). Overall, the children spent on average 237/392 minutes per day in INS/SED, 211/129 minutes per day in LAS/LPA, and 178/168 minutes in HIS/MVPA. We found that INS overlapped with 53% of SED, LAS overlapped with 37% of LPA, and HIS overlapped with 60% of MVPA. We also examined the correlation between the time spent by a child in either HIS or MVPA and their physical and cognitive abilities. We found that HIS was more strongly correlated with physical mobility (R² = 0.5 for HIS vs. 0.28 for MVPA), cognitive ability (R² = 0.31 vs. 0.15), and age (R² = 0.15 vs. 0.09), indicating increased sensitivity to key attributes associated with a child's mobility. Conclusion: An unsupervised machine learning technique can segment and cluster accelerometer data according to the intensity of movement at a given time. It provides a potentially more sensitive, appropriate, and cost-effective approach to analysing physical activity behaviour in diverse populations, compared to the current cut points approach. This, in turn, supports research that is more inclusive across diverse populations.
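
The segmentation idea can be sketched as follows. The study fitted a hidden semi-Markov model; since HSMMs lack a standard mainstream Python implementation, this stand-in uses a plain Gaussian HMM from hmmlearn, which captures the same states-emerge-from-the-data logic without explicit state-duration modelling. The data, epoch length, and state count below are placeholder assumptions.

```python
# Minimal sketch of unsupervised intensity-state segmentation (HMM stand-in
# for the study's HSMM), on placeholder acceleration magnitudes.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(2)
acc = np.abs(rng.normal(size=(10_000, 1)))   # placeholder acceleration signal

model = GaussianHMM(n_components=6, covariance_type="diag", n_iter=100,
                    random_state=0)
model.fit(acc)
states = model.predict(acc)                  # one intensity state per epoch

# Time spent in each state, assuming (hypothetically) 5-second epochs.
epoch_seconds = 5
for s in range(model.n_components):
    minutes = (states == s).sum() * epoch_seconds / 60
    print(f"state {s}: {minutes:.1f} min")
```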

Keywords: physical activity, machine learning, under 5s, disability, accelerometer

Procedia PDF Downloads 212
850 Agent-Based Modeling Investigating Self-Organization in Open, Non-equilibrium Thermodynamic Systems

Authors: Georgi Y. Georgiev, Matthew Brouillet

Abstract:

This research applies the power of agent-based modeling to a pivotal question at the intersection of biology, computer science, physics, and complex systems theory: the self-organization processes in open, complex, non-equilibrium thermodynamic systems. Central to this investigation is the principle of Maximum Entropy Production (MEP). This principle suggests that such systems evolve toward states that optimize entropy production, leading to the formation of structured environments. It is hypothesized that, guided by the least action principle, open thermodynamic systems identify and follow the shortest paths to transmit energy and matter, resulting in maximal entropy production, internal structure formation, and a decrease in internal entropy. Concurrently, it is predicted that there will be an increase in system information, as more information is required to describe the developing structure. To test this, an agent-based model is developed simulating an ant colony's formation of a path between a food source and its nest. Utilizing the NetLogo software for modeling and Python for data analysis and visualization, self-organization is quantified by calculating the decrease in system entropy based on the potential states and the distribution of the ants within the simulated environment. External entropy production is also evaluated, as are the increase in information and the improvement in the efficiency of the system's action. Simulations demonstrated that the system begins at maximal entropy, which decreases as the ants form paths over time. A range of system behaviors contingent upon the number of ants is observed. Notably, no path formation occurred with fewer than five ants, whereas clear paths were established by 200 ants, and saturation of path formation and entropy state was reached at populations exceeding 1000 ants. This analytical approach identified the inflection point marking the transition from disorder to order and computed the slope at this point. Combined with extrapolation to the final path entropy, these parameters yield important insights into the eventual entropy state of the system and the timeframe for its establishment, enabling the estimation of the self-organization rate. This study provides a novel perspective on the exploration of self-organization in thermodynamic systems, establishing a correlation between the internal entropy decrease rate and the external entropy production rate. Moreover, it presents a flexible framework for assessing the impact of external factors like changes in world size, path obstacles, and friction. Overall, this research offers a robust, replicable model for studying self-organization processes in any open thermodynamic system. As such, it provides a foundation for further in-depth exploration of the complex behaviors of these systems and contributes to the development of more efficient self-organizing systems across various scientific fields.
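
One way to quantify the entropy decrease described above is the Shannon entropy of the ants' spatial distribution over grid cells. The sketch below (synthetic positions rather than the NetLogo model's output) shows the measure falling as the colony concentrates onto a narrow path; the grid size and positions are assumptions.

```python
# Minimal sketch (assumed data, not the authors' code): Shannon entropy of
# ant positions binned onto a lattice, as a proxy for system order.
import numpy as np

def spatial_entropy(positions, grid_size=50):
    """Shannon entropy (bits) of ant counts binned onto a grid_size^2 lattice."""
    counts, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                                  bins=grid_size, range=[[0, 1], [0, 1]])
    p = counts.ravel() / counts.sum()
    p = p[p > 0]                      # drop empty cells before taking logs
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(3)
scattered = rng.uniform(size=(1000, 2))                 # disordered colony
on_path = np.column_stack([rng.uniform(size=1000),      # ants on a narrow path
                           0.5 + 0.01 * rng.normal(size=1000)])
print(spatial_entropy(scattered), spatial_entropy(on_path))  # entropy drops
```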

Keywords: complexity, self-organization, agent based modelling, efficiency

Procedia PDF Downloads 69
849 A Failure to Strike a Balance: The Use of Parental Mediation Strategies by Foster Carers and Social Workers

Authors: Jennifer E Simpson

Abstract:

Background and purpose: The ubiquitous use of the Internet and social media by children and young people has had a dual effect. The first is to open a world of possibilities and promise, characterized by the ability to consume and create content, connect with friends, explore and experiment. The second relates to risks such as unsolicited requests, sexual exploitation, cyberbullying and commercial exploitation. This duality poses significant difficulties for a generation of foster carers and social workers who have no childhood experience of growing up using the Internet, social media and digital devices to draw on. This presentation is concerned with the findings of a small qualitative study about the use of digital devices and the Internet by care-experienced young people to stay in touch with their families, and the way this was managed by foster carers and social workers using specific parental mediation strategies. The findings highlight that restrictive strategies were used by foster carers and endorsed by social workers. An argument is made for an approach that develops a series of balanced solutions that move foster carers from such restrictive approaches to those that are grounded in co-use and are interpretive in nature. Methods: Using a purposive sampling strategy, 12 triads consisting of care-experienced young people (aged 13-18 years), their foster carers and allocated social workers were recruited. All respondents undertook a semi-structured interview, with the young people detailing via an Ecomap what social media apps and other devices they used to contact their families. The foster carers and social workers shared details of the methods and approaches they used to manage digital devices and the Internet in general. Data analysis was performed using a Framework analytic method to explore the various attitudes, as well as the complementary and contradictory perspectives, of the young people, their foster carers and allocated social workers. Findings: The majority of foster carers made use of parental mediation strategies that erred on the side of typologies that included setting rules and regulations (restrictive), ad-hoc checking of a young person's behavior and device (monitoring), and software used to limit or block access to inappropriate websites (technical). Minimal use was made by foster carers of parental mediation strategies that involve talking about content (active/interpretive) or sharing Internet activities (co-use). The majority of social workers likewise had a strong preference for restrictive approaches. Conclusions and implications: Trepidations on the part of both foster carers and social workers about the use of digital devices and the Internet meant that the parental strategies used were weighted towards restriction, with little use made of co-use and interpretive approaches. This lack of balance calls for solutions that are grounded in co-use and an interpretive approach, both of which can be achieved through training and support, as well as wider policy change.

Keywords: parental mediation strategies, risk, children in state care, online safety

Procedia PDF Downloads 74
848 Cities Under Pressure: Unraveling Urban Resilience Challenges

Authors: Sherine S. Aly, Fahd A. Hemeida, Mohamed A. Elshamy

Abstract:

In the face of rapid urbanization and the myriad challenges posed by climate change, population growth, and socio-economic disparities, fostering urban resilience has become paramount. This abstract offers a comprehensive overview of the study on "Urban Resilience Challenges," exploring the background, methodology, major findings, and concluding insights. The paper unveils a spectrum of challenges encompassing environmental stressors and deep-seated socio-economic issues, such as unequal access to resources and opportunities. Emphasizing their interconnected nature, the study underscores the imperative for holistic and integrated approaches to urban resilience, recognizing the intricate web of factors shaping the urban landscape. Urbanization has witnessed an unprecedented surge, transforming cities into dynamic and complex entities. With this growth, however, comes an array of challenges that threaten the sustainability and resilience of urban environments. This study seeks to unravel these multifaceted urban resilience challenges, exploring their origins and implications for contemporary cities. Cities serve as hubs of economic, social, and cultural activities, attracting diverse populations seeking opportunities and a higher quality of life. However, the urban fabric is increasingly strained by climate-related events, infrastructure vulnerabilities, and social inequalities. Understanding the nuances of these challenges is crucial for developing strategies that enhance urban resilience and ensure the longevity of cities as vibrant and adaptive entities. This paper endeavors to discern strategic guidelines for enhancing urban resilience amidst the dynamic challenges posed by rapid urbanization. The study aims to distill actionable insights that can inform strategic approaches, guiding the formulation of effective strategies to fortify cities against multifaceted pressures. The study employs a multifaceted approach to dissect urban resilience challenges: a qualitative method was used, including comprehensive literature reviews and analysis of data on urban vulnerabilities, which provided valuable insights into the lived experience of resilience challenges in diverse urban settings. In conclusion, this study underscores the urgency of addressing urban resilience challenges to ensure the sustained vitality of cities worldwide. The interconnected nature of these challenges necessitates a paradigm shift in urban planning and governance. By adopting holistic strategies that integrate environmental, social, and economic considerations, cities can navigate the complexities of the 21st century. The findings provide a roadmap for policymakers, planners, and communities to collaboratively forge resilient urban futures that withstand the challenges of an ever-evolving urban landscape.

Keywords: resilient principles, risk management, sustainable cities, urban resilience

Procedia PDF Downloads 55
847 Improving Ghana's Oil Industry Through Integrated Operations

Authors: Esther Simpson, Evans Addo Tetteh

Abstract:

One of the most important sectors in Ghana's economy is the oil and gas sector. Effective supply chain management is required to ensure the timely delivery of petroleum products to end users, given the rise in nationwide demand. In parallel, freight forwarding plays a crucial role in facilitating intra- and inter-country trade, particularly the movement of oil products. Nevertheless, there has not been enough scientific study of how marketing, supply chain management, and freight forwarding are integrated in the oil business. By highlighting possible areas for development in the supply chain management of petroleum products, this article seeks to close this gap. The study was predominantly qualitative and featured semi-structured interviews with influential figures in the oil and gas sector, such as marketers, distributors, freight forwarders, and regulatory organizations. The purpose of the interviews was to determine the difficulties and opportunities in enhancing the management of the petroleum products supply chain. Thematic analysis was used to examine the data and identify the patterns and themes that emerged. The findings revealed that the oil sector faced a number of issues in supply chain management: inadequate infrastructure, insufficient storage facilities, a lack of cooperation among parties, and an inadequate regulatory framework were among the obstacles. Furthermore, the study indicated significant prospects for enhancing petroleum product supply chain management, such as the integration of more advanced digital technologies, the formation of strategic alliances, and the adoption of sustainable practices. The study's conclusions have far-reaching ramifications for the oil and gas sector, freight forwarding, and Ghana's economy as a whole. Integrating marketing, supply chain management, and freight forwarding holds strong prospects for improving the efficiency of the petroleum product supply chain, resulting in considerable cost savings for the industry. Furthermore, the use of sustainable practices will improve the industry's sustainability and lessen the environmental effect of the petroleum product supply chain. Based on the findings, we propose that stakeholders in Ghana's oil and gas sector collaborate to enhance petroleum supply chain management. This collaboration should include the use of digital technologies, the formation of strategic alliances, and the implementation of sustainable practices. Moreover, we urge governments to establish suitable regulations to guarantee the efficient and sustainable management of petroleum product supply chains. In conclusion, the integration of marketing, supply chain management, and freight forwarding in the oil business offers a tremendous opportunity for enhancing petroleum product supply chain management. Using sustainable practices, integrating digital technology, and forming strategic alliances will improve the efficiency and sustainability of the petroleum product supply chain. We hope that this conference paper will encourage further study and collaboration among oil and gas sector stakeholders to improve petroleum supply chain management.

Keywords: collaboration, logistics, sustainability, supply chain management

Procedia PDF Downloads 81
846 Improving the Biomechanical Resistance of a Treated Tooth via Composite Restorations Using Optimised Cavity Geometries

Authors: Behzad Babaei, B. Gangadhara Prusty

Abstract:

The objective of this study is to assess the hypothesis that a restored tooth with a class II occlusal-distal (OD) cavity can be strengthened by designing an optimized cavity geometry, as well as by selecting a composite restoration with optimized elastic moduli, when there is a sharp de-bonded edge at the interface of the tooth and restoration. Methods: A scanned human maxillary molar tooth was segmented into dentine and enamel parts. The dentine and enamel profiles were extracted and imported into finite element (FE) software, and the enamel rod orientations were estimated virtually. Fifteen models of the restored tooth with different occlusal cavity depths (1.5, 2, and 2.5 mm) and internal cavity angles were generated. A 400 N load was applied to two contact points of the restored tooth model via a semi-circular stone part. The junctions between the enamel, dentine, and restoration were considered perfectly bonded, and all parts in the model were considered homogeneous, isotropic, and elastic. Quadrilateral and triangular elements were employed in the models. A mesh convergence analysis was conducted to verify that the element count did not influence the simulation results; using a 5% stress-error criterion, we found that meshes of more than 14,000 elements yielded converged stresses. A Python script was employed to automatically assign moduli of 2-22 GPa (in increments of 4 GPa) to the composite restorations, 18.6 GPa to the dentine, and two different elastic moduli to the enamel (72 GPa in the direction of the enamel rods and 63 GPa in the perpendicular direction). Linear, homogeneous, and elastic material models were considered for the dentine, enamel, and composite restorations. 108 FEA simulations were conducted in succession. Results: The internal cavity angle (α) significantly altered the peak maximum principal stress at the interface of the enamel and restoration. The strongest structures against the contact loads were observed in the models with α = 100° and 105°. Interestingly, even when the directional mechanical properties of the enamel rods were disregarded, the models with α = 100° and 105° exhibited the highest resistance against the mechanical loads. Regarding the effect of occlusal cavity depth, the models with 1.5 mm depth showed higher resistance to contact loads than the models with deeper cavities (2.0 and 2.5 mm). Moreover, composite moduli in the range of 10-18 GPa alleviated the stress levels in the enamel. Significance: For the class II OD cavity models in this study, the optimal geometries, composite properties, and occlusal cavity depths were determined. Designing the cavities with α ≥ 100° was significantly effective in minimizing peak stress levels, and a composite restoration with optimized properties reduced the stress concentrations at critical points of the models. Additionally, when more enamel was preserved, the enamel-restoration interface was sturdier against the mechanical loads.
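
The parametric sweep described above lends itself to a short illustration. The sketch below mirrors the study's design space (three occlusal depths, a set of internal cavity angles, composite moduli of 2-22 GPa in 4 GPa steps); the specific angle set, function names, and the solver stub are hypothetical, since the abstract does not name them:

```python
import itertools

# Illustrative parameter grid mirroring the study's design space; the exact
# angle set is an assumption (the abstract reports fifteen geometries).
depths_mm = [1.5, 2.0, 2.5]
angles_deg = [90, 95, 100, 105, 110]
composite_moduli_gpa = range(2, 23, 4)     # 2-22 GPa in 4 GPa increments

DENTINE_E = 18.6                           # GPa, from the study
ENAMEL_E_ROD, ENAMEL_E_PERP = 72.0, 63.0   # GPa, along/perpendicular to rods

def run_simulation(depth_mm, angle_deg, e_composite_gpa):
    """Placeholder for the FE solver call that the study's Python script
    automated; should return the peak maximum principal stress at the
    enamel-restoration interface."""
    return 0.0  # replace with the actual solver invocation

results = {}
for depth, angle, e_comp in itertools.product(depths_mm, angles_deg,
                                              composite_moduli_gpa):
    results[(depth, angle, e_comp)] = run_simulation(depth, angle, e_comp)

# The optimal design minimizes the peak interface stress:
best = min(results, key=results.get)
print("depth, angle, composite modulus:", best)
```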

Keywords: dental composite restoration, cavity geometry, finite element approach, maximum principal stress

Procedia PDF Downloads 102
845 Analyzing Concrete Structures by Using Laser Induced Breakdown Spectroscopy

Authors: Nina Sankat, Gerd Wilsch, Cassian Gottlieb, Steven Millar, Tobias Guenther

Abstract:

Laser-Induced Breakdown Spectroscopy (LIBS) is a combination of laser ablation and optical emission spectroscopy which, in principle, can simultaneously analyze all elements of the periodic table. Materials can be analyzed in terms of chemical composition in a two-dimensional, time-efficient, and minimally destructive manner. These advantages predestine LIBS as a monitoring technique in the field of civil engineering. The decreasing service life of concrete infrastructure is a continuously growing problem. A variety of intruding, harmful substances can damage the reinforcement or the concrete itself. To ensure a sufficient service life, regular monitoring of the structure is necessary. LIBS offers many applications for the successful examination of the condition of concrete structures, among them the 2D evaluation of chlorine, sodium, and sulfur concentrations, the identification of carbonation depths, and the representation of the heterogeneity of concrete. LIBS obtains this information by focusing a pulsed laser emitting short pulses with energies of a few mJ onto the surface of the analyzed specimen; only optical access is needed. Because of the high power density (a few GW/cm²), a minimal amount of material is vaporized and transformed into a plasma. This plasma emits light depending on the chemical composition of the vaporized material, so analyzing the emitted light yields information for every measurement point. The chemical composition of the scanned area is visualized in a 2D map with spatial resolutions down to 0.1 mm x 0.1 mm. These 2D maps can be converted into classic depth profiles, as typically seen for chloride concentration results provided by chemical analysis such as potentiometric titration. However, the 2D visualization offers many advantages, like illustrating chlorine-carrying cracks, directly imaging the carbonation depth, and in general allowing the separation of the aggregates from the cement paste. By calibrating the LIBS system, not only qualitative but also quantitative results can be obtained. These quantitative results can also be referenced to the cement paste alone, excluding the aggregates. An additional advantage of LIBS is its mobility. With the mobile system located at BAM, onsite measurements are feasible. The mobile LIBS system has already been used to obtain chloride, sodium, and sulfur concentrations onsite at parking decks, bridges, and sewage treatment plants, even under hard conditions like ongoing construction work or rough weather. All these prospects make LIBS a promising method for securing the integrity of infrastructure in a sustainable manner.
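
As a rough illustration of how a calibrated 2D LIBS map can be collapsed into a classic depth profile, the sketch below averages each depth row over the lateral direction, masking assumed aggregate pixels so the profile refers to the cement paste alone; the map data, resolution, and masking criterion are placeholders, not the BAM system's actual processing:

```python
import numpy as np

# Hypothetical calibrated 2D chlorine map: rows = depth from the exposed
# surface, columns = lateral position, 0.1 mm x 0.1 mm spot spacing.
rng = np.random.default_rng(0)
cl_map = rng.random((120, 80))          # stand-in for wt.-% values

PIXEL_MM = 0.1                          # spatial resolution of the scan

# Assumed criterion separating aggregate pixels from cement paste;
# masking them bases the profile on the paste alone.
paste_only = np.where(cl_map < 0.05, np.nan, cl_map)

# Classic depth profile: average each depth row over the lateral direction.
depth_profile = np.nanmean(paste_only, axis=1)
depths_mm = np.arange(cl_map.shape[0]) * PIXEL_MM

for d, c in list(zip(depths_mm, depth_profile))[:5]:
    print(f"depth {d:4.1f} mm: mean Cl signal {c:.3f}")
```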

Keywords: concrete, damage assessment, harmful substances, LIBS

Procedia PDF Downloads 176
844 Catalytic Ammonia Decomposition: Cobalt-Molybdenum Molar Ratio Effect on Hydrogen Production

Authors: Elvis Medina, Alejandro Karelovic, Romel Jiménez

Abstract:

Catalytic ammonia decomposition represents an attractive alternative due to its high H₂ content (17.8% w/w) and a product stream free of COₓ, among other advantages; however, challenges need to be addressed for its consolidation as an H₂ chemical storage technology, especially those focused on the synthesis of efficient bimetallic catalytic systems as an alternative to ruthenium, the most active catalyst reported, given its price and scarcity. In this sense, from the perspective of rational catalyst design, in which the main catalytic activity descriptor is adjusted, a screening of supported catalysts with different cobalt-molybdenum compositions is presented to evaluate their effect on the catalytic decomposition rate of ammonia, followed by a kinetic study on the supported monometallic Co and Mo catalysts as well as on the most active bimetallic CoMo catalyst. The synthesis of catalysts supported on γ-alumina was carried out using the Charge Enhanced Dry Impregnation (CEDI) method, all with a 5% w/w metal loading. Seeking to maintain uniform dispersion, the catalysts were oxidized and activated (in-situ activation) using flows of anhydrous air and hydrogen, respectively, under the same conditions: 40 ml min⁻¹ and 5 °C min⁻¹ from room temperature to 600 °C. Catalytic tests were carried out in a fixed-bed reactor, confirming the absence of transport limitations as well as an approach to equilibrium (< 1 x 10⁻⁴). The reaction rate on all catalysts was measured between 400 and 500 °C at 53.09 kPa NH₃. The synergy reported theoretically (DFT) for bimetallic catalysts was confirmed experimentally: the catalyst composed of 75 mol% cobalt proved to be the most active in the experiments, followed by the monometallic cobalt and molybdenum catalysts, in this order of activity, as referred to in the literature. A kinetic study was performed at 10.13-101.32 kPa NH₃ and at four equidistant temperatures between 437 and 475 °C; the data were fitted to an LHHW-type model, which considered the desorption of nitrogen atoms from the active phase surface as the rate-determining step (RDS). The regression analyses were carried out under an integral regime, using a minimization algorithm based on SLSQP. The physical meaning of the parameters fitted in the kinetic model, such as the RDS rate constant (k₅) and the lumped adsorption constant of the quasi-equilibrated steps (α), was confirmed through their Arrhenius- and Van't Hoff-type behavior (R² > 0.98), respectively. From an energetic perspective, the activation energies for cobalt, cobalt-molybdenum, and molybdenum were 115.2, 106.8, and 177.5 kJ mol⁻¹, respectively. With this evidence, and considering the volcano shape described by the ammonia decomposition rate as a function of the metal composition ratio, the synergistic behavior of the system is clearly observed. However, since characterizations by XRD and TEM were inconclusive, the formation of intermetallic compounds should still be verified using HRTEM-EDS. From this point onwards, our objective is to incorporate into the kinetic expressions parameters that account for both compositional and structural elements and to explore how these can maximize H₂ production.
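
A minimal sketch of the kind of SLSQP-based regression described above, assuming an LHHW form in which the quasi-equilibrated steps are lumped into α and recombinative nitrogen desorption is the RDS, giving r = k₅·(αP/(1+αP))²; the rate data are placeholders, and a differential form is fitted here for brevity, whereas the study regressed under an integral regime:

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder data: NH3 partial pressures (kPa) and rates at one temperature.
P = np.array([10.13, 30.0, 53.09, 75.0, 101.32])
r_obs = np.array([0.8, 1.6, 2.1, 2.4, 2.6])        # illustrative values only

def rate(params, P):
    k5, alpha = params
    theta_n = (alpha * P) / (1.0 + alpha * P)      # quasi-equilibrated steps lumped in alpha
    return k5 * theta_n**2                         # 2 N* -> N2 + 2* as the RDS

def sse(params):
    return np.sum((r_obs - rate(params, P))**2)

res = minimize(sse, x0=[3.0, 0.05], method="SLSQP",
               bounds=[(1e-6, None), (1e-6, None)])
k5_fit, alpha_fit = res.x
print(f"k5 = {k5_fit:.3g}, alpha = {alpha_fit:.3g} kPa^-1")
```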

Keywords: CEDI, hydrogen carrier, LHHW, RDS

Procedia PDF Downloads 61
843 The Effect of the Performance Evaluation System on Administrative Productivity, with a Case Study

Authors: Ertuğrul Ferhat Yilmaz, Ali Riza Perçin

Abstract:

In business enterprises that apply modern management principles, the most important issues are increasing worker performance and maximizing income. Through the twentieth century, the rapid development of the data processing and communication sectors, together with free-trade policies and the rise of multinational enterprises, erased economic borders and turned local competition into global competition. Under these competitive conditions, enterprises have to operate actively and productively in order to survive. The employees of an enterprise constitute its most important factor of production. Therefore, enterprises, recognizing the importance of the human factor for increasing profit, have used the performance evaluation system to foster the success and development of their employees. Performance evaluation aims to increase workforce productivity by deploying employees in an active way. Furthermore, this system supports the wage policies implemented in the enterprise, the determination of strategic plans over the short and long terms, promotion decisions, the identification of employees' educational needs, and decisions such as dismissal and job rotation. It requires a great deal of effort to keep pace with change in the working realm and to keep ourselves up-to-date. Getting quality from people and having an effect in the workplace depend largely on the knowledge and competence of managers and prospective managers. Therefore, managers need to use performance evaluation systems in order to base their managerial decisions on sound data. This study aims at finding out whether organizations use performance evaluation systems effectively, how much importance is placed on this issue, and how much the results of the evaluations affect employees. Whether organizations gain a competitive advantage and can continue their activities depends to a large extent on how effectively and efficiently they use their employees. Therefore, it is of vital importance to evaluate employees' performance and to improve it according to the results of that evaluation. The performance evaluation system, which evaluates employees according to criteria related to the organization, has become one of the most important topics for management. By means of the important ends mentioned above, the performance evaluation system appears to be a tool that can be used to improve the efficiency and effectiveness of an organization. Because of its contribution to organizational success, considering performance evaluation on the axis of efficiency shows the importance of this study from a different angle. In this study, we explain the performance evaluation system, efficiency, and the relation between the two concepts. We also analyze the results of questionnaires administered to textile workers in the city of Edirne. We received positive answers to the questions about the effects of performance evaluation on efficiency. After factor analysis, efficiency and motivation, which were identified as factors of the performance evaluation system, had the largest variance (19.703%) in our sample. Thus, this study shows that objective performance evaluation increases the efficiency and motivation of employees.

Keywords: performance, performance evaluation system, productivity, Edirne region

Procedia PDF Downloads 305
842 Design Flood Estimation in Satluj Basin-Challenges for Sunni Dam Hydro Electric Project, Himachal Pradesh-India

Authors: Navneet Kalia, Lalit Mohan Verma, Vinay Guleria

Abstract:

Introduction: Design flood studies are essential for the effective planning and functioning of water resource projects. Design flood estimation for the Sunni Dam Hydro Electric Project, located on the river Satluj in the State of Himachal Pradesh, India, was a big challenge given that the river flows through the Himalayan region from Tibet to India and has a large catchment area of varying topography, climate, and vegetation. No discharge data were available for the part of the river in Tibet, whereas for India they were available only at Khab, Rampur, and Luhri. Estimation of the design flood using standard methods was therefore not possible. This challenge was met using two different approaches for the upper (snow-fed) and lower (rain-fed) catchments: a flood frequency approach and a hydro-meteorological approach. i) For the catchment up to the Khab gauging site (sub-catchment C1), the flood frequency approach was used. Around 90% of the catchment area (46,300 km²) up to Khab is snow-fed and lies above 4,200 m. Given the predominantly snow-fed area, the 1-in-10,000-year return period flood estimated by flood frequency analysis at Khab was considered the Probable Maximum Flood (PMF). The flood peaks were taken from daily observed discharges at Khab, increased by 10% to make them instantaneous. The design flood of 4,184 cumec thus obtained was considered the PMF at Khab. ii) For the catchment between Khab and the Sunni Dam (sub-catchment C2), the hydro-meteorological approach was used. This method is based upon the catchment response to the rainfall pattern (Probable Maximum Precipitation, PMP) observed in a particular catchment area. The design flood computation mainly involves the estimation of a design storm hyetograph and the derivation of the catchment response function. A unit hydrograph is assumed to represent the response of the entire catchment area to a unit rainfall. The main advantage of the hydro-meteorological approach is that it gives a complete flood hydrograph, which allows a realistic determination of its moderation effect while passing through a reservoir or a river reach. These studies were carried out to derive the PMF for the catchment area between Khab and the Sunni Dam site using 1-day and 2-day PMP values of 232 and 416 cm, respectively. The PMF so obtained was 12,920.60 cumec. Final result: As the catchment area up to the Sunni Dam was divided into two sub-catchments, the flood hydrograph for catchment C1 was routed through the connecting channel reach (river Satluj) using the Muskingum method, and the design flood was computed by adding the routed flood ordinates to the flood ordinates of catchment C2. A total design flood (i.e., 2-day PMF) with a peak of 15,473 cumec was obtained. Conclusion: Even though several factors are relevant when deciding the method to be used for design flood estimation, data availability and the purpose of the study are the most important. Since we generally cannot wait for hydrological data of adequate quality and quantity to become available, flood estimation has to be done using whatever data are available. Depending on the type of data available for a particular catchment, the appropriate method is to be selected.
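
Since the Muskingum method is central to the final routing step, a compact sketch may help; the K, X, and dt values and the inflow ordinates below are illustrative, not the calibrated values for the Khab-Sunni reach:

```python
import numpy as np

def muskingum_route(inflow, K, X, dt):
    """Route a flood hydrograph through a channel reach (Muskingum method).

    inflow : inflow ordinates (cumec); K : storage constant (h);
    X : weighting factor (0-0.5); dt : time step (h).
    """
    denom = K - K * X + 0.5 * dt
    c0 = (-K * X + 0.5 * dt) / denom
    c1 = (K * X + 0.5 * dt) / denom
    c2 = (K - K * X - 0.5 * dt) / denom
    assert abs(c0 + c1 + c2 - 1.0) < 1e-9      # coefficients must sum to 1

    outflow = np.empty_like(inflow, dtype=float)
    outflow[0] = inflow[0]                     # assume initial steady state
    for t in range(1, len(inflow)):
        outflow[t] = c0 * inflow[t] + c1 * inflow[t - 1] + c2 * outflow[t - 1]
    return outflow

# Illustrative use: route the upper-catchment (C1) hydrograph, then add the
# C2 ordinates to obtain the total design flood at the dam site.
upper = np.array([500, 1500, 3200, 4184, 3600, 2500, 1400, 800], dtype=float)
routed = muskingum_route(upper, K=6.0, X=0.2, dt=3.0)
print(routed.round(1))
```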

Keywords: design flood, design storm, flood frequency, PMF, PMP, unit hydrograph

Procedia PDF Downloads 328
841 Suicide Wrongful Death: Standard of Care Problems Involving the Inaccurate Discernment of Lethal Risk When Focusing on the Elicitation of Suicide Ideation

Authors: Bill D. Geis

Abstract:

Suicide wrongful death forensic cases are the fastest-rising tort in mental health law. It is estimated that suicide-related cases have accounted for 15% of U.S. malpractice claims since 2006. Most suicide-related personal injury claims fall into the legal category of "wrongful death." Though mental health experts may be called on to address a range of forensic questions in wrongful death cases, the central consultation that most experts provide concerns the negligence element, specifically whether the clinician met the clinical standard of care in assessing, treating, and managing the deceased person's mental health care. Standards of care, which vary from U.S. state to state, are broad and address what a reasonable clinician might do in a similar circumstance. This fact leaves the suicide standard of care, in each case, to forensic experts, who must put forth a reasoned estimate of what the standard of care should have been in the specific case under litigation. Because the general state guidelines for the standard of care are broad, forensic experts are readily retained to provide scientific and clinical opinions about whether a clinician met the standard of care in their suicide assessment, treatment, and management of the case. In the past and in much of current practice, the assessment of suicide has centered on the elicitation of verbalized suicide ideation. Research in recent years, however, has indicated that the majority of persons who end their lives do not say they are suicidal at their last medical or psychiatric contact. Near-term risk assessment that goes beyond verbalized suicide ideation is needed. Our previous research employed structural equation modeling to predict lethal suicide risk: eight negative thought patterns (feeling like a burden on others, hopelessness, self-hatred, etc.), mediated by nine transdiagnostic clinical factors (mental torment, insomnia, substance abuse, PTSD intrusions, etc.), were combined to predict acute lethal suicide risk. This structural equation model, the Lethal Suicide Risk Pattern (LSRP), Acute model, had excellent goodness of fit [χ²(df) = 94.25(47)***, CFI = .98, RMSEA = .05, 90% CI = .03-.06, p(RMSEA = .05) = .63, AIC = 340.25, ***p < .001]. A further SEM analysis was completed for this paper, adding a measure of acute suicide ideation to the previous model. Acceptable prediction model fit was no longer achieved [χ²/df = 3.571, CFI = .953, RMSEA = .075, 90% CI = .065-.085, AIC = 529.550]. This finding suggests that, in this additional study, immediate verbalized suicide ideation was unhelpful in the assessment of lethal risk. The LSRP and other dynamic, near-term risk models (such as the Acute Suicidal Affective Disturbance model and the Suicide Crisis Syndrome model), which go beyond elicited suicide ideation, need to be incorporated into current clinical suicide assessment training. Without this training, the standard of care for suicide assessment is out of sync with current research, an emerging dilemma for the forensic evaluation of suicide wrongful death cases.
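
For readers unfamiliar with how such fit indices are produced, the sketch below shows a schematic SEM fit using Python's semopy package (the study's actual software is not named); the model string and variable names are stand-ins for the LSRP's constructs, and the data are synthetic:

```python
import numpy as np
import pandas as pd
import semopy

# Schematic model description (lavaan-style syntax); variable names are
# stand-ins for the study's thought patterns and clinical factors.
desc = """
torment =~ burden + hopelessness + self_hatred
torment ~ insomnia + ptsd_intrusions
lethal_risk ~ torment
"""

# Synthetic placeholder data so the sketch runs end to end.
rng = np.random.default_rng(3)
n = 300
latent = rng.normal(size=n)
df = pd.DataFrame({
    "burden": latent + rng.normal(scale=0.5, size=n),
    "hopelessness": latent + rng.normal(scale=0.5, size=n),
    "self_hatred": latent + rng.normal(scale=0.5, size=n),
    "insomnia": rng.normal(size=n),
    "ptsd_intrusions": rng.normal(size=n),
    "lethal_risk": latent + rng.normal(scale=0.5, size=n),
})

model = semopy.Model(desc)
model.fit(df)
stats = semopy.calc_stats(model)      # chi-square, CFI, RMSEA, AIC, etc.
print(stats.T)                        # transposed for a readable index list
```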

Keywords: forensic evaluation, standard of care, suicide, suicide assessment, wrongful death

Procedia PDF Downloads 69
840 Navigate the Labyrinth of Leadership: Leaders’ Experiences in Saudi Higher Education

Authors: Laila Albughayl

Abstract:

The purpose of this qualitative case study was to explore Saudi women's leadership journeys as they navigate the labyrinth of leadership in higher education, in order to gain a better understanding of how these leaders overcame challenges and accessed support as they progressed through the labyrinth to top positions in Saudi higher education. The significance of this research derives from the premise that leaders need to acquire essential leadership competencies such as knowledge, skills, and practices to lead effectively through economic transformation, growing globalism, and rapidly developing technology in an increasingly diverse world. In addition, understanding Saudi women's challenges in the labyrinth will encourage policymakers to improve the situation under which these women work. The metaphor "labyrinth," used by Eagly and Carli (2007) to encapsulate the winding paths, dead ends, and maze-like pathways, full of challenges and supports, that women traverse to access and maintain leadership positions, was adopted. In this study, the labyrinth served as the conceptual framework for exploring women leaders' challenges and opportunities in Saudi higher education, and a proposed model for efficient navigation of the labyrinth of leadership was used. This model focused on knowledge, skills, and behaviours (KSB) as the analytical framework for examining responses to the research questions. The research was conducted using an interpretivist qualitative approach, with a case study as the methodology. Semi-structured interviews were the main data collection method, and purposive sampling was used to select ten Saudi leaders in three public universities. In coding, Braun and Clarke's six-step thematic analysis framework was used to identify, analyze, and report themes within the data, with NVivo software as a tool to assist with managing and organizing the data. The resultant findings showed that the challenges identified by participants in navigating the labyrinth of leadership in Saudi higher education replicated some of those identified in the literature. The findings also revealed that organizational barriers in Saudi higher education were the top hindrance to women's advancement in the labyrinth of leadership, followed by societal barriers, and that women's paths in the labyrinth of leadership in higher education remain convoluted and tedious compared to those of their male counterparts. In addition, the findings revealed that Saudi women leaders use significant strategies to access leadership posts and navigate the labyrinth effectively, something not indicated in the literature, and that certain keys assisted them in this navigation. For example, the findings indicated that spirituality (religion) was a powerful key that enabled Saudi women leaders to pursue and persist in their leadership paths. Based on participants' experiences, a compass for effective navigation of the labyrinth of leadership in higher education was created for current and aspiring Saudi women leaders to follow. Finally, the findings have several significant implications for practice, policy, theory, and future research.

Keywords: women, leadership, labyrinth, higher education

Procedia PDF Downloads 84
839 A Biophysical Study of the Dynamic Properties of Glucagon Granules in α Cells by Imaging-Derived Mean Square Displacement and Single Particle Tracking Approaches

Authors: Samuele Ghignoli, Valentina de Lorenzi, Gianmarco Ferri, Stefano Luin, Francesco Cardarelli

Abstract:

Insulin and glucagon are the two essential hormones for maintaining proper blood glucose homeostasis, which is disrupted in diabetes. Constantly growing research interest has focused on the subcellular structures involved in hormone secretion, namely insulin- and glucagon-containing granules, and on the mechanisms regulating their behaviour. Yet, while several successful attempts to describe the dynamic properties of insulin granules have been reported, little is known about their counterparts in α cells, the glucagon-containing granules. To fill this gap, we used αTC1 clone 9 cells as a model of α cells and ZIGIR as a fluorescent zinc chelator for granule labelling. We started with spatiotemporal fluorescence correlation spectroscopy in the form of imaging-derived mean square displacement (iMSD) analysis. This afforded quantitative information on the average dynamic and structural properties of glucagon granules, with insulin granules as a benchmark. Interestingly, the sensitivity of iMSD to average granule size allowed us to confirm that glucagon granules are smaller than insulin ones (by ~1.4-fold, further validated by STORM imaging). To investigate possible heterogeneities in granule dynamics, we moved from correlation spectroscopy to single particle tracking (SPT). We developed a MATLAB script to localize and track single granules with high spatial resolution. This enabled us to classify the glucagon granules, based on their dynamic properties, as "blocked" (trajectories corresponding to immobile granules), "confined/diffusive" (trajectories corresponding to granules moving slowly within a defined region of the cell), or "drifted" (trajectories corresponding to fast-moving granules). In control cell-culture conditions, the results show the following average distribution: 32.9 ± 9.3% blocked, 59.6 ± 9.3% confined/diffusive, and 7.4 ± 3.2% drifted. This benchmarking provided a foundation for investigating selected experimental conditions of interest, such as the relationship between glucagon granules and the cytoskeleton. For instance, when Nocodazole (10 μM) was used for microtubule depolymerization, the percentage of drifted motion collapsed to 3.5 ± 1.7%, while immobile granules increased to 56.0 ± 10.7% (the remaining 40.4 ± 10.2% being confined/diffusive). This result confirms the clear link between glucagon-granule motion and cytoskeletal structures, a first step towards understanding the intracellular behaviour of this subcellular compartment. The information collected may now serve to support future investigations of glucagon granules in physiology and disease. Acknowledgment: This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 866127, project CAPTUR3D).
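
A minimal sketch of the trajectory classification logic, written here in Python rather than the study's MATLAB; the anomalous-exponent thresholds are illustrative, not the study's actual criteria:

```python
import numpy as np

def msd(track):
    """Time-averaged MSD of one 2D trajectory ((N, 2) array, uniform frames)."""
    lags = np.arange(1, len(track) // 4)       # short lags give a stabler fit
    out = np.empty(len(lags))
    for i, lag in enumerate(lags):
        d = track[lag:] - track[:-lag]
        out[i] = np.mean(np.sum(d**2, axis=1))
    return lags, out

def classify(track, dt, alpha_lo=0.5, alpha_hi=1.3):
    """Crude motion classifier via the anomalous exponent in MSD ~ t**alpha;
    the thresholds are illustrative placeholders."""
    lags, m = msd(track)
    alpha = np.polyfit(np.log(lags * dt), np.log(m), 1)[0]
    if alpha < alpha_lo:
        return "blocked"
    if alpha < alpha_hi:
        return "confined/diffusive"
    return "drifted"

# Synthetic directed trajectory (constant drift plus localization noise):
rng = np.random.default_rng(1)
t = np.arange(200)[:, None]
track = 0.05 * t * np.array([[1.0, 0.5]]) + rng.normal(0, 0.02, (200, 2))
print(classify(track, dt=0.1))                 # expected: drifted
```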

Keywords: glucagon granules, single particle tracking, correlation spectroscopy, ZIGIR

Procedia PDF Downloads 110
838 A Numerical Hybrid Finite Element Model for Lattice Structures Using 3D/Beam Elements

Authors: Ahmadali Tahmasebimoradi, Chetra Mang, Xavier Lorang

Abstract:

Thanks to the additive manufacturing process, lattice structures are replacing traditional structures in the aeronautical and automobile industries. In order to evaluate the mechanical response of lattice structures, one has to resort to numerical techniques. Ansys is a globally well-known and trusted commercial software package that allows us to model lattice structures and analyze their mechanical responses using either solid or beam elements; in this software, a script may be used to systematically generate lattice structures of any size. On the one hand, solid elements allow us to correctly model the contact between the substrates (the supports of the lattice structure) and the lattice structure, the local plasticity, and the junctions of the microbeams. However, their computational cost increases rapidly with the size of the lattice structure. On the other hand, although beam elements reduce the computational cost drastically, they do not correctly model the contact between the lattice structure and the substrates, nor the junctions of the microbeams. Also, the notion of local plasticity is no longer valid, and the deformed shape of the lattice structure does not correspond to that obtained with 3D solid elements. In this work, motivated by the pros and cons of the 3D and beam models, a numerical hybrid model is presented for lattice structures to reduce the computational cost of the simulations while avoiding the aforementioned drawbacks of beam elements. The approach consists of using solid elements for the junctions and beam elements for the microbeams connecting the corresponding junctions to each other. When the global response of the structure is linear, the results from the hybrid models are in good agreement with those from the 3D models for body-centered cubic lattice structures with z-struts (BCCZ) and without z-struts (BCC). However, the hybrid models have difficulty converging when the effects of large deformation and local plasticity are considerable in the BCCZ structures. Furthermore, the effect of the junction size on the results of the hybrid models is investigated. For BCCZ lattice structures, the results are not affected by the junction size. This is also valid for BCC lattice structures as long as the ratio of the junction size to the diameter of the microbeams is greater than 2. The hybrid model can also take geometric defects into account. As a demonstration, the point clouds of two lattice structures were parametrized in a platform called LATANA (LATtice ANAlysis) developed by IRT-SystemX. In this process, for each microbeam of the lattice structures, an ellipse is fitted to capture the effect of shape variation and roughness. Each ellipse is represented by three parameters: semi-major axis, semi-minor axis, and angle of rotation. Having the parameters of the ellipses, the lattice structures are constructed in SpaceClaim (Ansys) using the geometrical hybrid approach. The results show a negligible discrepancy between the hybrid and 3D models, while the computational cost of the hybrid model is lower.
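
A covariance-based (PCA-style) ellipse fit is one simple way to extract the three parameters named above; the sketch below is an assumption about the fitting step, as LATANA's actual algorithm is not described:

```python
import numpy as np

def fit_ellipse(points):
    """Covariance-based ellipse fit to a 2D cross-section point cloud.
    Returns (semi_major, semi_minor, rotation angle in radians)."""
    centered = points - points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))   # ascending order
    # sqrt(2)*sigma recovers the semi-axes exactly for points sampled
    # uniformly in the ellipse parameter.
    semi_minor, semi_major = np.sqrt(2.0 * eigvals)
    angle = np.arctan2(eigvecs[1, 1], eigvecs[0, 1]) % np.pi
    return semi_major, semi_minor, angle

# Illustrative use: noisy elliptical cross-section of one microbeam.
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 400)
a_true, b_true, rot = 0.6, 0.4, np.deg2rad(30)
x, y = a_true * np.cos(theta), b_true * np.sin(theta)
pts = np.column_stack([x * np.cos(rot) - y * np.sin(rot),
                       x * np.sin(rot) + y * np.cos(rot)])
pts += rng.normal(0, 0.01, pts.shape)          # roughness stand-in

a, b, ang = fit_ellipse(pts)
print(f"a = {a:.2f}, b = {b:.2f}, angle = {np.degrees(ang):.1f} deg")
```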

Keywords: additive manufacturing, Ansys, geometric defects, hybrid finite element model, lattice structure

Procedia PDF Downloads 114
837 Validating Chronic Kidney Disease-Specific Risk Factors for Cardiovascular Events Using National Data: A Retrospective Cohort Study of the Nationwide Inpatient Sample

Authors: Fidelis E. Uwumiro, Chimaobi O. Nwevo, Favour O. Osemwota, Victory O. Okpujie, Emeka S. Obi, Omamuyovbi F. Nwoagbe, Ejiroghene Tejere, Joycelyn Adjei-Mensah, Christopher N. Ekeh, Charles T. Ogbodo

Abstract:

Several risk factors associated with cardiovascular events have been identified as specific to chronic kidney disease (CKD). This study validates these CKD-specific risk factors using up-to-date national-level data, confirming the validity and generalizability of findings obtained from previous studies conducted on smaller patient populations. The study utilized the Nationwide Inpatient Sample database to identify adult hospitalizations for CKD from 2016 to 2020, employing validated ICD-10-CM/PCS codes. A comprehensive literature review was conducted to identify both traditional and CKD-specific risk factors associated with cardiovascular (CV) events. Risk factors and cardiovascular events were defined using a combination of ICD-10-CM/PCS codes and statistical commands. Only risk factors with specific ICD-10 codes and hospitalizations with complete data were included in the study. Cardiovascular events of interest included cardiac arrhythmias, sudden cardiac death, acute heart failure, and acute coronary syndromes. Univariate and multivariate regression models were employed to evaluate the association between CKD-specific risk factors and cardiovascular events while adjusting for the impact of traditional CV risk factors such as older age, hypertension, diabetes, hypercholesterolemia, inactivity, and smoking. A total of 690,375 hospitalizations for CKD were included in the analysis. The study population was predominantly male (375,564; 54.4%) and primarily received care at urban teaching hospitals (512,258; 74.2%). The mean age of the study population was 61 years (SD 0.1), and 86.7% (598,555) had a CCI of 3 or more. At least one traditional risk factor for CV events was present in 84.1% of all hospitalizations (580,605), while 65.4% (451,505) included at least one CKD-specific risk factor. The incidence of CV events was as follows: acute coronary syndromes (41,422; 6%), sudden cardiac death (13,807; 2%), heart failure (404,560; 58.6%), and cardiac arrhythmias (124,267; 18%); 91.7% (113,912) of all cardiac arrhythmias were atrial fibrillation. Factors with significantly increased odds of cardiovascular events on multivariate analysis included malnutrition (aOR: 1.09; 95% CI: 1.06–1.13; p<0.001), post-dialytic hypotension (aOR: 1.34; 95% CI: 1.26–1.42; p<0.001), thrombophilia (aOR: 1.46; 95% CI: 1.29–1.65; p<0.001), sleep disorder (aOR: 1.17; 95% CI: 1.09–1.25; p<0.001), and post-renal transplant immunosuppressive therapy (aOR: 1.39; 95% CI: 1.26–1.53; p<0.001). The study validated malnutrition, post-dialytic hypotension, thrombophilia, sleep disorders, and post-renal transplant immunosuppressive therapy, highlighting their association with increased risk for cardiovascular events in CKD patients. No significant association was observed between uremic syndrome, hyperhomocysteinemia, hyperuricemia, hypertriglyceridemia, leptin levels, carnitine deficiency, or anemia and the odds of experiencing cardiovascular events.
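
A schematic of how adjusted odds ratios of this kind are obtained, using logistic regression in statsmodels on synthetic stand-in data; the column names are hypothetical, and the sketch ignores the NIS survey weighting that a real analysis would require:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the analytic file: one row per hospitalization
# with binary indicators for the risk factors and the outcome.
rng = np.random.default_rng(4)
n = 5_000
cols = ["malnutrition", "postdialytic_hypotension", "thrombophilia",
        "sleep_disorder", "posttransplant_immunosuppression",
        "age_over_65", "hypertension", "diabetes",
        "hypercholesterolemia", "inactivity", "smoking"]
df = pd.DataFrame({c: rng.integers(0, 2, n) for c in cols})
logit_true = -1.0 + 0.3 * df["malnutrition"] + 0.5 * df["hypertension"]
df["cv_event"] = rng.random(n) < 1 / (1 + np.exp(-logit_true))

X = sm.add_constant(df[cols].astype(float))
fit = sm.Logit(df["cv_event"].astype(float), X).fit(disp=0)

# Adjusted odds ratios with 95% CIs: exponentiate coefficients and bounds.
ci = fit.conf_int()
aor = pd.DataFrame({"aOR": np.exp(fit.params),
                    "2.5%": np.exp(ci[0]), "97.5%": np.exp(ci[1])})
print(aor.drop("const").round(2))
```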

Keywords: cardiovascular events, cardiovascular risk factors in CKD, chronic kidney disease, nationwide inpatient sample

Procedia PDF Downloads 82
836 Screening of Osteoporosis in Aging Populations

Authors: Massimiliano Panella, Sara Bortoluzzi, Sophia Russotto, Daniele Nicolini, Carmela Rinaldi

Abstract:

Osteoporosis affects more than 200 million people worldwide. About 75% of osteoporosis cases are undiagnosed or diagnosed only when a bone fracture occurs. Since osteoporosis-related fractures are significant determinants of the burden of disease and of the health and social costs of aging populations, we believe that the early identification and treatment of high-risk patients should be a priority in today's healthcare systems. Screening for osteoporosis by dual-energy X-ray absorptiometry (DEXA) is not cost-effective for the general population; an alternative is pulse-echo ultrasound (PEUS) because of its lower costs. To this end, we developed an early detection program for osteoporosis with PEUS and evaluated its possible impact and sustainability. We conducted a cross-sectional study including 1,050 people in Italy. Subjects with >1 major or >2 minor risk factors for osteoporosis were invited to a PEUS bone mineral density (BMD) measurement at the proximal tibia. Based on BMD values, subjects were classified as healthy (BMD > 0.783 g/cm²) or pathological, the latter including subjects with suspected osteopenia (0.719 g/cm² < BMD ≤ 0.783 g/cm²) or osteoporosis (BMD ≤ 0.719 g/cm²). The response rate was 60.4% (634/1,050). According to the risk assessment, a PEUS scan was recommended to 436 people, of whom 300 (mean age 45.2 years, 81% women) accepted to participate. We identified 240 (80%) healthy and 60 (20%) pathological subjects (47 osteopenic and 13 osteoporotic). We observed a significant association between high-risk people and reduced bone density (p = 0.043), with increased risks for female gender, older age, and menopause (p < 0.01). The yearly cost of the screening program was 8,242 euros. With actual Italian fracture incidence rates in osteoporotic patients, we can reasonably expect at least 6 fractures to occur in our sample within 20 years. Considering that the mean cost per fracture in Italy is today 16,785 euros, we can estimate a theoretical cost of 100,710 euros. According to the literature, we can assume that the early treatment of osteoporosis could avoid 24,170 euros of such costs. If we add the actual yearly cost of the treatments to the cost of our program and compare this final amount of 11,682 euros to the avoidable costs of fractures (24,170 euros), we can measure a possible positive benefit/cost ratio of 2.07. As a major outcome, our study allowed us to identify early 60 people with significant bone loss who were not aware of their condition. This diagnostic anticipation constitutes an important element of value for the project, both for the patients, given the preventable negative outcomes caused by fractures, and for society in general, because of the related avoidable costs. Therefore, based on our findings, we believe that PEUS-based screening could be a cost-effective approach to the early identification of osteoporosis. However, our study has some major limitations; in particular, the economic analysis is based on theoretical scenarios, so specific studies are needed for a better estimation of the possible benefits and costs of our program.
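
A minimal sketch of the BMD classification thresholds and the benefit/cost arithmetic reported above (thresholds and figures taken from the study):

```python
def classify_bmd(bmd_g_cm2: float) -> str:
    """Classify a PEUS proximal-tibia BMD value (g/cm^2), per the study."""
    if bmd_g_cm2 > 0.783:
        return "healthy"
    if bmd_g_cm2 > 0.719:
        return "suspected osteopenia"
    return "suspected osteoporosis"

print(classify_bmd(0.80), classify_bmd(0.75), classify_bmd(0.70))

# Benefit/cost arithmetic as reported in the study:
expected_fractures = 6                  # over 20 years in this sample
cost_per_fracture = 16_785              # euros
print(expected_fractures * cost_per_fracture)              # 100710 euros
avoidable_costs = 24_170                # euros avoided by early treatment
program_plus_treatment = 11_682         # euros, screening plus treatment
print(round(avoidable_costs / program_plus_treatment, 2))  # ratio: 2.07
```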

Keywords: osteoporosis, prevention, public health, screening

Procedia PDF Downloads 123