Search results for: clearance rates
Numerical Optimization of Cooling System Parameters for Multilayer Lithium Ion Cell and Battery Packs
Authors: Mohammad Alipour, Ekin Esen, Riza Kizilel
Abstract:
Lithium-ion batteries are a commonly used type of rechargeable battery because of their high specific energy and specific power. With the growing popularity of electric vehicles and hybrid electric vehicles, increasing attention has been paid to rechargeable lithium-ion batteries. However, safety problems, high cost, and poor performance at low ambient temperatures and high current rates are major obstacles to the commercial utilization of these batteries. With proper thermal management, most of these limitations can be mitigated. The temperature profile of Li-ion cells plays a significant role in the performance, safety, and cycle life of the battery, which is why even a small temperature gradient can lead to a large loss in the performance of battery packs. In recent years, numerous researchers have been working on new techniques to apply better thermal management to Li-ion batteries. Keeping the battery cells within an optimum temperature range is the main objective of battery thermal management. Commercial Li-ion cells are composed of several electrochemical layers, each consisting of a negative current collector, negative electrode, separator, positive electrode, and positive current collector. However, many researchers have adopted a single-layer cell model to save computing time. Their assumption is that the thermal conductivity of the layer elements is so high, and the heat transfer rate so fast, that instead of several thin layers the cell can be modeled as one thick layer. In previous work, we showed that the single-layer model is insufficient to simulate the thermal behavior and temperature non-uniformity of high-capacity Li-ion cells. We also studied the effects of the number of layers on the thermal behavior of Li-ion batteries. In this work, the thermal and electrochemical behavior of a LiFePO₄ battery is first modeled with a 3D multilayer cell. The model is validated with experimental measurements at different current rates and ambient temperatures. The real-time heat generation rate is also studied at different discharge rates. Results showed a non-uniform temperature distribution along the cell, which requires a thermal management system. Therefore, aluminum plates with a mini-channel system were designed to control the temperature uniformity. Design parameters such as channel number and width, inlet flow rate, and cooling fluids are optimized. As cooling fluids, water and air are compared. Pressure drop and velocity profiles inside the channels are illustrated. Both surface and internal temperature profiles of single cells and battery packs are investigated with and without cooling systems. Our results show that using optimized mini-channel cooling plates effectively controls the temperature rise and uniformity of single cells and battery packs. By increasing the inlet flow rate, a cooling efficiency of up to 60% could be reached.
Keywords: lithium ion battery, 3D multilayer model, mini-channel cooling plates, thermal management
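Editor's note: the single-layer simplification discussed above amounts to lumping the layered electrochemical stack into one homogeneous slab. A minimal sketch of how an effective through-plane conductivity for such a lumped model can be derived from series thermal resistances is given below; layer thicknesses and conductivities are illustrative placeholders, not values from the study.

# Effective through-plane thermal conductivity of a layered Li-ion cell stack,
# treating the layers as thermal resistances in series (the lumping step that
# underlies single-layer cell models).
layers = [
    # (name, thickness in m, thermal conductivity in W/(m*K)) -- placeholder values
    ("negative current collector (Cu)", 10e-6, 398.0),
    ("negative electrode", 80e-6, 1.5),
    ("separator", 25e-6, 0.33),
    ("positive electrode", 70e-6, 1.5),
    ("positive current collector (Al)", 20e-6, 238.0),
]

total_thickness = sum(t for _, t, _ in layers)
total_resistance = sum(t / k for _, t, k in layers)  # per unit area, K*m^2/W
k_effective = total_thickness / total_resistance

print(f"stack thickness: {total_thickness * 1e6:.0f} um")
print(f"effective through-plane conductivity: {k_effective:.2f} W/(m*K)")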
Characterising Rates of Renal Dysfunction and Sarcoidosis in Patients with Elevated Serum Angiotensin-Converting Enzyme
Authors: Fergal Fouhy, Alan O’Keeffe, Sean Costelloe, Michael Clarkson
Abstract:
Background: Sarcoidosis is a systemic, non-infectious disease of unknown aetiology, characterized by non-caseating granulomatous inflammation. The lung is most often affected (90%); however, the condition can affect all organs, including the kidneys. There is limited evidence describing the incidence and characteristics of renal involvement in sarcoidosis. Serum angiotensin-converting enzyme (ACE) is a recognised biomarker used in the diagnosis and monitoring of sarcoidosis. Methods: A single-centre, retrospective cohort study of patients presenting to Cork University Hospital (CUH) in 2015 with first-time elevations of serum ACE was performed. This included an initial database review of ACE and other biochemistry results, followed by a medical chart review to confirm the presence or absence of sarcoidosis and its management. Acute kidney injury (AKI) was staged using the AKIN criteria, and chronic kidney disease (CKD) was staged using the KDIGO criteria. Follow-up was assessed over five years, tracking serum creatinine, serum calcium, and estimated glomerular filtration rates (eGFR). Results: 119 patients were identified as having a first raised serum ACE in 2015: seventy-nine male and forty female. The mean age of the patients was 47 years. 11% had CKD at baseline. 18% developed an AKI at least once within the next five years. A further 6% developed CKD during this time period. 13% developed hypercalcemia. Patients within the lowest quartile of serum ACE had an incidence of sarcoidosis of 5%. None of this group developed hypercalcemia, 23% developed AKI, and 7% developed CKD. Of the patients with a serum ACE in the highest quartile, almost all had documented diagnoses of sarcoidosis, with an incidence of 96%. 3% of this group developed hypercalcemia, 13% developed AKI, and 3% developed CKD. Conclusions: There was an unexpectedly high incidence of AKI in patients who had a raised serum ACE. Not all patients with a raised serum ACE had a confirmed diagnosis of sarcoidosis. There does not appear to be a relationship between increased serum ACE levels and increased incidence of hypercalcemia, AKI, and CKD. Ideally, all patients should have biopsy-proven sarcoidosis. This is an initial study that should be replicated with larger numbers and across multiple centres.
Keywords: sarcoidosis, acute kidney injury, chronic kidney disease, hypercalcemia
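Editor's note: CKD severity in studies of this kind is commonly categorised from eGFR using the KDIGO G stages. The sketch below is an illustrative lookup only (thresholds in mL/min/1.73 m²); it omits the albuminuria categories and the chronicity requirement that a full KDIGO classification also needs.

# Illustrative KDIGO GFR-category (G-stage) lookup from eGFR (mL/min/1.73 m^2).
# Note: a CKD diagnosis also requires chronicity (>3 months) and/or markers of
# kidney damage; this sketch only maps an eGFR value to a G category.
def kdigo_g_stage(egfr: float) -> str:
    if egfr >= 90:
        return "G1"
    if egfr >= 60:
        return "G2"
    if egfr >= 45:
        return "G3a"
    if egfr >= 30:
        return "G3b"
    if egfr >= 15:
        return "G4"
    return "G5"

for egfr in (95, 72, 50, 38, 22, 9):
    print(egfr, kdigo_g_stage(egfr))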
Analysis of Citation Rate and Data Reuse for Openly Accessible Biodiversity Datasets on Global Biodiversity Information Facility
Authors: Nushrat Khan, Mike Thelwall, Kayvan Kousha
Abstract:
Making research data openly accessible has been mandated by most funders over the last five years, as it promotes reproducibility in science and reduces duplication of effort to collect the same data. There is evidence that articles that publicly share research data have higher citation rates in the biological and social sciences. However, how and whether shared data is being reused is not always evident, as such information is not easily accessible from the majority of research data repositories. This study aims to understand the practice of data citation and how data is being reused over the years, focusing on biodiversity, since research data is frequently reused in this field. Metadata for 38,878 datasets, including citation counts, were collected through the Global Biodiversity Information Facility (GBIF) API for this purpose. GBIF was used as a data source since it provides citation counts for datasets, a feature not commonly available in most repositories. Analysis of dataset types, citation counts, and the creation and update times of datasets suggests that citation rate varies for different types of datasets: occurrence datasets, which have more granular information, have higher citation rates than checklist and metadata-only datasets. Another finding is that biodiversity datasets on GBIF are frequently updated, which is unique to this field. The majority of datasets from the earliest year, 2007, were updated after 11 years, with no dataset left unchanged since creation. For each year between 2007 and 2017, we compared the correlations between update time and citation rate for four different types of datasets. While recent datasets do not show any correlation, datasets that are 3 to 4 years old show a weak correlation, with more recently updated datasets receiving higher citations. The results suggest that it takes several years to accumulate citations for research datasets. However, this investigation found that, when the same datasets are searched on Google Scholar or Scopus, the number of citations is often not the same as on GBIF. Hence, a future aim is to further explore the citation count system adopted by GBIF to evaluate its reliability and whether it can be applied to other fields of study as well.
Keywords: data citation, data reuse, research data sharing, webometrics
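Editor's note: a minimal sketch of the kind of correlation check described above, relating how recently a dataset was updated to its citation count. The records below are placeholders rather than actual GBIF metadata, and the study's per-year, per-type analysis would run this over each cohort separately.

# Sketch: correlate days since a dataset's last update with its citation count.
from datetime import date
from scipy.stats import spearmanr

records = [
    # (created, last_updated, citation_count) -- illustrative values only
    (date(2013, 5, 1), date(2021, 3, 10), 42),
    (date(2014, 2, 11), date(2019, 7, 2), 15),
    (date(2013, 9, 30), date(2022, 1, 5), 58),
    (date(2014, 6, 18), date(2018, 4, 20), 9),
    (date(2013, 1, 7), date(2017, 11, 3), 4),
]

days_since_update = [(date(2023, 1, 1) - updated).days for _, updated, _ in records]
citations = [count for _, _, count in records]

rho, p_value = spearmanr(days_since_update, citations)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")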
Optimizing Foaming Agents by Air Compression to Unload a Liquid Loaded Gas Well
Authors: Mhenga Agneta, Li Zhaomin, Zhang Chao
Abstract:
When the gas velocity is high enough, gas can entrain liquid and carry it to the surface; but as time passes, the velocity drops to a critical point where fluids start to hold up in the tubing and cause liquid loading, which prevents gas production and may lead to the death of the well. Foam injection is widely used as one of the methods to unload liquid. Since wells have different characteristics, it is not guaranteed that foam can be applied to all of them with successful results. This research presents a technology to optimize the efficiency of foam in unloading liquid by air compression. Two methods are used to explain the optimization: (i) mathematical formulas are used to show how density and critical velocity can be reduced when air is compressed into the foaming agents, and how the relationship between flow rate and pressure increase boosts the bottom-hole pressure and increases the velocity available to lift liquid to the surface; (ii) experiments test foam carryover capacity and stability as a function of time and surfactant concentration, whereby three surfactants, the anionic sodium dodecyl sulfate (SDS), the nonionic Triton 100, and the cationic hexadecyltrimethylammonium bromide (HDTAB), were probed. The best foaming agents were injected to lift liquid loaded in a vertical well model built from steel tubing 2.5 cm in diameter and 390 cm high, covered by a transparent glass casing 5 cm in diameter and 450 cm high. The results show that, after injecting the foaming agents, liquid unloading reached 75%; moreover, the efficiency of the foaming agents in unloading liquid increased by 10% with the addition of compressed air at a ratio of 1:1. Measured and calculated values were compared and differed by about ±3%, which is an acceptable agreement. The successful application of the technology indicates that engineers and stakeholders could bring water-flooded gas wells back to production with optimized results by first paying attention to the type of surfactants (foaming agents) used, the concentration of the surfactants, and the flow rate of the injected surfactants, and then compressing air into the foaming agents at a proper ratio.
Keywords: air compression, foaming agents, gas well, liquid loading
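Editor's note: the critical-velocity argument in part (i) can be illustrated with the classic droplet-model estimate, in which lowering the effective liquid density and surface tension (which is what foaming does) lowers the minimum gas velocity needed to lift droplets. The sketch below is a hedged illustration only: the critical Weber number and drag coefficient are commonly assumed values, and the fluid properties are placeholders, not measurements from this study.

# Droplet-model estimate of the critical (minimum) gas velocity for unloading:
#   v_c = [4 * g * sigma * (rho_liquid - rho_gas) * We_crit / (3 * C_d * rho_gas**2)] ** 0.25
# Reducing liquid density or surface tension lowers v_c.
G = 9.81          # m/s^2
WE_CRIT = 30.0    # assumed critical Weber number
CD = 0.44         # assumed droplet drag coefficient

def critical_velocity(sigma, rho_liquid, rho_gas):
    """Minimum gas velocity (m/s) needed to carry liquid droplets upward."""
    return (4 * G * sigma * (rho_liquid - rho_gas) * WE_CRIT
            / (3 * CD * rho_gas ** 2)) ** 0.25

# Placeholder properties: plain water versus a lighter, lower-tension foam.
print("water:", round(critical_velocity(0.072, 998.0, 50.0), 2), "m/s")
print("foam :", round(critical_velocity(0.035, 300.0, 50.0), 2), "m/s")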
Typification and Determination of Antibiotic Susceptibility Profiles with E Test Methods of Anaerobic Gram Negative Bacilli Isolated from Various Clinical Specimen
Authors: Cengiz Demir, Recep Keşli, Gülşah Aşık
Abstract:
Objective: This study was carried out with the purpose of identifying Gram-negative anaerobic bacilli isolated from various clinical specimens and determining their antibiotic resistance profiles using the E test method. The specimens were obtained from patients with suspected anaerobic infections and referred to the Medical Microbiology Laboratory of Afyon Kocatepe University, ANS Application and Research Hospital. Methods: Two hundred and seventy-eight clinical specimens were examined for the isolation of anaerobic bacteria in the Medical Microbiology Laboratory between 1 November 2014 and 30 October 2015. Specimens were cultivated using Schaedler agar supplemented with 5% defibrinated sheep blood, and Schaedler broth. The isolated anaerobic Gram-negative bacilli were identified by conventional methods and Vitek 2 (ANC ID Card, bioMerieux, France) cards. Antibiotic resistance rates against penicillin G, clindamycin, cefoxitin, metronidazole, moxifloxacin, imipenem, meropenem, ertapenem, and doripenem were determined with the E-test method for each isolate. Results: Of the twenty-eight anaerobic Gram-negative bacilli isolated, fourteen were identified as the B. fragilis group, nine as the Prevotella group, and five as the Fusobacterium group. The highest resistance rate was found against penicillin (78.5%), and the resistance rates against clindamycin and cefoxitin were 17.8% and 21.4%, respectively. No resistance was found against metronidazole, moxifloxacin, imipenem, meropenem, ertapenem, or doripenem. Conclusion: Since a high resistance rate against penicillin was detected in this study, penicillin should not be preferred for empirical treatment. Cefoxitin can be preferred for empirical treatment; however, carrying out antibiotic susceptibility testing will be more appropriate and beneficial. No resistance was observed against the carbapenem group antibiotics or metronidazole; for that reason, these antibiotics should be reserved for the treatment of infections caused by resistant strains in the future.
Keywords: anaerobic gram-negative bacilli, anaerobe, antibiotics and resistance profiles, e-test method
Measuring the Effect of Ventilation on Cooking in Indoor Air Quality by Low-Cost Air Sensors
Authors: Andres Gonzalez, Adam Boies, Jacob Swanson, David Kittelson
Abstract:
Concern about indoor air quality (IAQ) has been increasing due to its risk to human health. Smoking, sweeping, and stove and stovetop use are the activities that make a major contribution to indoor air pollution. Outdoor air pollution also affects IAQ. The most important factors affecting IAQ during cooking activities are the materials, fuels, foods, and ventilation. Low-cost, mobile air quality monitoring (LCMAQM) sensors are an accessible technology for assessing IAQ because of their lower cost compared to conventional instruments. IAQ was assessed, using LCMAQM sensors, during cooking activities in University of Minnesota graduate housing, evaluating different ventilation systems. The gases measured are carbon monoxide (CO) and carbon dioxide (CO2). The particle metrics measured are particulate matter (PM) 2.5 micrometers (µm) and lung-deposited surface area (LDSA). The measurements were conducted during April 2019 in the Como Student Community Cooperative (CSCC), a graduate housing complex at the University of Minnesota. The measurements were conducted using an electric stove for cooking, and the amount and type of food and oil used for cooking were the same for each measurement. There were six measurements: two experiments measured air quality without any ventilation, two used an extractor as mechanical ventilation, and two used the extractor with windows open as combined mechanical and natural ventilation. The results of the experiments show that natural ventilation is the most efficient system for controlling particles and CO2. Natural ventilation reduced concentrations by 79% for LDSA and 55% for PM2.5 compared to no ventilation; in the same way, the CO2 concentration was reduced by 35%. A well-mixed vessel model was implemented to assess particle formation and decay rates. Removal rates by the extractor were significantly higher for LDSA, which is dominated by smaller particles, than for PM2.5, but in both cases much lower than with natural ventilation. There was significant day-to-day variation in particle concentrations under nominally identical conditions, which may be related to the fat content of the food. Further research is needed to assess the impact of the fat content of food on particle generation.
Keywords: cooking, indoor air quality, low-cost sensor, ventilation
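Editor's note: the well-mixed vessel model mentioned above treats the kitchen as a single stirred volume in which the particle concentration obeys dC/dt = S/V - λC, with λ the total loss rate (ventilation plus deposition). A minimal sketch with placeholder parameter values, not those fitted in the study:

# Well-mixed (single-zone) model of indoor particle concentration during and
# after a cooking event: dC/dt = S/V - lambda_total * C
import numpy as np

V = 30.0           # room volume, m^3 (placeholder)
S = 5.0e6          # source emission rate while cooking, particles/min (placeholder)
lam_vent = 0.05    # ventilation (air-exchange) loss rate, 1/min (placeholder)
lam_dep = 0.01     # deposition loss rate, 1/min (placeholder)
lam = lam_vent + lam_dep

dt = 0.5                          # time step, min
t = np.arange(0, 120 + dt, dt)    # two-hour simulation
C = np.zeros_like(t)

for i in range(1, len(t)):
    source = S / V if t[i] <= 20 else 0.0             # cooking lasts the first 20 min
    C[i] = C[i - 1] + (source - lam * C[i - 1]) * dt  # forward-Euler step

print(f"peak concentration: {C.max():.3g} particles/m^3")
print(f"concentration 60 min after cooking ends: {C[t == 80][0]:.3g} particles/m^3")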
A Meta-Analysis of the Academic Achievement of Students With Emotional/Behavioral Disorders in Traditional Public Schools in the United States
Authors: Dana Page, Erica McClure, Kate Snider, Jenni Pollard, Tim Landrum, Jeff Valentine
Abstract:
Extensive research has been conducted on students with emotional and behavioral disorders (EBD) and their rates of challenging behavior. In the past, however, less attention has been given to their academic achievement and outcomes. Recent research examining outcomes for students with EBD has indicated that these students receive lower grades, are less likely to pass classes, and experience higher rates of school dropout than students without disabilities and students with other high incidence disabilities. Given that between 2% and 20% of the school-age population is likely to have EBD (though many may not be identified as such), this is no small problem. Despite the need for increased examination of this population’s academic achievement, research on the actual performance of students with EBD has been minimal. This study reports the results of a meta-analysis of the limited research examining academic achievement of students with EBD, including effect sizes of assessment scores and discussion of moderators potentially impacting academic outcomes. Researchers conducted a thorough literature search to identify potentially relevant documents before screening studies for inclusion in the systematic review. Screening identified 35 studies that reported results of academic assessment scores for students with EBD. These studies were then coded to extract descriptive data across multiple domains, including placement of students, participant demographics, and academic assessment scores. Results indicated possible collinearity between EBD disability status and lower academic assessment scores, despite a lack of association between EBD eligibility and lower cognitive ability. Quantitative analysis of assessment results yielded effect sizes for academic achievement of student participants, indicating lower performance levels and potential moderators (e.g., race, socioeconomic status, and gender) impacting student academic performance. In addition to discussing results of the meta-analysis, implications and areas for future research, policy, and practice are discussed.
Keywords: students with emotional behavioral disorders, academic achievement, systematic review, meta-analysis
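Editor's note: for the effect sizes reported in meta-analyses like this one, a standardized mean difference (Hedges' g) between students with EBD and a comparison group can be computed as in the sketch below; the group statistics are placeholders, not values from the included studies.

# Hedges' g for a single study: standardized mean difference with a
# small-sample correction, as typically pooled in meta-analyses.
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                       # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # small-sample correction factor
    return j * d

# Placeholder reading-assessment scores: EBD group vs. comparison group.
g = hedges_g(m1=88.0, sd1=12.0, n1=45, m2=100.0, sd2=15.0, n2=60)
print(f"Hedges' g = {g:.2f}")   # negative values indicate lower EBD-group scores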
Effects of Hydraulic Loading Rates and Porous Matrix in Constructed Wetlands for Wastewater Treatment
Authors: Li-Jun Ren, Wei Pan, Li-Li Xu, Shu-Qing An
Abstract:
This study evaluated whether different matrix composition volume ratios can improve water quality in the experiment. The mechanisms and adsorption capabilities of the wetland matrixes (oyster shell, coarse slag, and volcanic rock), and their different volume ratios in group configurations, were tested during pollutant removal processes. With other conditions unchanged, the hydraulic residence time affects the treatment effect. The average removal efficiencies of total nitrogen (TN) for the four matrix volume ratios were 62.76%, 61.54%, 64.13%, and 55.89%, respectively.
Keywords: hydraulic residence time, matrix composition, removal efficiency, volume ratio
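Editor's note: the removal efficiency reported above is the relative drop from influent to effluent concentration. A brief sketch with placeholder total nitrogen values, not the study's measurements:

# Pollutant removal efficiency (%) from influent and effluent concentrations.
def removal_efficiency(c_in, c_out):
    return (c_in - c_out) / c_in * 100.0

# Placeholder influent/effluent TN concentrations (mg/L) for four matrix ratios.
for label, c_in, c_out in [("ratio A", 25.0, 9.3), ("ratio B", 25.0, 9.6),
                           ("ratio C", 25.0, 9.0), ("ratio D", 25.0, 11.0)]:
    print(label, f"{removal_efficiency(c_in, c_out):.1f}%")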
Lung ICAMs and VCAM-1 in Innate and Adaptive Immunity to Influenza Infections: Implications for Vaccination Strategies
Authors: S. Kozlovski, S.W. Feigelson, R. Alon
Abstract:
The β2 integrin ligands ICAM-1 and ICAM-2 and the endothelial VLA-4 integrin ligand VCAM-1 are constitutively expressed on different lung vessels and on high endothelial venules (HEVs), the main portal for lymphocyte entry from the blood into lung-draining lymph nodes. ICAMs are also ubiquitously expressed by many antigen-presenting leukocytes and have traditionally been suggested as critical for the various antigen-specific immune synapses generated between these distinct leukocytes and specific naïve and effector T cells. Loss of both ICAM-1 and ICAM-2 on the lung vasculature reduces the ability of monocytes and Tregs to patrol the lung vasculature at steady state. Our new findings suggest, however, that in terms of innate leukocyte trafficking into the lung lamina propria, both constitutively expressed and virus-induced vascular VCAM-1 can functionally compensate for the loss of these ICAMs. In a mouse model of influenza infection, neutrophil and NK cell recruitment and clearance of influenza remained normal in mice deficient in both ICAMs. Strikingly, mice deficient in both ICAMs also mounted normal influenza-specific CD8 T cell proliferation and differentiation. In addition, these mice normally combated secondary influenza infection, indicating that the presence of ICAMs on the conventional dendritic cells (cDCs) that present viral antigens is not required for immune synapse formation between these APCs and naïve CD8 T cells, as previously suggested. Furthermore, long-lasting humoral responses critical for protection from a secondary homosubtypic influenza infection were also normal in mice deficient in both ICAM-1 and ICAM-2. Collectively, our results suggest that the expression of ICAM-1 and ICAM-2 on lung endothelial and epithelial cells, as well as on DCs and B cells, is not critical for the generation of innate or adaptive anti-viral immunity in the lungs. Our findings also suggest that endothelial VCAM-1 can substitute for the functions of vascular ICAMs in leukocyte trafficking into various lung compartments.
Keywords: emigration, ICAM-1, lymph nodes, VCAM-1
Is Materiality Determination the Key to Integrating Corporate Sustainability and Maximising Value?
Authors: Ruth Hegarty, Noel Connaughton
Abstract:
Sustainability reporting has become a priority for many global multinational companies. This is associated with ever-increasing expectations from key stakeholders for companies to be transparent about their strategies, activities and management with regard to sustainability issues. The Global Reporting Initiative (GRI) encourages reporters to provide information only on the issues that are really critical in order to achieve the organisation’s goals for sustainability and manage its impact on environment and society. A key challenge for most reporting organisations is how to identify relevant issues for sustainability reporting and prioritise those material issues in accordance with company and stakeholder needs. A recent study indicates that most of the largest companies listed on the world’s stock exchanges are failing to provide data on key sustainability indicators such as employee turnover, energy, greenhouse gas emissions (GHGs), injury rate, pay equity, waste and water. This paper takes an in-depth look at the approaches used by a select number of international sustainability leader corporates to identify key sustainability issues. The research methodology involves performing a detailed analysis of the sustainability report content of up to 50 companies listed on the 2014 Dow Jones Sustainability Indices (DJSI). The most recent sustainability report content found on the GRI Sustainability Disclosure Database is then compared with 91 GRI Specific Standard Disclosures and a small number of GRI Standard Disclosures. Preliminary research indicates significant gaps in the information disclosed in corporate sustainability reports versus the indicator content specified in the GRI Content Index. The following outlines some of the key findings to date: Most companies made a partial disclosure with regard to the Economic indicators of climate change risks and infrastructure investments, but did not focus on the associated negative impacts. The top Environmental indicators disclosed were energy consumption and reductions, GHG emissions, water withdrawals, waste and compliance. The lowest rates of indicator disclosure included biodiversity, water discharge, mitigation of environmental impacts of products and services, transport, environmental investments, screening of new suppliers and supply chain impacts. The top Social indicators disclosed were new employee hires, rates of injury, freedom of association in operations, child labour and forced labour. Lower disclosure rates were reported for employee training, composition of governance bodies and employees, political contributions, corruption and fines for non-compliance. The reporting on most other Social indicators was found to be poor. In addition, most companies give only a brief explanation of how material issues are defined, identified and ranked. Data on the identification of key stakeholders and the degree and nature of engagement for determining issues and their weightings are also lacking. Generally, little to no data is provided on the algorithms used to score an issue. Research indicates that most companies lack a rigorous and thorough methodology to systematically determine the material issues of sustainability reporting in accordance with company and stakeholder needs.
Keywords: identification of key stakeholders, material issues, sustainability reporting, transparency
Multi-Scale Modelling of the Cerebral Lymphatic System and Its Failure
Authors: Alexandra K. Diem, Giles Richardson, Roxana O. Carare, Neil W. Bressloff
Abstract:
Alzheimer's disease (AD) is the most common form of dementia, and although it has been researched for over 100 years, there is still no cure or preventive medication. Its onset and progression are closely related to the accumulation of the neuronal metabolite Aβ. This raises the question of how metabolites and waste products are eliminated from the brain, as the brain does not have a traditional lymphatic system. In recent years, the rapid uptake of Aβ into cerebral artery walls and its clearance along those arteries towards the lymph nodes in the neck has been suggested and confirmed in mouse studies, which has led to the hypothesis that interstitial fluid (ISF), in the basement membranes in the walls of cerebral arteries, provides the pathways for the lymphatic drainage of Aβ. This mechanism, however, requires a net reverse flow of ISF inside the blood vessel wall compared to the blood flow, and the driving forces for such a mechanism remain unknown. While possible driving mechanisms have been studied using mathematical models in the past, a mechanism for net reverse flow has not been discovered yet. Here, we aim to address the question of the driving force of this reverse lymphatic drainage of Aβ (also called perivascular drainage) by using multi-scale numerical and analytical modelling. The numerical simulation software COMSOL Multiphysics 4.4 is used to develop a fluid-structure interaction model of a cerebral artery, which models blood flow and displacements in the artery wall due to blood pressure changes. An analytical model of a layer of basement membrane inside the wall governs the flow of ISF and, therefore, solute drainage, based on the pressure changes and wall displacements obtained from the cerebral artery model. The findings suggest that the components of the basement membrane play an active role in facilitating a reverse flow, and that stiffening of the artery wall with age is a major risk factor for the impairment of brain lymphatics. Additionally, our model supports the hypothesis of a close association between cerebrovascular diseases and the failure of perivascular drainage.
Keywords: Alzheimer's disease, artery wall mechanics, cerebral blood flow, cerebral lymphatics
Untangling the Greek Seafood Market: Authentication of Crustacean Products Using DNA-Barcoding Methodologies
Authors: Z. Giagkazoglou, D. Loukovitis, C. Gubili, A. Imsiridou
Abstract:
Along with the increase in the human population, demand for seafood has increased. Despite the strict labeling regulations that exist for most marketed species in the European Union, seafood substitution remains a persistent global issue. Food fraud occurs when food products are traded in a false or misleading way. Mislabeling occurs when one species is substituted and traded under the name of another, and it can be intentional or unintentional. Crustaceans are among the most regularly consumed seafood in Greece. Shrimps, prawns, lobsters, crayfish, and crabs are considered a delicacy and can be encountered in a variety of market presentations (fresh, frozen, pre-cooked, peeled, etc.). With most of the external traits removed, such products are susceptible to species substitution. DNA barcoding has proven to be the most accurate method for the detection of fraudulent seafood products. To the best of our knowledge, this is the first time the DNA barcoding methodology has been used in Greece to investigate the labeling practices for crustacean products available in the market. A total of 100 tissue samples were collected from various retailers and markets across four Greek cities. In an effort to cover the highest possible range of products, different market presentations were targeted (fresh, frozen, and cooked). Genomic DNA was extracted using the DNeasy Blood & Tissue Kit, according to the manufacturer's instructions. The mitochondrial gene selected as the target region of the analysis was the cytochrome c oxidase subunit I (COI). PCR products were purified and sequenced using an ABI 3500 Genetic Analyzer. Sequences were manually checked and edited using BioEdit software and compared against those available in the GenBank and BOLD databases. Statistical analyses were conducted in R and PAST software. For most samples, COI amplification was successful, and species-level identification was possible. The preliminary results estimate moderate mislabeling rates (25%) in the identified samples. Mislabeling was most commonly detected in fresh products, with 50% of the samples in this category labeled incorrectly. Overall, the mislabeling rates detected by our study probably relate to some degree of unintentional misidentification and a lack of knowledge surrounding the legal designations by both retailers and consumers. For some species of crustaceans (e.g., Squilla mantis), the mislabeling appears to be also affected by local labeling practices. Across Greece, S. mantis is sold in the market under two common names, but only one is recognized by the country's legislation, and therefore any mislabeling is probably not profit-motivated. However, the substitution of the speckled shrimp (Metapenaeus monoceros) for the distinct, giant river prawn (Macrobrachium rosenbergii) is a clear example of deliberate fraudulent substitution aiming for profit. To the best of our knowledge, no scientific study investigating substitution and mislabeling rates in crustaceans has previously been conducted in Greece. For a better understanding of Greece's seafood market, similar DNA barcoding studies in other regions with increased touristic importance (e.g., the Greek islands) should be conducted. Regardless, the expansion of the list of species-specific designations for crustaceans in the country is advised.
Keywords: COI gene, food fraud, labelling control, molecular identification
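Editor's note: species assignment in barcoding workflows like the one above reduces to comparing a query COI sequence against reference sequences and reporting the closest match. The sketch below is a heavy simplification: it scores plain percent identity on pre-aligned, equal-length fragments, and the sequences are made-up placeholders; real analyses use full-length barcodes matched via GenBank/BOLD.

# Simplified nearest-reference assignment by percent identity over pre-aligned,
# equal-length COI fragments (placeholder sequences, for illustration only).
def percent_identity(a: str, b: str) -> float:
    assert len(a) == len(b)
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100.0 * matches / len(a)

references = {
    "Squilla mantis":            "ATGGCAGGTATAGTAGGAACTTCA",
    "Metapenaeus monoceros":     "ATGGCTGGAATAGTTGGGACATCA",
    "Macrobrachium rosenbergii": "ATGGCGGGCATAGTAGGTACTTCT",
}

query = "ATGGCTGGAATAGTTGGAACATCA"   # sequence from a market sample (made up)

for name, seq in references.items():
    print(f"{name:28s} {percent_identity(query, seq):5.1f}% identity")
best_match = max(references, key=lambda name: percent_identity(query, references[name]))
print("best match:", best_match)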
Risk-Sharing Financing of Islamic Banks: Better Shielded against Interest Rate Risk
Authors: Mirzet SeHo, Alaa Alaabed, Mansur Masih
Abstract:
In theory, risk-sharing-based financing (RSF) is considered a cornerstone of Islamic finance and is argued to render Islamic banks more resilient to shocks. In practice, however, this feature of Islamic financial products is almost negligible. Instead, debt-based instruments with conventional-like features have overwhelmed the nascent industry. In addition, the framework of present-day economic, regulatory and financial reality inevitably exposes Islamic banks in dual banking systems to the problems of conventional banks. This includes, but is not limited to, interest rate risk. Empirical evidence has, thus far, confirmed such exposures, despite Islamic banks’ interest-free operations. This study applies system GMM in modeling the determinants of RSF and finds that RSF is insensitive to changes in interest rates. Hence, our results provide support for the “stability” view of risk-sharing-based financing. This suggests RSF as the way forward for risk management at Islamic banks, in the absence of widely acceptable Shariah-compliant hedging instruments. Further support for the stability view is given by evidence of counter-cyclicality: unlike debt-based lending, which inflates artificial asset bubbles through credit expansion during the upswing of business cycles, RSF is negatively related to GDP growth. Our results also imply a significantly strong relationship between risk-sharing deposits and RSF. However, the pass-through of these deposits to RSF is economically low: only about 40% of risk-sharing deposits are channeled to risk-sharing financing. This raises questions about the validity of the industry’s claim that depositors accustomed to conventional banking shy away from risk sharing, and it signals potential for better balance sheet management at Islamic banks. Overall, our findings suggest that, on the one hand, Islamic banks can gain ‘independence’ from conventional banks and interest rates through risk-sharing products, the potential for which is enormous. On the other hand, RSF could enable policy makers to improve systemic stability and restrain excessive credit expansion through its counter-cyclical features.
Keywords: Islamic banks, risk-sharing, financing, interest rate, dynamic system GMM
The Development of a Digitally Connected Factory Architecture to Enable Product Lifecycle Management for the Assembly of Aerostructures
Authors: Nicky Wilson, Graeme Ralph
Abstract:
Legacy aerostructure assembly is defined by large components, low build rates, and manual assembly methods. With increasing demand for commercial aircraft and emerging markets such as the eVTOL (electric vertical take-off and landing) market, current methods of manufacturing are not capable of efficiently meeting these higher-rate demands. This project will look at how legacy manufacturing processes can be rate-enabled by taking a holistic view of data usage, focusing on how data can be collected to enable fully integrated digital factories and supply chains. The study will focus on how data flows both up and down the supply chain to create a digital thread specific to each part and assembly, while enabling machine learning through real-time, closed-loop feedback systems. The study will also develop a bespoke architecture to enable connectivity both within the factory and with the wider PLM (product lifecycle management) system, moving away from traditional point-to-point systems used to connect IO devices towards a hub-and-spoke architecture that exploits report-by-exception principles. This paper outlines the key issues facing legacy aircraft manufacturers, focusing on what future manufacturing will look like when Industry 4 principles are adopted. The research also defines the data architecture of a PLM system to enable the transfer and control of a digital thread within the supply chain, and proposes a standardised communications protocol as a scalable solution for connecting IO devices within a production environment. This research comes at a critical time for aerospace manufacturers, who are seeing a shift towards the integration of digital technologies within legacy production environments while build rates continue to grow. It is vital that manufacturing processes become more efficient in order to meet these demands while also securing future work for many manufacturers.
Keywords: Industry 4, digital transformation, IoT, PLM, automated assembly, connected factories
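Editor's note: the report-by-exception principle mentioned above means an IO device publishes a value to the hub only when it changes meaningfully, rather than streaming every sample over a point-to-point link. The sketch below is a minimal, hypothetical illustration: the topic name, deadband value, and publish callback are assumptions for the example, not elements of the proposed architecture.

# Report-by-exception: forward a sensor reading only when it moves outside a
# deadband around the last reported value, instead of on every sample.
class ExceptionReporter:
    def __init__(self, topic: str, deadband: float, publish):
        self.topic = topic
        self.deadband = deadband
        self.publish = publish        # callable(topic, value), e.g. a broker client
        self.last_reported = None

    def sample(self, value: float) -> None:
        if (self.last_reported is None
                or abs(value - self.last_reported) >= self.deadband):
            self.publish(self.topic, value)
            self.last_reported = value

def publish_to_hub(topic, value):     # stand-in for a hub-and-spoke publish call
    print(f"report {topic} = {value}")

torque = ExceptionReporter("cell01/drill/torque", deadband=0.5, publish=publish_to_hub)
for reading in [10.0, 10.1, 10.2, 10.9, 10.8, 12.0]:
    torque.sample(reading)            # only 10.0, 10.9 and 12.0 are reported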
Study on Control Techniques for Adaptive Impact Mitigation
Authors: Rami Faraj, Cezary Graczykowski, Błażej Popławski, Grzegorz Mikułowski, Rafał Wiszowaty
Abstract:
Progress in the fields of sensors, electronics, and computing results in increasingly frequent applications of adaptive techniques for dynamic response mitigation. When it comes to systems excited by mechanical impacts, the control system has to take into account the significant limitations of the actuators responsible for system adaptation. The paper provides a comprehensive discussion of the problem of appropriate design and implementation of adaptation techniques and mechanisms. Two case studies are presented in order to compare completely different adaptation schemes. The first example concerns a double-chamber pneumatic shock absorber with a fast piezoelectric valve and parameters corresponding to the suspension of a small unmanned aerial vehicle, whereas the second considered system is a safety air cushion used for the evacuation of people from heights during a fire. For both systems, it is possible to ensure adaptive performance, but the realization of the system’s adaptation is completely different. The reason for this lies in the technical limitations of the specific types of shock-absorbing devices and their parameters. Impact mitigation using a pneumatic shock absorber involves much higher pressures and small mass flow rates, which can be achieved with a minimal change of valve opening. In turn, mass flow rates in safety air cushions relate to gas release areas counted in thousands of square centimetres. Because of these facts, the two shock-absorbing systems are controlled using completely different approaches. The pneumatic shock absorber takes advantage of real-time control, with the valve opening recalculated at least every millisecond. In contrast, the safety air cushion is controlled using a semi-passive technique, where adaptation is provided using a prediction of the entire impact mitigation process. Similarities of both approaches, including the applied models, algorithms and equipment, are discussed. The entire study is supported by numerical simulations and experimental tests, which prove the effectiveness of both adaptive impact mitigation techniques.
Keywords: adaptive control, adaptive system, impact mitigation, pneumatic system, shock-absorber
Maximizing Giant Prawn Resource Utilization in Banjar Regency, Indonesia: A CPUE and MSY Analysis
Authors: Ahmadi, Iriansyah, Raihana Yahman
Abstract:
The giant freshwater prawn (Macrobrachium rosenbergii de Man, 1879) is a valuable species for fisheries and aquaculture, especially in Southeast Asia, including Indonesia, due to its high market demand and export potential. The growing demand for prawns is straining the sustainability of the Banjar Regency fishery. To ensure the long-term sustainability and economic viability of prawn fishing in this region, it is imperative to implement evidence-based management practices. This requires comprehensive data on the catch per unit effort (CPUE), the maximum sustainable yield (MSY), and the current rate of prawn resource exploitation. This study analyzed five years of prawn catch data (2019-2023) obtained from the South Kalimantan Marine and Fisheries Services. Fishing gears (e.g., hook and line, and cast net) were first standardized with the Fishing Power Index, and effort and MSY were then calculated. The intercept (a) and slope (b) values of the regression curve were used to estimate the catch at maximum sustainable yield (CMSY) and the optimal fishing effort (Fopt) within the framework of the Surplus Production Model. The estimated rates of resource utilization were then compared to the criteria of the National Commission of Marine Fish Stock Assessment. The findings showed that the CPUE value peaked in 2019 at 33.48 kg/trip, while the lowest value was observed in 2022 at 5.12 kg/trip. The CMSY value was estimated to be 17,396 kg/year, corresponding to an Fopt level of 1,636 trips/year. The highest utilization rate, 56.90%, was recorded in 2020, while the lowest, 46.16%, was observed in 2021. The annual utilization rates were classified as “medium”, suggesting that increasing fishing effort by 45% could potentially maximize prawn catches at an optimum level. These findings provide a baseline for sustainable fisheries management in the region.
Keywords: giant prawns, CPUE, fishing power index, sustainable potential, utilization rate
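Editor's note: under the Schaefer surplus-production model referenced above, a linear fit CPUE = a + b·E yields the optimum effort Fopt = -a/(2b) and the maximum sustainable catch CMSY = -a²/(4b). A short sketch with placeholder effort and CPUE values, not the Banjar Regency data:

# Schaefer surplus-production estimates from a linear CPUE-vs-effort regression:
#   CPUE = a + b * E  ->  Fopt = -a / (2 * b),  CMSY = -a**2 / (4 * b)
import numpy as np

effort = np.array([900, 1100, 1400, 1800, 2300], dtype=float)   # trips/year (placeholder)
cpue = np.array([30.0, 25.5, 21.0, 14.5, 8.0])                  # kg/trip (placeholder)

b, a = np.polyfit(effort, cpue, 1)   # slope b (negative), intercept a
f_opt = -a / (2 * b)
cmsy = -a ** 2 / (4 * b)

print(f"a = {a:.2f} kg/trip, b = {b:.5f} (kg/trip) per trip")
print(f"Fopt ~ {f_opt:.0f} trips/year, CMSY ~ {cmsy:.0f} kg/year")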
Epidemiology of Congenital Heart Defects in Kazakhstan: Data from Unified National Electronic Healthcare System 2014-2020
Authors: Dmitriy Syssoyev, Aslan Seitkamzin, Natalya Lim, Kamilla Mussina, Abduzhappar Gaipov, Dimitri Poddighe, Dinara Galiyeva
Abstract:
Background: Data on the epidemiology of congenital heart defects (CHD) in Kazakhstan is scarce. Therefore, the aim of this study was to describe the incidence, prevalence and all-cause mortality of patients with CHD in Kazakhstan, using national large-scale registry data from the Unified National Electronic Healthcare System (UNEHS) for the period 2014-2020. Methods: In this retrospective cohort study, the included data pertained to all patients diagnosed with CHD in Kazakhstan and registered in UNEHS between January 2014 and December 2020. CHD was defined based on International Classification of Diseases 10th Revision (ICD-10) codes Q20-Q26. Incidence, prevalence, and all-cause mortality rates were calculated per 100,000 population. Survival analysis was performed using Cox proportional hazards regression modeling and the Kaplan-Meier method. Results: In total, 66,512 patients were identified. Among them, 59,534 (89.5%) were diagnosed with a single CHD, while 6,978 (10.5%) had multiple CHDs. The median age at diagnosis was 0.08 years (interquartile range (IQR) 0.01 – 0.66) for people with multiple CHD types and 0.39 years (IQR 0.04 – 8.38) for those with a single CHD type. The most common CHD types were atrial septal defect (ASD) and ventricular septal defect (VSD), accounting for 25.8% and 21.2% of single CHD cases, respectively. The most common combinations of multiple CHDs were ASD with VSD (23.4%), ASD with patent ductus arteriosus (PDA) (19.5%), and VSD with PDA (17.7%). The incidence rate of CHD decreased from 64.6 to 47.1 cases per 100,000 population among men and from 68.7 to 42.4 among women. The prevalence rose from 66.1 to 334.1 cases per 100,000 population among men and from 70.8 to 328.7 among women. Mortality rates showed a slight increase, from 3.5 to 4.7 deaths per 100,000 in men and from 2.9 to 3.7 in women. The median follow-up was 5.21 years (IQR 2.47 – 11.69). Male sex (HR 1.60, 95% CI 1.45 - 1.77), having multiple CHDs (HR 2.45, 95% CI 2.01 - 2.97), and living in a rural area (HR 1.32, 95% CI 1.19 - 1.47) were associated with a higher risk of all-cause mortality. Conclusion: The incidence of CHD in Kazakhstan showed a moderate decrease between 2014 and 2020, while prevalence and mortality increased. Male sex, multiple CHD types, and rural residence were significantly associated with a higher risk of all-cause mortality.
Keywords: congenital heart defects (CHD), epidemiology, incidence, Kazakhstan, mortality, prevalence
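Editor's note: a minimal sketch of the survival workflow described above (a Kaplan-Meier estimate plus a Cox proportional-hazards model), using the Python lifelines package on a small placeholder data frame. The study's actual software, variable names, and data differ, and the penalizer is only there to stabilize the toy fit.

# Sketch: Kaplan-Meier curve and Cox regression on follow-up time, a death
# indicator, and covariates. All values below are illustrative placeholders.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "followup_years": [5.2, 2.4, 11.7, 6.9, 0.8, 9.3, 4.1, 7.5],
    "died":           [0,   1,   0,    0,   1,   0,   1,   0  ],
    "male":           [0,   1,   0,    0,   1,   0,   0,   1  ],
    "multiple_chd":   [1,   1,   0,    1,   1,   0,   0,   0  ],
    "rural":          [1,   0,   0,    1,   1,   0,   1,   0  ],
})

kmf = KaplanMeierFitter()
kmf.fit(df["followup_years"], event_observed=df["died"])
print(kmf.survival_function_.tail())

cph = CoxPHFitter(penalizer=0.1)   # small penalty for this tiny toy sample
cph.fit(df, duration_col="followup_years", event_col="died")
cph.print_summary()                # exp(coef) gives the hazard ratios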
Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire
Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan
Abstract:
Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being a flammable material, it has relatively high fire resistance. Everyday engineering practice around the world is based on outdated design of timber structures considering standard fire exposure, while modern principles of performance-based design enable the use of advanced non-standard fire curves. In Europe, the standard for fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, which is the residual cross-section of uncharred timber reduced additionally by a so-called zero strength layer. In the case of standard fire exposure, Eurocode 5 gives a fixed value for the zero strength layer, i.e. 7 mm, while for non-standard parametric fires no additional comments or recommendations for the zero strength layer are given. Thus, designers often apply the adopted 7 mm rule also to parametric fire exposure. Since the latest scientific evidence suggests that the proposed value of the zero strength layer can be on the unsafe side for standard fire exposure, its use in the case of a parametric fire is also highly questionable, and more numerical and experimental research in this field is needed. Therefore, the purpose of the presented study is to use advanced calculation methods to investigate the thickness of the zero strength layer and the parametric charring rates used in the effective cross-section method in the case of parametric fire. Parametric studies are carried out on a simple solid timber beam that is exposed to a large number of parametric fire curves. The zero strength layer and charring rates are determined based on numerical simulations performed with a recently developed advanced two-step computational model. The first step comprises a hygro-thermal model, which predicts the temperature, moisture and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire loads is determined. The mechanical model is based on Reissner’s kinematically exact beam model and accounts for the membrane, shear and flexural deformations of the beam. Furthermore, materially non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, in accordance with Eurocode 5, assumed to have a fixed temperature of around 300°C. Based on the performed study and observations, improved parametric charring rates and new thicknesses of the zero strength layer for parametric fires are determined. Thus, the reduced cross-section method is substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero strength layer thickness and key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and experimental research in the future.
Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer
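Editor's note: for reference, the reduced cross-section method discussed above evaluates the member on an effective depth obtained by subtracting the notional char depth and the zero strength layer from the original dimension. A sketch of the standard-fire relations in Eurocode 5 notation (the k_0 ramp quoted here applies to initially unprotected surfaces):

d_{\mathrm{ef}} = d_{\mathrm{char},n} + k_0 \, d_0, \qquad d_{\mathrm{char},n} = \beta_n \, t,

where \beta_n is the notional charring rate, t is the fire exposure time, d_0 = 7\,\mathrm{mm} is the zero strength layer for standard fire exposure, and k_0 increases linearly from 0 to 1 over the first 20 minutes of exposure. The study above addresses how d_0 and \beta_n should change when the exposure follows a parametric, rather than standard, fire curve.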
Ultrasound-Mediated Separation of Ethanol, Methanol, and Butanol from Their Aqueous Solutions
Authors: Ozan Kahraman, Hao Feng
Abstract:
Ultrasonic atomization (UA) is a useful technique for producing a liquid spray for various processes, such as spray drying. Ultrasound generates small droplets (a few microns in diameter) by disintegration of the liquid via cavitation and/or capillary waves, with low velocity and a narrow droplet size distribution. In recent years, UA has been investigated as an alternative for enabling or enhancing ultrasound-mediated unit operations, such as evaporation, separation, and purification. Previous studies on the UA separation of a solvent from a bulk solution were limited to ethanol-water systems. More investigations into ultrasound-mediated separation of other liquid systems are needed to elucidate the separation mechanism. This study was undertaken to investigate the effects of the operational parameters on the ultrasound-mediated separation of three miscible liquid pairs: ethanol-, methanol-, and butanol-water. A 2.4 MHz ultrasonic mister with a diameter of 18 mm and a rated power of 24 W was installed on the bottom of a custom-designed cylindrical separation unit. Air was supplied to the unit (3 to 4 L/min) as a carrier gas to collect the mist. The effects of the initial alcohol concentration, viscosity, and temperature (10, 30 and 50°C) on the atomization rates were evaluated. The alcohol concentration in the collected mist was measured with high-performance liquid chromatography and a refractometer. The viscosity of the solutions was determined using a Brookfield digital viscometer. The alcohol concentration of the atomized mist was dependent on the feed concentration, feed rate, viscosity, and temperature. Increasing the temperature of the alcohol-water mixtures from 10 to 50°C increased the vapor pressure of both the alcohols and water, resulting in an increase in the atomization rates but a decrease in the separation efficiency. The alcohol concentration in the mist was higher than that of the alcohol-water equilibrium at all three temperatures. More importantly, for ethanol, the concentration in the mist went beyond the azeotropic point, which cannot be achieved by conventional distillation. Ultrasound-mediated separation is a promising non-equilibrium method for separating and purifying alcohols, which may result in significant energy reductions and process intensification.
Keywords: azeotropic mixtures, distillation, evaporation, purification, separation, ultrasonic atomization
Quantitative Analysis of Contract Variations Impact on Infrastructure Project Performance
Authors: Soheila Sadeghi
Abstract:
Infrastructure projects often encounter contract variations that can significantly deviate from the original tender estimates, leading to cost overruns, schedule delays, and financial implications. This research aims to quantitatively assess the impact of changes in contract variations on project performance by conducting an in-depth analysis of a comprehensive dataset from the Regional Airport Car Park project. The dataset includes tender budget, contract quantities, rates, claims, and revenue data, providing a unique opportunity to investigate the effects of variations on project outcomes. The study focuses on 21 specific variations identified in the dataset, which represent changes or additions to the project scope. The research methodology involves establishing a baseline for the project's planned cost and scope by examining the tender budget and contract quantities. Each variation is then analyzed in detail, comparing the actual quantities and rates against the tender estimates to determine their impact on project cost and schedule. The claims data is utilized to track the progress of work and identify deviations from the planned schedule. The study employs statistical analysis using R to examine the dataset, including tender budget, contract quantities, rates, claims, and revenue data. Time series analysis is applied to the claims data to track progress and detect variations from the planned schedule. Regression analysis is utilized to investigate the relationship between variations and project performance indicators, such as cost overruns and schedule delays. The research findings highlight the significance of effective variation management in construction projects. The analysis reveals that variations can have a substantial impact on project cost, schedule, and financial outcomes. The study identifies specific variations that had the most significant influence on the Regional Airport Car Park project's performance, such as PV03 (additional fill, road base gravel, spray seal, and asphalt), PV06 (extension to the commercial car park), and PV07 (additional box out and general fill). These variations contributed to increased costs, schedule delays, and changes in the project's revenue profile. The study also examines the effectiveness of project management practices in managing variations and mitigating their impact. The research suggests that proactive risk management, thorough scope definition, and effective communication among project stakeholders can help minimize the negative consequences of variations. The findings emphasize the importance of establishing clear procedures for identifying, assessing, and managing variations throughout the project lifecycle. The outcomes of this research contribute to the body of knowledge in construction project management by demonstrating the value of analyzing tender, contract, claims, and revenue data in variation impact assessment. However, the research acknowledges the limitations imposed by the dataset, particularly the absence of detailed contract and tender documents. This constraint restricts the depth of analysis possible in investigating the root causes and full extent of variations' impact on the project. Future research could build upon this study by incorporating more comprehensive data sources to further explore the dynamics of variations in construction projects.
Keywords: contract variation impact, quantitative analysis, project performance, claims analysis
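Editor's note: a minimal sketch of the regression step described above, relating the value of each contract variation to the resulting cost overrun using ordinary least squares; the variation records are placeholders, not figures from the Regional Airport Car Park dataset (the study itself used R).

# Sketch: OLS relating contract-variation value to cost overrun.
import numpy as np
import statsmodels.api as sm

variation_value = np.array([12, 85, 40, 150, 60, 9, 210, 33], dtype=float)  # $k, placeholder
cost_overrun    = np.array([5, 40, 22, 90, 28, 2, 115, 18], dtype=float)    # $k, placeholder

X = sm.add_constant(variation_value)
model = sm.OLS(cost_overrun, X).fit()
print(model.summary())
print(f"estimated overrun per $1k of variation value: {model.params[1]:.2f} $k")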
Supercritical Hydrothermal and Subcritical Glycolysis Conversion of Biomass Waste to Produce Biofuel and High-Value Products
Authors: Chiu-Hsuan Lee, Min-Hao Yuan, Kun-Cheng Lin, Qiao-Yin Tsai, Yun-Jie Lu, Yi-Jhen Wang, Hsin-Yi Lin, Chih-Hua Hsu, Jia-Rong Jhou, Si-Ying Li, Yi-Hung Chen, Je-Lueng Shie
Abstract:
Raw food waste has a high water content. If it is incinerated, it increases the cost of treatment; therefore, composting or energy recovery is usually used. Mature technologies exist for composting food waste, but odor, wastewater, and other problems are serious, and the output of compost products is limited. Bakelite, in turn, is mainly used in the manufacturing of integrated circuit boards. It is hard to recycle and reuse directly due to its rigid structure, and it is also difficult to incinerate, producing air pollutants because of incomplete combustion. In this study, supercritical hydrothermal and subcritical glycolysis thermal conversion technology is used to convert the biomass wastes of bakelite and raw kitchen waste into carbon materials and biofuels. Batch carbonization tests are performed under the high-temperature and high-pressure conditions of the solvents and under different operating conditions, including wet-base and dry-base mixed biomass. This study can be divided into two parts. In the first part, bakelite waste is processed as dry-base industrial waste, and in the second part, raw kitchen wastes (lemon, banana, watermelon, and pineapple peels) are used as wet-base biomass. The parameters include reaction temperature, reaction time, mass-to-solvent ratio, and volume filling rates. The yield, conversion, and recovery rates of the products (solid, gas, and liquid) are evaluated and discussed. The results explore the benefits of synergistic effects in thermal glycolysis dehydration and carbonization on the yield and recovery rate of the solid products. The purpose is to obtain the optimum operating conditions. This technology is a biomass negative-carbon technology (BNCT); if it is combined with carbon capture and storage (BECCS), it can provide a new direction for 2050 net zero carbon dioxide emissions (NZCDE).
Keywords: biochar, raw food waste, bakelite, supercritical hydrothermal, subcritical glycolysis, biofuels
Parametric Inference of Elliptical and Archimedean Family of Copulas
Authors: Alam Ali, Ashok Kumar Pathak
Abstract:
Nowadays, copulas have attracted significant attention for modeling multivariate observations, and the foremost feature of copula functions is that they give us the liberty to study the univariate marginal distributions and their joint behavior separately. The copula parameter captures the intrinsic dependence among the marginal variables, and it can be estimated using parametric, semiparametric, or nonparametric techniques. This work aims to compare the coverage rates between an elliptical and an Archimedean family of copulas via a fully parametric estimation technique.
Keywords: elliptical copula, archimedean copula, estimation, coverage rate
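Editor's note: as a concrete illustration of how a copula parameter encodes dependence, the sketch below inverts Kendall's tau to obtain the parameter of a Gaussian (elliptical) copula, rho = sin(pi*tau/2), and of a Clayton (Archimedean) copula, theta = 2*tau/(1 - tau). This moment-style inversion is only an illustrative estimation route and is not necessarily the fully parametric (likelihood-based) technique compared in the study; the data are simulated placeholders.

# Kendall's-tau inversion estimates of copula parameters on simulated data.
#   Gaussian (elliptical): rho   = sin(pi * tau / 2)
#   Clayton (Archimedean): theta = 2 * tau / (1 - tau)
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.6 * x + 0.8 * rng.normal(size=500)     # dependent placeholder pair

tau, _ = kendalltau(x, y)
rho_gaussian = np.sin(np.pi * tau / 2)
theta_clayton = 2 * tau / (1 - tau)

print(f"Kendall tau   : {tau:.3f}")
print(f"Gaussian rho  : {rho_gaussian:.3f}")
print(f"Clayton theta : {theta_clayton:.3f}")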
Indigenous Pre-Service Teacher Education: Developing, Facilitating, and Maintaining Opportunities for Retention and Graduation
Authors: Karen Trimmer, Raelene Ward, Linda Wondunna-Foley
Abstract:
Within Australian tertiary institutions, the subject of Aboriginal and Torres Strait Islander education has been a major concern for many years. Aboriginal and Torres Strait Islander teachers are significantly under-represented in Australian schools and universities. High attrition rates in teacher education and in the teaching industry have contributed to a minimal growth rate in the number of Aboriginal and Torres Strait Islander teachers in previous years. There was an increase of 500 Indigenous teachers between 2001 and 2008, but these numbers still account for only one percent of teaching staff in government schools who identified as Aboriginal and Torres Strait Islander Australians (Ministerial Council for Education, Early Childhood Development and Youth Affairs 2010). Aboriginal and Torres Strait Islander teachers are paramount in fostering student engagement and improving educational outcomes for Indigenous students. Increasing the number of Aboriginal and Torres Strait Islander teachers is also a key factor in enabling all students to develop understanding of and respect for Aboriginal and Torres Strait Islander histories, cultures, and languages. An ambitious reform agenda to improve the recruitment and retention of Aboriginal and Torres Strait Islander teachers will be effective only through national collaborative action and co-investment by schools and school authorities, university schools of education, professional associations, and Indigenous leaders and community networks. Whilst the University of Southern Queensland currently attracts Indigenous students to its teacher education programs (61 students in 2013, with an average of 48 enrollments each year since 2010), there is significant attrition during pre-service training. The annual rate of exiting before graduation remained high at 22% in 2012 and was 39% for the previous two years. These participation and retention rates are consistent with other universities across Australia. Whilst there are aspirations for a growing number of Indigenous people to be trained as teachers, there is a significant loss of students during their pre-service training and within the first five years of employment as a teacher. These trends also reflect the situation where Aboriginal and Torres Strait Islander teachers are significantly under-represented, making up less than 1% of teachers in schools across Australia. Through a project conducted as part of the nationally funded More Aboriginal and Torres Strait Islander Teachers Initiative (MATSITI), we aim to gain insight into the reasons that impact Aboriginal and Torres Strait Islander students' decisions to exit their program. Through focus groups and interviews with two graduating cohorts of self-identified Aboriginal and Torres Strait Islander students, rich data have been gathered to gain an understanding of the barriers and enhancers to the completion of pre-service qualifications and the transition to teaching. Having a greater understanding of these reasons then allows the development of collaborative processes and procedures to increase the retention and completion rates of new Indigenous teachers. Analysis of the factors impacting exit decisions and transitions has provided evidence to support changes of practice, the redesign and enhancement of relevant courses, and the development of policies and procedures to address the identified issues.
Keywords: graduation, indigenous, pre-service teacher education, retention
Procedia PDF Downloads 471
2389 Performance of Different Spray Nozzles in the Application of Defoliant on Cotton Plants (Gossypium hirsutum L.)
Authors: Mohamud Ali Ibrahim, Ali Bayat, Ali Bolat
Abstract:
Defoliant spraying is an important step in mechanized cotton harvesting because adequate and uniform spraying can improve defoliation quality and reduce cotton trash content. In defoliant application, the application volume and spraying technology are extremely important. In this study, the effectiveness of defoliant application to cotton plants ready for harvest was determined for two application volumes and three types of nozzles on a standard field crop sprayer. Experiments were carried out in two phases: field trials and laboratory analyses. Application rates were 250 L/ha and 400 L/ha, and the spraying nozzles were (1) a standard flat fan nozzle (TP8006), (2) an air induction nozzle (AI 11002-VS), and (3) a dual pattern nozzle (AI307003VP). A tracer (BSF) and defoliant were applied to mature cotton with approximately 60% open bolls, and sampling for BSF deposition and spray coverage was done at two plant heights (upper layer, lower layer). Before and after spraying, the rates of open bolls and remaining leaves on the cotton plants were calculated; filter papers were used to detect BSF deposition, and water sensitive papers (WSP) were used to measure the coverage rate of the spraying methods used. A spectrofluorophotometer was used to detect the amount of tracer deposited on targets, and an image-processing computer programme was used to measure the coverage rate on the WSP. The analysis showed that the air induction nozzle (AI 11002-VS) achieved better results than the dual pattern and standard flat fan nozzles in terms of deposition, coverage, leaf defoliation, and boll opening rate. The AI nozzle operating at the 250 L/ha application rate provided the highest deposition and coverage rates for the defoliant; in addition, BSF, used as an indicator of the defoliant, reached the leaves in the lower layer only with this nozzle. After defoliation, the boll opening rate was 85% on the 7th and 12th days after spraying, and the leaf fall rate was 76%, at the 250 L/ha application rate with the air induction (AI 11002) nozzle.
Keywords: cotton defoliant, air induction nozzle, dual pattern nozzle, standard flat fan nozzle, coverage rate, spray deposition, boll opening rate, leaves falling rate
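The coverage measurement described above, in which scanned water-sensitive papers are analysed by an image-processing programme, can be sketched as a simple thresholding step. This is an illustrative reconstruction, not the programme used in the study; the file name and threshold value are assumptions.

```python
# Illustrative sketch: spray coverage (%) from a scanned water-sensitive paper (WSP).
# The file name and threshold are assumptions for the example, not values from the study.
import numpy as np
from PIL import Image

def coverage_percent(image_path: str, threshold: int = 128) -> float:
    """Percentage of pixels darker than `threshold`, i.e. the droplet-stained area."""
    gray = np.asarray(Image.open(image_path).convert("L"))
    stained = gray < threshold                  # droplet stains appear dark on WSP scans
    return 100.0 * stained.mean()

if __name__ == "__main__":
    print(f"Coverage: {coverage_percent('wsp_upper_layer.png'):.1f}%")
```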
Procedia PDF Downloads 198
2388 Free Fibular Flaps in Management of Sternal Dehiscence
Authors: H. N. Alyaseen, S. E. Alalawi, T. Cordoba, É. Delisle, C. Cordoba, A. Odobescu
Abstract:
Sternal dehiscence is defined as the persistent separation of the sternal bones, often complicated by mediastinitis. The etiologies that lead to sternal dehiscence vary, with cardiovascular and thoracic surgeries being the most common. Early diagnosis in susceptible patients is crucial to the management of such cases, as they are associated with high mortality rates. A recent meta-analysis of more than four hundred thousand patients concluded that deep sternal wound infections were the leading cause of mortality and morbidity in patients undergoing cardiac procedures. Long-term complications associated with sternal dehiscence include increased hospitalizations, cardiac infarctions, and renal and respiratory failure. Numerous osteosynthesis methods have been described in the literature. Surgical materials offer enough rigidity to support the sternum and can be flexible enough to allow the physiological breathing movements of the chest; however, these materials fall short when managing patients with extensive bone loss, osteopenia, or generally poor bone quality. For such cases, flaps offer a better closure system. Early utilization of flaps yields better survival rates than delayed closure or treatment with sternal rewiring and closed drainage. Pectoralis major flaps, rectus abdominis flaps, and latissimus muscle flaps have all been described in the literature as great alternatives. Flap selection depends on a variety of factors, mainly the size of the sternal defect, infection, and the availability of local tissues. Free fibular flaps are commonly harvested flaps utilized in reconstruction throughout the body. In cases of sternal reconstruction with free fibular flaps, the literature has exclusively described the flap applied vertically to the chest wall. We present a different technique, applying a free fibular triple-barrel flap oriented transversely, parallel to the ribs. In our experience, this method could enhance results and improve prognosis, as it contributes to the normal circumferential shape of the chest wall.
Keywords: sternal dehiscence, management, free fibular flaps, novel surgical techniques
Procedia PDF Downloads 94
2387 Effect of Simulation on Anxiety and Knowledge among Novice Nursing Students
Authors: Suja Karkada, Jayanthi Radhakrishnan, Jansi Natarajan, Gerald Amandu Matua, Sujatha Shanmugasundaram
Abstract:
Simulation-based learning is an educational strategy designed to simulate actual clinical situations in a safe environment. Globally, simulation is recognized by several landmark studies as an effective teaching-learning method. A systematic review of the literature revealed simulation to be a useful strategy for creating a learning environment that contributes to knowledge, skills, safety, and confidence. However, to the best of the authors' knowledge, there are no studies assessing the anxiety of students undergoing simulation. Hence, the researchers undertook a study to evaluate the effect of simulation on anxiety and knowledge among novice nursing students. This quasi-experimental study had a total sample of 69 students (35 in the intervention group, taught with simulation, and 34 in the control group, taught with a case scenario), consisting of all students enrolled in the Fundamentals of Nursing Laboratory course during the Spring 2016 and Fall 2016 semesters at a college of nursing in Oman. Ethical clearance was obtained from the Institutional Review Board (IRB) of the college of nursing, and informed consent was obtained from every participant. The study received the Dean's fund for research. Data were collected on demographic information and on knowledge and anxiety levels before and after the use of simulation (intervention group) and a case scenario (control group) for the nasogastric tube feeding procedure. The intervention was performed by four faculty members who were the core team of the course. Results were analyzed in SPSS using descriptive and inferential statistics. The majority of students in the intervention (82.9%) and control (89.9%) groups were aged 20 years or younger; 71% were female, 76.8% were from rural areas, and 65.2% had a GPA of more than 2.5. The samples in the experimental and control groups were drawn from a homogeneous population (p > 0.05). There was a significantly greater reduction of anxiety among students in the control group (t(67) = 2.418, p = 0.018) compared with the experimental group, indicating that simulation creates anxiety among novice nursing students. However, there was no significant difference in the mean knowledge scores. In conclusion, the study helps the investigators better understand the implications of using simulation in teaching skills to novice students. Whereas previous studies with students indicate better knowledge acquisition, this study revealed that simulation can increase anxiety among novice students, possibly because it is the first time they are introduced to this method of teaching.
Keywords: anxiety, knowledge, novice students, simulation
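The reported between-group result (t(67) = 2.418, p = 0.018) is consistent with an independent-samples t-test on the 34 + 35 students. The sketch below is illustrative only, not the authors' SPSS analysis; the anxiety-reduction scores are randomly generated placeholders.

```python
# Illustrative independent-samples t-test on placeholder data (not the study's dataset).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder anxiety reductions (pre-test minus post-test) for each group.
control_reduction = rng.normal(loc=5.0, scale=3.0, size=34)        # case-scenario group
intervention_reduction = rng.normal(loc=2.5, scale=3.0, size=35)   # simulation group

# Classic pooled-variance t-test; df = 34 + 35 - 2 = 67, matching the reported t(67).
t_stat, p_value = stats.ttest_ind(control_reduction, intervention_reduction, equal_var=True)
print(f"t(67) = {t_stat:.3f}, p = {p_value:.3f}")
```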
Procedia PDF Downloads 159
2386 The OQAM-OFDM System Using WPT/IWPT Replaced FFT/IFFT
Authors: Alaa H. Thabet, Ehab F. Badran, Moustafa H. Aly
Abstract:
With the rapid expansion of wireless digital communications, the demand for wireless systems that are reliable and have high spectral efficiency has also increased. The filter bank multicarrier (FBMC) scheme based on OFDM/OQAM has been recognized for its good performance in achieving high data rates. The Fast Fourier Transform (FFT) has traditionally been used to produce the orthogonal sub-carriers, but the FFT-based OFDM system suffers from drawbacks such as a high peak-to-average power ratio (PAPR) and synchronization issues. In this paper, the Wavelet Packet Transform (WPT) is used in place of the FFT and is shown to give better performance.
Keywords: OQAM-OFDM, wavelet packet transform, PAPR, FFT
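As an illustration of the substitution described in the abstract, the sketch below uses PyWavelets to synthesize a multicarrier signal with an inverse wavelet packet transform (IWPT) at the transmitter and recover the sub-carrier symbols with a forward WPT at the receiver. It is a minimal sketch, not the authors' implementation; the wavelet ('db4'), tree depth, and real-valued symbol mapping are assumptions made for the example.

```python
# Minimal WPT/IWPT multicarrier sketch with PyWavelets (illustrative; parameters are assumed).
import itertools
import numpy as np
import pywt

WAVELET, LEVEL = "db4", 3                                   # 2**LEVEL = 8 packet sub-carriers
PATHS = ["".join(p) for p in itertools.product("ad", repeat=LEVEL)]

def iwpt_modulate(symbol_blocks):
    """IWPT synthesis: place one block of real symbols on each wavelet-packet leaf."""
    wp = pywt.WaveletPacket(data=None, wavelet=WAVELET, mode="periodization")
    for path, block in zip(PATHS, symbol_blocks):
        wp[path] = np.asarray(block, dtype=float)
    return wp.reconstruct(update=False)                     # time-domain transmit signal

def wpt_demodulate(signal):
    """WPT analysis: recover the per-sub-carrier symbol blocks at the receiver."""
    wp = pywt.WaveletPacket(data=signal, wavelet=WAVELET, mode="periodization", maxlevel=LEVEL)
    return [wp[path].data for path in PATHS]

# Noiseless round trip: the orthogonal WPT inverts the IWPT, so symbols are recovered exactly.
tx = [np.random.choice([-1.0, 1.0], size=16) for _ in range(2 ** LEVEL)]
rx = wpt_demodulate(iwpt_modulate(tx))
print(max(np.max(np.abs(t - r)) for t, r in zip(tx, rx)))   # ~1e-15
```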
Procedia PDF Downloads 460
2385 Analysis of Delivery of Quad Play Services
Authors: Rahul Malhotra, Anurag Sharma
Abstract:
Fiber-based access networks can deliver performance that supports the increasing demand for high-speed connections. One of the new technologies that has emerged in recent years is the Passive Optical Network (PON). This paper aims to show the simultaneous delivery of triple play services (data, voice, and video). A comparative investigation of the suitability of various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be accommodated decreases due to an increase in the bit error rate.
Keywords: FTTH, quad play, play service, access networks, data rate
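The trade-off stated above, where higher data rates raise the bit error rate and so reduce how many users a shared PON can serve, can be illustrated with a toy link-budget model. This is a hedged sketch under assumed parameter values (transmit power, receiver responsivity, thermal-noise density, fibre and splitter losses); it is not the simulation setup used in the paper.

```python
# Toy model: BER of a thermal-noise-limited on-off-keyed receiver vs. bit rate and PON split.
# All parameter values below are illustrative assumptions, not figures from the paper.
import numpy as np
from scipy.special import erfc

def ber(received_power_w, bit_rate_bps, responsivity=0.9, noise_psd=1e-22):
    """BER = 0.5*erfc(Q/sqrt(2)); noise current grows with bandwidth ~ bit rate."""
    noise_current = np.sqrt(noise_psd * bit_rate_bps)
    q = responsivity * received_power_w / noise_current
    return 0.5 * erfc(q / np.sqrt(2))

def max_users(tx_power_dbm, bit_rate_bps, ber_target=1e-9, fiber_loss_db=4.0):
    """Largest power-of-two split ratio 1:N that still meets the BER target."""
    n = 2
    while True:
        rx_dbm = tx_power_dbm - fiber_loss_db - 10 * np.log10(n) - 1.0   # splitter + excess loss
        rx_w = 10 ** (rx_dbm / 10) * 1e-3
        if ber(rx_w, bit_rate_bps) > ber_target:
            return n // 2
        n *= 2

for rate in (1.25e9, 2.5e9, 10e9):
    print(f"{rate / 1e9:5.2f} Gb/s -> up to {max_users(3.0, rate)} users")
```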
Procedia PDF Downloads 415
2384 Utilization and Proximate Composition of Nile Tilapia, Common Carp and African Mudfish Polycultured in Fertilized Ponds
Authors: I. A. Yola
Abstract:
The impact of poultry droppings, cow dung, and rumen content on the utilization and proximate composition of Oreochromis niloticus, Clarias gariepinus, and Cyprinus carpio in a polyculture system was studied. The research was conducted over a period of 52 weeks. Poultry droppings (PD), cow dung (CD), and rumen content (RC) were applied at three levels: 30, 60, and 120 g/m²/week; 25, 50, and 100 g/m²/week; and 22, 44, and 88 g/m²/week per treatment, respectively. In the control, only a conventional feed with 40% CP was used, without manure application. The physicochemical and biological properties measured were higher in the manured ponds than in the control. The differences were statistically significant (P < 0.05) between and within treatments, with the exception of temperature, which had a combined mean of 27.90 °C. The water was consistently alkaline, with mean values of pH 6.61, transparency 22.6 cm, conductivity 35.00 µmhos/cm, dissolved oxygen 4.6 mg/l, biological oxygen demand 2.8 mg/l, and nitrate and phosphate of 0.9 mg/l and 0.35 mg/l, respectively. The three fish species increased in weight with increasing manure rate, with the highest values in the PD treatment: C. carpio recorded 340 g, O. niloticus 310 g, and C. gariepinus 280 g over the experimental period. Fish fed the supplementary diet (control) grew bigger, with the highest value for C. carpio (685 g), followed by O. niloticus (620 g) and C. gariepinus (526 g). The differences were statistically significant (P < 0.05). The results of whole-body proximate analysis indicated that the various manures and rates produced an irregular pattern in protein and ash gain per 100 g of fish body weight gain. The combined means for whole fish carcass protein, lipids, moisture, ash, and gross energy were 11.84, 2.43, 74.63, 3.00, and 109.9, respectively. The notable exceptions were significant (p < 0.05) increases in body fat and gross energy gains in all fish species, accompanied by decreases in percentage moisture as manure rates increased. Survival percentage decreased from 80% to 70%. It is recommended to use poultry droppings as manure/feed at the rate of 120 kg/ha/week for good performance in polyculture.
Keywords: organic manure, Nile tilapia, African mud fish, common carp, proximate composition
Procedia PDF Downloads 555
2383 The Effect of Empathy Training Given to Midwives on Mothers’ Satisfaction with Midwives and Their Birth Perception
Authors: Songul Aktas, Turkan Pasinlioglu, Kiymet Yesilcicek Calik
Abstract:
Introduction: An empathic approach during labor increases both the quality of care and mothers' birth satisfaction. In addition, mothers' statements of satisfaction with the midwives who assist labor contribute to a positive birth perception and to the wish to give birth vaginally again. Aim: The study aimed to investigate the effect of empathy training given to midwives on mothers' satisfaction with midwives and their birth perception. Material/Method: This experimental study was undertaken between February 2013 and January 2014 at a public hospital in Trabzon Province. The study population was composed of mothers who gave birth vaginally, and the sample comprised 222 mothers, determined by power analysis. Ethical approval and written informed consents were obtained. Mothers who were assisted by midwives during the first, second, and third stages of delivery and the first two postpartum hours were included. The empathy training given to the midwives included didactic narration, creative drama, and psychodrama techniques, and lasted 32 hours. Data were collected before the empathy training (BET), right after the empathy training (RAET), and eight weeks later after birth (8WLAB). Mothers were homogeneous in terms of socio-demographic and obstetric characteristics. Data were collected with a questionnaire and analyzed with Chi-square tests. Findings: The rate of mothers' satisfaction with midwives was 36.5% in BET, 81.1% in RAET, and 75.7% in 8WLAB. Key expressions of mothers' satisfaction with midwives were as follows: 27.6% of mothers said that midwives were "smiling-kind" in BET, 39.6% in RAET, and 33.7% in 8WLAB; 31% said that midwives were "understanding" in BET, 38.2% in RAET, and 33.7% in 8WLAB; 15.7% said that midwives were "reassuring" in BET, 44.9% in RAET, and 39.3% in 8WLAB; 19.5% said that midwives were "encouraging and motivating" in BET and 39.8% in RAET; and 19.8% said that midwives were "informative" in BET, 45.6% in RAET, and 35.1% in 8WLAB (p < 0.05). Key expressions of mothers' dissatisfaction with midwives were as follows: 55.3% of mothers said that midwives were "poorly-informed" in BET, 17% in RAET, and 27.7% in 8WLAB; 56.9% said that midwives were "poorly-listening" in BET, 17.6% in RAET, and 25.5% in 8WLAB; 53.2% said that midwives were "judgmental-embarrassing" in BET, 17% in RAET, and 29.8% in 8WLAB; and 56.2% said that midwives had "fierce facial expressions" in BET, 15.6% in RAET, and 28.1% in 8WLAB. The rates of mothers' perception that labor was "easy" were 8.1% in BET, 21.6% in RAET, and 13.5% in 8WLAB, and the rates of mothers' perception that labor was "very difficult and tiring" were 41.9% in BET, 5.4% in RAET, and 13.5% in 8WLAB (p < 0.05). Conclusion: The effect of empathy training given to midwives on statements describing mothers' satisfaction with midwives and on their birth perception was positive. Note: This study was financially funded by TUBİTAK project number 113S672.
Keywords: empathy training, labor perception, mother's satisfaction with midwife, vaginal delivery
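As an illustration of the Chi-square comparisons mentioned above, the sketch below tests whether the proportion of satisfied mothers differs across the three assessment points. It is not the authors' analysis: the group sizes and the resulting counts are placeholder assumptions derived only from the reported percentages.

```python
# Hedged illustration: chi-square test of satisfaction rates across BET, RAET, and 8WLAB.
# Group sizes (74 mothers each) are assumed for the example; they are not from the study.
from scipy.stats import chi2_contingency

group_size = 74
rates = {"BET": 0.365, "RAET": 0.811, "8WLAB": 0.757}            # reported satisfaction rates
satisfied = [round(group_size * p) for p in rates.values()]
not_satisfied = [group_size - s for s in satisfied]

chi2, p_value, dof, _ = chi2_contingency([satisfied, not_satisfied])
print(f"chi2(dof={dof}) = {chi2:.2f}, p = {p_value:.4f}")
```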
Procedia PDF Downloads 370