Search results for: external flow
544 Social Licence to Operate Methodology to Secure Commercial, Community and Regulatory Approval for Small and Large Scale Fisheries
Authors: Kelly S. Parkinson, Katherine Y. Teh-White
Abstract:
Futureye has a bespoke social licence to operate (SLO) methodology which has successfully secured community approval and commercial return for fisheries facing regulatory and financial risk. This approach to fisheries management focuses on delivering improved social and environmental outcomes to support the fishing industry in taking steps towards achieving the United Nations SDGs. An SLO is the community’s implicit consent for a business or project to exist. An SLO must be earned and maintained alongside regulatory licences. In current and new operations, it helps operators anticipate and measure community concerns around their operations, leading to more predictable and sensible policy outcomes that do not jeopardise commercial returns. Rising societal expectations and increasing activist sophistication mean the international fishing industry needs to resolve community concerns at each stage of its supply chain. Futureye applied its tested SLO methodology to help Austral Fisheries, which was being targeted by activists concerned about the sustainability of Patagonian Toothfish. Austral was Marine Stewardship Council certified, but pirate fishing was making the overall catch unsustainable, and Austral wanted to become carbon neutral. An SLO provides a lens on risk that helps industries and companies act before regulatory and political risk escalates. The methodology assesses this risk and translates the assessment into a strategy:
1) Audience: we understand the drivers of change and the transmission of those drivers across all audience segments.
2) Expectation: we understand the level of social norming of changing expectations.
3) Outrage: we understand the technical and perceptual aspects of risk and the opportunities to mitigate them.
4) Inter-relationships: we understand the political, regulatory, and reputational system so that we can understand the levers of change.
5) Strategy: we understand whether the strategy will achieve a social licence by bringing internal and external stakeholders along on the journey.
Futureye’s SLO methodology helped Austral understand risks and opportunities to enhance its resilience. Futureye reviewed the issues, assessed outrage and materiality, and mapped SLO threats to the company. Austral was introduced to a new way to manage activism, climate action, and responsible consumption. As a result of Futureye’s work, Austral worked closely with Sea Shepherd, which was campaigning against pirates illegally fishing Patagonian Toothfish, as well as with international governments. In 2016 Austral launched the world’s first carbon neutral fish, which won Austral a thirteen percent premium for tender on the open market. In 2017, Austral received the prestigious Banksia Foundation Sustainability Leadership Award for seafood that is sustainable, healthy and carbon neutral. Austral’s position as a leader in sustainable development has opened doors for retailers all over the world. Futureye’s SLO methodology can identify the societal, political and regulatory risks facing fisheries and position them to proactively address the issues and become industry leaders in sustainability.
Keywords: carbon neutral, fisheries management, risk communication, social licence to operate, sustainable development
Procedia PDF Downloads 120
543 A POX Controller Module to Collect Web Traffic Statistics in SDN Environment
Authors: Wisam H. Muragaa, Kamaruzzaman Seman, Mohd Fadzli Marhusin
Abstract:
Software Defined Networking (SDN) is a new networking paradigm. It is designed to make managing, measuring, debugging and controlling the network dynamic, and to make the network suitable for modern applications. Measurement methods generally fall into two categories: active and passive. Active methods inject test packets into the network in order to monitor their behaviour (the ping tool is one example), while passive methods monitor existing traffic in order to derive measurement values. Both are useful for collecting traffic statistics and monitoring network traffic. Although there has been work on measuring traffic statistics in SDN environments, it was only meant for measuring packet and byte rates of non-web traffic. In this study, a feasible method is designed to measure the number of packets and bytes in a given time and to facilitate obtaining statistics for both web traffic and non-web traffic. Web traffic refers to application-layer HTTP requests, while non-web traffic refers to ICMP and other TCP requests. This work is therefore more comprehensive than previous works. With a module developed on the POX OpenFlow controller, information is collected from each active flow in the OpenFlow switch and presented on the Command Line Interface (CLI) and in the Wireshark interface. The statistics displayed on the CLI and in Wireshark include the protocol type, the number of bytes and the number of packets, among others. In addition, the module shows, in the same statistics list, the number of flows added to the switch whenever traffic is generated from and to hosts.
To carry out this work effectively, our Python module sends a statistics request message to the switch every five seconds, requesting its current port and flow statistics; the switch replies with the required information in a statistics reply message. POX is thus notified and updated about any change in the network within a very short time. The aim of this study is therefore to prepare a list of the important statistics elements collected from the whole network, to be used in further research; particularly research dealing with the detection of network attacks that cause a sudden rise in the number of packets and bytes, such as Distributed Denial of Service (DDoS) attacks.
Keywords: mininet, OpenFlow, POX controller, SDN
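The web/non-web statistics list the module maintains can be sketched in plain Python. This is a simplified stand-in, not the actual POX module: the flow records below are mock dictionaries playing the role of OpenFlow flow-statistics-reply entries, and the field names are illustrative.

```python
from collections import defaultdict

# Mock flow-stats records standing in for the statistics reply messages
# that the real module receives from the switch every five seconds.
flows = [
    {"proto": "TCP",  "tp_dst": 80,   "packets": 120, "bytes": 96000},
    {"proto": "TCP",  "tp_dst": 443,  "packets": 80,  "bytes": 64000},
    {"proto": "TCP",  "tp_dst": 2222, "packets": 10,  "bytes": 1500},
    {"proto": "ICMP", "tp_dst": None, "packets": 4,   "bytes": 392},
]

WEB_PORTS = {80, 443}  # HTTP/HTTPS destination ports mark web traffic

def summarise(flow_stats):
    """Aggregate flow, packet and byte counts into web vs. non-web totals."""
    totals = defaultdict(lambda: {"flows": 0, "packets": 0, "bytes": 0})
    for f in flow_stats:
        web = f["proto"] == "TCP" and f["tp_dst"] in WEB_PORTS
        cat = "web" if web else "non-web"
        totals[cat]["flows"] += 1
        totals[cat]["packets"] += f["packets"]
        totals[cat]["bytes"] += f["bytes"]
    return dict(totals)

stats = summarise(flows)
```

In the real module the same aggregation would run inside the flow-stats-received event handler, refreshing the statistics list on each five-second reply.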
Procedia PDF Downloads 235
542 Simulation of Colombian Exchange Rate to Cover the Exchange Risk Using Financial Options Like Hedge Strategy
Authors: Natalia M. Acevedo, Luis M. Jimenez, Erick Lambis
Abstract:
Imperfections in the capital market are used to argue the relevance of the corporate risk management function. With a corporate hedge, the value of the company is increased by reducing the volatility of the expected cash flow, making it possible to face lower bankruptcy costs and financial distress without sacrificing the tax advantages of debt financing. To protect the cash flows of Colombian exporting firms from exchange-rate trouble, this dissertation uses financial options on the peso-dollar exchange rate to build a financial hedge. A hedging strategy is designed for an exporting company in Colombia with the objective of preventing losses from fluctuations: if the exchange rate falls, the number of Colombian pesos the company obtains from its exports is less than agreed. The Colombian exchange rate is measured by the TRM (Representative Market Rate), the number of Colombian pesos per American dollar. First, the TRM is modelled as a Geometric Brownian Motion; with this, the price is simulated using Monte Carlo simulation, finding the mean TRM for three, six and twelve months. For the financial hedge, currency options were used. The 6-month projection was covered with European-style currency options with a strike price of $2,780.47 for each month; this value corresponds to the last value of the historical TRM. In the settlement of the options in each month, the premium paid, calculated with the Black-Scholes method for currency options, was taken into account. Finally, with the price modelling and the Monte Carlo simulation, the effect of the exchange hedge with options on the exporting company was determined by estimating the unit price at which dollars were exchanged in the scenario without coverage and the scenario with coverage.
From these scenarios it is determined that the TRM will follow a bullish trend and the exporting firm will be affected positively, since it will receive more pesos for each dollar. The results show that the financial options manage to reduce the exchange risk. The expected value with coverage is close to the expected value without coverage, but the 5% percentile with coverage is greater than without coverage. This indicates that in the worst scenarios the exporting companies will obtain better prices for the sale of their currency if they hedge.
Keywords: currency hedging, futures, geometric Brownian motion, options
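The two building blocks described above, Monte Carlo simulation of the TRM as a Geometric Brownian Motion and Black-Scholes pricing of a European currency put (the exporter's protection against a falling rate), can be sketched as follows. All parameter values here (drift, volatility, interest rates, path count) are illustrative assumptions, not the study's calibration; the currency-option variant of Black-Scholes used is the Garman-Kohlhagen form.

```python
import math
import random

def gbm_terminal(s0, mu, sigma, t, n_paths, seed=42):
    """Terminal values of a geometric Brownian motion after t years."""
    rng = random.Random(seed)
    return [
        s0 * math.exp((mu - 0.5 * sigma**2) * t
                      + sigma * math.sqrt(t) * rng.gauss(0.0, 1.0))
        for _ in range(n_paths)
    ]

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def currency_put(s, k, t, r_dom, r_for, sigma):
    """Garman-Kohlhagen premium of a European put on one unit of currency."""
    d1 = (math.log(s / k) + (r_dom - r_for + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return (k * math.exp(-r_dom * t) * norm_cdf(-d2)
            - s * math.exp(-r_for * t) * norm_cdf(-d1))

# Illustrative parameters only; strike taken from the abstract's $2,780.47
s0, strike = 2780.47, 2780.47
terminal = gbm_terminal(s0, mu=0.05, sigma=0.15, t=0.5, n_paths=20000)
mean_trm = sum(terminal) / len(terminal)
premium = currency_put(s0, strike, t=0.5, r_dom=0.05, r_for=0.02, sigma=0.15)
```

Comparing the simulated pesos received per dollar with and without the put premium deducted reproduces the covered/uncovered scenario comparison described in the abstract.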
Procedia PDF Downloads 131
541 Determination of the Vaccine Induced Immunodominant Regions of Nucleoprotein Crimean-Congo Hemorrhagic Fever Virus
Authors: Engin Berber, Nurettin Canakoglu, Ibrahim Sozdutmaz, Merve Caliskan, Shaikh Terkis Islam Pavel, Hazel Yetiskin, Aykut Ozdarendeli
Abstract:
Crimean-Congo hemorrhagic fever virus (CCHFV) is a tick-borne virus of the family Bunyaviridae, genus Nairovirus. The CCHFV genome consists of three molecules of negative-sense single-stranded RNA, each encapsidated separately. The virion contains the viral RNA polymerase (L segment), the surface glycoproteins Gn and Gc (M segment), and the nucleocapsid protein NP (S segment). CCHF is characterized by high case mortality and occurs in Asia, Africa, the Middle East and Eastern Europe. Clinical CCHF was first recognized in Turkey in 2002, and case numbers have gradually increased, making the virus a public health concern. Between 2002 and 2014, more than 8,000 CCHF cases were reported in Turkey, with a mortality rate of around 5%. Turkey is thus one of the countries where the epidemic has spread over the widest geography and where the biggest CCHF outbreaks in the world have occurred. We have recently developed an inactivated cell-culture-based vaccine against CCHF and have shown that Balb/c mice immunized with it develop high levels of neutralizing antibodies. In this study, we aimed to determine the immunodominant regions of the nucleoprotein (NP) of the CCHFV Kelkit06 strain that stimulate T cells. For this purpose, pools of overlapping NP peptides were used in an IFN-γ ELISPOT assay. Balb/c mice were divided into groups for the experiment: two groups (n = 10 each) were immunized via the intraperitoneal route with 5 or 10 μg of the cell-culture-based vaccine, and a control group (n = 6) was mock-immunized with PBS. Booster injections with the same formulation were given on days 21 and 42 after the first immunization. Higher reactivity against CCHFV NP pools 31-40 and 80-90 was determined in the two dose groups.
To analyze the vaccine-induced T cell responses in Balb/c mice immunized with varying doses of the vaccine, we are also currently examining CD4+, CD8+ and CD3+ T cells by flow cytometry.
Keywords: Crimean-Congo hemorrhagic fever virus, immunodominant regions of NP, T cell response, vaccine
Procedia PDF Downloads 346
540 Thermodynamic Performance of a Low-Cost House Coated with Transparent Infrared Reflective Paint
Authors: Ochuko K. Overen, Edson L. Meyer
Abstract:
Uncontrolled heat transfer between the inner and outer space of low-cost housing through the thermal envelope results in indoor thermal discomfort. As a consequence, an excessive amount of energy is consumed for space heating and cooling. The thermo-optical properties of paints determine their ability to reduce the rate of heat transfer through the thermal envelope. The aim of this study is to analyze the thermal performance of a low-cost house whose walls' inner surfaces are coated with transparent infrared reflective paint. The thermo-optical properties of the paint were analyzed using Scanning Electron Microscopy/Energy Dispersive X-ray spectroscopy (SEM/EDX), Fourier Transform Infrared (FTIR) spectroscopy and a thermal photographic technique. Meteorological indoor and ambient parameters, such as air temperature, relative humidity, solar radiation, wind speed and wind direction, of a low-cost house in the Golf-course settlement, South Africa, were monitored. The monitoring period covers both winter and summer, before and after coating. The thermal performance of the coated walls was evaluated using the time lag and the decrement factor. The SEM image shows that the coat is transparent to light. The presence of Al as Al2O and other elements was revealed by the EDX spectrum. Before coating, the average decrement factor of the walls in summer was found to be 0.773 with a corresponding time lag of 1.3 hours; in winter, the average decrement factor and corresponding time lag were 0.467 and 1.6 hours, respectively. After coating, the average decrement factor and corresponding time lag in summer were 0.533 and 2.3 hours, respectively; in winter, an average decrement factor of 1.120 and a corresponding time lag of 3 hours were observed. These findings show that the performance of the coats is influenced by the seasons.
With a 74% reduction in decrement factor and a 1.4-hour increase in time lag in winter, the coatings have more ability to retain heat within the inner space of the house than to prevent heat flow into the house. In conclusion, the results show that transparent infrared reflective paint can reduce the propagation of heat flux through building walls. It can therefore serve as a remedy for the poor thermal performance of low-cost housing in South Africa.
Keywords: energy efficiency, decrement factor, low-cost housing, paints, rural development, thermal comfort, time lag
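The two evaluation metrics used above can be illustrated with synthetic daily temperature waves: the decrement factor is the ratio of the indoor to outdoor temperature amplitude, and the time lag is the delay between their peaks. The sinusoidal profiles below are illustrative stand-ins, not the measured data.

```python
import math

HOURS = [h * 0.25 for h in range(96)]  # one day at 15-minute resolution

def temp_wave(mean, amplitude, peak_hour, hours):
    """Sinusoidal daily temperature wave peaking at peak_hour."""
    return [mean + amplitude * math.cos(2 * math.pi * (h - peak_hour) / 24.0)
            for h in hours]

# Outdoor wave, and an indoor wave damped and delayed by the wall
outdoor = temp_wave(mean=20.0, amplitude=8.0, peak_hour=14.0, hours=HOURS)
indoor = temp_wave(mean=20.0, amplitude=4.0, peak_hour=16.0, hours=HOURS)

def decrement_factor(inner, outer):
    """Ratio of inner to outer daily temperature swing."""
    return (max(inner) - min(inner)) / (max(outer) - min(outer))

def time_lag(inner, outer, hours):
    """Hours between the outdoor and indoor temperature peaks."""
    return hours[inner.index(max(inner))] - hours[outer.index(max(outer))]

f = decrement_factor(indoor, outdoor)    # 0.5 for these synthetic waves
phi = time_lag(indoor, outdoor, HOURS)   # 2.0 hours for these waves
```

Applied to the monitored wall-surface temperature series, the same two functions yield the seasonal values reported in the abstract.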
Procedia PDF Downloads 284
539 An Object-Oriented Modelica Model of the Water Level Swell during Depressurization of the Reactor Pressure Vessel of the Boiling Water Reactor
Authors: Rafal Bryk, Holger Schmidt, Thomas Mull, Ingo Ganzmann, Oliver Herbst
Abstract:
Prediction of the two-phase water mixture level during fast depressurization of the Reactor Pressure Vessel (RPV) resulting from an accident scenario is an important issue from the viewpoint of reactor safety. Since the level swell may influence the behavior of some passive safety systems, it has been recognized that an assumption which at first may be considered conservative does not necessarily lead to a conservative result. This paper discusses outcomes obtained during simulations of the water dynamics and heat transfer during sudden depressurization of a vessel filled up to a certain level with liquid water under saturation conditions, with the rest of the vessel occupied by saturated steam. When the pressure decreases, e.g. due to a main steam line break, the liquid water evaporates abruptly, thereby causing strong transients in the vessel. These transients, and the sudden emergence of void in the region initially occupied by liquid, cause elevation of the two-phase mixture. In this work, several models calculating the water collapse and swell levels are presented and validated against experimental data. Each of the models uses a different approach to calculate the void fraction. The object-oriented models were developed with the Modelica modelling language and the OpenModelica environment. The models represent the RPV of the Integral Test Facility Karlstein (INKA) – a dedicated test rig for simulation of KERENA, a new Boiling Water Reactor design by Framatome. The models are based on dynamic mass and energy equations. They are divided into several dynamic volumes, in each of which the fluid may be single-phase liquid, steam, or a two-phase mixture. The heat transfer between the wall of the vessel and the fluid is taken into account. An additional heat flow rate may be applied to the first volume of the vessel in order to simulate the decay heat of the reactor core in a similar manner as at INKA.
The comparison of the simulation results against the reference data shows good agreement.
Keywords: boiling water reactor, level swell, Modelica, RPV depressurization, thermal-hydraulics
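The kinematic relation linking the collapse and swell levels, common to all the void-fraction models compared above, can be sketched in a few lines. This is a minimal illustration assuming a uniform average void fraction in the mixture region (the paper's models compute the void fraction itself in different ways), written in Python rather than Modelica for a self-contained example: conserving the liquid volume, A*z_collapsed = A*z_mix*(1 - alpha).

```python
def swell_level(collapsed_level_m, mean_void_fraction):
    """Two-phase mixture (swell) level from the collapsed liquid level.

    Conserving liquid volume, A*z_collapsed = A*z_mix*(1 - alpha),
    where alpha is the average void fraction of the mixture region.
    """
    if not 0.0 <= mean_void_fraction < 1.0:
        raise ValueError("void fraction must be in [0, 1)")
    return collapsed_level_m / (1.0 - mean_void_fraction)

# Illustrative values: 4 m of collapsed liquid, 30% average void after flashing
z_mix = swell_level(4.0, 0.30)  # mixture level rises above 4 m
```

The sensitivity of the predicted level to the void-fraction model is immediate from this relation, which is why each vessel volume's void-fraction closure matters for the passive safety system behaviour discussed above.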
Procedia PDF Downloads 210
538 Impact of α-Adrenoceptor Antagonists on Biochemical Relapse in Men Undergoing Radiotherapy for Localised Prostate Cancer
Authors: Briohny H. Spencer, Russ Chess-Williams, Catherine McDermott, Shailendra Anoopkumar-Dukie, David Christie
Abstract:
Background: Prostate cancer is the second most common cancer diagnosed in men worldwide and the most prevalent in Australian men. In 2015, approximately 18,000 new cases of prostate cancer were estimated to be diagnosed in Australia. Currently, for localised disease, androgen deprivation therapy (ADT) and radiotherapy are a major part of the curative management of prostate cancer. ADT reduces the levels of circulating androgens, primarily testosterone and the locally produced androgen dihydrotestosterone (DHT), or prevents the subsequent activation of the androgen receptor; thus, the growth of the cancerous cells can be slowed or halted. Radiation techniques such as brachytherapy (radiation delivered directly to the prostate by transperineal implant) and external beam radiation therapy (exposure to a sufficient dose of radiation aimed at eradicating malignant cells) are also commonly used in the treatment of this condition. Radiotherapy (RT) has significant limitations, including reduced effectiveness against malignant cells in hypoxic microenvironments, leading to radio-resistance and poor clinical outcomes, as well as significant side effects for patients. Alpha1-adrenoceptor antagonists are used by many prostate cancer patients to control lower urinary tract symptoms, which may arise from progression of the disease itself or as an adverse effect of the radiotherapy. In Australia, a significant number (though not a majority) of patients receive an α1-ADR antagonist; four drugs are available: prazosin, terazosin, alfuzosin and tamsulosin. There is currently limited published data on the effects of α1-ADR antagonists during radiotherapy, but it suggests these medications may improve patient outcomes by enhancing the effect of radiotherapy. Aim: To determine the impact of α1-ADR antagonist treatments on time to biochemical relapse following radiotherapy.
Methods: A retrospective study of male patients receiving radiotherapy for biopsy-proven localised prostate cancer was undertaken to compare cancer outcomes for drug-naïve patients and those receiving α1-ADR antagonist treatments. Ethical approval for the collection of data at Genesis CancerCare QLD was obtained, and biochemical relapse (defined by a PSA rise of > 2 ng/mL above the nadir) was recorded in months. Rates of biochemical relapse, prostate-specific antigen doubling time (PSADT) and Kaplan-Meier survival curves were also compared. Treatment groups were those receiving α1-ADR antagonists before or concurrent with their radiotherapy. Data were statistically analysed using one-way ANOVA and results expressed as mean ± standard deviation. Major findings: The mean times to biochemical relapse for tamsulosin, prazosin, alfuzosin and controls were 45.3±17.4 (n=36), 41.5±19.6 (n=11), 29.3±6.02 (n=6) and 36.5±17.6 (n=16) months, respectively. Tamsulosin and prazosin, but not alfuzosin, delayed time to biochemical relapse, although the differences were not statistically significant. Conclusion: Preliminary data for the prior and/or concurrent use of tamsulosin and prazosin showed a positive trend towards delaying time to biochemical relapse, although no statistical significance was shown. Larger clinical studies are indicated; with thousands of patient records yet to be analysed, they may determine whether these drugs have a significant effect on the control of prostate cancer.
Keywords: alpha1-adrenoceptor antagonists, biochemical relapse, prostate cancer, radiotherapy
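The one-way ANOVA reported above can be reconstructed from the published summary statistics alone. This is a sketch assuming the quoted means, SDs and group sizes are final (the study itself presumably worked from the raw relapse times), using only the standard sums-of-squares decomposition.

```python
# Summary statistics reported above: (mean months, SD, n)
groups = {
    "tamsulosin": (45.3, 17.4, 36),
    "prazosin":   (41.5, 19.6, 11),
    "alfuzosin":  (29.3, 6.02, 6),
    "control":    (36.5, 17.6, 16),
}

def one_way_anova_from_summary(groups):
    """F statistic of a one-way ANOVA from group means, SDs and sizes."""
    k = len(groups)
    n_total = sum(n for _, _, n in groups.values())
    grand_mean = sum(m * n for m, _, n in groups.values()) / n_total
    # Between-group sum of squares from the group means
    ss_between = sum(n * (m - grand_mean) ** 2 for m, _, n in groups.values())
    # Within-group sum of squares from the group SDs
    ss_within = sum((n - 1) * sd ** 2 for _, sd, n in groups.values())
    df_between, df_within = k - 1, n_total - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

f_stat, df1, df2 = one_way_anova_from_summary(groups)
```

The resulting F of roughly 2.0 on (3, 65) degrees of freedom lies below the 5% critical value of about 2.75, consistent with the reported lack of statistical significance.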
Procedia PDF Downloads 374
537 Finite Element Analysis of Mini-Plate Stabilization of Mandible Fracture
Authors: Piotr Wadolowski, Grzegorz Krzesinski, Piotr Gutowski
Abstract:
The aim of the presented investigation is to identify possible mechanical issues of the mini-plate connection used to treat mandible fractures and to assess the impact of different factors on the stresses and displacements within the bone-stabilizer system. The mini-plate osteosynthesis technique is a common type of internal fixation using metal plates connected to the fractured bone parts by a set of screws. Two types of plate application methodology used by maxillofacial surgeons were investigated in this work; the patterns differ in the location and number of plates. The bone geometry was modeled on the basis of computed tomography scans of a hospitalized patient, taken just after mini-plate application. Solid volume geometry consisting of cortical and cancellous bone was created from the obtained point cloud. The temporomandibular joint and the muscle system were simulated to imitate the behavior of the real masticatory system. Finite element meshing and analysis were performed with ANSYS software. To simulate realistic connection behavior, nonlinear contact conditions were used between the connecting elements and the bones. The influence of initial compression of the connected bone parts, or of a gap between them, was analyzed. Nonlinear material properties of the bone tissues and an elastic-plastic model of the titanium alloy were used. Three load cases were investigated, assuming a force of magnitude 100 N acting on the left molars, the right molars, or the incisors. The stress distribution within the connecting plate shows that compression of the bone parts in the connection results in high stress concentration in the plate and the screws; however, the maximum stress levels do not exceed the yield limit of the material (titanium). There are no significant differences between the negative-offset (gap) and no-offset conditions. The location of the external force influences the magnitude of stresses around both the plate and the bone parts.
The two-plate system generally gives lower von Mises stress under the same loading than the one-plate approach. The von Mises stress distribution within the cortical bone shows a reduction of the high-stress field for the cases without compression (neutral initial contact). For initial prestressing, there is a visible, significant stress increase around the fixing holes of the bottom mini-plate due to the assembly stress. This local stress concentration may be a cause of bone destruction in those regions. The performed calculations show that the bone-mini-plate system is able to properly stabilize the fractured mandible bone. There is a visible strong dependency between the mini-plate location and the stress distribution within the stabilizer structure and the surrounding bone tissue. The results (stresses within the bone tissues and within the devices, relative displacements of the bone parts at the interface) corresponding to different models of the connection provide a basis for mechanical optimization of mini-plate connections. The results of the numerical simulations were compared to clinical observations. They provide information helpful for better understanding of load transfer in the mandible with the stabilizer and for improving stabilization techniques.
Keywords: finite element modeling, mandible fracture, mini-plate connection, osteosynthesis
Procedia PDF Downloads 247
536 Study of Mixing Conditions for Different Endothelial Dysfunction in Arteriosclerosis
Authors: Sara Segura, Diego Nuñez, Miryam Villamil
Abstract:
In this work, we studied the microscale interaction of foreign substances with blood inside an artificial transparent artery system representing medium and small muscular arteries. The artery system had channels ranging from 75 μm to 930 μm and was fabricated from glass and transparent polymer blends such as phenylbis(2,4,6-trimethylbenzoyl)phosphine oxide, poly(ethylene glycol) and PDMS, so that it could be monitored in real time. The setup comprised a computer-controlled precision micropump and a high-resolution optical microscope capable of tracking fluids with fast capture. Observation and analysis were performed with real-time software that reconstructs the fluid dynamics, determining the flow velocity, injection dependency, turbulence and rheology. All experiments were carried out with fully computer-controlled equipment. Interactions between substances such as water, serum (0.9% sodium chloride and electrolyte at a ratio of 4 ppm) and blood cells were studied at resolutions as fine as 400 nm, and the analysis was performed using frame-by-frame observation and HD video capture. These observations lead us to understand the fluid and mixing behavior of the substance of interest in the blood stream and shed light on the use of implantable devices for drug delivery in arteries with different endothelial dysfunctions. Several substances were tested in the artificial artery system. Initially, Milli-Q water was used as a control substance to study the basic fluid dynamics of the system. Serum and other low-viscosity substances were then pumped into the system in the presence of other liquids to study mixing profiles and behaviors. Finally, mammal blood was used for the final test while serum was injected. Different flow conditions, pumping rates, and time rates were evaluated to determine the optimal mixing conditions.
Our results suggest the use of very finely controlled microinjection, at an approximate rate of 135.000 μm³/s, for better mixing profiles in the administration of drugs inside arteries.
Keywords: artificial artery, drug delivery, microfluidics dynamics, arteriosclerosis
Procedia PDF Downloads 295
535 Crack Size and Moisture Issues in Thermally Modified vs. Native Norway Spruce Window Frames: A Hygrothermal Simulation Study
Authors: Gregor Vidmar, Rožle Repič, Boštjan Lesar, Miha Humar
Abstract:
The study investigates the impact of cracks in surface coatings on moisture content (MC) and related fungal growth in window frames made of thermally modified (TM) and native Norway spruce, using hygrothermal simulations for Ljubljana, Slovenia. Comprehensive validation against field test data confirmed the numerical model's predictions, showing similar trends in MC changes over the four years investigated. Established mould growth models (isopleth, VTT, bio-hygrothermal) did not appropriately reflect the differences between the spruce types because they do not consider the material's moisture content; accounting for MC leads to the main conclusion that TM spruce is more resistant to moisture-related issues. Wood's MC influences fungal decomposition, which typically occurs above 25%-30% MC, although some fungi grow at lower MC under conducive conditions. Surface coatings cannot wholly prevent water penetration, which becomes significant when the coating is damaged. This study therefore investigates the detrimental effect of surface coating cracks on wood moisture absorption, comparing TM and native spruce window frames. Simulations were conducted for undamaged and damaged coatings (cracks from 1 mm to 9 mm wide) on window profiles, as well as for uncoated profiles. Sorption curves were also measured up to 95% relative humidity. MC was measured in frames exposed to actual climatic conditions and compared to simulated data for model validation. The study uses a simplified model of the bottom frame part, due to convergence issues with simulations of the whole frame. TM spruce showed about 4% lower MC than native spruce. Simulations showed that a 3 mm wide crack in the coating of native spruce in the north orientation poses significant moisture risks, while a 9 mm wide crack in the coating of TM spruce remains acceptable; even uncoated TM spruce could be acceptable.
In addition, it appears that large enough cracks may cause even worse moisture dynamics than uncoated native spruce profiles. The sorption curve turns out to be by far the most influential parameter, followed by density. Existing mould growth models need to be upgraded to reflect differences between wood materials accurately. Due to the lower sorption curve of TM spruce, higher RH values are in reality obtained under the same boundary conditions, which implies a more critical situation according to these mould growth models; yet this does not reflect the difference in materials, especially under external exposure conditions. Even if different substrate categories in the isopleth and bio-hygrothermal models, or different material sensitivity classes for standard and TM wood, are used, the expected trends do not necessarily change; thus, models in which MC is an inherent part should be introduced. Orientation plays a crucial role in moisture dynamics: the results show that, for similar moisture dynamics in Norway spruce, the crack could be about 2 mm wider on the south than on the north side. In contrast, for TM spruce, orientation is less important than other material properties. The study confirms the enhanced suitability of TM spruce for window frames in terms of moisture resistance and tolerance of cracks in surface coatings.
Keywords: hygrothermal simulations, mould growth, surface coating, thermally modified wood, window frame
Procedia PDF Downloads 36
534 Predictive Analytics for Theory Building
Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim
Abstract:
Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) for a single person or unit. It applies empirical methods from statistics, operations research, and machine learning to predict future or otherwise unknown events or outcomes for a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate the hypothesized metabolic syndrome predictors (theory testing). A proposed theoretical model is formed from causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can play a vital role in explanatory studies, i.e., in scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how to use predictive analytics to support theory building (i.e., hypothesis generation). For this purpose, the study utilized a big-data predictive analytics platform based on a co-occurrence graph. A co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where the items in a basket are fully connected. A cluster is a collection of fully connected items, where the specific group of items has co-occurred in several rows of a data set. Clusters can be ranked using importance metrics such as node size (number of items), frequency, and surprise (observed frequency vs. expected), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge numbers of transactions can be represented and processed efficiently.
For demonstration, a total of 13,254 metabolic syndrome training records were fed into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors, for example those associated with sociodemographics, habits, and activities. Some, such as cancer examination, house type, and vaccination, were intentionally included to gain insight into the platform's variable selection. The platform automatically generates plausible hypotheses (rules) without statistical modeling. The rules were then validated with an external testing dataset of 4,090 observations. The results, as a kind of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation; a set of rules (many estimated equations, from a statistical perspective), as in this study, may instead capture heterogeneity in a population (i.e., different subpopulations with unique features aggregated together). The next step of theory development, theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of the phenomenon of interest. If the generated hypotheses are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods that utilize a subset of the observations, such as bootstrap resampling with an appropriate sample size.
Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building
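The ranking machinery of such a co-occurrence graph can be sketched in a few lines of Python. The rows below are toy transactions with made-up item names, not the metabolic syndrome data; "surprise" is computed as the observed co-occurrence frequency divided by the frequency expected if the two items occurred independently.

```python
from collections import Counter
from itertools import combinations

# Toy transactions standing in for discretised predictor values per record
rows = [
    {"high_bp", "smoker", "male"},
    {"high_bp", "smoker", "female"},
    {"high_bp", "smoker", "male"},
    {"normal_bp", "male"},
    {"normal_bp", "female", "smoker"},
]

n = len(rows)
# Node weights: how often each item occurs
item_count = Counter(item for row in rows for item in row)
# Arc weights: how often each unordered pair co-occurs in a row
pair_count = Counter(frozenset(p) for row in rows
                     for p in combinations(sorted(row), 2))

def surprise(pair):
    """Observed co-occurrence frequency vs. frequency expected
    under independence of the two items."""
    a, b = tuple(pair)
    observed = pair_count[frozenset(pair)] / n
    expected = (item_count[a] / n) * (item_count[b] / n)
    return observed / expected

# Rank arcs by surprise, the metric that flags candidate hypotheses
ranked = sorted(pair_count, key=surprise, reverse=True)
```

Extending the same counting from pairs to fully connected item groups yields the clusters described above, rankable by size, frequency, or surprise.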
Procedia PDF Downloads 276

533 Access to Inclusive and Culturally Sensitive Mental Healthcare in Pharmacy Students and Residents
Authors: Esha Thakkar, Ina Liu, Kalynn Hosea, Shana Katz, Katie Marks, Sarah Hall, Cat Liu, Suzanne Harris
Abstract:
Purpose: Inequities in mental healthcare accessibility are cited as an international public health concern by the World Health Organization (WHO) and National Alliance on Mental Illness (NAMI). These disparities are further exacerbated in racial and ethnic minority groups and are especially concerning in health professional training settings such as Doctor of Pharmacy (PharmD) programs and postgraduate residency training where mental illness rates are high. The purpose of the study was to determine baseline access to culturally sensitive mental healthcare and how to improve such access and communication for racially and ethnically minoritized pharmacy students and residents at one school of pharmacy and a partnering academic medical center in the United States. Methods: This IRB-exempt study included 60-minute focus groups conducted in person or online from November 2021 to February 2022. Eligible participants included PharmD students in their first (P1), second (P2), third (P3), or fourth year (P4) or pharmacy residents completing a postgraduate year 1 (PGY1) or PGY2 who identify as Black, Indigenous, or Person of Color (BIPOC). There were four core theme questions asked during the focus groups to lead the discussion, specifically on the core themes of personal barriers, identities, areas that are working well, and areas for improvement. Participant responses were transcribed and analyzed using an open coding system with two individual reviews, followed by collaborative and intentional discussion and, as needed, an external audit of the coding by a third research team member to reach a consensus on themes. Results: This study enrolled 26 participants, with eight P1, five P2, seven P3, two P4, and four resident participants. 
Within the four core themes of barriers, identities, areas working well, and areas for improvement, emerging subthemes included: lack of time, access to resources, and stigma under barriers; lack of representation, cultural and family stigma, and gender identities under identity barriers; supportive faculty, sense of community, and a culture supporting paid time off under areas going well; and wellness days, reduced workload, and diversity of the workforce under areas for improvement. Subthemes sometimes varied within a core theme depending on the participant year. Conclusions: There is a gap in the literature in addressing barriers and disparities in mental health access for pharmacy trainees who identify as BIPOC. We identified key findings with regard to barriers, identities, areas going well, and areas for improvement that can inform the School and the Residency Program in two priority initiatives, well-being and diversity, equity, and inclusion, in creating actionable recommendations for trainees, program directors, and employers at our institutions. The findings also have the potential to provide insight for other organizations about the structures influencing access to culturally sensitive care for BIPOC trainees, and can inform organizations on how to continue building communication with those who identify as BIPOC and improve access to care.
Keywords: mental health, disparities, minorities, wellbeing, identity, communication, barriers
Procedia PDF Downloads 92

532 Production Factor Coefficients Transition through the Lens of State Space Model
Authors: Kanokwan Chancharoenchai
Abstract:
Economic growth can be considered an important element of a country’s development process. For developing countries like Thailand to ensure continuous economic growth, the Thai government usually implements various policies to stimulate the economy. These may take the form of fiscal, monetary, trade, and other policies. Because of these different aspects, understanding the factors relating to economic growth could allow the government to introduce a proper plan for future economic stimulus schemes. Consequently, this issue has caught the interest of not only policymakers but also academics. This study, therefore, investigates explanatory variables for economic growth in Thailand from 2005 to 2017, a total of 52 quarters. The findings contribute to the field of economic growth and provide helpful information to policymakers. The investigation is estimated through a production function with a non-linear Cobb-Douglas equation. The rate of growth is indicated by the change of GDP in natural logarithmic form. The relevant factors included in the estimation cover the three traditional means of production and implicit effects, such as human capital, international activity, and technological transfer from developed countries. In addition, this investigation takes internal and external instabilities into account, as proxied by an unobserved inflation estimate and the real effective exchange rate (REER) of the Thai baht, respectively. The unobserved inflation series is obtained from an AR(1)-ARCH(1) model, while the unobserved REER of the Thai baht is obtained from a naive OLS-GARCH(1,1) model. According to the empirical results, the AR(|2|) equation, which includes seven significant variables, namely capital stock, labor, imports of capital goods, trade openness, REER uncertainty of the Thai baht, one-period-lagged GDP, and a dummy for the 2009 world financial crisis, presents the most suitable model.
The autoregressive model assumes constant coefficients, which could introduce bias. This is not the case for the recursive coefficient model derived from the state space framework, which allows coefficients to transition over time. The state space model thus reveals the productivity, or effect, of each significant factor in more detail. The state coefficients are estimated based on the AR(|2|) specification, with the exception of the one-period-lagged GDP and the 2009 world financial crisis dummy. The findings shed light on the fact that these factors appear to have remained stable over time since the occurrence of the world financial crisis, together with the political situation in Thailand; these two events could have lowered confidence in the Thai economy. Moreover, the state coefficients highlight the sluggish rate of machinery replacement and the rather low technology of capital goods imported from abroad. The Thai government should apply proactive policies, for instance via taxation and specific credit policies, to improve technological advancement. Another interesting piece of evidence is the issue of trade openness, which shows a negative transition effect along the sample period. This could be explained by the loss of price competitiveness to imported goods, especially under the widespread implementation of free trade agreements. The Thai government should carefully handle regulations and investment incentive policy by focusing on strengthening small and medium enterprises.
Keywords: autoregressive model, economic growth, state space model, Thailand
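The recursive (time-varying) coefficient idea behind a state space model can be illustrated with a scalar Kalman filter. This is a generic sketch of the technique, not the authors' estimation code; the noise variances and data below are illustrative assumptions.

```python
def recursive_coefficient(y, x, q=0.01, r=1.0):
    """Kalman-filter estimate of a time-varying coefficient beta_t in
    y_t = beta_t * x_t + e_t, where the state follows a random walk
    beta_t = beta_{t-1} + w_t (state noise variance q, observation
    noise variance r)."""
    beta, p = 0.0, 1e6          # diffuse prior on the coefficient
    path = []
    for yt, xt in zip(y, x):
        p = p + q                           # predict the state variance
        k = p * xt / (xt * xt * p + r)      # Kalman gain
        beta = beta + k * (yt - xt * beta)  # update with the innovation
        p = (1.0 - k * xt) * p
        path.append(beta)
    return path

# illustrative data with a constant true coefficient of 2, so the
# filtered path should settle near 2
xs = [float(i) for i in range(1, 21)]
ys = [2.0 * v for v in xs]
path = recursive_coefficient(ys, xs)
```

With a genuinely time-varying coefficient, `path` would trace its transition, which is the feature the study exploits to detect the post-crisis stability of the production factors.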
Procedia PDF Downloads 151

531 Prototype of Over Dimension Over Loading (ODOL) Freight Transportation Monitoring System Based on Arduino Mega 'Sabrang': A Case Study in Klaten, Indonesia
Authors: Chairul Fajar, Muhammad Nur Hidayat, Muksalmina
Abstract:
The issue of Over Dimension Over Loading (ODOL) in Indonesia remains a significant challenge, causing traffic accidents, disrupting traffic flow, accelerating road damage, and potentially leading to bridge collapses. Klaten Regency, located on the slopes of Mount Merapi along the Woro River in Kemalang District, has potential Class C excavation materials such as sand and stone. Data from the Klaten Regency Transportation Department indicates that ODOL violations account for 72%, while non-violating vehicles make up only 28%. ODOL involves modifying factory-standard vehicles beyond the limits specified in the Type Test Registration Certificate (SRUT) to save costs and travel time. This study aims to develop a prototype ‘Sabrang’ monitoring system based on Arduino Mega to control and monitor ODOL freight transportation in the mining of Class C excavation materials in Klaten Regency. The prototype is designed to automatically measure the dimensions and weight of objects using a microcontroller. The data analysis techniques used in this study include the Normality Test and Paired T-Test, comparing sensor measurement results on scaled objects. The study results indicate differences in measurement validation under room temperature and ambient temperature conditions. Measurements at room temperature showed that the majority of H0 was accepted, meaning there was no significant difference in measurements when the prototype tool was used. Conversely, measurements at ambient temperature showed that the majority of H0 was rejected, indicating a significant difference in measurements when the prototype tool was used. In conclusion, the ‘Sabrang’ monitoring system prototype is effective for controlling ODOL, although measurement results are influenced by temperature conditions. 
This study is expected to assist in the monitoring and control of ODOL, thereby enhancing traffic safety and protecting road infrastructure.
Keywords: over dimension over loading, prototype, microcontroller, Arduino, normality test, paired t-test
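The paired t-test used to compare the prototype's sensor readings against reference measurements on the same objects can be computed directly. This is a generic sketch of the statistic, not the study's analysis code; the sample values in the example are invented.

```python
import math

def paired_t(sample_a, sample_b):
    """Paired t statistic for matched samples, e.g. prototype sensor
    readings versus reference values on the same objects."""
    diffs = [a - b for a, b in zip(sample_a, sample_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n), n - 1               # t statistic, df

# invented readings: prototype vs. reference on four test objects
t_stat, df = paired_t([10.0, 12.0, 9.0, 11.0], [9.0, 11.0, 9.0, 10.0])
```

The t statistic is then compared against the critical value for n-1 degrees of freedom (about 3.182 for df = 3 at a two-sided α of 0.05) to decide whether H0, no difference between prototype and reference measurements, is rejected.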
Procedia PDF Downloads 35

530 Evaluating and Supporting Student Engagement in Online Learning
Authors: Maria Hopkins
Abstract:
Research on student engagement is founded on a desire to improve the quality of online instruction in both course design and delivery. A high level of student engagement is associated with a wide range of educational practices, including purposeful student-faculty contact, peer-to-peer contact, active and collaborative learning, and positive factors such as student satisfaction, persistence, achievement, and learning. By encouraging student engagement, institutions of higher education can have a positive impact on student success that leads to retention and degree completion. The current research presents the results of an online student engagement survey which supports faculty teaching practices that maximize the learning experience for online students. The ‘Indicators of Engaged Learning Online’ provide a framework that measures the level of student engagement. Social constructivism and collaborative learning form the theoretical basis of the framework. Social constructivist pedagogy acknowledges the social nature of knowledge and of its creation in the minds of individual learners. Important themes that flow from social constructivism involve collaboration among instructors and students, active learning versus passive consumption of information, a learning environment that is learner- and learning-centered and promotes multiple perspectives, and the use of social tools in the online environment to construct knowledge. The results of the survey indicated themes that emphasized the importance of: interaction among peers and faculty (collaboration); timely feedback on assignments/assessments; faculty participation and visibility; relevance and real-world application (in terms of assignments, activities, and assessments); and motivation/interest (the need for faculty to motivate students, especially those who may not have an interest in the coursework per se).
The qualitative aspect of this student engagement study revealed what instructors did well that made students feel engaged in the course, but also what instructors did not do well, which could inform recommendations to faculty when expectations for teaching a course are reviewed. Furthermore, this research provides evidence for the connection between higher student engagement and persistence and retention in online programs, which supports our rationale for encouraging student engagement, especially in the online environment, where attrition rates are higher than in the face-to-face environment.
Keywords: instructional design, learning effectiveness, online learning, student engagement
Procedia PDF Downloads 290

529 Limbic Involvement in Visual Processing
Authors: Deborah Zelinsky
Abstract:
The retina filters millions of incoming signals into a smaller number of exiting optic nerve fibers that travel to different portions of the brain. Most of the signals are for eyesight (called "image-forming" signals). However, there are other, faster signals that travel elsewhere and are not directly involved with eyesight (called "non-image-forming" signals). This article centers on the neurons of the optic nerve connecting to parts of the limbic system. Eye care providers currently look at parvocellular and magnocellular processing pathways without realizing that those are part of an enormous "galaxy" of all the body systems. Lenses modify both non-image-forming and image-forming pathways, taking A.M. Skeffington's seminal work one step further. Almost 100 years ago, he described the "Where am I" (orientation), "Where is It" (localization), and "What is It" (identification) pathways. Now, among others, there is a "How am I" (animation) and a "Who am I" (inclination, motivation, imagination) pathway. Classic eye testing considers pupils and often assesses posture and motion awareness, but classical prescriptions often overlook limbic involvement in visual processing. The limbic system is composed of the hippocampus, amygdala, hypothalamus, and anterior nuclei of the thalamus. The optic nerve's limbic connections arise from the intrinsically photosensitive retinal ganglion cells (ipRGCs) through the retinohypothalamic tract (RHT). There are two main hypothalamic nuclei with direct photic inputs: the suprachiasmatic nucleus and the paraventricular nucleus. Other hypothalamic nuclei connected with retinal function, including mood regulation, appetite, and glucose regulation, are the supraoptic nucleus and the arcuate nucleus. The RHT is often overlooked when we prescribe eyeglasses. Each person is different, but the lenses we choose influence this fast processing, which affects each patient's aiming and focusing abilities.
These signals arise from the ipRGCs, which were discovered only 20+ years ago, and do not address the campana retinal interneurons, which were discovered only 2 years ago. As eyecare providers, we are unknowingly altering such factors as lymph flow, glucose metabolism, appetite, and sleep cycles in our patients. It is important to know what we are prescribing as visual processing evaluations expand beyond 20/20 central eyesight.
Keywords: neuromodulation, retinal processing, retinohypothalamic tract, limbic system, visual processing
Procedia PDF Downloads 85

528 Development of a Turbulent Boundary Layer Wall-pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm
Authors: Zachary Huffman, Joana Rocha
Abstract:
Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The sound emitted can be represented by the pressure fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those of Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS equations for the pressure fluctuation and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol’yakov, and Rackl and Weston models) were derived by modifying these early models or from physical principles. Overall, these models have had varying levels of accuracy; in general, they are most accurate under the specific Reynolds and Mach numbers they were developed for, while being less accurate under other flow conditions. Despite this, research into alternative methods for deriving the models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R, and TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through the process of feature selection), and it is computationally faster than machine learning.
The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations
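The study's model was built with a stepwise algorithm in R; as a language-agnostic illustration of the same idea, the following sketch performs greedy forward selection, at each step adding the predictor that most reduces the residual sum of squares. All function names and data are assumptions for the example; a production implementation would also apply an entry/exit criterion such as an F-test or AIC to guard against the overfitting risk noted above.

```python
def ols_sse(x_cols, y):
    """Residual sum of squares from an OLS fit of y on the given
    predictor columns plus an intercept, via the normal equations."""
    cols = [[1.0] * len(y)] + x_cols
    k = len(cols)
    # A = X'X, b = X'y
    A = [[sum(u * v for u, v in zip(cols[i], cols[j])) for j in range(k)]
         for i in range(k)]
    b = [sum(u * yi for u, yi in zip(cols[i], y)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for i in range(k):
        piv = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    fitted = (sum(beta[j] * cols[j][t] for j in range(k)) for t in range(len(y)))
    return sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))

def forward_stepwise(features, y, min_improvement=1e-6):
    """Greedy forward selection: repeatedly add the feature that most
    reduces the residual SSE, stopping when no feature helps."""
    selected, remaining = [], list(features)
    best_sse = ols_sse([], y)
    while remaining:
        trial = min(remaining,
                    key=lambda f: ols_sse([features[g] for g in selected + [f]], y))
        sse = ols_sse([features[g] for g in selected + [trial]], y)
        if best_sse - sse <= min_improvement:
            break
        selected.append(trial)
        remaining.remove(trial)
        best_sse = sse
    return selected, best_sse

# invented example: y depends on x1 only, so 'noise' should be filtered out
features = {"x1": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
            "noise": [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]}
y = [2.0 * v + 1.0 for v in features["x1"]]
selected, sse = forward_stepwise(features, y)
```

The same feature-selection behaviour is what lets the study's algorithm discard uncorrelated boundary-layer parameters automatically.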
Procedia PDF Downloads 135

527 Effectiveness of an Intervention to Increase Physics Students' STEM Self-Efficacy: Results of a Quasi-Experimental Study
Authors: Stephanie J. Sedberry, William J. Gerace, Ian D. Beatty, Michael J. Kane
Abstract:
Increasing the number of US university students who attain degrees in STEM and enter the STEM workforce is a national priority. Demographic groups vary in their rates of participation in STEM, and the US produces just 10% of the world’s science and engineering degrees (2014 figures). To address these gaps, we have developed and tested a practical, 30-minute, single-session classroom-based intervention to improve students’ self-efficacy and academic performance in University STEM courses. Self-efficacy is a psychosocial construct that strongly correlates with academic success. Self-efficacy is a construct that is internal and relates to the social, emotional, and psychological aspects of student motivation and performance. A compelling body of research demonstrates that university students’ self-efficacy beliefs are strongly related to their selection of STEM as a major, aspirations for STEM-related careers, and persistence in science. The development of an intervention to increase students’ self-efficacy is motivated by research showing that short, social-psychological interventions in education can lead to large gains in student achievement. Our intervention addresses STEM self-efficacy via two strong, but previously separate, lines of research into attitudinal/affect variables that influence student success. The first is ‘attributional retraining,’ in which students learn to attribute their successes and failures to internal rather than external factors. The second is ‘mindset’ about fixed vs. growable intelligence, in which students learn that the brain remains plastic throughout life and that they can, with conscious effort and attention to thinking skills and strategies, become smarter. Extant interventions for both of these constructs have significantly increased academic performance in the classroom. 
We developed a 34-item questionnaire (Likert scale) to measure STEM self-efficacy, perceived academic control, and growth mindset in a university STEM context, and validated it with exploratory factor analysis, Rasch analysis, and multi-trait multi-method comparison to coded interviews. Four iterations of our 42-week research protocol were conducted across two academic years (2017-2018) at three different universities in North Carolina, USA (UNC-G, NC A&T SU, and NCSU) with varied student demographics. We utilized a quasi-experimental prospective multiple-group time series research design with both experimental and control groups, and we are employing linear modeling to estimate the impact of the intervention on self-efficacy, growth mindset, perceived academic control, and final course grades (performance measure). Preliminary results indicate statistically significant effects of treatment vs. control on self-efficacy, growth mindset, and perceived academic control. Analyses are ongoing, and final results are pending. This intervention may have the potential to increase student success in the STEM classroom, and students' ownership of that success, encouraging them to continue in a STEM career. Additionally, we have learned a great deal about the complex components and dynamics of self-efficacy, their link to performance, and the ways they can be impacted to improve students' academic performance.
Keywords: academic performance, affect variables, growth mindset, intervention, perceived academic control, psycho-social variables, self-efficacy, STEM, university classrooms
Procedia PDF Downloads 127

526 Estimation of Small Hydropower Potential Using Remote Sensing and GIS Techniques in Pakistan
Authors: Malik Abid Hussain Khokhar, Muhammad Naveed Tahir, Muhammad Amin
Abstract:
Energy demand has increased manifold due to increasing population, urban sprawl, and rapid socio-economic development. Low water capacity in dams for continuation of hydrological power, land cover, and land use are the key parameters creating problems for energy production. The overall installed hydropower capacity of Pakistan is more than 35,000 MW, whereas Pakistan is producing up to 17,000 MW against a requirement of more than 22,000 MW, resulting in a shortfall of 5,000-7,000 MW. Therefore, there is a dire need to develop small hydropower to fulfill upcoming requirements. In this regard, excessive rainfall and snow-fed, fast-flowing perennial tributaries and streams in the northern mountain regions of Pakistan offer gigantic hydropower potential throughout the year. Rivers flowing in KP (Khyber Pakhtunkhwa) province, GB (Gilgit Baltistan), and AJK (Azad Jammu & Kashmir) possess sufficient water availability for rapid energy growth. Against this backdrop, small hydropower plants are believed to be very suitable measures for a greener environment and a sustainable power option for the development of such regions. The aim of this study is to estimate hydropower potential sites for small hydropower plants and the stream distribution per the stream network available in the basins of the study area. The proposed methodology focuses on site selection of maximum hydropower potential for hydroelectric generation using the well-established GIS-based hydrological runoff model SWAT on the Neelum, Kunhar, and Dor River basins. For validation of the results, the NDWI will be computed to show water concentration in the study area while overlaid on a geospatially enhanced DEM. This study presents analysis of basins, watersheds, stream links, and flow directions with slope elevation to assess hydropower potential to meet the increasing demand for electricity by installing small hydropower stations.
Later on, this study may also benefit other adjacent regions in estimating site selection for the installation of such small power plants.
Keywords: energy, stream network, basins, SWAT, evapotranspiration
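The NDWI validation step mentioned above is a simple band ratio. A minimal sketch, assuming green and near-infrared reflectance bands supplied as plain nested lists (a real workflow would operate on raster arrays, but the index is computed per pixel in exactly this way):

```python
def ndwi(green, nir):
    """McFeeters' Normalized Difference Water Index:
    (Green - NIR) / (Green + NIR). Open water reflects green light and
    absorbs near-infrared, so water pixels score positive."""
    return (green - nir) / (green + nir)

def water_mask(green_band, nir_band, threshold=0.0):
    """Flag pixels whose NDWI exceeds the threshold as water."""
    return [[ndwi(g, n) > threshold for g, n in zip(g_row, n_row)]
            for g_row, n_row in zip(green_band, nir_band)]

# invented reflectance values: first pixel water-like, second land-like
mask = water_mask([[0.3, 0.1]], [[0.1, 0.3]])
```

Overlaying such a mask on the DEM highlights perennial water concentration along the delineated stream network.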
Procedia PDF Downloads 221

525 Design of a Plant to Produce 100,000 MTPY of Green Hydrogen from Brine
Authors: Abdulrazak Jinadu Otaru, Ahmed Almulhim, Hassan Alhassan, Mohammed Sabri
Abstract:
Saudi Arabia is host to the state-owned oil and gas corporation Saudi Aramco, which is responsible for the highest emissions of carbon dioxide (CO₂) due to the heavy reliance on fossil fuels as an energy source for sectors such as transportation, aerospace, manufacturing, and residential use. Unfortunately, the detrimental consequences of CO₂ emissions include escalating temperatures in the Middle East region, posing significant obstacles in terms of food security and water scarcity for the Kingdom of Saudi Arabia. As part of the Saudi Vision 2030 initiative, which aims to reduce the country's reliance on fossil fuels by 50%, this study focuses on designing a plant that will produce approximately 100,000 metric tons per year (MTPY) of green hydrogen (H₂) using brine as the primary feedstock. The proposed facility incorporates a double electrolytic technology that first separates brine (aqueous sodium chloride, NaCl) into sodium hydroxide, hydrogen gas, and chlorine gas. The sodium hydroxide is then used as an electrolyte in the splitting of water molecules, through the supply of electrical energy in a second-stage electrolyser, to produce further green hydrogen. The study encompasses a comprehensive analysis of process descriptions and flow diagrams, as well as material and energy balances. It also includes equipment design and specification, cost analysis, and considerations for safety and environmental impact. The design capitalizes on the abundant brine supply, a byproduct of the world's largest desalination plant, located in Al Jubail, Saudi Arabia. Additionally, the design incorporates available renewable energy sources, such as solar and wind power, to power the proposed plant. This approach not only helps reduce carbon emissions but also aligns with Saudi Arabia's energy transition policy.
Furthermore, it supports the United Nations Sustainable Development Goals on Sustainable Cities and Communities (Goal 11) and Climate Action (Goal 13), benefiting not only Saudi Arabia but also other countries in the Middle East.
Keywords: plant design, electrolysis, brine, sodium hydroxide, chlorine gas, green hydrogen
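As a rough plausibility check on the plant scale, stoichiometry and Faraday's law give the order of magnitude of the salt feed and electricity demand. The cell voltage and the simplification of charging all hydrogen to the chlor-alkali stoichiometry are illustrative assumptions for this sketch, not design values from the study:

```python
F = 96485.0      # Faraday constant, C/mol
M_H2 = 2.016     # molar mass of H2, g/mol
M_NACL = 58.44   # molar mass of NaCl, g/mol

def plant_requirements(h2_tons_per_year, cell_voltage=2.0):
    """Order-of-magnitude salt feed and electricity demand.
    Chlor-alkali stoichiometry: 2 NaCl + 2 H2O -> 2 NaOH + Cl2 + H2,
    i.e. two moles of NaCl per mole of first-stage H2; each mole of H2
    requires two moles of electrons (2F of charge) at the cell voltage."""
    mol_h2 = h2_tons_per_year * 1e6 / M_H2       # metric tons -> g -> mol
    nacl_tons = mol_h2 * 2.0 * M_NACL / 1e6      # upper-bound salt feed
    energy_j = mol_h2 * 2.0 * F * cell_voltage   # E = n * 2F * V
    return nacl_tons, energy_j / 3.6e12          # J -> GWh

nacl_tons, energy_gwh = plant_requirements(100000.0)
```

At an assumed 2.0 V per cell this works out to roughly 53 kWh per kg of H₂, in line with typical electrolyser energy intensities, which is why renewable supply at gigawatt-hour scale is central to the design.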
Procedia PDF Downloads 48

524 Quantification of Dispersion Effects in Arterial Spin Labelling Perfusion MRI
Authors: Rutej R. Mehta, Michael A. Chappell
Abstract:
Introduction: Arterial spin labelling (ASL) is an increasingly popular perfusion MRI technique in which arterial blood water is magnetically labelled in the neck before flowing into the brain, providing a non-invasive measure of cerebral blood flow (CBF). The accuracy of ASL CBF measurements, however, is hampered by dispersion effects: the distortion of the ASL labelled bolus during its transit through the vasculature. In spite of this, the current recommended implementation of ASL, the 'white paper' (Alsop et al., MRM, 73.1 (2015): 102-116), does not account for dispersion, which introduces errors into CBF estimates. Given that the transport time from the labelling region to the tissue, the arterial transit time (ATT), depends on the region of the brain and the condition of the patient, it is likely that these errors will also vary with the ATT. In this study, various dispersion models are assessed in comparison with the white paper (WP) formula for CBF quantification, enabling the errors introduced by the WP to be quantified. Additionally, this study examines the relationship between the errors associated with the WP and the ATT, and how this is influenced by dispersion. Methods: Data were simulated using the standard model for pseudo-continuous ASL, along with various dispersion models, and then quantified using the formula in the WP. The ATT was varied from 0.5 s to 1.3 s, and the errors associated with noise artefacts were computed in order to define the concept of significant error. The instantaneous slope of the error was also computed as an indicator of the sensitivity of the error to fluctuations in ATT. Finally, a regression analysis was performed to obtain the mean error against ATT. Results: An error of 20.9% was found to be comparable to that introduced by typical measurement noise. The WP formula was shown to introduce errors exceeding 20.9% for ATTs beyond 1.25 s, even when dispersion effects were ignored.
Using a Gaussian dispersion model, a mean error of 16% was introduced by using the WP, and a dispersion threshold of σ=0.6 was determined, beyond which the error was found to increase considerably with ATT. The mean error ranged from 44.5% to 73.5% when other physiologically plausible dispersion models were implemented, and the instantaneous slope varied from 35 to 75 as dispersion levels were varied. Conclusion: It has been shown that the WP quantification formula holds only within an ATT window of 0.5 to 1.25 s, and that this window gets narrower as dispersion occurs. Provided that dispersion levels fall below the threshold evaluated in this study, however, the WP can measure CBF with reasonable accuracy if dispersion is correctly modelled by the Gaussian model. Substantial errors were observed with the other common dispersion models at dispersion levels similar to those that have been observed in the literature.
Keywords: arterial spin labelling, dispersion, MRI, perfusion
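For reference, the WP single-compartment quantification formula being assessed can be written down directly. This sketch assumes the consensus-recommended parameter values as defaults; dispersion is not modelled, which is precisely the limitation the study quantifies:

```python
import math

def wp_cbf(delta_m, si_pd, pld=1.8, tau=1.8, t1b=1.65, alpha=0.85, lam=0.9):
    """Single-compartment CBF formula from the ASL consensus 'white
    paper' (Alsop et al., 2015) for pseudo-continuous labelling,
    returning CBF in ml/100g/min. delta_m is the label-control
    difference signal and si_pd the proton-density reference signal;
    pld is the post-labelling delay (s), tau the label duration (s),
    t1b the T1 of arterial blood (s), alpha the labelling efficiency,
    and lam the blood-brain partition coefficient (ml/g).
    The formula assumes an ideal, undispersed bolus."""
    return (6000.0 * lam * delta_m * math.exp(pld / t1b)
            / (2.0 * alpha * t1b * si_pd * (1.0 - math.exp(-tau / t1b))))

cbf = wp_cbf(delta_m=0.01, si_pd=1.0)
```

Because the exponential decay correction uses a single nominal PLD rather than the true, dispersed arrival of label, the error in `cbf` grows as the ATT departs from the assumed window, as the results above quantify.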
Procedia PDF Downloads 372

523 A First-Principles Molecular Dynamics Study on Li+ Solvation Structures in THF/MTHF Containing Electrolytes for Lithium Metal Batteries
Authors: Chiu-Neng Su, Santhanamoorthi Nachimuthu, Jyh-Chiang Jiang
Abstract:
In lithium-ion batteries (LIBs), the solid-electrolyte interphase (SEI) layer, which forms on the anode surface, plays a crucial role in stabilizing battery performance. Over the past two decades, efforts to enhance LIB electrolytes have primarily focused on refining the quality of SEI components. Despite these endeavors, several observed phenomena remain inadequately explained, and the SEI layer insufficiently improved. Consequently, there has been a significant surge in research interest regarding the behavior of electrolyte solvation structures as a means to elucidate improvements in battery performance. Thus, in this study, we explored the solvation structures of LiPF₆ in a mixture of the organic solvents tetrahydrofuran (THF) and 2-methyl-tetrahydrofuran (MTHF) using ab-initio molecular dynamics (AIMD) simulations. Our work investigated the solvation structure of electrolytes with different salt concentrations, a low-concentration electrolyte (1.0 M LiPF₆ in a 1:1 v/v mixture of THF and MTHF) and a high-concentration electrolyte (2.0 M LiPF₆ in a 1:1 v/v mixture of THF and MTHF), and compared them with a conventional electrolyte (1.0 M LiPF₆ in a 1:1 v/v mixture of ethylene carbonate (EC) and dimethyl carbonate (DMC)). Furthermore, the reduction stability of the Li⁺ solvation structures in these electrolyte systems is investigated. It is found that the first solvation shell of Li⁺ primarily consists of THF. We also analyzed the molecular orbital energy levels to understand the reduction stability of these solvents. Compared with the solvation sheath of the commercial electrolyte, the THF/MTHF-containing electrolytes have a higher lowest unoccupied molecular orbital (LUMO) energy level, resulting in improved reduction and interface stability. It has been shown that a Li-Al alloy can significantly improve cycle life and promote the formation of a dense SEI layer. Therefore, this study also constructs the solvation structures obtained from calculations of the pure electrolyte system on the surface of the Al-Li alloy.
Additionally, AIMD simulations will be conducted to investigate chemical reactions at the interface and elucidate the composition of the SEI layer formed. Furthermore, Bader charges are used to determine the origin and flow of electrons, thereby revealing the sequence of reduction reactions that generate the SEI layer.
Keywords: lithium, aluminum, alloy, battery, solvation structure
Procedia PDF Downloads 23

522 Counting Fishes in Aquaculture Ponds: Application of Imaging Sonars
Authors: Juan C. Gutierrez-Estrada, Inmaculada Pulido-Calvo, Ignacio De La Rosa, Antonio Peregrin, Fernando Gomez-Bravo, Samuel Lopez-Dominguez, Alejandro Garrocho-Cruz, Jairo Castro-Gutierrez
Abstract:
Semi-intensive aquaculture in traditional earth ponds is the main rearing system in Southern Spain. These fish rearing systems account for approximately two thirds of aquatic production in this area, which has made a significant contribution to the regional economy in recent years. In this type of rearing system, a crucial aspect is the correct quantification and control of fish abundance in the ponds, because the fish farmer knows how many fish he puts in the ponds but not how many he will harvest at the end of the rearing period. This is a consequence of mortality induced by different causes, such as pathogens (parasites, viruses, and bacteria) and other factors such as predation by fish-eating birds and poaching. Tracking fish abundance in these installations is very difficult because the ponds usually occupy a large area of land and the management of the water flow is not automated. Therefore, there is a very high degree of uncertainty about fish abundance, which strongly hinders the management and planning of sales. A novel and non-invasive procedure to count fish in the ponds is by means of imaging sonars, particularly fixed systems and/or systems mounted on aquatic vehicles such as Remotely Operated Vehicles (ROVs). In this work, a method based on census station procedures is proposed to evaluate the accuracy of fish abundance estimation using images obtained with multibeam sonars. The results indicate that it is possible to obtain a realistic approximation of the number of fish, their sizes, and therefore the biomass contained in the ponds.
This research is included in the framework of the KTTSeaDrones Project (‘Conocimiento y transferencia de tecnología sobre vehículos aéreos y acuáticos para el desarrollo transfronterizo de ciencias marinas y pesqueras 0622-KTTSEADRONES-5-E’) financed by the European Regional Development Fund (ERDF) through the Interreg V-A Spain-Portugal Programme (POCTEP) 2014-2020. Keywords: census station procedure, fish biomass, semi-intensive aquaculture, multibeam sonars
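The census-station extrapolation underlying the abstract can be illustrated with a minimal sketch. All numbers and the helper function below are hypothetical, not taken from the KTTSeaDrones study: fish counted in sonar images at a few sampling stations are converted to densities and extrapolated to the whole pond.

```python
# Hypothetical sketch of the census-station idea: fish detected in sonar
# images at a few fixed stations are extrapolated to the full pond.
def estimate_pond_abundance(station_counts, station_volume_m3, pond_volume_m3):
    """Extrapolate the mean fish density at sampled stations to the pond."""
    densities = [c / station_volume_m3 for c in station_counts]
    mean_density = sum(densities) / len(densities)
    return mean_density * pond_volume_m3

# Three stations each sampling 50 m3 of a 4000 m3 earth pond (illustrative)
counts = [12, 9, 15]  # fish detected per sonar image
estimate = estimate_pond_abundance(counts, station_volume_m3=50,
                                   pond_volume_m3=4000)
```

In practice the per-station counts would come from detections in the multibeam sonar imagery rather than being supplied by hand.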
Procedia PDF Downloads 229
521 ESRA: An End-to-End System for Re-identification and Anonymization of Swiss Court Decisions
Authors: Joel Niklaus, Matthias Sturmer
Abstract:
The publication of judicial proceedings is a cornerstone of many democracies. It makes the court system accountable by ensuring that justice is administered in accordance with the law. Equally important is privacy, as a fundamental human right (Article 12 of the Universal Declaration of Human Rights). Therefore, it is important that the parties (especially minors, victims, or witnesses) involved in these court decisions be anonymized securely. Today, the anonymization of court decisions in Switzerland is performed either manually or semi-automatically using primitive software. While much research has been conducted on anonymization for tabular data, the literature on anonymization for unstructured text documents is thin and virtually non-existent for court decisions. In 2019, it was shown that manual anonymization is not secure enough: in 21 of 25 attempted Swiss federal court decisions related to pharmaceutical companies, the pharmaceuticals and legal parties involved could be manually re-identified. This was achieved by linking the decisions with external databases using regular expressions. An automated re-identification system serves as an automated test for the safety of existing anonymizations and thus promotes the right to privacy. Manual anonymization is very expensive (recurring annual costs of over CHF 20M in Switzerland alone, according to an estimation). Consequently, many Swiss courts publish only a fraction of their decisions. An automated anonymization system reduces these costs substantially, freeing capacity to publish court decisions much more comprehensively. For the re-identification system, topic modeling with Latent Dirichlet Allocation is used to cluster over 500K Swiss court decisions into meaningful related categories.
A comprehensive knowledge base with publicly available data (such as social media, newspapers, government documents, geographical information systems, business registers, online address books, obituary portals, web archives, etc.) is constructed to serve as an information hub for re-identifications. For the actual re-identification, a general-purpose language model is fine-tuned on the respective part of the knowledge base for each category of court decisions separately. The input to the model is the court decision to be re-identified, and the output is a probability distribution over named entities constituting possible re-identifications. For the anonymization system, named entity recognition (NER) is used to recognize the tokens that need to be anonymized. Since the focus lies on Swiss court decisions in German, a corpus of Swiss legal texts will be built for training the NER model. The recognized named entities are replaced by the category determined by the NER model and an identifier to preserve context. This work is part of an ongoing research project conducted by an interdisciplinary research consortium. Both a legal analysis and the implementation of the proposed system design ESRA will be performed within the next three years. This study introduces the system design of ESRA, an end-to-end system for re-identification and anonymization of Swiss court decisions. Firstly, the re-identification system tests the safety of existing anonymizations and thus promotes privacy. Secondly, the anonymization system substantially reduces the costs of manual anonymization of court decisions and thus introduces a more comprehensive publication practice. Keywords: artificial intelligence, courts, legal tech, named entity recognition, natural language processing, privacy, topic modeling
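The clustering step the abstract describes can be sketched in a few lines. This is an illustrative miniature using scikit-learn, not the authors' pipeline; the four toy "decisions" and the topic count are assumptions standing in for the 500K-document corpus.

```python
# Minimal sketch of clustering court decisions into related categories
# with Latent Dirichlet Allocation (toy corpus; not the ESRA pipeline).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

decisions = [
    "The accused was convicted of tax fraud by the cantonal court",
    "The pharmaceutical company appealed the patent ruling",
    "Custody of the minor was awarded to the mother",
    "The patent for the drug compound was declared invalid",
]

# Bag-of-words representation of the decisions
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(decisions)

# Fit LDA with a small topic count; a real 500K-decision corpus would
# use many more topics and a far larger vocabulary
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)

# Each decision is assigned to its most probable topic (its category)
categories = doc_topics.argmax(axis=1)
```

The resulting category assignment is what would route each decision to the category-specific fine-tuned language model described above.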
Procedia PDF Downloads 148
520 Electrical Tortuosity across Electrokinetically Remediated Soils
Authors: Waddah S. Abdullah, Khaled F. Al-Omari
Abstract:
Electrokinetic remediation is one of the most effective methods for decontaminating polluted soils. Electroosmosis and electromigration are the processes by which contaminants are electrochemically extracted from soils, and the driving force behind both is the voltage gradient. Therefore, the electric field distribution throughout the soil domain is extremely important to investigate, in order to determine the factors that help establish a uniform electric field distribution and make the clean-up process work properly and efficiently. In this study, small passive electrodes (made of graphite) were placed at predetermined locations within the soil specimen, and the voltage drop between these passive electrodes was measured in order to observe the electrical distribution throughout the tested soil specimens. The electrokinetic test was conducted on two types of soil: a sandy soil and a clayey soil. The electrical distribution throughout the soil domain was measured under different test conditions, and the electric field distribution was mapped in three dimensions. The effects of density, applied voltage, and degree of saturation on the electrical distribution within the remediated soil were investigated. The distributions of moisture content, sodium ion concentration, and calcium ion concentration were also established in a three-dimensional scheme. The study has shown that the electrical conductivity within the soil domain depends on the moisture content and the concentration of electrolytes present in the pore fluid. The distribution of the electric field in the saturated soil was found not to be affected by its density. The study has also shown that a high voltage gradient leads to a non-uniform electric field distribution within the electroremediated soil.
Very importantly, it was found that even when the electric field distribution is uniform globally (i.e., between the passive electrodes), local non-uniformity can be established within the remediated soil mass. Cracks or air gaps formed due to temperature rise (because of electric flow in low-conductivity regions) promote electrical tortuosity. Thus, fracturing or cracking in the remediated soil mass disconnects the electric current, and hence no removal of contaminants occurs within these areas. Keywords: contaminant removal, electrical tortuosity, electromigration, electroosmosis, voltage distribution
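The passive-electrode measurement described above amounts to computing local voltage gradients between adjacent electrodes. The sketch below is purely illustrative (the readings and spacing are hypothetical, not data from this study): equal gradients indicate a uniform field, while an outlier flags a local high-resistance, tortuous region.

```python
# Illustrative sketch: local voltage gradients (V/cm) between adjacent
# passive electrodes along a soil specimen (hypothetical readings).
def local_gradients(voltages, spacing_cm):
    """Voltage gradient between each pair of adjacent electrodes."""
    return [(voltages[i] - voltages[i + 1]) / spacing_cm
            for i in range(len(voltages) - 1)]

# Voltages at passive electrodes 5 cm apart, from anode toward cathode
readings = [30.0, 24.0, 20.0, 8.0, 2.0]  # volts (hypothetical)
grads = local_gradients(readings, spacing_cm=5.0)
# A globally uniform field would give equal gradients; the steep
# 20 V -> 8 V drop marks a locally non-uniform (tortuous) region
```

Mapping such gradients at many electrode locations in three dimensions is what reveals the local non-uniformity the study reports.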
Procedia PDF Downloads 421
519 Mechanical, Thermal and Biodegradable Properties of Bioplast-Spruce Green Wood Polymer Composites
Authors: A. Atli, K. Candelier, J. Alteyrac
Abstract:
Environmental and sustainability concerns push industries to manufacture alternative materials with less environmental impact. Wood Plastic Composites (WPCs) produced by blending biopolymers and natural fillers not only permit tailoring the desired material properties but also meet environmental and sustainability requirements. This work presents the elaboration and characterization of fully green WPCs prepared by blending a biopolymer, BIOPLAST® GS 2189, with spruce sawdust used as filler in different amounts. Since both components are bio-based, the resulting material is entirely environmentally friendly. The mechanical, thermal and structural properties of these WPCs were characterized by different analytical methods such as tensile, flexural and impact tests, Thermogravimetric Analysis (TGA), Differential Scanning Calorimetry (DSC) and X-ray Diffraction (XRD). Their water absorption properties and resistance to termite and fungal attacks were determined in relation to the wood filler content. The tensile and flexural moduli of the WPCs increased with increasing amount of wood filler in the biopolymer, but the WPCs became more brittle compared to the neat polymer. Incorporation of spruce sawdust modified the thermal properties of the polymer: the degradation, cold crystallization, and melting temperatures shifted to higher values when spruce sawdust was added. The termite, fungal and water absorption resistance of the WPCs decreased with increasing wood content, but the composites remained in durability class 1 (durable) for fungal resistance and were rated 1 (attempted attack) in the visual rating for termite resistance, except for the WPC with the highest wood content (30 wt%), which was rated 2 (slight attack), indicating long-term durability.
All the results showed the possibility of elaborating easily injectable composite materials with adjustable properties by combining BIOPLAST® GS 2189 and spruce sawdust. Therefore, lightweight WPCs allow both the recycling of wood industry byproducts and the production of a fully ecological material. Keywords: biodegradability, color measurements, durability, mechanical properties, melt flow index, MFI, structural properties, thermal properties, wood-plastic composites, WPCs
Procedia PDF Downloads 137
518 An Evaluation of the Lae City Road Network Improvement Project
Authors: Murray Matarab Konzang
Abstract:
The Lae Port Development Project, the Four-Lane Highway and other developments in the extraction industry that have a direct road link to Lae City are predicted to have a significant impact on its road network system. This paper evaluates the Lae roads improvement program, with forecasts on planning, economics and the installation of bypasses to ease congestion, provide effective and convenient transport service for bulk goods, and reduce travel time. A land-use transportation study and plans for a local area traffic management scheme will be considered. City roads face increasing traffic volumes with inadequate pavement widths, poor transport plans, and insufficient facilities to meet this transportation demand. Lae also has a drainage system that might not withstand a 100-year flood. Proper evaluation, planning, design and intersection analysis are needed to assess the road network system and thus recommend improvements and estimate future growth. Repetitive and cyclic loading by heavy commercial vehicles with different axle configurations weakens and tears the flexible pavement surface, so that small cracks occur; rainwater seeps through, and over time potholes form. Effective planning starts from experimental research and appropriate design standards that enable firm embankments, proper drains and quality pavement materials. This paper addresses traffic problems as well as road pavement condition, intersection capacities, and pedestrian flow during peak hours. The outcome of this research will be to identify heavily trafficked road sections and to recommend treatments to reduce traffic congestion, road classification, and proposals for bypass routes and improvement. The first part of this study describes transport and traffic related problems within the city.
The second part identifies the challenges imposed by traffic and road related problems, and the third part recommends solutions after analyzing traffic data indicating the current capacities of road intersections, leading finally to recommended treatments for improvement and future growth. Keywords: Lae, road network, highway, vehicle traffic, planning
Procedia PDF Downloads 358
517 Development of a Feedback Control System for a Lab-Scale Biomass Combustion System Using Programmable Logic Controller
Authors: Samuel O. Alamu, Seong W. Lee, Blaise Kalmia, Marc J. Louise Caballes, Xuejun Qian
Abstract:
The application of combustion technologies for the thermal conversion of biomass and solid wastes to energy has long been a major solution for the effective handling of wastes. Lab-scale biomass combustion systems have been observed to be economically viable and socially acceptable, but major concerns are the environmental impacts of the process and deviations of the temperature distribution within the combustion chamber. Both high and low combustion chamber temperatures may affect the overall combustion efficiency and gaseous emissions. Therefore, there is an urgent need for a control system which measures the deviations of chamber temperature from set target values, sends these deviations (which generate disturbances in the system) back as a feedback input signal, and adjusts operating conditions to correct the errors. In this research study, the major components of the feedback control system were determined, assembled, and tested. In addition, control algorithms were developed to actuate operating conditions (e.g., air velocity, fuel feeding rate) using ladder logic functions embedded in a Programmable Logic Controller (PLC). The developed control algorithm, with chamber temperature as the feedback signal, was integrated into the lab-scale swirling fluidized bed combustor (SFBC) to investigate the temperature distribution at different heights of the combustion chamber under various operating conditions. The air blower rates and fuel feeding rates obtained from automatic control operations were correlated with manual inputs. There was no observable difference in the correlated results, indicating that the written PLC program functions were adequate for the experimental study of the lab-scale SFBC. The experimental results were analyzed to study the effect of air velocity in the range 222-273 ft/min and fuel feeding rate in the range 60-90 rpm on the chamber temperature.
The developed temperature-based feedback control system was shown to be adequate in controlling the airflow and the fuel feeding rate for the overall biomass combustion process, as it helps to minimize the steady-state error. Keywords: air flow, biomass combustion, feedback control signal, fuel feeding, ladder logic, programmable logic controller, temperature
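The feedback loop the abstract describes (temperature error driving air velocity and fuel feed) can be sketched as a simple proportional controller. This is an illustrative model, not the authors' ladder logic: the gains, setpoint, and update rule are assumptions, while the actuator limits (222-273 ft/min air velocity, 60-90 rpm feed rate) are the operating ranges reported in the abstract.

```python
# Illustrative proportional feedback step: the chamber-temperature error
# adjusts air velocity and fuel feeding rate (hypothetical gains).
def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def control_step(chamber_temp, setpoint, air_velocity, feed_rate,
                 k_air=0.5, k_feed=0.1):
    """One feedback iteration: the temperature error drives the actuators."""
    error = setpoint - chamber_temp  # positive -> chamber too cold
    # More fuel and less excess air when too cold, and vice versa;
    # outputs stay inside the operating ranges from the study
    air_velocity = clamp(air_velocity - k_air * error, 222.0, 273.0)
    feed_rate = clamp(feed_rate + k_feed * error, 60.0, 90.0)
    return air_velocity, feed_rate

# Chamber 40 degrees below setpoint: feed rate rises, air velocity falls
air, feed = control_step(chamber_temp=760, setpoint=800,
                         air_velocity=250, feed_rate=75)
```

In the actual system this update would be expressed as ladder logic rungs in the PLC rather than Python, with the thermocouple reading as the feedback input.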
Procedia PDF Downloads 130
516 Mobility Management for Pedestrian Accident Predictability and Mitigation Strategies Using Multiple Linear Regression
Authors: Oscar Norman Nekesa, Yoshitaka Kajita
Abstract:
Tom Mboya Street is a vital urban corridor within Nairobi city that experiences high volumes of pedestrian and vehicular traffic. Despite past intervention measures, accident rates have remained high, highlighting significant safety concerns that need urgent attention. This study investigates the correlation between significant independent variables and pedestrian accidents, and the predictability of those accidents, using a multiple linear regression model, in order to develop effective mobility management strategies for accident mitigation. The methodology involves collecting and analyzing data on pedestrian accidents and various related independent variables. Data sources include the National Transport and Safety Authority (NTSA), the Kenya National Bureau of Statistics, and Nairobi City County records, covering five years. The study examines whether traffic volumes (pedestrian and vehicular), vehicle speed, human factors, illegal parking, policy issues, urban land use, the built environment, traffic signal conditions, inadequate lighting, and insufficient traffic control measures significantly predict the rate of pedestrian accidents. Explanatory variables related to road design and geometry are significant in predictor models for the Tom Mboya Road link but less influential in the junction models along the 5 km stretch of road. The most impactful variable across all models was vehicular traffic flow. The study recommends infrastructural improvements, enhanced enforcement, and public awareness campaigns to reduce accidents and improve urban mobility. These insights can inform policy-making and urban planning to enhance pedestrian safety along the densely packed Tom Mboya Street and in similar urban settings.
The findings will inform evidence-based interventions to enhance pedestrian safety and improve urban mobility. Keywords: multiple linear regression, urban mobility, traffic management, Nairobi, Tom Mboya street, infrastructure conditions, pedestrian safety, correlation and prediction
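A multiple linear regression of the kind the study fits can be sketched with ordinary least squares. The data below are fabricated for illustration only (they are not the NTSA/Nairobi records), and only two of the study's many predictors are shown: vehicular traffic flow and vehicle speed.

```python
# Hedged sketch: multiple linear regression of pedestrian accident counts
# on traffic flow and vehicle speed (illustrative data, not the study's).
import numpy as np

# Columns: vehicular traffic flow (veh/h), mean vehicle speed (km/h)
X = np.array([
    [1200, 45],
    [1500, 50],
    [1800, 48],
    [2100, 60],
], dtype=float)
y = np.array([8.5, 10.5, 11.8, 14.5])  # annual pedestrian accidents

# Add an intercept column and solve the least-squares problem
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, b_flow, b_speed = coef

# Predicted accident rate for a new traffic condition
pred = intercept + b_flow * 1650 + b_speed * 52
```

The fitted coefficients play the role of the study's predictor models: the coefficient on traffic flow quantifies why vehicular flow emerges as the most impactful variable.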
Procedia PDF Downloads 27
515 Total Arterial Coronary Revascularization with Aorto-Bifemoral Bipopliteal Bypass: A Case Report
Authors: Nuruddin Mohammod Zahangir, Syed Tanvir Ahmady, Firoz Ahmed, Mainul Kabir, Tamjid Mohammad Najmus Sakib Khan, Nazmul Hossain, Niaz Ahmed, Madhava Janardhan Naik
Abstract:
The management of combined coronary artery disease and peripheral vascular disease is a challenge and brings with it numerous clinical dilemmas. A 56-year-old gentleman presented to our department with significant triple vessel disease, an occluded lower end of the aorta just before the bifurcation, and occluded bilateral superficial femoral arteries. The operation was performed on 11.03.14. The Left Internal Mammary Artery (LIMA) and the Right Internal Mammary Artery (RIMA) were harvested in a skeletonized manner. The free RIMA was then anastomosed with the LIMA to make a LIMA-RIMA Y. Cardiopulmonary bypass was then established and coronary artery bypass grafts were performed. The LIMA was anastomosed to the Left Anterior Descending artery. The RIMA was anastomosed to the Posterior Descending Artery and the 1st and 2nd Obtuse Marginal arteries in a sequential manner. The abdomen was opened by a midline incision. The infrarenal aorta was exposed and found to be severely diseased. A vascular clamp was applied infrarenally, an aortotomy was done and a limited endarterectomy performed. An end-to-side anastomosis was made between the upper end of a PTFE synthetic Y-graft (14/7 mm) and the infrarenal aorta, and the clamp was released. Good flow was noted in both limbs of the graft. The patient was then slowly weaned off cardiopulmonary bypass without difficulty. The distal two limbs of the Y-graft were passed to the groin through retroperitoneal tunnels and anastomosed end-to-side with the common femoral arteries. Saphenous vein was interposed between the common femoral and popliteal arteries bilaterally through subfascial tunnels in both thighs. On the 12th postoperative day he was discharged from hospital in good general condition. At follow-up 3 months after the operation, the patient is doing well and is free of chest pain and claudication. Keywords: total arterial, coronary revascularization, aorto-bifemoral bypass, bifemoro-bipopliteal bypass
Procedia PDF Downloads 472