Search results for: Wind Energy Conversion Systems (WECS)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16754

434 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability

Authors: Chin-Chia Jane

Abstract:

In a transportation network, travel time refers to the transmission time from source node to destination node, whereas reliability refers to the probability of a successful connection from source node to destination node. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight, which is the travel time per unit of commodity going through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the required demand to the destination while the total transmission time stays under the travel time limitation. This work is pioneering: whereas the existing literature evaluates travel time reliability via a single optimization path, the proposed QoS focuses on the performance of the whole network system. To compute the QoS of transportation networks, we first transform the extended network model into an equivalent min-cost max-flow network model. In the transformed network, each original arc is assigned a travel time weight of 0. Each intermediate node is replaced by two nodes u and v and an arc directed from u to v. The newly generated nodes u and v are perfect nodes. The new directed arc has three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left. The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, the QoS can be obtained directly by summing the probabilities of these reliable subsets. Computational experiments are conducted on a benchmark network which has 11 nodes and 21 arcs. Five travel time limitations and five demand requirements are set to compute the QoS value. For comparison, we also test an exhaustive complete enumeration method. Computational results reveal that the proposed algorithm is much more efficient than the complete enumeration method. In this work, a transportation network is analyzed by an extended flow network model where each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network is an integration of customer demands, travel time, and the probability of connection. We present a decomposition algorithm to compute the QoS efficiently. Computational experiments conducted on a prototype network show that the proposed algorithm is superior to existing complete enumeration methods.
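
To make the node-splitting step concrete, the following minimal Python sketch (an illustration only, not the authors' implementation) transforms a node-weighted network of the kind described above into the equivalent arc-weighted form in which every arc carries a travel time, a capacity, and an operation probability; the toy instance and field names are assumptions.

def split_node_weights(nodes, arcs):
    # nodes: {name: {"time": t, "cap": c, "prob": p}} for intermediate nodes.
    # arcs:  {(i, j): {"cap": c, "prob": p}} for the original arcs.
    # Returns the arcs of the equivalent arc-weighted network.
    new_arcs = {}
    # Original arcs keep their capacity and probability; their travel time weight is 0.
    for (i, j), w in arcs.items():
        u = f"{i}_out" if i in nodes else i   # source/destination stay unsplit
        v = f"{j}_in" if j in nodes else j
        new_arcs[(u, v)] = {"time": 0, "cap": w["cap"], "prob": w["prob"]}
    # Each intermediate node becomes a pair of perfect nodes linked by one arc
    # carrying the node's travel time, capacity, and operation probability.
    for n, w in nodes.items():
        new_arcs[(f"{n}_in", f"{n}_out")] = {"time": w["time"],
                                             "cap": w["cap"],
                                             "prob": w["prob"]}
    return new_arcs

# Hypothetical toy instance: source s, destination t, one intermediate node a.
nodes = {"a": {"time": 2, "cap": 10, "prob": 0.95}}
arcs = {("s", "a"): {"cap": 8, "prob": 0.9}, ("a", "t"): {"cap": 6, "prob": 0.9}}
print(split_node_weights(nodes, arcs))

A min-cost max-flow routine can then be run on the transformed network at each step of the decomposition.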

Keywords: quality of service, reliability, transportation network, travel time

Procedia PDF Downloads 195
433 Embedded Semantic Segmentation Network Optimized for Matrix Multiplication Accelerator

Authors: Jaeyoung Lee

Abstract:

Autonomous driving systems require high reliability to provide people with a safe and comfortable driving experience. However, despite the development of a number of vehicle sensors, it is difficult to always provide high perception performance in driving environments that vary with time of day and season. Image segmentation using deep learning, which has recently evolved rapidly, stably provides high recognition performance in various road environments. However, since the system controls a vehicle in real time, a highly complex deep learning network cannot be used due to time and memory constraints. Moreover, efficient networks are optimized for GPU environments, which degrades their performance in embedded processor environments equipped with simple hardware accelerators. In this paper, a semantic segmentation network, the matrix multiplication accelerator network (MMANet), optimized for the matrix multiplication accelerator (MMA) on Texas Instruments digital signal processors (TI DSP), is proposed to improve the recognition performance of autonomous driving systems. The proposed method is designed to maximize the number of layers that can be performed in a limited time in order to provide reliable driving environment information in real time. First, the number of channels in the activation map is fixed to fit the structure of the MMA. By increasing the number of parallel branches, the lack of information caused by fixing the number of channels is resolved. Second, an efficient convolution is selected depending on the size of the activation. Since the MMA has a fixed structure, normal convolution may be more efficient than depthwise separable convolution depending on memory access overhead. Thus, the convolution type is decided according to the output stride to increase network depth. In addition, memory access time is minimized by processing operations only in the L3 cache. Lastly, reliable contexts are extracted using an extended atrous spatial pyramid pooling (ASPP). The suggested method gets stable features from an extended path by increasing the kernel size and accessing consecutive data. In addition, it consists of two ASPPs to obtain high-quality contexts using the restored shape without global average pooling paths, since the layer uses the MMA as a simple adder. To verify the proposed method, an experiment is conducted using perfsim, a timing simulator, and the Cityscapes validation set. The proposed network can process an image with 640 x 480 resolution in 6.67 ms, so six cameras can be used to identify the surroundings of the vehicle at 20 frames per second (FPS). In addition, it achieves 73.1% mean intersection over union (mIoU), which is the highest recognition rate among embedded networks on the Cityscapes validation set.
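
The stride-dependent choice of convolution can be illustrated with a short PyTorch sketch; this is a generic illustration of the design choice described above, not the authors' MMANet code, and the channel counts and stride threshold are assumed values.

import torch
import torch.nn as nn

def make_conv(in_ch, out_ch, output_stride, stride=1, threshold=16):
    if output_stride < threshold:
        # Early layers: large activations, memory-access overhead dominates,
        # so a single standard convolution is assumed to be cheaper on the MMA.
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
    # Deeper layers: small activations, so depthwise separable convolution
    # saves multiply-accumulates and lets the network grow deeper.
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1, groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

x = torch.randn(1, 32, 60, 80)             # e.g. a feature map at output stride 8
y = make_conv(32, 64, output_stride=8)(x)  # selects the standard convolution branch
print(y.shape)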

Keywords: edge network, embedded network, MMA, matrix multiplication accelerator, semantic segmentation network

Procedia PDF Downloads 102
432 Switchable Lipids: From a Molecular Switch to a pH-Sensitive System for the Drug and Gene Delivery

Authors: Jeanne Leblond, Warren Viricel, Amira Mbarek

Abstract:

Although several products have reached the market, gene therapeutics are still in their early stages and require optimization. It is possible to improve their limited efficiency by the use of carefully engineered vectors, able to carry the genetic material through each of the biological barriers they need to cross. In particular, getting inside the cell is a major challenge, because these hydrophilic nucleic acids have to cross the lipid-rich plasmatic and/or endosomal membrane before being degraded in lysosomes. It takes less than one hour for newly endocytosed liposomes to reach highly acidic lysosomes, meaning that the degradation of the carried gene occurs rapidly, thus limiting the transfection efficiency. We propose to use a new pH-sensitive lipid able to change its conformation upon protonation at endosomal pH values, leading to the disruption of the lipidic bilayer and thus to the fast release of the nucleic acids into the cytosol. It is expected that this new pH-sensitive mechanism promotes endosomal escape of the gene and thereby its transfection efficiency. The main challenge of this work was to design a preparation presenting fast-responding lipidic bilayer destabilization properties at endosomal pH 5 while remaining stable at blood pH and during storage. A series of pH-sensitive lipids able to perform a conformational switch upon acidification were designed and synthesized. Liposomes containing these switchable lipids, as well as co-lipids, were prepared and characterized. The liposomes were stable at 4°C and pH 7.4 for several months. Incubation with siRNA led to the full entrapment of nucleic acids as soon as the positive/negative charge ratio was greater than 2. The best liposomal formulation demonstrated a silencing efficiency up to 10% on HeLa cells, very similar to a commercial agent, with lower toxicity than the commercial agent. Using flow cytometry and microscopy assays, we demonstrated that a drop in pH was required for transfection, since bafilomycin blocked the transfection efficiency. Additional evidence was provided by the synthesis of a negative control lipid, which was unable to switch its conformation and consequently exhibited no transfection ability. Mechanistic studies revealed that the uptake was mediated through endocytosis, by clathrin and caveolae pathways, as reported for previous lipid nanoparticle systems. This potent system was used for the treatment of hypercholesterolemia. The switchable lipids were able to knock down PCSK9 expression in human hepatocytes (Huh-7). Their efficiency is currently being evaluated in an in vivo PCSK9 KO mouse model. In summary, we designed and optimized a new cationic pH-sensitive lipid for gene delivery. Its transfection efficiency is similar to that of the best available commercial agent, without the usually associated toxicity. These promising results have led to its use for the treatment of hypercholesterolemia in a mouse model. Anticancer applications and chronic pulmonary diseases are also currently being investigated.

Keywords: liposomes, siRNA, pH-sensitive, molecular switch

Procedia PDF Downloads 184
431 The Location-Routing Problem with Pickup Facilities and Heterogeneous Demand: Formulation and Heuristics Approach

Authors: Mao Zhaofang, Xu Yida, Fang Kan, Fu Enyuan, Zhao Zhao

Abstract:

Nowadays, last-mile distribution plays an increasingly important role in the overall delivery chain and accounts for a large proportion of total distribution cost. Promoting the upgrading of logistics networks and improving the layout of final distribution points has become one of the trends in the development of modern logistics. The discrete and heterogeneous nature and spatial distribution of customer demand lead to a higher delivery failure rate and lower vehicle utilization, making last-mile delivery a time-consuming and uncertain process. As a result, courier companies have introduced a range of innovative parcel storage facilities, including pick-up points and lockers. The introduction of pick-up points and lockers has not only improved the user experience but has also helped logistics and courier companies achieve economies of scale. Against the backdrop of the COVID-19 pandemic, contactless delivery has become a new hotspot, which has also created new opportunities for the development of collection services. Therefore, a key issue for logistics companies is how to design/redesign their last-mile distribution network systems to create integrated logistics and distribution networks that consider pick-up points and lockers. This paper focuses on the introduction of self-pickup facilities in new logistics and distribution scenarios and on the heterogeneous demands of customers. We consider two types of demand, ordinary products and refrigerated products, as well as the corresponding transportation vehicles. We consider the constraints associated with self-pickup points and lockers and then address the location-routing problem with self-pickup facilities and heterogeneous demands (LRP-PFHD). To solve this challenging problem, we propose a mixed integer linear programming (MILP) model that aims to minimize the total cost, which includes the facility opening cost, the variable transport cost, and the fixed transport cost. Due to the NP-hardness of the problem, we propose a hybrid adaptive large-neighbourhood search algorithm to solve LRP-PFHD. We evaluate the effectiveness and efficiency of the proposed algorithm using instances generated from benchmark instances. The results demonstrate that the hybrid adaptive large-neighbourhood search algorithm is more efficient than MILP solvers such as Gurobi for LRP-PFHD, especially for large-scale instances. In addition, we conducted a comprehensive analysis of some important parameters (e.g., facility opening cost and transportation cost) to explore their impact on the results and suggest helpful managerial insights for courier companies.
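
The adaptive operator-selection idea behind the hybrid algorithm can be sketched generically in Python; the destroy/repair operators, score values, and simulated-annealing acceptance rule below are generic assumptions used for illustration, not the authors' implementation.

import math
import random

def alns(initial_solution, cost, destroy_ops, repair_ops,
         iterations=1000, reaction=0.2, temperature=100.0, cooling=0.999):
    best = current = initial_solution
    weights = {op: 1.0 for op in destroy_ops + repair_ops}

    for _ in range(iterations):
        # Roulette-wheel selection of one destroy and one repair operator.
        destroy = random.choices(destroy_ops, [weights[o] for o in destroy_ops])[0]
        repair = random.choices(repair_ops, [weights[o] for o in repair_ops])[0]
        candidate = repair(destroy(current))

        # Simulated-annealing acceptance (the hybrid ingredient assumed here).
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            current = candidate
            score = 5 if delta < 0 else 2
        else:
            score = 0
        if cost(candidate) < cost(best):
            best, score = candidate, 10

        # Adaptive weight update rewards operators that produced good moves.
        for op in (destroy, repair):
            weights[op] = (1 - reaction) * weights[op] + reaction * score
        temperature *= cooling
    return best

In a location-routing setting, the destroy operators would typically remove customers or close opened facilities, and the repair operators would reinsert them at least-cost positions.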

Keywords: city logistics, last-mile delivery, location-routing, adaptive large neighborhood search

Procedia PDF Downloads 41
430 Foreseen the Future: Human Factors Integration in European Horizon Projects

Authors: José Manuel Palma, Paula Pereira, Margarida Tomás

Abstract:

The development of new technologies such as artificial intelligence, smart sensing, robotics, cobotics, or intelligent machinery must integrate human factors to address the need to optimize systems and processes, thereby contributing to the creation of a safe and accident-free work environment. Human Factors Integration (HFI) consistently poses a challenge for organizations when applied to daily operations. The AGILEHAND and FORTIS projects are grounded in the development of cutting-edge technology for Industry 4.0 and 5.0. AGILEHAND aims to create advanced technologies for autonomously sorting, handling, and packaging soft and deformable products, whereas FORTIS focuses on developing a comprehensive Human-Robot Interaction (HRI) solution. Both projects employ different approaches to explore HFI. AGILEHAND is mainly empirical, involving a comparison between current and future working conditions, coupled with an understanding of best practices and the enhancement of safety aspects, primarily through management. FORTIS applies HFI throughout the project, developing a human-centric approach that includes understanding human behavior, perceiving activities, and facilitating contextual human-robot information exchange. Its intervention is holistic, merging technology with the physical and social contexts, based on a total safety culture model. In AGILEHAND, we will identify emergent safety risks and challenges, their causes, and how to overcome them by resorting to interviews, questionnaires, literature review, and case studies. Findings and results will be presented in the “Strategies for Workers’ Skills Development, Health and Safety, Communication and Engagement” Handbook. The FORTIS project will implement continuous monitoring and guidance of activities, with a critical focus on early detection and elimination (or mitigation) of risks associated with the new technology, as well as guidance to comply with European Union safety and privacy regulations, ensuring HFI and thereby contributing to an optimized, safe work environment. To achieve this, we will embed safety by design, apply questionnaires, perform site visits, provide risk assessments, and closely track progress while suggesting and recommending best practices. The outcomes of these measures will be compiled in the project deliverable titled “Human Safety and Privacy Measures”. These projects received funding from the European Union’s Horizon 2020/Horizon Europe research and innovation program under grant agreements No 101092043 (AGILEHAND) and No 101135707 (FORTIS).

Keywords: human factors integration, automation, digitalization, human robot interaction, industry 4.0 and 5.0

Procedia PDF Downloads 30
429 Hydrographic Mapping Based on the Concept of Fluvial-Geomorphological Auto-Classification

Authors: Jesús Horacio, Alfredo Ollero, Víctor Bouzas-Blanco, Augusto Pérez-Alberti

Abstract:

Rivers have traditionally been classified, assessed, and managed in terms of hydrological, chemical, and/or biological criteria. In the past, geomorphological classifications played a secondary role, although proposals like the River Styles Framework, the Catchment Baseline Survey, or the Stroud Rural Sustainable Drainage Project did incorporate geomorphology into management decision-making. In recent years, many studies have turned their attention to the geomorphological component. The geomorphological processes and their associated forms determine the structure of a river system. Understanding these processes and forms is a critical component of the sustainable rehabilitation of aquatic ecosystems. The fluvial auto-classification approach suggests that a river is a self-built natural system, with processes and forms designed to effectively preserve their ecological function (hydrologic, sedimentological, and biological regime). Fluvial systems are formed by a wide range of elements with multiple non-linear interactions on different spatial and temporal scales. Besides, the fluvial auto-classification concept is built using data from the river itself, so that each classification developed is particular to the river studied. The variables used in the classification are specific stream power and mean grain size. A discriminant analysis showed that these variables best characterize the processes and forms. The statistical technique applied yields an individual discriminant equation for each geomorphological type. The geomorphological classification was developed using sites with high naturalness. Each site is a control point of high ecological and geomorphological quality. Changes in the conditions of the control points will be quickly recognizable, making it easy to apply the right management measures to recover the geomorphological type. The study focused on Galicia (NW Spain), and the mapping was produced by analyzing 122 control points (sites) distributed over eight river basins. In sum, this study provides a method for fluvial geomorphological classification that works as an open and flexible tool built on the fluvial auto-classification concept. The hydrographic mapping is the visual expression of the results, such that each river has a particular map according to its geomorphological characteristics. Each geomorphological type is represented by a particular type of hydraulic geometry (channel width, width-depth ratio, hydraulic radius, etc.). An alteration of this geometry is indicative of a geomorphological disturbance (whether natural or anthropogenic). Hydrographic mapping is also dynamic, because its meaning changes if there is a modification in the specific stream power and/or the mean grain size, that is, in the value of their equations. The researcher has to check some of the control points annually. This procedure makes it possible to monitor the geomorphological quality of the rivers and to detect any alterations. The maps are useful to researchers and managers, especially for conservation work and river restoration.
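
As an illustration of how a discriminant classification on these two variables might look in practice, the short Python sketch below fits a linear discriminant model to synthetic data; the sample values are assumptions, whereas the study derives its discriminant equations from the 122 field control points.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Hypothetical samples for two geomorphological types (columns:
# specific stream power [W/m^2], mean grain size [mm]).
type_a = rng.normal([35.0, 12.0], [8.0, 3.0], size=(30, 2))
type_b = rng.normal([120.0, 60.0], [20.0, 10.0], size=(30, 2))
X = np.vstack([type_a, type_b])
y = np.array([0] * 30 + [1] * 30)

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.coef_, lda.intercept_)      # coefficients of the discriminant equation
print(lda.predict([[50.0, 20.0]]))    # classify a new control point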

Keywords: fluvial auto-classification concept, mapping, geomorphology, river

Procedia PDF Downloads 349
428 COVID-19 Laws and Policy: The Use of Policy Surveillance For Better Legal Preparedness

Authors: Francesca Nardi, Kashish Aneja, Katherine Ginsbach

Abstract:

The COVID-19 pandemic has demonstrated both the need for evidence-based and rights-based public health policy and how challenging it can be to make effective decisions with limited information, evidence, and data. The O’Neill Institute, in conjunction with several partners, has been working since the beginning of the pandemic to collect, analyze, and distribute critical data on public health policies enacted in response to COVID-19 around the world in the COVID-19 Law Lab. Well-designed laws and policies can help build strong health systems, implement necessary measures to combat viral transmission, enforce actions that promote public health and safety for everyone, and, at the individual level, have a direct impact on health outcomes. Poorly designed laws and policies, on the other hand, can fail to achieve the intended results and/or obstruct the realization of fundamental human rights, further disease spread, or cause unintended collateral harms. When done properly, laws can provide a foundation that brings clarity to complexity, embraces nuance, and identifies gaps of uncertainty. However, laws can also shape the societal factors that make disease possible. Law is inseparable from the rest of society, and COVID-19 has exposed just how much laws and policies intersect with all facets of society. In the COVID-19 context, evidence-based and well-informed law and policy decisions, made at the right time and in the right place, can and have meant the difference between life and death for many. Having a solid evidentiary base of legal information can promote the understanding of what works well and where, and it can drive resources and action to where they are needed most. We know that legal mechanisms can enable nations to reduce inequities and prepare for emerging threats, like novel pathogens that result in deadly disease outbreaks or antibiotic resistance. The collection and analysis of data on these legal mechanisms is a critical step towards ensuring that legal interventions and legal landscapes are effectively incorporated into more traditional kinds of health science data analyses. The COVID-19 Law Lab sees a unique opportunity to collect and analyze this kind of non-traditional data to inform policy, using laws and policies from across the globe and across diseases. This global view is critical to assessing the efficacy of policies in a wide range of cultural, economic, and demographic circumstances. The COVID-19 Law Lab is not just a collection of legal texts relating to COVID-19; it is a dataset of concise and actionable legal information that can be used by health researchers, social scientists, academics, human rights advocates, law and policymakers, government decision-makers, and others for cross-disciplinary quantitative and qualitative analysis to identify best practices from this outbreak, and previous ones, in order to be better prepared for potential future public health events.

Keywords: public health law, surveillance, policy, legal, data

Procedia PDF Downloads 123
427 Gradient Length Anomaly Analysis for Landslide Vulnerability Analysis of Upper Alaknanda River Basin, Uttarakhand Himalayas, India

Authors: Hasmithaa Neha, Atul Kumar Patidar, Girish Ch Kothyari

Abstract:

The northward convergence of the Indian plate has a dominating influence over the structural and geomorphic development of the Himalayan region. The highly deformed and complex stratigraphy in the area arises from a confluence of exogenic and endogenetic geological processes. This region frequently experiences natural hazards such as debris flows, flash floods, avalanches, landslides, and earthquakes due to its harsh and steep topography and fragile rock formations. Therefore, remote sensing-based examination and real-time monitoring of tectonically sensitive regions may provide crucial early warnings and invaluable data for effective hazard mitigation strategies. In order to identify unusual changes in river gradients, the current study demonstrates a spatial quantitative geomorphic analysis of the upper Alaknanda River basin, Uttarakhand Himalaya, India, using gradient length anomaly analysis (GLAA). This basin is highly vulnerable to ground creep and landslides due to the presence of active faults/thrusts, toe-cutting of slopes for road widening, the development of heavy engineering projects on highly sheared bedrock, and periodic earthquakes. The intersecting joint sets developed in the bedrock have formed wedges that have facilitated the recurrence of several landslides. The main objective of the current research is to identify abnormal gradient lengths, indicating potential landslide-prone zones. High-resolution digital elevation data and geospatial techniques are used to perform this analysis. The results of the GLAA are corroborated with historical landslide events and ultimately used to generate landslide susceptibility maps of the study area. The preliminary results indicate that approximately 3.97% of the basin is stable, while about 8.54% is classified as moderately stable and suitable for human habitation. However, roughly 19.89% falls within the zone of moderate vulnerability, 38.06% is classified as vulnerable, and 29% falls within the highly vulnerable zones, posing risks of geohazards, including landslides, glacial avalanches, and earthquakes. This research provides valuable insights into the spatial distribution of landslide-prone areas. It offers a basis for implementing proactive measures for landslide risk reduction, including land-use planning, early warning systems, and infrastructure development techniques.
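
As a highly simplified stand-in for the idea of flagging anomalous gradients along a longitudinal river profile (not the GLAA formulation used in the study), the following Python sketch computes segment-wise gradients from a synthetic elevation profile and marks segments whose gradient departs strongly from the basin trend; the profile, segment length, and threshold are assumptions.

import numpy as np

distance = np.linspace(0, 50_000, 501)            # downstream distance [m]
elevation = 3500 * np.exp(-distance / 30_000)     # synthetic DEM-derived profile [m]

segment = 10                                      # points per segment
grads = []
for i in range(0, len(distance) - segment, segment):
    dz = elevation[i] - elevation[i + segment]
    dx = distance[i + segment] - distance[i]
    grads.append(dz / dx)                         # mean gradient of the segment
grads = np.array(grads)

z = (grads - grads.mean()) / grads.std()
anomalous = np.where(np.abs(z) > 2.0)[0]          # candidate anomalous segments
print("anomalous segments:", anomalous)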

Keywords: landslide vulnerability, geohazard, GLA, upper Alaknanda Basin, Uttarakhand Himalaya

Procedia PDF Downloads 43
426 Upsouth: Digitally Empowering Rangatahi (Youth) and Whaanau (Families) to Build Skills in Critical and Creative Thinking to Achieve More Active Citizenship in Aotearoa New Zealand

Authors: Ayla Hoeta

Abstract:

In a post-colonial Aotearoa New Zealand, solutions by rangatahi (youth) for rangatahi are essential, as are civic participation and building economic agency in an increasingly tough economic climate. Upsouth was an online community crowdsourcing platform developed by The Southern Initiative, in collaboration with Itsnoon, that provided rangatahi and whānau (family) a safe space to share lived experience, thoughts, and ideas about local kaupapa (issues/topics) of importance to them. The target participants were Māori indigenous peoples and Pacifica groups aged 14-21 years. In the Aotearoa New Zealand context, this participant group is not likely to engage in traditional consultation processes despite being an essential constituent in helping shape better local communities, whānau, and futures. The Upsouth platform was active for two years, from 2018 to 2019, during which it completed 42 callups with 4300+ participants. The web platform collated the ideas, voices, feedback, and content of users around a callup commissioned by a sponsor, such as Auckland Council, Z Energy, or Auckland Transport. A callup may be about a pressing challenge in a community, such as climate change, a new housing development, or homelessness. Each callup was funded by the sponsor, with Upsouth's main point of difference being that participants were given koha (a money donation) through digital wallets for their ideas. Depending on the quality of what participants uploaded, the koha varied between small micropayments and larger payments. This encouraged participants to develop creative and critical thinking, upskilling for future-focused jobs and building enterprise and democratic skills while earning pocket money at the same time. Upsouth enabled youth-led action and voice and empowered rangatahi to be part of a reciprocal and creative economy. Rangatahi were encouraged to express themselves culturally, creatively, freely, and in a way they were free to choose, for example, spoken word, song, dance, video, drawings, and/or poems. This challenges and changes what is considered acceptable as community engagement feedback by local government. Many traditional engagement platforms are not as consultative, do not accept diverse types of feedback, and do not incentivise this valuable expression of feedback. Upsouth was also empowering for rangatahi, since it allowed them the opportunity to express their opinions directly to the government. Upsouth gained national and international recognition for the way it engaged with youth: winning the Supreme Award and the Accessibility and Transparency Award at Auckland Council’s 2018 Engagement Awards, and becoming a finalist in the 2018 Digital Equity and Accessibility category of International Data Corporation’s Smart City Asia and Pacific Awards. This paper will fully contextualize the challenges of rangatahi and whānau civic engagement in Aotearoa New Zealand and then present a reflective case study of the Upsouth project, with examples from some of the callups. This is intended to form part of the Divided Cities 22 conference New Ground sub-theme as a critical reflection on a design intervention, which was conceived and implemented by the lead author to overcome the post-colonial divisions of Māori, Pacifica, and minority ethnic rangatahi in Aotearoa New Zealand.

Keywords: rangatahi, youth empowerment, civic engagement, enabling, relating, digital platform, participation

Procedia PDF Downloads 42
425 The Effect of the Performance Evaluation System on the Productivity of Administration and a Case Study

Authors: Ertuğrul Ferhat Yilmaz, Ali Riza Perçin

Abstract:

In business enterprises that apply modern management principles, the most important issues are increasing the performance of workers and maximizing income. Through the twentieth century, the rapid development of the data processing and communication sectors, together with free trade policies and the rise of multinational enterprises, erased economic borders and turned local competition into global competition. Under these competitive conditions, business enterprises have to operate actively and productively in order to survive. The employees of a business enterprise are its most important factor of production. Therefore, business enterprises, recognizing the importance of the human factor for increasing profit, have used the performance evaluation system to increase the success and development of their employees. Performance evaluation aims to increase workforce productivity by employing people effectively. Furthermore, this system supports the wage policies implemented in the enterprise, the determination of strategic plans over the short and long term, promotion decisions, the determination of employees' educational needs, and decisions such as dismissal and job rotation. It requires a great deal of effort to keep pace with change in the working realm and to keep ourselves up to date. Getting quality out of people and having an effect in the workplace depend largely on the knowledge and competence of managers and prospective managers. Therefore, managers need to use performance evaluation systems in order to base their managerial decisions on sound data. This study aims at finding out whether organizations effectively use performance evaluation systems, how much importance is placed on this issue, and how much the results of the evaluations affect employees. Whether organizations gain a competitive advantage and can continue their activities depends to a large extent on how effectively and efficiently they use their employees. Therefore, it is of vital importance to evaluate employees' performance and to improve it according to the results of that evaluation. The performance evaluation system, which evaluates employees according to criteria related to the organization, has become one of the most important topics for management. By means of the important ends mentioned above, the performance evaluation system appears to be a tool that can be used to improve the efficiency and effectiveness of an organization. Because of its contribution to organizational success, considering performance evaluation on the axis of efficiency shows the importance of this study from a different angle. In this study, we explain the performance evaluation system, efficiency, and the relation between these two concepts. We also analyze the results of questionnaires conducted on textile workers in the city of Edirne. We obtained positive answers to the questions about the effects of performance evaluation on efficiency. After factor analysis, efficiency and motivation, which are determined as factors of the performance evaluation system, have the largest variance (19.703%) in our sample. Thus, this study shows that objective performance evaluation increases the efficiency and motivation of employees.

Keywords: performance, performance evaluation system, productivity, Edirne region

Procedia PDF Downloads 282
424 Numerical Investigation of the Boundary Conditions at Liquid-Liquid Interfaces in the Presence of Surfactants

Authors: Bamikole J. Adeyemi, Prashant Jadhawar, Lateef Akanji

Abstract:

Liquid-liquid interfacial flow is an important process that has applications across many spheres. One such application is residual oil mobilization, where crude oil and low salinity water are emulsified due to lowered interfacial tension under conditions of low shear rates. The amphiphilic components (asphaltenes and resins) in crude oil are considered to assemble at the interface between the two immiscible liquids. To account for emulsification, drag, and snap-off suppression as the main effects of low salinity water, mobilization of residual oil is visualized as thickening and slip of the wetting phase at the brine/crude oil interface, which results in the squeezing and drag of the non-wetting phase towards the pressure sinks. Meanwhile, defining the boundary conditions for such a system can be very challenging, since the interfacial dynamics depend not only on interfacial tension but also on the flow rate. Hence, understanding the flow boundary condition at the brine/crude oil interface is an important step towards defining the influence of low salinity water composition on residual oil mobilization. This work presents a numerical evaluation of three slip boundary conditions that may apply at liquid-liquid interfaces. A mathematical model was developed to describe the evolution of a viscoelastic interfacial thin liquid film. The base model is developed by the asymptotic expansion of the full Navier-Stokes equations for fluid motion due to gradients of surface tension. This model was upscaled to describe the dynamics of the film surface deformation. Subsequently, the Jeffreys model was integrated into the formulation to account for viscoelastic stress within a long-wave approximation of the Navier-Stokes equations. To study the fluid response to a prescribed disturbance, a linear stability analysis (LSA) was performed. The dispersion relation and the corresponding characteristic equation for the growth rate were obtained. Three slip boundary conditions (slip, 1; locking, -1; and no-slip, 0) were examined using the resulting characteristic equation. Also, the dynamics of the evolved interfacial thin liquid film were numerically evaluated by considering the influence of the boundary conditions. The linear stability analysis shows that the boundary conditions of such systems are greatly affected by the presence of amphiphilic molecules, as observed when three different values of interfacial tension were tested. The results for the slip and locking conditions are consistent with the fundamental solution representation of the diffusion equation, in which the film decays. The interfacial films under both boundary conditions respond to exposure time in a similar manner, with an increasing growth rate that results in the formation of more droplets with time. In contrast, the no-slip boundary condition yielded unbounded growth and is not affected by interfacial tension.

Keywords: boundary conditions, liquid-liquid interfaces, low salinity water, residual oil mobilization

Procedia PDF Downloads 115
423 Evaluation of Functional Properties of Protein Hydrolysate from the Fresh Water Mussel Lamellidens marginalis for Nutraceutical Therapy

Authors: Jana Chakrabarti, Madhushrita Das, Ankhi Haldar, Roshni Chatterjee, Tanmoy Dey, Pubali Dhar

Abstract:

Protein Energy Malnutrition as a consequence of low protein intake is highly prevalent among children in developing countries. Thus, the prevention of under-nutrition has emerged as a critical challenge for India's development planners in recent times. The increase in population over the last decade has put greater pressure on the existing animal protein sources. But these resources are currently declining due to persistent drought, diseases, natural disasters, the high cost of feed, and the low productivity of local breeds, and this decline in productivity is most evident in some developing countries. So the need of the hour is to search for the efficient utilization of unconventional low-cost animal protein resources. Molluscs, as a group, are regarded as an under-exploited source of health-benefiting molecules. Bivalvia is the second largest class of the phylum Mollusca. Annual harvests of bivalves for human consumption represent about 5% by weight of the total world harvest of aquatic resources. The freshwater mussel Lamellidens marginalis is widely distributed in ponds and large bodies of perennial water in the Indian sub-continent and is well accepted as food all over India. Moreover, ethno-medicinal uses of the flesh of Lamellidens among rural people to treat hypertension have been documented. The present investigation thus attempts to evaluate the potential of Lamellidens marginalis as a functional food. Mussels were collected from freshwater ponds and brought to the laboratory two days before experimentation for acclimatization to laboratory conditions. Shells were removed and the flesh was preserved at -20°C until analysis. Tissue homogenate was prepared for proximate studies. Fatty acid and amino acid compositions were analyzed. Vitamin, mineral, and heavy metal contents were also studied. Mussel protein hydrolysate was prepared using Alcalase 2.4 L, and the degree of hydrolysis was evaluated to analyze its functional properties. Ferric Reducing Antioxidant Power (FRAP) and DPPH antioxidant assays were performed. Anti-hypertensive properties were evaluated using an Angiotensin Converting Enzyme (ACE) inhibition assay. Proximate analysis indicates that mussel meat contains moderate amounts of protein (8.30±0.67%), carbohydrate (8.01±0.38%), and reducing sugar (4.75±0.07%), but a small amount of fat (1.02±0.20%). The moisture content is quite high, but the ash content is very low. The phospholipid content is significantly high (19.43%). The lipid fraction contains substantial amounts of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), which have proven prophylactic value. Trace elements are present in substantial amounts. A comparative study of proximate nutrients between Labeo rohita, Lamellidens, and cow's milk indicates that mussel meat can be used as a complementary food source. Functionality analyses of the protein hydrolysate show increases in fat absorption, emulsification, foaming capacity, and protein solubility. Progressive anti-oxidant and anti-hypertensive properties have also been documented. Lamellidens marginalis can thus be regarded as a functional food source, as it may combine effectively with other food components to provide essential elements to the body. Moreover, mussel protein hydrolysate offers opportunities for use in various food formulations and pharmaceuticals. The observations presented herein should be viewed as a prelude to what the future holds.

Keywords: functional food, functional properties, Lamellidens marginalis, protein hydrolysate

Procedia PDF Downloads 395
422 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System

Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee

Abstract:

This work demonstrates a web crawler-based, generalized, end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's questions. Analysis of the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web. The value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage ranking step using a model trained on the MS MARCO dataset of 500K queries, which extracts the most relevant text passages and thereby shortens the lengthy documents. Further, a QA system is used to extract the answers from the shortened documents based on the query and return the top 3 answers. For the evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date. Hence, correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in the year 2016. Use of any such dataset proves to be inefficient with respect to questions that have time-varying answers. For illustration, consider the query: where will the next Olympics be? The gold answer for this query, as given in the GNQ dataset, is “Tokyo”. Since the dataset was collected in 2016, and the next Olympics after 2016 were the 2020 Games held in Tokyo, this answer was correct at the time. But if the same question is asked in 2022, then the answer is “Paris, 2024”. Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset has been used, and the system achieved an accuracy of 78% on a test dataset comprising 100 QA pairs. This test data was automatically extracted, using an analysis-based approach, from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the potential to develop into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
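
A time-aware evaluation check in the spirit of the proposed metric can be sketched as follows; the validity-window representation, field names, and example data are assumptions for illustration, not the authors' exact formulation.

from datetime import date

def time_aware_match(predictions, gold_entries, eval_date):
    # predictions: top-n answer strings from the QA system.
    # gold_entries: list of dicts {"answer": str, "valid_from": date, "valid_to": date}.
    valid_now = {
        g["answer"].strip().lower()
        for g in gold_entries
        if g["valid_from"] <= eval_date <= g["valid_to"]
    }
    return any(p.strip().lower() in valid_now for p in predictions)

# Hypothetical time-dependent gold answers for "Where will the next Olympics be?"
gold = [
    {"answer": "Tokyo", "valid_from": date(2016, 8, 22), "valid_to": date(2021, 8, 8)},
    {"answer": "Paris", "valid_from": date(2021, 8, 9), "valid_to": date(2024, 8, 11)},
]
print(time_aware_match(["Paris", "Los Angeles"], gold, date(2022, 6, 1)))  # True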

Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation

Procedia PDF Downloads 76
421 An Emergentist Defense of Incompatibility between Morally Significant Freedom and Causal Determinism

Authors: Lubos Rojka

Abstract:

The common perception of morally responsible behavior is that it presupposes freedom of choice, and that free decisions and actions are not determined by natural events, but by a person. In other words, the moral agent has the ability and the possibility of doing otherwise when making morally responsible decisions, and natural causal determinism cannot fully account for morally significant freedom. The incompatibility between a person's morally significant freedom and causal determinism appears to be a natural position. Nevertheless, some of the most influential philosophical theories on moral responsibility are compatibilist or semi-compatibilist, and they exclude the requirement of alternative possibilities, which contradicts the claims of classical incompatibilism. Compatibilists often employ Frankfurt-style thought experiments to prove their theory. The goal of this paper is to examine the role of imaginary Frankfurt-style examples in compatibilist accounts. More specifically, the compatibilist accounts defended by John Martin Fischer and Michael McKenna will be inserted into the broader understanding of a person elaborated by Harry Frankfurt, Robert Kane, and Walter Glannon. Deeper analysis reveals that the exclusion of alternative possibilities based on Frankfurt-style examples is problematic and misleading. A more comprehensive account of moral responsibility and morally significant (source) freedom requires higher-order complex theories of human will and consciousness, in which rational and self-creative abilities and a real possibility to choose otherwise, at least on some occasions during a lifetime, are necessary. Theoretical moral reasons and their logical relations seem to require a sort of higher-order agent-causal incompatibilism. The ability of theoretical or abstract moral reasoning requires complex (strongly emergent) mental and conscious properties, among which are an effective free will together with first- and second-order desires. Such a hierarchical theoretical model unifies reasons-responsiveness, mesh theory, and emergentism. It is incompatible with physical causal determinism, because such determinism only allows non-systematic processes that may be hard to predict, but not complex (strongly) emergent systems. An agent's effective will and conscious reflectivity are the starting point of a morally responsible action, which explains why a decision is 'up to the subject'. A free decision does not always have a complete causal history. This kind of emergentist source hyper-incompatibilism seems to be the best direction in the search for an adequate explanation of moral responsibility in the traditional (merit-based) sense. Physical causal determinism as a universal theory would exclude morally significant freedom and responsibility in the traditional sense because it would exclude the emergence of, and supervenience by, the essential complex properties of human consciousness.

Keywords: consciousness, free will, determinism, emergence, moral responsibility

Procedia PDF Downloads 141
420 Bioactive Substances-Loaded Water-in-Oil/Oil-in-Water Emulsions for Dietary Supplementation in the Elderly

Authors: Agnieszka Markowska-Radomska, Ewa Dluska

Abstract:

Maintaining a diet dense in bioactive substances is important for the elderly, especially to prevent diseases and to support healthy ageing. Adequate intake of bioactive substances can reduce the risk of developing chronic diseases (e.g., cardiovascular diseases, osteoporosis, neurodegenerative syndromes, diseases of the oral cavity, gastrointestinal (GI) disorders, diabetes, and cancer). This can be achieved by introducing comprehensive supplementation of the components necessary for the proper functioning of the ageing body. The paper proposes multiple emulsions of the W1/O/W2 (water-in-oil-in-water) type as carriers for the effective co-encapsulation and co-delivery of bioactive substances in supplementation of the elderly. Multiple emulsions are complex structured systems ("drops in drops"). The functional structure of the W1/O/W2 emulsion enables (i) incorporation of one or more bioactive components (lipophilic and hydrophilic); (ii) enhancement of the stability and bioavailability of encapsulated substances; (iii) prevention of interactions between substances, as well as with the external environment, and delivery to a specific location; and (iv) release in a controlled manner. The multiple emulsions were prepared by a one-step method in the Couette-Taylor flow (CTF) contactor in a continuous manner. In general, a two-step emulsification process is used to obtain multiple emulsions. The paper proposes functionalization of the emulsion by introducing a pH-responsive biopolymer, carboxymethylcellulose sodium salt (CMC-Na), into the external phase, which made it possible to achieve a release of components controlled by the pH of the gastrointestinal environment. The membrane phase of the emulsions was soybean oil. The W1/O/W2 emulsions were evaluated for their characteristics (drop size/drop size distribution, volume packing fraction), encapsulation efficiency, and stability during storage (up to 30 days) at 4ºC and 25ºC. Also, the in vitro multi-substance co-release process was investigated in a simulated gastrointestinal environment (different pH values and compositions of the release medium). Three groups of stable multiple emulsions were obtained: emulsions I with co-encapsulated vitamins B12, B6, and resveratrol; emulsions II with vitamin A and β-carotene; and emulsions III with vitamins C, E, and D3. The substances were encapsulated in the appropriate emulsion phases depending on their solubility. For all emulsions, high encapsulation efficiency (over 95%) and a high volume packing fraction of internal droplets (0.54-0.76) were achieved. In addition, due to the presence of a polymer (CMC-Na) with adhesive properties, high encapsulation stability during emulsion storage was achieved. The co-release study of encapsulated bioactive substances confirmed the possibility of modifying the release profiles. It was found that the release process can be controlled through the composition, structure, and physicochemical parameters of the emulsions and the pH of the release medium. The results showed that the obtained multiple emulsions may be used as potential liquid complex carriers for the controlled/modified/site-specific co-delivery of bioactive substances in dietary supplementation of the elderly.

Keywords: bioactive substance co-release, co-encapsulation, elderly supplementation, multiple emulsion

Procedia PDF Downloads 176
419 Rotterdam in Transition: A Design Case for a Low-Carbon Transport Node in Lombardijen

Authors: Halina Veloso e Zarate, Manuela Triggianese

Abstract:

The urban challenges posed by rapid population growth, climate adaptation, and sustainable living have compelled Dutch cities to reimagine their built environment and transportation systems. As a pivotal contributor to CO₂ emissions, the transportation sector in the Netherlands demands innovative solutions for transitioning to low-carbon mobility. This study investigates the potential of transit oriented development (TOD) as a strategy for achieving carbon reduction and sustainable urban transformation. Focusing on the Lombardijen station area in Rotterdam, which is targeted for significant densification, this paper presents a design-oriented exploration of a low-carbon transport node. By employing a research-by-design methodology, this study delves into multifaceted factors and scales, aiming to propose future scenarios for Lombardijen. Drawing from a synthesis of existing literature, applied research, and practical insights, a robust design framework emerges. To inform this framework, governmental data concerning the built environment and material embodied carbon are harnessed. However, the restricted access to crucial datasets, such as property ownership information from the cadastre and embodied carbon data from De Nationale Milieudatabase, underscores the need for improved data accessibility, especially during the concept design phase. The findings of this research contribute fundamental insights not only to the Lombardijen case but also to TOD studies across Rotterdam's 13 nodes and similar global contexts. Spatial data related to property ownership facilitated the identification of potential densification sites, underscoring its importance for informed urban design decisions. Additionally, the paper highlights the disparity between the essential role of embodied carbon data in environmental assessments for building permits and its limited accessibility due to proprietary barriers. Although this study lays the groundwork for sustainable urbanization through TOD-based design, it acknowledges an area of future research worthy of exploration: the socio-economic dimension. Given the complex socio-economic challenges inherent in the Lombardijen area, extending beyond spatial constraints, a comprehensive approach demands integration of mobility infrastructure expansion, land-use diversification, programmatic enhancements, and climate adaptation. While the paper adopts a TOD lens, it refrains from an in-depth examination of issues concerning equity and inclusivity, opening doors for subsequent research to address these aspects crucial for holistic urban development.

Keywords: Rotterdam zuid, transport oriented development, carbon emissions, low-carbon design, cross-scale design, data-supported design

Procedia PDF Downloads 50
418 Controlled Synthesis of Pt₃Sn-SnOx/C Electrocatalysts for Polymer Electrolyte Membrane Fuel Cells

Authors: Dorottya Guban, Irina Borbath, Istvan Bakos, Peter Nemeth, Andras Tompos

Abstract:

One of the greatest challenges in the implementation of polymer electrolyte membrane fuel cells (PEMFCs) is to find active and durable electrocatalysts. The cell performance is always limited by the oxygen reduction reaction (ORR) on the cathode, since it is at least 6 orders of magnitude slower than the hydrogen oxidation on the anode. Therefore, a high loading of Pt is required. Catalyst corrosion is also more significant on the cathode, especially in the case of mobile applications, where rapid load changes have to be tolerated. Pt-Sn bulk alloys and SnO2-decorated Pt3Sn nanostructures are among the most studied bimetallic systems for fuel cell applications. Exclusive formation of supported Sn-Pt alloy phases with different Pt/Sn ratios can be achieved by using controlled surface reactions (CSRs) between hydrogen adsorbed on Pt sites and tetraethyl tin. In this contribution, our results for commercial and home-made 20 wt.% Pt/C catalysts modified by tin anchoring via CSRs are presented. The parent Pt/C catalysts were synthesized by a modified NaBH4-assisted ethylene-glycol reduction method using ethanol as a solvent, which resulted either in well-dispersed and highly stable Pt nanoparticles or in evenly distributed raspberry-like agglomerates, depending on the chosen synthesis parameters. The 20 wt.% Pt/C catalysts prepared in this way showed improved electrocatalytic performance in the ORR and better stability in comparison to the commercial 20 wt.% Pt/C catalysts. Then, in order to obtain Sn-Pt/C catalysts with a Pt/Sn = 3 ratio, the Pt/C catalysts were modified with tetraethyl tin (SnEt4) using three and five consecutive tin anchoring periods. According to in situ XPS studies, in the case of catalysts with highly dispersed Pt nanoparticles, pre-treatment in hydrogen even at 170°C resulted in complete reduction of the ionic tin to Sn0. No evidence of the presence of a SnO2 phase was found by means of XRD and EDS analysis. These results demonstrate that the method of CSRs is a powerful tool to create Pt-Sn bimetallic nanoparticles exclusively, without tin deposition onto the carbon support. In contrast, the XPS results revealed that the tin-modified catalysts with raspberry-like Pt agglomerates always contained a fraction of non-reducible tin oxide. At the same time, they showed higher activity and long-term stability in the ORR than Pt/C, which was attributed to the presence of SnO2 in close proximity/contact with the Pt-Sn alloy phase. It has been demonstrated that the content and dispersion of the fcc Pt3Sn phase within the electrocatalysts can be controlled by tuning the reaction conditions of the CSRs. The bimetallic catalysts displayed an outstanding performance in the ORR. The preparation of a highly dispersed 20Pt/C catalyst makes it possible to decrease the Pt content without a relevant decline in the electrocatalytic performance of the catalysts.

Keywords: anode catalyst, cathode catalyst, controlled surface reactions, oxygen reduction reaction, PtSn/C electrocatalyst

Procedia PDF Downloads 207
417 Methods for Early Detection of Invasive Plant Species: A Case Study of Hueston Woods State Nature Preserve

Authors: Suzanne Zazycki, Bamidele Osamika, Heather Craska, Kaelyn Conaway, Reena Murphy, Stephanie Spence

Abstract:

The management of Invasive Plant Species (IPS) is an important component of the effective preservation and conservation of natural lands. IPS are non-native plants which can aggressively encroach upon native species and pose a significant threat to the ecology, public health, and social welfare of a community. The presence of IPS in U.S. nature preserves has caused economic costs which have been estimated to exceed $26 billion a year. While different methods have been identified to control IPS, few methods have been recognized for the early detection of IPS. This study examined methods for the early detection of IPS in Hueston Woods State Nature Preserve. A mixed methods research design was adopted in this four-phased study. The first phase entailed data gathering and described the characteristics and qualities of IPS and the importance of early detection (ED). The second phase explored ED methods; Geographic Information Systems (GIS) and citizen science were identified as ED methods for IPS. The third phase of the study involved the creation of hotspot maps to identify likely areas for IPS growth, while the fourth phase involved testing and evaluating mobile applications that can support the efforts of citizen scientists in IPS detection. Literature reviews were conducted on IPS and ED methods, and four regional experts from ODNR and Miami University were interviewed. A questionnaire was used to gather information about ED methods used across the state. The findings revealed that geospatial methods, including Unmanned Aerial Vehicles (UAVs), Multispectral Satellites (MSS), and the Normalized Difference Vegetation Index (NDVI), are not feasible for the early detection of IPS, as they require GIS expertise, are still an emerging technology, and are not suitable for every habitat. Therefore, other ED options were explored, including predicting areas where IPS will grow, which can be done by monitoring areas that are similar to the species' native habitat. Through the literature review and interviews, IPS are known to grow in frequently disturbed areas such as along trails, shorelines, and streambanks. The research team called these areas “hotspots” and created maps of these hotspots specifically for HW NP to support and narrow the efforts of citizen scientists and staff in the ED of IPS. The results further showed that utilizing citizen scientists in the ED of IPS is feasible, especially through single-day events or passive monitoring challenges. The study concluded that the creation of hotspot maps to direct the efforts of citizen scientists is effective for the early detection of IPS. Several recommendations were made, among which are the creation of hotspot maps to narrow ED efforts as citizen scientists continue to work in the preserves, and the use of citizen science volunteers to identify and record emerging IPS.

Keywords: early detection, hueston woods state nature preserve, invasive plant species, hotspots

Procedia PDF Downloads 71
416 A Stepped Care mHealth-Based Approach for Obesity with Type 2 Diabetes in Clinical Health Psychology

Authors: Gianluca Castelnuovo, Giada Pietrabissa, Gian Mauro Manzoni, Margherita Novelli, Emanuele Maria Giusti, Roberto Cattivelli, Enrico Molinari

Abstract:

Diabesity can be defined as a new global epidemic of obesity and overweight, with many complications and associated chronic conditions. Such conditions include not only type 2 diabetes, but also cardiovascular diseases, hypertension, dyslipidemia, hypercholesterolemia, cancer, and various psychosocial and psychopathological disorders. The direct and indirect financial burden (considering also the clinical resources involved and the loss of productivity) is a real challenge in many Western health-care systems. Recently, The Lancet defined diabetes as a 21st-century challenge. In order to promote patient compliance in diabesity treatment while reducing costs, evidence-based interventions to improve weight loss, maintain a healthy weight, and reduce related comorbidities combine different treatment approaches: dietetic, nutritional, physical, behavioral, psychological, and, in some situations, pharmacological and surgical. Moreover, new technologies can provide useful solutions in this multidisciplinary approach, above all in maintaining long-term compliance and adherence in order to ensure clinical efficacy. Psychological therapies combined with diet and exercise plans could better help patients achieve weight-loss outcomes, both inside hospitals and clinical centers and during outpatient follow-up sessions. In the management of chronic diseases, clinical psychology plays a key role because of the need to address the psychological conditions of patients, their families, and their caregivers. An mHealth approach could overcome limitations linked with the traditional, restricted, and highly expensive inpatient treatment of many chronic pathologies: one of the best current applications is the management of obesity with type 2 diabetes, where mHealth solutions can provide remote opportunities for enhancing weight reduction and reducing complications from clinical, organizational, and economic perspectives. A stepped-care mHealth-based approach is an interesting perspective in the chronic care management of obesity with type 2 diabetes. One promising future direction could be treating obesity, considered as a chronic multifactorial disease, using a stepped-care approach: (1) an mHealth-based or traditional lifestyle, psychoeducational, and nutritional approach; (2) multidisciplinary protocols driven by health professionals and tailored to each patient; (3) an inpatient approach with the inclusion of drug therapies and other multidisciplinary treatments; and (4) bariatric surgery with psychological and medical follow-up. In the chronic care management of globesity, mHealth solutions cannot substitute for traditional approaches, but they can supplement some steps in clinical psychology and medicine, both for obesity prevention and for weight-loss management.

Keywords: clinical health psychology, mhealth, obesity, type 2 diabetes, stepped care, chronic care management

Procedia PDF Downloads 318
415 Study on Electromagnetic Plasma Acceleration Using Rotating Magnetic Field Scheme

Authors: Takeru Furuawa, Kohei Takizawa, Daisuke Kuwahara, Shunjiro Shinohara

Abstract:

In the field of space propulsion, electric propulsion systems have been developed because their fuel efficiency is much higher than that of conventional chemical systems. However, practical electric propulsion systems, e.g., ion engines, have a problem of short lifetime due to damage to the plasma generation and acceleration electrodes. A helicon plasma thruster is proposed as a long-lifetime electric thruster which has no electrodes in direct contact with the plasma. In this system, both generation and acceleration of a dense plasma are executed by antennas from the outside of a discharge tube. Development of the helicon plasma thruster has been conducted under the Helicon Electrodeless Advanced Thruster (HEAT) project. Our helicon plasma thruster involves two important processes. First, we generate a dense source plasma using a helicon wave with an excitation frequency between the ion and electron cyclotron frequencies, fci and fce, respectively, applied from outside the discharge tube using a radio-frequency (RF) antenna. The helicon plasma source can provide a high density (~10¹⁹ m⁻³), a high ionization ratio (up to several tens of percent), and a high particle generation efficiency. Second, in order to achieve high thrust and specific impulse, we accelerate the dense plasma by the axial Lorentz force fz given by the product of the induced azimuthal current jθ and the static radial magnetic field Br, expressed as fz = jθ × Br. The HEAT project has proposed several kinds of electrodeless acceleration schemes, and in our particular case, a Rotating Magnetic Field (RMF) method has been extensively studied. The RMF scheme was originally developed as a concept to maintain the Field Reversed Configuration (FRC) in magnetically confined fusion research. Here, the RMF coils are expected to generate jθ due to a nonlinear effect described below. First, the rotating magnetic field Bω is generated by two pairs of RMF coils with AC currents, which have a phase difference of 90 degrees between the pairs. Due to Faraday’s law, an axial electric field is induced. Second, an axial current is generated by the effects of electron-ion and electron-neutral collisions through Ohm’s law. Third, the azimuthal electric field is generated by the nonlinear term, together with the retarding torque produced again by the collision effects. Then, the azimuthal current jθ is generated as jθ = -nₑ e r ∙ 2π fRMF. Finally, the axial Lorentz force fz for plasma acceleration is generated. Here, jθ is proportional to nₑ and to the frequency of the RMF coil current, fRMF, when Bω fully penetrates the plasma. Our previous study achieved a 19% increase in ion velocity using a 5 MHz, 50 A RMF coil power supply. In this presentation, we will show the improvement in ion velocity obtained with a lower frequency and a higher current supplied by the RMF power supply. In conclusion, high-density helicon plasma production and electromagnetic acceleration by the RMF scheme under electrodeless conditions have been successfully demonstrated.
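
For clarity, the force-generation chain quoted above can be restated compactly; this is only a restatement of the relations given in the abstract, under the stated assumption that the rotating field Bω fully penetrates the plasma.

```latex
% Azimuthal current driven by the rotating magnetic field (full penetration assumed)
% and the resulting axial Lorentz force density:
\[
  j_\theta = -\, n_e \, e \, r \cdot 2\pi f_{\mathrm{RMF}}, \qquad
  f_z = j_\theta \times B_r .
\]
```

Since jθ is proportional to both nₑ and fRMF when the field is fully penetrated, a lower RMF frequency has to be traded against other drive parameters such as a higher coil current, which is the parameter range examined in the presentation.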

Keywords: electric propulsion, electrodeless thruster, helicon plasma, rotating magnetic field

Procedia PDF Downloads 238
414 Testing of Infill Walls with Joint Reinforcement Subjected to In-Plane Lateral Load

Authors: J. Martin Leal-Graciano, Juan J. Pérez-Gavilán, A. Reyes-Salazar, J. H. Castorena, J. L. Rivera-Salas

Abstract:

The experimental results on the global behavior of twelve 1:2-scale reinforced concrete frames subjected to in-plane lateral load are presented. The main objective was to generate experimental evidence about the use of steel bars within mortar bed-joints as shear reinforcement in infill walls. Similar to the Canadian and New Zealand standards, the Mexican code includes specifications for this type of reinforcement. However, these specifications were obtained through experimental studies of load-bearing walls, mainly confined walls. Little information is found in the existing literature about the effects of joint reinforcement on the seismic behavior of infill masonry walls. Consequently, the Mexican code establishes the same equations to estimate the contribution of joint reinforcement for both confined walls and infill walls. A confined masonry construction and a reinforced concrete frame infilled with masonry walls have similar appearances. However, substantial differences exist between these two construction systems, mainly related to the sequence of construction and to how these structures support vertical and lateral loads. To achieve the stated objective, ten reinforced concrete frames with masonry infill walls were built and tested in pairs, with both specimens in each pair having identical characteristics except that one of them included joint reinforcement. The variables between pairs were the type of units, the size of the columns of the frame, and the aspect ratio of the wall. All cases included tie-columns and tie-beams on the perimeter of the wall to anchor the joint reinforcement. Also, two bare frames with characteristics identical to those of the infilled frames were tested; the purpose was to investigate the effects of the infill wall on the behavior of the system under in-plane lateral load. In addition, the experimental results were compared with the predictions of the Mexican code. All the specimens were tested in cantilever under reversible cyclic lateral load. To simulate gravity load, a constant vertical load was applied at the top of the columns. The results indicate that the contribution of the joint reinforcement to lateral strength depends on the size of the columns of the frame. Larger columns produce a failure mode that is predominantly sliding. Sliding inhibits the formation of new inclined cracks, which are necessary to activate (deform) the joint reinforcement. Regarding the effects of joint reinforcement on the performance of confined masonry walls, many findings were confirmed for infill walls: this type of reinforcement increases the lateral strength of the wall, produces more distributed cracking, and reduces the width of the cracks. Moreover, it reduces the ductility demand of the system at maximum strength. The prediction of the lateral strength provided by the Mexican code is appropriate in some cases; however, the effect of the size of the columns on the contribution of joint reinforcement needs to be better understood.

Keywords: experimental study, infill wall, infilled frame, masonry wall

Procedia PDF Downloads 57
413 A Web and Cloud-Based Measurement System Analysis Tool for the Automotive Industry

Authors: C. A. Barros, Ana P. Barroso

Abstract:

Any industrial company needs to determine the amount of variation that exists within its measurement process and to guarantee the reliability of its data by studying the performance of its measurement system in terms of linearity, bias, repeatability, reproducibility, and stability. This issue is critical for automotive industry suppliers, who are required to be certified to the IATF 16949:2016 standard (which replaces ISO/TS 16949) of the International Automotive Task Force, defining the requirements of a quality management system for companies in the automotive industry. Measurement System Analysis (MSA) is one of the mandatory tools. Frequently, the measurement systems in companies are not connected to the equipment and do not incorporate the methods proposed by the Automotive Industry Action Group (AIAG). To address these constraints, an R&D project is in progress whose objective is to develop a web and cloud-based MSA tool. This MSA tool incorporates Industry 4.0 concepts, such as Internet of Things (IoT) protocols to assure the connection with the measuring equipment, cloud computing, artificial intelligence, statistical tools, and advanced mathematical algorithms. This paper presents the preliminary findings of the project. The web and cloud-based MSA tool is innovative because it implements all statistical tests proposed in the MSA-4 reference manual from AIAG as well as other emerging methods and techniques. As it is integrated with the measuring devices, it reduces the manual input of data and therefore the errors. The tool ensures traceability of all performed tests and can be used in quality laboratories and on production lines. Besides, it monitors MSAs over time, allowing both the analysis of deviations in the variation of the measurements performed and the management of measurement equipment and calibrations. To develop the MSA tool, a ten-step approach was implemented. First, a benchmarking analysis of current competitors and commercial solutions linked to MSA was performed with respect to the Industry 4.0 paradigm. Next, an analysis of the size of the target market for the MSA tool was carried out. Afterwards, data flow and traceability requirements were analysed in order to implement an IoT data network that interconnects with the equipment, preferably wirelessly. The MSA web solution was designed under UI/UX principles, and an API in Python was developed to run the algorithms and the statistical analysis. Continuous validation of the tool by companies is being performed to assure real-time management of the ‘big data’. The main results of this R&D project are: the web and cloud-based MSA tool; the Python API; new algorithms brought to the market; and the UI/UX style guide of the tool. The proposed MSA tool adds value to the state of the art as it ensures an effective response to the new challenges of measurement systems, which are increasingly critical in production processes. Although the automotive industry has triggered the development of this innovative MSA tool, other industries would also benefit from it. Currently, companies from the molds and plastics, chemical, and food industries are already validating it.
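
To make the kind of computation such a tool automates more concrete, the sketch below implements one core MSA study, an ANOVA-based Gage R&R for a crossed parts × operators × trials design. It is a minimal illustrative sketch with synthetic data, not the project's actual Python API.

```python
# Minimal sketch of an ANOVA-based Gage R&R (repeatability & reproducibility) study.
# Illustrative only; variance components follow the standard expected-mean-squares method.
import numpy as np

def gage_rr(data: np.ndarray) -> dict:
    """data has shape (parts, operators, trials); returns %GRR and %part variation."""
    p, o, r = data.shape
    grand = data.mean()
    part_m = data.mean(axis=(1, 2))            # per-part means
    oper_m = data.mean(axis=(0, 2))            # per-operator means
    cell_m = data.mean(axis=2)                 # part x operator cell means

    ms_part = o * r * np.sum((part_m - grand) ** 2) / (p - 1)
    ms_oper = p * r * np.sum((oper_m - grand) ** 2) / (o - 1)
    inter = cell_m - part_m[:, None] - oper_m[None, :] + grand
    ms_int = r * np.sum(inter ** 2) / ((p - 1) * (o - 1))
    ms_err = np.sum((data - cell_m[:, :, None]) ** 2) / (p * o * (r - 1))

    # Expected-mean-squares estimates (negative estimates truncated to zero)
    repeatability = ms_err
    interaction = max((ms_int - ms_err) / r, 0.0)
    operator = max((ms_oper - ms_int) / (p * r), 0.0)
    part = max((ms_part - ms_int) / (o * r), 0.0)
    grr = repeatability + operator + interaction
    total = grr + part
    return {"%GRR": 100 * np.sqrt(grr / total), "%PV": 100 * np.sqrt(part / total)}

# Example: 5 parts, 3 operators, 2 trials of synthetic measurements
rng = np.random.default_rng(0)
true_part = rng.normal(10, 1.0, size=(5, 1, 1))
measurements = true_part + rng.normal(0, 0.1, size=(5, 3, 2))
print(gage_rr(measurements))   # a capable system typically shows %GRR well below 30%
```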

Keywords: automotive industry, Industry 4.0, Internet of Things, IATF 16949:2016, measurement system analysis

Procedia PDF Downloads 184
412 Developing Dynamic Capabilities: The Case of Western Subsidiaries in Emerging Market

Authors: O. A. Adeyemi, M. O. Idris, W. A. Oke, O. T. Olorode, S. O. Alayande, A. E. Adeoye

Abstract:

The purpose of this paper is to investigate the process of capability building at the subsidiary level and the challenges to such a process. The relevance of external factors for capability development has not been explicitly addressed in empirical studies, whereas internal factors, acting as enablers, have been more extensively studied. With reference to external factors, subsidiaries are actively influenced by specific characteristics of the host country, implying a need to become fully immersed in local culture and practices. Specifically, in MNCs there has been a widespread trend in management practice to increase subsidiary autonomy, with subsidiary managers being encouraged to act entrepreneurially and to take advantage of host country specificity. As such, it could be proposed that: P1: The degree to which subsidiary management is connected to the host country will positively influence the capability development process. Dynamic capabilities reside to a large measure with the subsidiary management team, but are impacted by the organizational processes, systems, and structures that the MNC headquarters has designed to manage its business. At the subsidiary level, the weight of the subsidiary in the network, its initiative-taking, and its profile building increase the supportive attention of the headquarters and are relevant to the success of the process of capability building. Therefore, our second proposition is that: P2: Subsidiary role and headquarters support are relevant elements in capability development at the subsidiary level. Design/Methodology/Approach: This study will adopt the multiple case studies approach, because case study research is relevant when addressing issues without known empirical evidence or with little developed prior theory. The key definitions and literature sources directly connected with the operations of western subsidiaries in emerging markets, such as China, are well established. A qualitative approach, i.e., case studies of three western subsidiaries, will be adopted. The companies have similar products, they have operations in China, and all of them are mature in their internationalization process. Interviews with key informants, annual reports, press releases, media materials, presentation material to customers and stakeholders, and other company documents will be used as data sources. Findings: Western subsidiaries in emerging markets operate in a way substantially different from those in the West. What are the conditions initiating the outsourcing of operations? The paper will discuss and present two relevant propositions guiding that process. Practical Implications: MNC headquarters should be aware of the potential for capability development at the subsidiary level. This increased awareness could prompt headquarters to consider possible ways of encouraging such capability development and how to leverage these capabilities for better MNC headquarters and/or subsidiary performance. Originality/Value: The paper is expected to contribute to the theme of drivers of subsidiary performance with a focus on emerging markets. In particular, it will show how some external conditions could promote a capability-building process within subsidiaries.

Keywords: case studies, dynamic capability, emerging market, subsidiary

Procedia PDF Downloads 100
411 Statistical Optimization of Adsorption of a Harmful Dye from Aqueous Solution

Authors: M. Arun, A. Kannan

Abstract:

Textile industries cater to varied customer preferences and contribute substantially to the economy. However, these industries also produce a considerable amount of effluents. Prominent among these are the azo dyes, which impart considerable color and toxicity even at low concentrations. Azo dyes are also used as coloring agents in the food and pharmaceutical industries. Despite their applications, azo dyes are notorious pollutants and carcinogens. Popular techniques like photo-degradation, biodegradation, and the use of oxidizing agents are not applicable to all kinds of dyes, as most of them are stable against these techniques. Chemical coagulation produces a large amount of toxic sludge, which is undesirable, and it is also ineffective towards a number of dyes. Most of the azo dyes are stable to UV-visible light irradiation and may even resist aerobic degradation. Adsorption has been the most preferred technique owing to its low cost, high capacity and process efficiency, and the possibility of regenerating and recycling the adsorbent. Adsorption is also preferred because it may produce a treated effluent of high quality and is able to remove different kinds of dyes. However, the adsorption process is influenced by many variables whose inter-dependence makes it difficult to identify optimum conditions. These variables include stirring speed, temperature, initial concentration, and adsorbent dosage. Further, the internal diffusional resistance inside the adsorbent particle leads to slow uptake of the solute within the adsorbent. Hence, it is necessary to identify optimum conditions that lead to high capacity and uptake rate of these pollutants. In this work, commercially available activated carbon was chosen as the adsorbent owing to its high surface area. A typical azo dye found in textile effluent waters, viz. the monoazo Acid Orange 10 dye (CAS: 1936-15-8), was chosen as the representative pollutant. Adsorption studies were mainly focused on obtaining equilibrium and kinetic data for the batch adsorption process at different process conditions. Studies were conducted at different stirring speed, temperature, adsorbent dosage, and initial dye concentration settings. A full factorial design was the chosen statistical framework for carrying out the experiments and identifying the important factors and their interactions. The optimum conditions identified from the experimental model were validated with actual experiments at the recommended settings. The equilibrium and kinetic data obtained were fitted to different models, and the model parameters were estimated; these fits give more insight into the nature of the adsorption taking place. Critical data required to design batch adsorption systems for the removal of Acid Orange 10 dye, and the identification of factors that critically influence the separation efficiency, are the key outcomes of this research.
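
As an illustration of the model-fitting step mentioned above, the sketch below fits a Langmuir isotherm, qe = qmax·KL·Ce / (1 + KL·Ce), to batch equilibrium data by non-linear least squares. The data points and starting values are hypothetical, and the Langmuir form is only one of the candidate models such studies typically compare (e.g., against Freundlich or kinetic models).

```python
# Minimal sketch: fit a Langmuir isotherm to illustrative batch equilibrium data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_e, q_max, k_l):
    """Equilibrium uptake q_e (mg/g) as a function of equilibrium concentration C_e (mg/L)."""
    return q_max * k_l * c_e / (1.0 + k_l * c_e)

# Hypothetical equilibrium data for a dye on activated carbon (not measured values)
c_e = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])   # mg/L
q_e = np.array([22.0, 38.0, 60.0, 82.0, 98.0, 107.0])  # mg/g

(q_max, k_l), _ = curve_fit(langmuir, c_e, q_e, p0=[100.0, 0.05])
print(f"q_max = {q_max:.1f} mg/g, K_L = {k_l:.3f} L/mg")
```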

Keywords: acid orange 10, activated carbon, optimum adsorption conditions, statistical design

Procedia PDF Downloads 153
410 Application of Combined Cluster and Discriminant Analysis to Make the Operation of Monitoring Networks More Economical

Authors: Norbert Magyar, Jozsef Kovacs, Peter Tanos, Balazs Trasy, Tamas Garamhegyi, Istvan Gabor Hatvani

Abstract:

Water is one of the most important common resources, and as a result of urbanization, agriculture, and industry it is becoming more and more exposed to potential pollutants. Preventing the deterioration of water quality is a crucial task for environmental scientists. To achieve this aim, the operation of monitoring networks is necessary. In general, these networks have to meet many important requirements, such as representativeness and cost efficiency. However, existing monitoring networks often include sampling sites which are unnecessary. With the elimination of these sites, the monitoring network can be optimized and can operate more economically. The aim of this study is to illustrate the applicability of CCDA (Combined Cluster and Discriminant Analysis) to the field of water quality monitoring and to optimize the monitoring networks of a river (the Danube), a wetland-lake system (Kis-Balaton & Lake Balaton), and two surface-subsurface water systems on the watershed of Lake Neusiedl/Lake Fertő and in the Szigetköz area over a period of approximately two decades. CCDA combines two multivariate data analysis methods: hierarchical cluster analysis and linear discriminant analysis. Its goal is to determine homogeneous groups of observations, in our case sampling sites, by comparing the goodness of preconceived classifications obtained from hierarchical cluster analysis with random classifications. The main idea behind CCDA is that if the ratio of correctly classified cases for a grouping is higher than at least 95% of the ratios for the random classifications, then at the given level of significance (α = 0.05) the sampling sites do not form a homogeneous group. Because the sampling on Lake Neusiedl/Lake Fertő was conducted at the same time at all sampling sites, it was possible to visualize the differences between the sampling sites belonging to the same or different groups on scatterplots. Based on the results, the monitoring network of the Danube yields redundant information over certain sections, so that of 12 sampling sites, 3 could be eliminated without loss of information. In the case of the wetland (Kis-Balaton), one pair of sampling sites out of 12, and in the case of Lake Balaton, 5 out of 10 could be discarded. For the groundwater system of the catchment area of Lake Neusiedl/Lake Fertő, all 50 monitoring wells are necessary; there is no redundant information in the system. The number of sampling sites on Lake Neusiedl/Lake Fertő itself can be decreased to approximately half of the original number. Furthermore, neighbouring sampling sites were compared pairwise using CCDA, and the results were plotted on diagrams or isoline maps showing the locations of the greatest differences. These results can help researchers decide where to place new sampling sites. The application of CCDA proved to be a useful tool in the optimization of monitoring networks for different types of water bodies. Based on the results obtained, the monitoring networks can be operated more economically.
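
The decision rule described above can be sketched in a few lines of Python: compute the correct-classification ratio of the preconceived grouping with linear discriminant analysis and compare it against the 95th percentile of the ratios obtained for random groupings of the same sizes. This is a simplified, hedged illustration (in-sample LDA score, label permutations as the random classifications), not the authors' implementation.

```python
# Minimal sketch of a CCDA-style homogeneity check for a grouping of sampling sites.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def classification_ratio(X: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of observations correctly classified by LDA for a given grouping."""
    lda = LinearDiscriminantAnalysis()
    return lda.fit(X, labels).score(X, labels)

def ccda_homogeneous(X, labels, n_random=999, alpha=0.05, seed=0):
    """Grouping is treated as homogeneous if its ratio does not beat 95% of random ones."""
    rng = np.random.default_rng(seed)
    observed = classification_ratio(X, labels)
    random_ratios = np.array([
        classification_ratio(X, rng.permutation(labels)) for _ in range(n_random)
    ])
    threshold = np.quantile(random_ratios, 1.0 - alpha)
    return observed <= threshold, observed, threshold
```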

Keywords: combined cluster and discriminant analysis, cost efficiency, monitoring network optimization, water quality

Procedia PDF Downloads 326
409 Environmental Catalysts for Refining Technology Application: Reduction of CO Emission and Gasoline Sulphur in Fluid Catalytic Cracking Unit

Authors: Loganathan Kumaresan, Velusamy Chidambaram, Arumugam Velayutham Karthikeyani, Alex Cheru Pulikottil, Madhusudan Sau, Gurpreet Singh Kapur, Sankara Sri Venkata Ramakumar

Abstract:

Environmentally driven regulations throughout the world stipulate dramatic improvements in the quality of transportation fuels and refining operations. Exhaust gases such as CO, NOx, and SOx from stationary sources (e.g., refineries) and motor vehicles contribute to a large extent to air pollution. The refining industry is under constant environmental pressure to achieve more rigorous standards on the sulphur content of the fuel used in the transportation sector and on other off-gas emissions. The fluid catalytic cracking unit (FCCU) is a major secondary process in a refinery for gasoline and diesel production. The CO-combustion promoter additive and the gasoline sulphur reduction (GSR) additive are catalytic systems used in the FCCU, along with the main FCC catalyst, to assist the combustion of CO to CO₂ in the regenerator and to regulate sulphur in the gasoline fraction, respectively. The effectiveness of these catalysts is governed by the active metal used, its dispersion, the type of base material employed, and the retention characteristics of the additive in the FCCU, such as attrition resistance and density. The challenge is to have a high-density microsphere catalyst support for its retention and a high activity of the active metals, as these catalyst additives are used in low concentration compared to the main FCC catalyst. The first part of the present paper discusses the development of high-density microspheres of nanocrystalline alumina by a hydrothermal method for the CO-combustion promoter application. Performance evaluation of the additive was conducted under simulated regenerator conditions and shows a CO combustion efficiency above 90%. The second part discusses the efficacy of a co-precipitation method for the generation of the active crystalline spinels of Zn, Mg, and Cu with aluminium oxides as an additive. Characterization and micro-activity tests using a heavy combined hydrocarbon feedstock at FCC unit conditions were carried out to evaluate the gasoline sulphur reduction activity. These additives were characterized by X-ray diffraction, NH₃-TPD, N₂ sorption, and TPR analysis to establish structure-activity relationships. Sulphur removal mechanisms involving hydrogen transfer, aromatization, and alkylation functionalities were established to rank the GSR additives for their activity, selectivity, and gasoline sulphur removal efficiency. The sulphur shift into other liquid products such as heavy naphtha, light cycle oil, and clarified oil was also studied. PIONA analysis of the liquid product reveals a 20-40% reduction of sulphur in gasoline without compromising the research octane number (RON) of the gasoline or the olefin content.

Keywords: hydrothermal, nanocrystalline, spinel, sulphur reduction

Procedia PDF Downloads 76
408 Predictors of Motor and Cognitive Domains of Functional Performance after Rehabilitation of Individuals with Acute Stroke

Authors: A. F. Jaber, E. Dean, M. Liu, J. He, D. Sabata, J. Radel

Abstract:

Background: Stroke is a serious health care concern and a major cause of disability in the United States. This condition impacts the individual’s functional ability to perform daily activities. Predicting the functional performance of people with stroke assists health care professionals in optimizing the delivery of health services to the affected individuals. The purpose of this study was to identify significant predictors of Motor FIM and Cognitive FIM subscores among individuals with stroke after discharge from inpatient rehabilitation (typically 4-6 weeks after stroke onset). A second purpose was to explore the relations among personal characteristics, health status, and functional performance of daily activities within 2 weeks of stroke onset. Methods: This study used a retrospective chart review to conduct a secondary analysis of data obtained from the Healthcare Enterprise Repository for Ontological Narration (HERON) database. The HERON database integrates de-identified clinical data from seven different regional sources, including the hospital electronic medical record systems of the University of Kansas Health System. The initial HERON data extract encompassed 1192 records, and the final sample consisted of 207 participants who were mostly white (74%) males (55%) with a diagnosis of ischemic stroke (77%). The outcome measures collected from HERON included performance scores on the National Institutes of Health Stroke Scale (NIHSS), the Glasgow Coma Scale (GCS), and the Functional Independence Measure (FIM). The data analysis plan included descriptive statistics, Pearson correlation analysis, and stepwise regression analysis. Results: Significant predictors of discharge Motor FIM subscores included age, baseline Motor FIM subscores, discharge NIHSS scores, and comorbid electrolyte disorder (R² = 0.57, p < 0.026). Significant predictors of discharge Cognitive FIM subscores were age, baseline Cognitive FIM subscores, client cooperative behavior, comorbid obesity, and the total number of comorbidities (R² = 0.67, p < 0.020). Functional performance on admission was significantly associated with age (p < 0.01), stroke severity (p < 0.01), and length of hospital stay (p < 0.05). Conclusions: Our findings show that younger age, good motor and cognitive abilities on admission, mild stroke severity, fewer comorbidities, and a positive client attitude all predict favorable functional outcomes after inpatient stroke rehabilitation. This study provides health care professionals with evidence to evaluate predictors of favorable functional outcomes early in stroke rehabilitation, to tailor individualized interventions based on their client’s anticipated prognosis, and to educate clients about the benefits of making lifestyle changes to improve their anticipated rate of functional recovery.
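
The stepwise regression used in this kind of analysis can be illustrated with a short forward-selection sketch. The column names, the entry criterion (p < 0.05), and the use of ordinary least squares are assumptions for illustration, not the study's exact analysis settings.

```python
# Minimal sketch of forward stepwise selection of predictors of a discharge FIM subscore.
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df: pd.DataFrame, outcome: str, candidates: list, p_enter: float = 0.05):
    """Add predictors one at a time, keeping the one with the smallest p-value below p_enter."""
    selected = []
    while True:
        remaining = [c for c in candidates if c not in selected]
        pvals = {}
        for c in remaining:
            X = sm.add_constant(df[selected + [c]])
            model = sm.OLS(df[outcome], X, missing="drop").fit()
            pvals[c] = model.pvalues[c]
        if not pvals:
            break
        best = min(pvals, key=pvals.get)
        if pvals[best] < p_enter:
            selected.append(best)
        else:
            break
    final = sm.OLS(df[outcome], sm.add_constant(df[selected]), missing="drop").fit()
    return selected, final

# Hypothetical usage with assumed column names:
# predictors = ["age", "baseline_motor_fim", "discharge_nihss", "electrolyte_disorder"]
# chosen, model = forward_stepwise(data, "discharge_motor_fim", predictors)
# print(chosen, model.rsquared)
```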

Keywords: functional performance, predictors, stroke, recovery

Procedia PDF Downloads 123
407 Assessing Mycotoxin Exposure from Processed Cereal-Based Foods for Children

Authors: Soraia V. M. de Sá, Miguel A. Faria, José O. Fernandes, Sara C. Cunha

Abstract:

Cereals play a vital role in fulfilling the nutritional needs of children, supplying essential nutrients crucial for their growth and development. However, concerns arise from children's heightened vulnerability, owing to their unique physiology, specific dietary requirements, and relatively higher intake in relation to their body weight. This vulnerability exposes them to harmful food contaminants, particularly mycotoxins, which are prevalent in cereals. Because of the thermal stability of mycotoxins, conventional industrial food processing often falls short of eliminating them. Children, especially those aged 4 months to 12 years, frequently encounter mycotoxins through the consumption of specialized food products, such as instant foods, breakfast cereals, bars, cookie snacks, fruit puree, and various dairy items. Close monitoring of this demographic group's exposure to mycotoxins is essential, as toxin ingestion may weaken children’s immune systems, reduce their resistance to infectious diseases, and potentially lead to cognitive impairments. The severe toxicity of mycotoxins, some of which are classified as carcinogenic, has spurred the establishment and ongoing revision of legislative limits on mycotoxin levels in food and feed globally. While EU Commission Regulation 1881/2006 addresses well-known mycotoxins in processed cereal-based foods and infant foods, the absence of regulations specifically addressing emerging mycotoxins underscores a glaring gap in the regulatory framework, necessitating immediate attention. Emerging mycotoxins have come under mounting scrutiny in recent years due to their pervasive presence in various foodstuffs, notably cereals and cereal-based products. Alarmingly, exposure to multiple mycotoxins is hypothesized to exhibit higher toxicity than the isolated effects, raising particular concerns for products primarily aimed at children. This study scrutinizes the presence of 22 mycotoxins from a diverse range of chemical classes in 148 processed cereal-based foods, including 39 breakfast cereals, 25 infant formulas, 27 snacks, 25 cereal bars, and 32 cookies commercially available in Portugal. The analytical approach employed a modified QuEChERS procedure followed by ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) analysis. Given the paucity of information on children's risk from multiple mycotoxins in cereals and cereal-based products, this study pioneers the evaluation of this critical aspect for children in Portugal. Overall, aflatoxin B1 (AFB1) and aflatoxin G2 (AFG2) emerged as the most prevalent regulated mycotoxins, while enniatin B (ENNB) and sterigmatocystin (STG) were the most frequently detected emerging mycotoxins.
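
Occurrence data of this kind are typically turned into a risk estimate by combining them with consumption and body-weight figures. The sketch below shows the standard estimated daily intake (EDI) and hazard quotient calculation commonly used in such assessments; all numerical values and the chosen tolerable daily intake are placeholders, not figures from this study.

```python
# Minimal sketch of a dietary exposure estimate: EDI = concentration x consumption / body weight,
# compared against a tolerable daily intake (TDI). Placeholder values only.
def estimated_daily_intake(conc_ug_per_kg: float, intake_g_per_day: float, body_weight_kg: float) -> float:
    """EDI in micrograms per kg body weight per day."""
    return conc_ug_per_kg * (intake_g_per_day / 1000.0) / body_weight_kg

def hazard_quotient(edi: float, tdi_ug_per_kg_bw_day: float) -> float:
    """HQ > 1 indicates that the estimated exposure exceeds the tolerable daily intake."""
    return edi / tdi_ug_per_kg_bw_day

# Example with hypothetical values: 2 ug/kg of a mycotoxin in breakfast cereal,
# a 20 kg child eating 40 g/day, and an assumed TDI of 1 ug/kg bw/day.
edi = estimated_daily_intake(2.0, 40.0, 20.0)
print(edi, hazard_quotient(edi, 1.0))
```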

Keywords: cereal-based products, children's nutrition, food safety, UPLC-MS/MS analysis

Procedia PDF Downloads 35
406 The Effectiveness of an Occupational Therapy Metacognitive-Functional Intervention for the Improvement of Human Risk Factors of Bus Drivers

Authors: Navah Z. Ratzon, Rachel Shichrur

Abstract:

Background: Many studies have assessed and identified the risk factors of safe driving, but there is relatively little research-based evidence concerning the ability to improve the driving skills of drivers in general, and in particular of bus drivers, who are defined as a population at risk. Accidents involving bus drivers can endanger dozens of passengers and cause high direct and indirect damages. Objective: To examine the effectiveness of a metacognitive-functional intervention program for the reduction of risk factors among professional drivers relative to a control group. Methods: The study examined 77 bus drivers, aged 27-69, working for a large public company in the center of the country. Twenty-one drivers continued to the intervention stage; four of them dropped out before the end of the intervention. The intervention program we developed was based on previous driving models and the guiding occupational therapy practice framework model in Israel, while adjusting the model to professional driving in public transportation and its particular risk factors. Treatment focused on raising awareness of the safe-driving risk factors identified at prescreening (ergonomic, perceptual-cognitive, and on-road driving data), with reference to the difficulties that the driver raises, and on providing coping strategies. The intervention was customized for each driver and included three two-hour sessions. The effectiveness of the intervention was tested using objective measures: In-Vehicle Data Recorders (IVDR) for monitoring natural driving data and traffic accident data before and after the intervention, and subjective measures (an occupational performance questionnaire for bus drivers). Results: Statistical analysis found a significant difference in the rate of IVDR-recorded perilous events before and after the intervention (t(17)=2.14, p=0.046). There was a significant difference in the number of accidents per year before and after the intervention in the intervention group (t(17)=2.11, p=0.05), but no significant change in the control group. Subjective ratings of the level of performance and of satisfaction with performance improved in all areas tested following the intervention. The change in the ‘human factors/person’ field was significant (performance: t=-2.30, p=0.04; satisfaction with performance: t=-3.18, p=0.009). The change in the ‘driving occupation/tasks’ field was not significant but showed a tendency toward significance (t=-1.94, p=0.07). No significant differences were found in driving-environment-related variables. Conclusions: The metacognitive-functional intervention significantly improved the objective and subjective measures of the safety of bus drivers’ driving. These novel results highlight the potential contribution of occupational therapists, using metacognitive-functional treatment, to preventing car accidents among the healthy driver population and improving the well-being of these drivers. This study also provides familiarity with advanced IVDR technologies and enriches the knowledge of occupational therapists with regard to using a wide variety of driving assessment tools and making best-practice decisions.

Keywords: bus drivers, IVDR, human risk factors, metacognitive-functional intervention

Procedia PDF Downloads 319
405 A Practical Construction Technique to Enhance the Performance of Rock Bolts in Tunnels

Authors: Ojas Chaudhari, Ali Nejad Ghafar, Giedrius Zirgulis, Marjan Mousavi, Tommy Ellison, Sandra Pousette, Patrick Fontana

Abstract:

In Swedish tunnel construction, a critical issue that has been repeatedly acknowledged is corrosion and, consequently, failure of the rock bolts in rock support systems. Defective installation of rock bolts results in the formation of cavities in the cement mortar that is regularly used to fill the area under the dome plates. These voids allow water ingress into the rock bolt assembly, which results in corrosion of the rock bolt components and eventual failure. In addition, the current installation technique consists of several manual, labor-intensive steps that are usually performed in uncomfortable and exhausting conditions, e.g., under the tunnel roof. Such demanding tasks also lead to considerable waste of materials and to execution errors. Moreover, adequate quality control of the execution is hardly possible with the current technique. To overcome these issues, a non-shrinking/expansive cement-based mortar pre-filled in paper packaging has been developed in this study; it properly fills the area under the dome plates with few or no remaining cavities, which ultimately diminishes the potential for corrosion. This article summarizes the development process and the experimental evaluation of this technique for the installation of rock bolts. In the development process, the cementitious mortar was first developed using a specific cement and shrinkage-reducing/expansive additives. The mechanical and flow properties of the mortar were then evaluated using compressive strength, density, and slump flow measurements. In addition, isothermal calorimetry and shrinkage/expansion measurements were used to elucidate the hydration and durability attributes of the mortar. After obtaining the desired properties in both fresh and hardened conditions, the developed dry mortar was filled into specific permeable paper packaging and then submerged in a water bath for specific intervals before installation. The tests were refined progressively by optimizing different parameters such as the shape and size of the packaging, the characteristics of the paper used, the immersion time in water, and even some minor characteristics of the mortar. Finally, the developed prototype was tested in a lab-scale rock bolt assembly at various angles to analyze the efficiency of the method in a real-life scenario. The results showed that the new technique improves the performance of the rock bolts by reducing material wastage, improving environmental performance, facilitating and accelerating the labor, and enhancing the durability of the whole system. Accordingly, this approach provides an efficient alternative to the traditional way of installing tunnel bolts, with considerable advantages for the Swedish tunneling industry.

Keywords: corrosion, durability, mortar, rock bolt

Procedia PDF Downloads 89