Search results for: loss of load expectation
607 Study on the Effects of Indigenous Biological Face Treatment
Authors: Saron Adisu Gezahegn
Abstract:
Commercial cosmetics have been affecting human health due to their contents and dosage composition. Chemical-based cosmetics expose users to unnecessary health problems and financial cost. Some of the interactions of cosmetics with the environment have negative impacts on health, such as burning, cracking, and discoloration. Users look for a temporary service without evaluating the side effects of cosmetics containing chemical compositions that result in irritation, burning, allergies, cracking, and changes to the nature of the face. Every cosmetic contains heavy metals such as lead, zinc, cadmium, silicon, and other heavy materials. Users may, in the end, be exposed to untreatable diseases like cancer. The objective of the research is to study the effects of indigenous biological face treatment without any additives such as chemicals. In ancient times this practice was widespread in the world, but things changed bit by bit until chemical-based cosmetics came to be used to maintain the beauty of hair, skin, and faces. The side effects of the traditional treatment on the face were minimal, and the side effects from interaction with the environment were almost nil. This practice has since changed, and indigenous substances have been replaced with chemical substances by adding additives such as lead and cadmium for preservation, pigments, dye, and shine. Various studies have indicated that cosmetics have dangerous side effects that expose users to health problems and expensive financial loss. This study focuses on a local indigenous plant called Kulkual. Kulkual is available everywhere in the study area, and sustainable products can be harvested for use as indigenous face treatment materials. 25 men and 25 women were selected randomly as a sample population to conduct the study effectively. The plant was harvested from the garden in the productive season and exposed to sun drying for a week. The peel was then removed from the plant fruit, and the peels were soaked for three days in a bath filled with water. The flesh of the peel was then separated from the fruit and was ready for use as a face treatment. The fleshy peel was smeared on each participant for almost a week, and the treatment was continued for a further week. The results indicated that the treatment produced a positive response with minimum cost and minimal side effects from the environment. The shine, smoothness, and color of the skin were better than with chemical-based cosmetics. Finally, the study recommends that users prefer a biological method of treatment, with minimum cost and minimal side effects on health and from interaction with the environment. Keywords: cosmetic, indigenous, heavy metals, toxic
606 Ways of Managing Foods Not Served to Consumers in the Food Service Sector
Authors: Marzena Tomaszewska, Beata Bilska, Danuta Kolozyn-Krajewska
Abstract:
Food loss and food waste are a global problem of the modern economy. The research undertaken aimed to analyze how food is handled in catering establishments with respect to food waste and to identify the main ways of managing foods/dishes not served to consumers. A survey study was conducted from January to June 2019. The selection of catering establishments participating in the study was deliberate. The study included establishments located only in the Mazowieckie Voivodeship (Poland). 42 completed questionnaires were collected. In some questions, answers were based on a 5-point scale of 1 to 5 (from 'always'/'every day' to 'never'). The survey also included closed questions with a suggested set of answers. The respondents stated that in their workplaces, dishes served cold and hot ready meals are discarded every day or almost every day (23.7% and 20.5% of answers, respectively). The procedure most frequently used for dealing with dishes not served to consumers on a given day is their storage at a cool temperature until the following day. In the research, 1/5 of respondents admitted that consumers 'always' or 'usually' leave uneaten meals on their plates, and over 41% 'sometimes' do so. It was additionally found that food not used in the food service sector is most often thrown into a public rubbish container. Most often thrown into the public container (with communal trash) were expired products (80.0%), plate waste (80.0%), and inedible products such as fruit and vegetable peels and egg shells (77.5%). Used deep-frying oil was most frequently thrown into the container dedicated only to food waste (62.5%). 10% of respondents indicated that inedible products in their workplaces are allocated to animal feed. Food waste in the food service sector still remains an insufficiently studied issue, as owners of these establishments are often unwilling to disclose data pertaining to the subject. Incorrect ways of managing foods not served to consumers were observed. There is a need to develop educational activities for employees and management in the context of food waste management in the food service sector. This publication has been developed under contract No Gospostrateg1/385753/1/NCBR/2018 with the National Center for Research and Development for carrying out and funding a project implemented as part of the 'The social and economic development of Poland in the conditions of globalizing markets - GOSPOSTRATEG' program, entitled 'Developing a system for monitoring wasted food and an effective program to rationalize losses and reduce food wastage' (acronym PROM). Keywords: food waste, inedible products, plate waste, used deep-frying oil
605 Life Cycle Assessment of Today's and Future Electricity Grid Mixes of the EU27
Authors: Johannes Gantner, Michael Held, Rafael Horn, Matthias Fischer
Abstract:
At the United Nations Climate Change Conference 2015, a global agreement on the reduction of climate change was achieved, stating CO₂ reduction targets for all countries. For instance, the EU targets a reduction of 40 percent in emissions by 2030 compared to 1990. In order to achieve this ambitious goal, the environmental performance of the different European electricity grid mixes is crucial. First, electricity is directly needed for everyone's daily life (e.g., heating, plug loads, mobility), and a reduction of the environmental impacts of the electricity grid mix therefore reduces the overall environmental impacts of a country. Secondly, the manufacturing of every product depends on electricity, so a reduction of the environmental impacts of the electricity mix results in a further decrease of the environmental impacts of every product. As a result, the implementation of the two-degree goal depends strongly on the decarbonization of the European electricity mixes. Currently, the production of electricity in the EU27 is based on fossil fuels and therefore carries a high GWP impact per kWh. Because of the importance of the environmental impacts of the electricity mix, not only today but also in the future, time-dynamic Life Cycle Assessment models for all EU27 countries were set up within the European research projects CommONEnergy and Senskin. Methodologically, a combination of scenario modeling and life cycle assessment according to ISO 14040 and ISO 14044 was conducted. Based on EU27 trends regarding energy, transport, and buildings, the different national electricity mixes were investigated, taking into account future changes such as the amount of electricity generated in each country, changes in electricity carriers, the COP of the power plants, distribution losses, and imports and exports. As results, time-dynamic environmental profiles for the electricity mixes of each country and for Europe overall were set up. For each European country, the decarbonization strategies of the electricity mix are critically investigated in order to identify decisions that can lead to negative environmental effects, for instance with respect to reducing the global warming potential of the electricity mix. For example, the withdrawal of the nuclear energy program in Germany, with the missing energy compensated at the same time by non-renewable energy carriers such as lignite and natural gas, results in an increase in the global warming potential of the electricity grid mix. Only after two years is this increase counterbalanced by the higher share of renewable energy carriers such as wind power and photovoltaics. Finally, as an outlook, a first qualitative picture is provided, illustrating from an environmental perspective which country has the highest potential for low-carbon electricity production and therefore how investments in a connected European electricity grid could decrease the environmental impacts of the electricity mix in Europe. Keywords: electricity grid mixes, EU27 countries, environmental impacts, future trends, life cycle assessment, scenario analysis
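A minimal sketch of the share-weighted calculation behind a time-dynamic grid-mix GWP, in the spirit of the scenario modeling described above; it is not the CommONEnergy/Senskin model, and all emission factors and carrier shares below are illustrative placeholders.

```python
# Illustrative sketch: the GWP of a grid mix in a given year approximated as the
# share-weighted sum of life-cycle emission factors of the electricity carriers.
# All numbers are placeholders, not project data.

EMISSION_FACTORS = {          # assumed life-cycle factors, kg CO2-eq per kWh
    "lignite": 1.10,
    "natural_gas": 0.45,
    "nuclear": 0.012,
    "wind": 0.011,
    "photovoltaic": 0.045,
}

SCENARIO = {                  # hypothetical carrier shares of one national mix
    2015: {"lignite": 0.25, "natural_gas": 0.20, "nuclear": 0.30, "wind": 0.15, "photovoltaic": 0.10},
    2020: {"lignite": 0.30, "natural_gas": 0.25, "nuclear": 0.10, "wind": 0.22, "photovoltaic": 0.13},
    2030: {"lignite": 0.10, "natural_gas": 0.20, "nuclear": 0.05, "wind": 0.40, "photovoltaic": 0.25},
}

def grid_mix_gwp(shares: dict, factors: dict = EMISSION_FACTORS) -> float:
    """Return the grid-mix GWP in kg CO2-eq/kWh as a share-weighted average."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "carrier shares must sum to 1"
    return sum(share * factors[carrier] for carrier, share in shares.items())

if __name__ == "__main__":
    for year, shares in SCENARIO.items():
        print(f"{year}: {grid_mix_gwp(shares):.3f} kg CO2-eq/kWh")
```

The 2020 row mirrors the effect discussed in the abstract: a lower nuclear share compensated by lignite and natural gas temporarily raises the mix GWP before renewables bring it back down.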
604 Computational Characterization of Electronic Charge Transfer in Interfacial Phospholipid-Water Layers
Authors: Samira Baghbanbari, A. B. P. Lever, Payam S. Shabestari, Donald Weaver
Abstract:
Existing signal transmission models, although undoubtedly useful, have proven insufficient to explain the full complexity of information transfer within the central nervous system. The development of transformative models will necessitate a more comprehensive understanding of neuronal lipid membrane electrophysiology. Pursuant to this goal, the role of highly organized interfacial phospholipid-water layers emerges as a promising case study. A series of phospholipids in neural-glial gap junction interfaces as well as cholesterol molecules have been computationally modelled using high-performance density functional theory (DFT) calculations. Subsequent 'charge decomposition analysis' calculations have revealed a net transfer of charge from phospholipid orbitals through the organized interfacial water layer before ultimately finding its way to cholesterol acceptor molecules. The specific pathway of charge transfer from phospholipid via water layers towards cholesterol has been mapped in detail. Cholesterol is an essential membrane component that is overrepresented in neuronal membranes as compared to other mammalian cells; given this relative abundance, its apparent role as an electronic acceptor may prove to be a relevant factor in further signal transmission studies of the central nervous system. The timescales over which this electronic charge transfer occurs have also been evaluated by utilizing a system design that systematically increases the number of water molecules separating lipids and cholesterol. Memory loss through hydrogen-bonded networks in water can occur at femtosecond timescales, whereas existing action potential-based models are limited to micro or nanosecond scales. As such, the development of future models that attempt to explain faster timescale signal transmission in the central nervous system may benefit from our work, which provides additional information regarding fast timescale energy transfer mechanisms occurring through interfacial water. The study possesses a dataset that includes six distinct phospholipids and a collection of cholesterol. Ten optimized geometric characteristics (features) were employed to conduct binary classification through an artificial neural network (ANN), differentiating cholesterol from the various phospholipids. This stems from our understanding that all lipids within the first group function as electronic charge donors, while cholesterol serves as an electronic charge acceptor.Keywords: charge transfer, signal transmission, phospholipids, water layers, ANN
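A minimal sketch of the binary ANN classification step mentioned above (ten geometric features, cholesterol vs. phospholipid); the feature matrices here are random placeholders rather than the authors' DFT-derived descriptors, and the network size is an assumption.

```python
# Sketch of a binary ANN classifier on ten geometric features, assuming synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class = 200

# Hypothetical feature matrices: 10 geometric descriptors per molecule.
X_phospholipid = rng.normal(loc=0.0, scale=1.0, size=(n_per_class, 10))
X_cholesterol = rng.normal(loc=1.0, scale=1.0, size=(n_per_class, 10))
X = np.vstack([X_phospholipid, X_cholesterol])
y = np.array([0] * n_per_class + [1] * n_per_class)   # 0 = donor lipid, 1 = cholesterol (acceptor)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), activation="relu", max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(f"Hold-out accuracy on synthetic data: {clf.score(X_test, y_test):.2f}")
```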
603 Redesigning Clinical and Nursing Informatics Capstones
Authors: Sue S. Feldman
Abstract:
As clinical and nursing informatics mature, an area that has received a lot of attention is the value of capstone projects. Capstones are meant to address authentic and complex domain-specific problems. While capstone projects have not always been essential in graduate clinical and nursing informatics education, employers want to see evidence of the prospective employee's knowledge and skills as an indication of employability. Capstones can be organized in many ways: a single course over a single semester, multiple courses over multiple semesters, as a targeted demonstration of skills, as a synthesis of prior knowledge and skills, mentored by one single person or mentored by various people, submitted as an assignment or presented in front of a panel. Because of the potential for capstones to enhance the educational experience, and as a mechanism for application of knowledge and demonstration of skills, a rigorous capstone can accelerate a graduate's potential in the workforce. In 2016, the capstone at the University of Alabama at Birmingham (UAB) could feel the external forces of a maturing clinical and nursing informatics discipline. While the program had had a capstone course for many years, it lacked the depth of knowledge and demonstration of skills being asked for by those hiring in a maturing informatics field. Since the program is online, all capstones have always been in the online environment. While this modality did not change, other contributors to the instructional approach changed. Pre-2016, the instruction was self-guided: students checked in with a single instructor, and that instructor monitored progress across all capstones toward a PowerPoint and written paper deliverable. At the time, enrollment was low, and the maturing field had not yet exerted enough pressure. By 2017, doubling enrollment and the increased demand for a more rigorously trained workforce led to restructuring the capstone so that graduates would gain and retain the skills learned in the capstone process. There were three major changes: the capstone was broken up into a 3-course sequence (meaning it lasted about 10 months instead of 14 weeks), deliverables were divided into many chunks, and each faculty member advised a cadre of about 5 students through the capstone process. Literature suggests that chunking, i.e., breaking up complex projects (the capstone in one summer) into smaller, more manageable pieces (chunks of the capstone across 3 semesters), can increase and sustain learning while allowing for increased rigor. By doing this, the teaching responsibility was shared across faculty, with each semester's course being taught by a different faculty member. This change facilitated delving much deeper into instruction and produced a significantly more rigorous final deliverable. Having students advised across the faculty seemed like the right thing to do: it not only shared the load, but also shared the success of students. Furthermore, it meant that students could be placed with an academic advisor who had expertise in their capstone area, further increasing the rigor of the entire capstone process and project and increasing student knowledge and skills. Keywords: capstones, clinical informatics, health informatics, informatics
602 Low Frequency Ultrasonic Degassing to Reduce Void Formation in Epoxy Resin and Its Effect on the Thermo-Mechanical Properties of the Cured Polymer
Authors: A. J. Cobley, L. Krishnan
Abstract:
The demand for multi-functional lightweight materials in sectors such as automotive, aerospace, electronics is growing, and for this reason fibre-reinforced, epoxy polymer composites are being widely utilized. The fibre reinforcing material is mainly responsible for the strength and stiffness of the composites whilst the main role of the epoxy polymer matrix is to enhance the load distribution applied on the fibres as well as to protect the fibres from the effect of harmful environmental conditions. The superior properties of the fibre-reinforced composites are achieved by the best properties of both of the constituents. Although factors such as the chemical nature of the epoxy and how it is cured will have a strong influence on the properties of the epoxy matrix, the method of mixing and degassing of the resin can also have a significant impact. The production of a fibre-reinforced epoxy polymer composite will usually begin with the mixing of the epoxy pre-polymer with a hardener and accelerator. Mechanical methods of mixing are often employed for this stage but such processes naturally introduce air into the mixture, which, if it becomes entrapped, will lead to voids in the subsequent cured polymer. Therefore, degassing is normally utilised after mixing and this is often achieved by placing the epoxy resin mixture in a vacuum chamber. Although this is reasonably effective, it is another process stage and if a method of mixing could be found that, at the same time, degassed the resin mixture this would lead to shorter production times, more effective degassing and less voids in the final polymer. In this study the effect of four different methods for mixing and degassing of the pre-polymer with hardener and accelerator were investigated. The first two methods were manual stirring and magnetic stirring which were both followed by vacuum degassing. The other two techniques were ultrasonic mixing/degassing using a 40 kHz ultrasonic bath and a 20 kHz ultrasonic probe. The cured cast resin samples were examined under scanning electron microscope (SEM), optical microscope, and Image J analysis software to study morphological changes, void content and void distribution. Three point bending test and differential scanning calorimetry (DSC) were also performed to determine the thermal and mechanical properties of the cured resin. It was found that the use of the 20 kHz ultrasonic probe for mixing/degassing gave the lowest percentage voids of all the mixing methods in the study. In addition, the percentage voids found when employing a 40 kHz ultrasonic bath to mix/degas the epoxy polymer mixture was only slightly higher than when magnetic stirrer mixing followed by vacuum degassing was utilized. The effect of ultrasonic mixing/degassing on the thermal and mechanical properties of the cured resin will also be reported. The results suggest that low frequency ultrasound is an effective means of mixing/degassing a pre-polymer mixture and could enable a significant reduction in production times.Keywords: degassing, low frequency ultrasound, polymer composites, voids
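A brief sketch of the kind of void-content measurement performed with Image J on the micrographs: threshold the image and report the dark (void) area fraction. The synthetic image and threshold value below are assumptions so the example is self-contained; they are not the study's micrographs.

```python
# Illustrative void-fraction estimate from a thresholded grayscale micrograph.
import numpy as np

rng = np.random.default_rng(1)
image = rng.uniform(0.6, 1.0, size=(512, 512))          # bright resin background
for _ in range(30):                                       # stamp 30 dark circular "voids"
    r = rng.integers(3, 12)
    cy, cx = rng.integers(r, 512 - r, size=2)
    yy, xx = np.ogrid[:512, :512]
    image[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = rng.uniform(0.0, 0.2)

threshold = 0.4                                           # assumed grey-level cut-off
void_fraction = np.mean(image < threshold)
print(f"Void content: {void_fraction * 100:.2f} % of the imaged area")
```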
601 Magnitude of Infection and Associated Factors in Open Tibial Fractures Treated Operatively at Addis Ababa Burn Emergency and Trauma Center, April 2023
Authors: Tuji Mohammed Sani
Abstract:
Background: An open tibial fracture is an injury in which the fractured bone communicates directly with the outside environment. Due to the specific anatomical features of the tibia (limited soft tissue coverage), more than a quarter of its fractures are classified as open, representing the most common open long-bone injuries. Open tibial fractures frequently involve significant bone comminution, periosteal stripping, soft tissue loss, and contamination, and are prone to bacterial entry with biofilm formation, which increases the risk of deep bone infection. Objective: The main objective of the study was to determine the prevalence of infection and its associated factors in surgically treated open tibial fractures at the Addis Ababa Burn, Emergency and Trauma (AaBET) center. Method: A facility-based retrospective cross-sectional study was conducted among patients treated for open tibial fracture at the AaBET center from September 2018 to September 2021. The data were collected from patients' charts using a structured data collection form, and data were entered and analyzed using SPSS version 26. Bivariable and multiple binary logistic regression models were fitted. Multicollinearity among candidate variables was checked using the variance inflation factor and tolerance, which were less than 5 and greater than 0.2, respectively. Model adequacy was tested using the Hosmer-Lemeshow goodness-of-fit test (P=0.711). AORs at 95% CI were reported, and a P-value < 0.05 was considered statistically significant. Result: This study found that 33.9% of the study participants had an infection. Time to initial IV antibiotics (AOR=2.924, 95% CI: 1.160-7.370), time from injury to wound closure (AOR=3.524, 95% CI: 1.798-6.908), time from injury to admission (AOR=2.895, 95% CI: 1.402-5.977), and definitive fixation method (AOR=0.244, 95% CI: 0.113-0.4508) were the factors found to have a statistically significant association with the occurrence of infection. Conclusion: The rate of infection in open tibial fractures indicates that there is a need to improve the management of open tibial fractures treated at the AaBET center. Time from injury to admission, time from injury to first debridement, wound closure time, and time from injury to initial intravenous antibiotics are important factors that can be readily amended to improve the infection rate. Whether the wound was closed before seven days or not was a particularly important factor associated with the occurrence of infection. Keywords: infection, open tibia, fracture, magnitude
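A minimal sketch of the kind of multivariable binary logistic regression used to obtain adjusted odds ratios (AOR) with 95% CIs; the data below are simulated for illustration, not the AaBET chart-review data, and the variable names are assumed.

```python
# Sketch: fit a binary logistic model and report adjusted odds ratios, on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 300

# Hypothetical binary predictors (1 = delayed/unfavourable, 0 = timely/favourable).
delayed_antibiotics = rng.integers(0, 2, n)
late_wound_closure = rng.integers(0, 2, n)
late_admission = rng.integers(0, 2, n)

# Simulated outcome: infection probability rises with each delay factor.
logit = -1.5 + 1.0 * delayed_antibiotics + 1.2 * late_wound_closure + 1.0 * late_admission
infection = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([delayed_antibiotics, late_wound_closure, late_admission]))
result = sm.Logit(infection, X).fit(disp=False)

odds_ratios = np.exp(result.params)
ci = np.exp(result.conf_int())
names = ["const", "IV antibiotics delayed", "wound closure delayed", "admission delayed"]
for name, or_, (lo, hi) in zip(names, odds_ratios, ci):
    print(f"{name:25s} AOR = {or_:5.2f}  (95% CI {lo:4.2f} - {hi:4.2f})")
```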
600 Estimating Affected Croplands and Potential Crop Yield Loss of an Individual Farmer Due to Floods
Authors: Shima Nabinejad, Holger Schüttrumpf
Abstract:
Farmers living in flood-prone areas such as coasts are exposed to storm surges that are increasing due to climate change. Crop cultivation is the most important economic activity of farmers, and in times of flooding, agricultural lands are subject to inundation. Additionally, overflowing saline water causes more severe damage than riverine flooding. Agricultural crops are more vulnerable to salinity than other land uses, and the economic damages may continue for a number of years even after flooding and affect farmers' decision-making for the following year. Therefore, it is essential to assess to what extent the agricultural areas are flooded and how much of the associated flood damage each individual farmer bears. To address these questions, we integrated farmers' decision-making at farm scale with flood risk management. The integrated model includes identification of hazard scenarios, failure analysis of structural measures, derivation of hydraulic parameters for the inundated areas, and analysis of the economic damages experienced by each farmer. The present study has two aims: firstly, it investigates the flooded cropland and potential crop damages for the whole area; secondly, it compares them among farmers' fields for three flood scenarios, which differ in the breach locations of the flood protection structure. To achieve this goal, the spatial distribution of the farmers' fields and cultivated crops was fed into the flood risk model, and a 100-year storm surge hydrograph was selected as the flood event. The study area was Pellworm Island, which is located in the German Wadden Sea National Park and surrounded by the North Sea. Due to the high salt content of North Sea seawater, crops cultivated in the agricultural areas of Pellworm Island are 100% destroyed by storm surges, which was taken into account in developing the depth-damage curve for the analysis of consequences. As a result, inundated croplands and economic damages to crops were estimated for the whole island and were further compared for six selected farmers under the three flood scenarios. The results demonstrate the significance and the flexibility of the proposed model in flood risk assessment of flood-prone areas by integrating flood risk management and decision-making. Keywords: crop damages, flood risk analysis, individual farmer, inundated cropland, Pellworm Island, storm surges
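A short sketch of the per-farmer damage aggregation implied above, using the study area's simplifying rule that any saline inundation destroys the crop completely (100% loss). The field records, crop values, and depths below are made up for illustration.

```python
# Illustrative per-farmer crop-damage aggregation for one breach scenario.
FIELDS = [
    # (farmer, area_ha, crop_value_eur_per_ha, inundation_depth_m)
    ("farmer_A", 12.0, 1800.0, 0.00),
    ("farmer_A",  8.5, 2100.0, 0.35),
    ("farmer_B", 20.0, 1600.0, 0.10),
    ("farmer_B",  5.0, 2500.0, 0.00),
    ("farmer_C", 15.0, 1900.0, 0.60),
]

def crop_damage_by_farmer(fields):
    """Return {farmer: (flooded_area_ha, damage_eur)} for one breach scenario."""
    totals = {}
    for farmer, area, value, depth in fields:
        flooded = area if depth > 0.0 else 0.0   # depth-damage curve: 100% loss above 0 m
        area_sum, dmg_sum = totals.get(farmer, (0.0, 0.0))
        totals[farmer] = (area_sum + flooded, dmg_sum + flooded * value)
    return totals

for farmer, (area, damage) in crop_damage_by_farmer(FIELDS).items():
    print(f"{farmer}: {area:5.1f} ha flooded, {damage:10.2f} EUR potential crop loss")
```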
599 Simscape Library for Large-Signal Physical Network Modeling of Inertial Microelectromechanical Devices
Authors: S. Srinivasan, E. Cretu
Abstract:
The information flow (e.g. block-diagram or signal flow graph) paradigm for the design and simulation of Microelectromechanical (MEMS)-based systems allows to model MEMS devices using causal transfer functions easily, and interface them with electronic subsystems for fast system-level explorations of design alternatives and optimization. Nevertheless, the physical bi-directional coupling between different energy domains is not easily captured in causal signal flow modeling. Moreover, models of fundamental components acting as building blocks (e.g. gap-varying MEMS capacitor structures) depend not only on the component, but also on the specific excitation mode (e.g. voltage or charge-actuation). In contrast, the energy flow modeling paradigm in terms of generalized across-through variables offers an acausal perspective, separating clearly the physical model from the boundary conditions. This promotes reusability and the use of primitive physical models for assembling MEMS devices from primitive structures, based on the interconnection topology in generalized circuits. The physical modeling capabilities of Simscape have been used in the present work in order to develop a MEMS library containing parameterized fundamental building blocks (area and gap-varying MEMS capacitors, nonlinear springs, displacement stoppers, etc.) for the design, simulation and optimization of MEMS inertial sensors. The models capture both the nonlinear electromechanical interactions and geometrical nonlinearities and can be used for both small and large signal analyses, including the numerical computation of pull-in voltages (stability loss). Simscape behavioral modeling language was used for the implementation of reduced-order macro models, that present the advantage of a seamless interface with Simulink blocks, for creating hybrid information/energy flow system models. Test bench simulations of the library models compare favorably with both analytical results and with more in-depth finite element simulations performed in ANSYS. Separate MEMS-electronic integration tests were done on closed-loop MEMS accelerometers, where Simscape was used for modeling the MEMS device and Simulink for the electronic subsystem.Keywords: across-through variables, electromechanical coupling, energy flow, information flow, Matlab/Simulink, MEMS, nonlinear, pull-in instability, reduced order macro models, Simscape
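A stand-alone sketch of the stability-loss (pull-in) quantity that the library computes numerically, using the closed-form result for an ideal voltage-driven parallel-plate actuator with a linear spring. The parameter values are arbitrary examples, not those of a specific device in the library.

```python
# Pull-in voltage of an ideal parallel-plate electrostatic actuator with a linear spring.
import math

EPS0 = 8.854e-12          # vacuum permittivity, F/m

def pull_in_voltage(k: float, gap: float, area: float) -> float:
    """V_pi = sqrt(8 * k * g0^3 / (27 * eps0 * A))."""
    return math.sqrt(8.0 * k * gap**3 / (27.0 * EPS0 * area))

def pull_in_travel(gap: float) -> float:
    """Displacement at which the electrostatic force overwhelms the spring (g0 / 3)."""
    return gap / 3.0

if __name__ == "__main__":
    k = 2.5               # suspension stiffness, N/m (assumed)
    g0 = 2.0e-6           # initial gap, m (assumed)
    A = 100e-6 * 100e-6   # electrode area, m^2 (assumed)
    print(f"Pull-in voltage : {pull_in_voltage(k, g0, A):.2f} V")
    print(f"Pull-in travel  : {pull_in_travel(g0) * 1e6:.2f} um")
```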
598 Modeling and Implementation of a Hierarchical Safety Controller for Human Machine Collaboration
Authors: Damtew Samson Zerihun
Abstract:
This paper primarily describes the concept of a hierarchical safety control (HSC) in discrete manufacturing to uphold productivity in the presence of human intervention and machine failures using a systematic approach, by increasing system availability and using additional knowledge about the machines so as to improve human-machine collaboration (HMC). It also highlights the implemented PLC safety algorithm, applying this generic concept to a concrete production line using a lab demonstrator called FATIE (Factory Automation Test and Integration Environment). Furthermore, the paper describes a model and provides a systematic representation of human-machine collaboration in discrete manufacturing; to this end, the hierarchical safety control concept is proposed. This offers a generic description of human-machine collaboration based on finite state machines (FSM) that can be applied to various discrete manufacturing lines instead of using ad-hoc solutions for each line. With its reusability, flexibility, and extendibility, the hierarchical safety control scheme allows upholding productivity while maintaining safety with reduced engineering effort compared to existing solutions. The approach to the solution begins with a successful partitioning of different zones around the Integrated Manufacturing System (IMS), which are defined by operator tasks and the risk assessment and used to describe the location of the human operator, and thus to identify the related potential hazards and trigger the corresponding safety functions to mitigate them. This includes selective reduced-speed zones and stop zones; in addition, within the hierarchical safety control scheme, advanced safety functions such as safe standstill and safe reduced speed are used to achieve the main goals of improving safe human-machine collaboration and increasing productivity. In a sample scenario, it is shown that an increase in productivity on the order of 2.5% is already possible with a hierarchical safety control; consequently, under the given assumptions, a total of 213 € could be saved for each intervention compared to a protective stop reaction. The loss is thereby reduced by 22.8% if an occasional hazard can be refined in a hierarchical way. Furthermore, production downtime due to temporary unavailability of safety devices can be avoided with a safety failover that can save millions per year. Moreover, the paper highlights the development, implementation and application of the concept on the lab demonstrator (FATIE), where it is realized on the new safety PLCs, drive units, HMI, and safety devices in addition to the main components of the IMS. Keywords: discrete automation, hierarchical safety controller, human machine collaboration, programmable logical controller
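A minimal sketch of a zone-based hierarchical safety selection in the spirit of the HSC concept: the operator's current zone (from the risk assessment) is mapped to the least restrictive safety function that still mitigates the hazard, with a protective stop as failover. The zone names, reactions, and hierarchy below are illustrative assumptions, not the FATIE implementation.

```python
# Sketch of a hierarchical zone-to-safety-function mapping with a failover reaction.
from enum import Enum

class Zone(Enum):
    OUTSIDE = 0          # no operator near the machine
    OBSERVATION = 1      # operator present, outside the hazard area
    REDUCED_SPEED = 2    # operator inside a selective reduced-speed zone
    STOP = 3             # operator inside a stop zone

class SafetyFunction(Enum):
    FULL_PRODUCTION = "full speed"
    SAFE_REDUCED_SPEED = "safe reduced speed"
    SAFE_STANDSTILL = "safe standstill (drives energised)"
    PROTECTIVE_STOP = "protective stop (fallback)"

# Hierarchy: each zone triggers the mildest sufficient reaction.
REACTION = {
    Zone.OUTSIDE: SafetyFunction.FULL_PRODUCTION,
    Zone.OBSERVATION: SafetyFunction.FULL_PRODUCTION,
    Zone.REDUCED_SPEED: SafetyFunction.SAFE_REDUCED_SPEED,
    Zone.STOP: SafetyFunction.SAFE_STANDSTILL,
}

def select_reaction(zone: Zone, safety_device_ok: bool) -> SafetyFunction:
    """Return the safety function; fall back to a protective stop if a device is unavailable."""
    if not safety_device_ok:
        return SafetyFunction.PROTECTIVE_STOP
    return REACTION[zone]

if __name__ == "__main__":
    for zone in Zone:
        print(f"{zone.name:15s} -> {select_reaction(zone, safety_device_ok=True).value}")
    print(f"device fault    -> {select_reaction(Zone.REDUCED_SPEED, safety_device_ok=False).value}")
```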
597 Community Engagement in Child Centered Space at Disaster Events: A Case Story of Sri Lanka
Authors: Wasantha Pushpakumara Hitihami Mudiyanselage
Abstract:
In the recent past, Sri Lanka has been highly vulnerable to recurring climate shocks that severely impact food security, cause the loss of human and animal lives, destroy human settlements, displace people, and damage property. Hence, the Government of Sri Lanka took important steps towards strengthening the legal and institutional arrangements for disaster risk management in the country in May 2005. Puttalam administrative district is one of the disaster-prone districts in Sri Lanka, which constantly faces the devastating consequences of increasing natural disasters every year. Therefore, disaster risk management is a timely intervention in the area to minimize the adverse impacts of disasters. The few functioning disaster risk management networks do not take children's specific needs and vulnerabilities during emergencies into account. The most affected children and their families were evacuated to government schools and temples, and it was observed that children were left roaming around as their parents were busy queuing up for relief goods and other priorities. In this sense, VOICE understands that the community has a vital role to play in facing the challenges of disaster management in the area. During and after disasters, it was observed that some children had psychological disorders that could negatively affect their well-being. A child-friendly space during emergencies is a necessary measure in the area to avert the negative impacts of such hazards. VOICE, with the support of national and international communities, has established safer places for children (Child Centered Spaces - CCS) and their families during emergencies. Village religious venues and schools were selected and equipped with the necessary materials to be used for children in emergencies. Materials such as tools, stationery, and play materials, which cannot easily be found in the surrounding environment, were provided for the CCS centers. Village animators, youth, and elders were given comprehensive training on disaster management and their role at the CCS. They facilitated keeping children free of fear and stress during the flooding that occurred in 2015, and they were able to improve their skills in working with children. During the flooding in 2016, government agencies engaged these village animators at an early stage of the flood to make all disaster-related recovery actions productive and efficient. This mechanism is sustained at the village level and can be used in future disaster events. Keywords: child centered space, impacts, psychological disorders, village animators
596 One Year Follow up of Head and Neck Paragangliomas: A Single Center Experience
Authors: Cecilia Moreira, Rita Paiva, Daniela Macedo, Leonor Ribeiro, Isabel Fernandes, Luis Costa
Abstract:
Background: Head and neck paragangliomas are a rare group of tumors with a large spectrum of clinical manifestations. The approach to evaluating and treating these lesions has evolved over recent years. Surgery used to be the standard approach for these patients, but new imaging and radiation therapy techniques have changed that paradigm. Despite advances in treatment, the growth potential and clinical outcome of individual cases remain largely unpredictable. Objectives: Characterization of our institutional experience with the clinical management of these tumors. Methods: This was a cross-sectional study of patients with paragangliomas of the head, neck, and cranial base followed in our institution between 01 January and 31 December 2017. Data on tumor location, catecholamine levels, the specific imaging modalities employed in the diagnostic workup, treatment modality, tumor control and recurrence, complications of treatment, and hereditary status were collected and summarized. Results: A total of four female patients were followed between 01 January and 31 December 2017 in our institution. The mean age of our cohort was 53 (± 16.1) years. The primary locations were the jugulotympanic region (n=2, 50%) and the carotid body (n=2, 50%), and only one of the carotid body tumors presented with pulmonary metastasis at the time of diagnosis. None of the lesions were catecholamine-secreting. Two patients underwent genetic testing, with no mutations identified. The initial clinical presentation was variable, with decreased visual acuity and headache present as symptoms in all patients. In one of the cases, loss of all teeth of the lower jaw was the presenting symptom. Observation with serial imaging, surgical extirpation, radiation, and stereotactic radiosurgery were employed as treatment approaches according to the anatomical location and resectability of the lesions. Post-therapeutic sequelae included persistent tinnitus and disabling pain, with one patient presenting with glossopharyngeal neuralgia. Currently, all patients are under regular surveillance, with a median follow-up of 10 months. Conclusion: Ultimately, the clinical management of these tumors remains challenging owing to heterogeneity in clinical presentation, the existence of multiple treatment alternatives, and the potential to cause serious detriment to critical functions and consequently interfere with patients' quality of life. Keywords: clinical outcomes, head and neck, management, paragangliomas
595 Possibility of Membrane Filtration for Treatment of Effluent from Digestate
Authors: Marcin Debowski, Marcin Zielinski, Magdalena Zielinska, Paulina Rusanowska
Abstract:
The problem of digestate management is one of the most important factors influencing the development and operation of biogas plants. Turbidity and bacterial contamination negatively affect the growth of algae, which can limit the use of the effluent for the production of algae biomass on a large scale. These problems can be overcome by cultivating algae species resistant to environmental factors, such as Chlorella sp. or Scenedesmus sp., or by reducing the load of organic compounds to prevent bacterial contamination. The effluent requires dilution and/or purification. One method of effluent treatment is the use of a membrane technology such as microfiltration (MF), ultrafiltration (UF), nanofiltration (NF) or reverse osmosis (RO), depending on the membrane pore size and the cut-off point. Membranes are a physical barrier to solids and particles larger than the size of the pores. MF membranes have the largest pores and are used to remove turbidity, suspensions, bacteria and some viruses. UF membranes also remove color, odor and organic compounds with high molecular weight. In the treatment of wastewater or other waste streams, MF and UF can provide a sufficient degree of purification. NF membranes are used to remove natural organic matter from waters, water disinfection products and sulfates. RO membranes are applied to remove monovalent ions such as Na⁺ or K⁺. The effluent after UF was used as a medium for the cultivation of two microalgae: Chlorella sp. and Phaeodactylum tricornutum. The growth rates of Chlorella sp. and P. tricornutum were similar: 0.216 d⁻¹ and 0.200 d⁻¹ (Chlorella sp.) and 0.128 d⁻¹ and 0.126 d⁻¹ (P. tricornutum) on synthetic medium and on permeate from UF, respectively. The final biomass composition was also similar, regardless of the medium. Removal of nitrogen was 92% and 71% by Chlorella sp. and P. tricornutum, respectively. The fermentation effluents after UF and dilution were also used for the cultivation of the alga Scenedesmus sp., which is resistant to environmental conditions. The authors recommend the development of a biorefinery based on the production of algae for biogas production. There are examples of using a multi-stage membrane system to purify the liquid fraction of digestate. After initial UF, RO is used to remove ammonium nitrogen and COD. To obtain a permeate with an ammonium nitrogen concentration allowing discharge into the environment, it was necessary to apply three-stage RO. The composition of the permeate after two-stage RO was: COD 50-60 mg/dm³, dry solids 0 mg/dm³, ammonium nitrogen 300-320 mg/dm³, total nitrogen 320-340 mg/dm³, total phosphorus 53 mg/dm³. The composition of the permeate after three-stage RO, however, was: COD < 5 mg/dm³, dry solids 0 mg/dm³, ammonium nitrogen 0 mg/dm³, total nitrogen 3.5 mg/dm³, total phosphorus < 0.05 mg/dm³. The last stage of RO might be replaced by an ion exchange process. A negative aspect of membrane filtration systems is the fact that the permeate is about 50% of the introduced volume; the remainder is the retentate. Management of the retentate might involve recirculation to the biogas plant. Keywords: digestate, membrane filtration, microalgae cultivation, Chlorella sp.
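A short sketch of how the specific growth rates quoted above (e.g. 0.216 d⁻¹ for Chlorella sp. on synthetic medium) relate to biomass measurements, assuming simple exponential growth between two sampling points; the dry-weight readings below are illustrative, not study data.

```python
# Specific growth rate from two biomass readings, assuming exponential growth.
import math

def specific_growth_rate(x0: float, x1: float, dt_days: float) -> float:
    """mu = ln(X1 / X0) / dt."""
    return math.log(x1 / x0) / dt_days

def project_biomass(x0: float, mu: float, t_days: float) -> float:
    """X(t) = X0 * exp(mu * t)."""
    return x0 * math.exp(mu * t_days)

if __name__ == "__main__":
    # Hypothetical dry-weight readings (g/L) taken 5 days apart.
    mu = specific_growth_rate(x0=0.20, x1=0.59, dt_days=5.0)
    print(f"Estimated specific growth rate: {mu:.3f} 1/d")
    # Using the rate reported for Chlorella sp. on UF permeate (0.200 1/d):
    print(f"Biomass after 7 d from 0.20 g/L: {project_biomass(0.20, 0.200, 7.0):.2f} g/L")
```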
594 Seismic Reinforcement of Existing Japanese Wooden Houses Using Folded Exterior Thin Steel Plates
Authors: Jiro Takagi
Abstract:
Approximately 90 percent of the casualties in the near-fault-type Kobe earthquake in 1995 resulted from the collapse of wooden houses, although a limited number of collapses of this type of building were reported in the more recent off-shore-type Tohoku Earthquake in 2011 (excluding direct damage by the Tsunami). Kumamoto earthquake in 2016 also revealed the vulnerability of old wooden houses in Japan. There are approximately 24.5 million wooden houses in Japan and roughly 40 percent of them are considered to have the inadequate seismic-resisting capacity. Therefore, seismic strengthening of these wooden houses is an urgent task. However, it has not been quickly done for various reasons, including cost and inconvenience during the reinforcing work. Residents typically spend their money on improvements that more directly affect their daily housing environment (such as interior renovation, equipment renewal, and placement of thermal insulation) rather than on strengthening against extremely rare events such as large earthquakes. Considering this tendency of residents, a new approach to developing a seismic strengthening method for wooden houses is needed. The seismic reinforcement method developed in this research uses folded galvanized thin steel plates as both shear walls and the new exterior architectural finish. The existing finish is not removed. Because galvanized steel plates are aesthetic and durable, they are commonly used in modern Japanese buildings on roofs and walls. Residents could feel a physical change through the reinforcement, covering existing exterior walls with steel plates. Also, this exterior reinforcement can be installed with only outdoor work, thereby reducing inconvenience for residents since they would not be required to move out temporarily during construction. The Durability of the exterior is enhanced, and the reinforcing work can be done efficiently since perfect water protection is not required for the new finish. In this method, the entire exterior surface would function as shear walls and thus the pull-out force induced by seismic lateral load would be significantly reduced as compared with a typical reinforcement scheme of adding braces in selected frames. Consequently, reinforcing details of anchors to the foundations would be less difficult. In order to attach the exterior galvanized thin steel plates to the houses, new wooden beams are placed next to the existing beams. In this research, steel connections between the existing and new beams are developed, which contain a gap for the existing finish between the two beams. The thin steel plates are screwed to the new beams and the connecting vertical members. The seismic-resisting performance of the shear walls with thin steel plates is experimentally verified both for the frames and connections. It is confirmed that the performance is high enough for bracing general wooden houses.Keywords: experiment, seismic reinforcement, thin steel plates, wooden houses
593 Drying Shrinkage of Concrete: Scale Effect and Influence of Reinforcement
Authors: Qier Wu, Issam Takla, Thomas Rougelot, Nicolas Burlion
Abstract:
In the framework of French underground disposal of intermediate level radioactive wastes, concrete is widely used as a construction material for containers and tunnels. Drying shrinkage is one of the most disadvantageous phenomena of concrete structures. Cracks generated by differential shrinkage could impair the mechanical behavior, increase the permeability of concrete and act as a preferential path for aggressive species, hence leading to an overall decrease in durability and serviceability. It is of great interest to understand the drying shrinkage phenomenon in order to predict and even to control the strains of concrete. The question is whether the results obtained from laboratory samples are in accordance with the measurements on a real structure. Another question concerns the influence of reinforcement on drying shrinkage of concrete. As part of a global project with Andra (French National Radioactive Waste Management Agency), the present study aims to experimentally investigate the scale effect as well as the influence of reinforcement on the development of drying shrinkage of two high performance concretes (based on CEM I and CEM V cements, according to European standards). Various sizes of samples are chosen, from ordinary laboratory specimens up to real-scale specimens: prismatic specimens with different volume-to-surface (V/S) ratios, thin slices (thickness of 2 mm), cylinders with different sizes (37 and 160 mm in diameter), hollow cylinders, cylindrical columns (height of 1000 mm) and square columns (320×320×1000 mm). The square columns have been manufactured with different reinforcement rates and can be considered as mini-structures, to approximate the behavior of a real voussoir from the waste disposal facility. All the samples are kept, in a first stage, at 20°C and 50% of relative humidity (initial conditions in the tunnel) in a specific climatic chamber developed by the Laboratory of Mechanics of Lille. The mass evolution and the drying shrinkage are monitored regularly. The obtained results show that the specimen size has a great impact on water loss and drying shrinkage of concrete. The specimens with a smaller V/S ratio and a smaller size have a bigger drying shrinkage. The correlation between mass variation and drying shrinkage follows the same tendency for all specimens in spite of the size difference. However, the influence of reinforcement rate on drying shrinkage is not clear based on the present results. The second stage of conservation (50°C and 30% of relative humidity) could give additional results on these influences.Keywords: concrete, drying shrinkage, mass evolution, reinforcement, scale effect
592 Exploitation behind the Development of Home Batik Industry in Lawean, Solo, Central Java
Authors: Mukhammad Fatkhullah, Ayla Karina Budita, Cut Rizka Al Usrah, Kanita Khoirun Nisa, Muhammad Alhada Fuadilah Habib, Siti Muslihatul Mukaromah
Abstract:
The batik industry has become one of the leading industries in the economy of Indonesia. Since the recognition of batik as part of the cultural wealth and national identity of Indonesia by UNESCO, batik production has kept increasing as a result of growing demand for batik, both domestically and abroad. One of the most rapidly developing batik industries in Indonesia is the batik industry in Lawean Village, Solo, Central Java, Indonesia. This batik industry generally uses a putting-out system in which batik workers work in their own houses. With this system in place, employers do not have to prepare an Environmental Impact Analysis (EIA), social security for workers, overtime payment, working space, or working equipment. The implementation of the putting-out system causes many problems, from environmental pollution to the loss of workers' social rights and even the exploitation of workers by batik entrepreneurs. The data used to describe this reality are primary data from qualitative research using in-depth interviews. Informants were determined purposively. The theory used for data interpretation is the phenomenology of Alfred Schutz. Both qualitative methods and phenomenology are used in this study to describe the exploitation of batik workers under the putting-out system in the home batik industry in Lawean. The results showed that workers in the batik industry sector in Lawean were exploited through the implementation of the putting-out system. The workers were tightly bound to the entrepreneurs, so that their job can no longer be called a 'part-time' job. In terms of labor and time, the workers often work more than 12 hours per day and often work overtime without receiving any overtime payment. In terms of work safety, the workers often come into contact with chemical substances contained in batik-making materials without using any protection, such as work clothes, which is worsened by the lack of standards or procedures at work, leading to physical damage such as burnt and peeling skin. Moreover, exposure to and contamination by chemical materials make the workers and their families vulnerable to various diseases. Meanwhile, batik entrepreneurs did not provide any social security (including health cost aid). In addition, the researchers found that the batik home industry is not environmentally friendly and even damages the ecosystem, because industrial waste is disposed of without an EIA. Keywords: exploitation, home batik industry, occupational health and safety, putting-out system
591 Corrosion Analysis of a 3-1/2” Production Tubing of an Offshore Oil and Gas Well
Authors: Suraj Makkar, Asis Isor, Jeetendra Gupta, Simran Bareja, Maushumi K. Talukdar
Abstract:
During the exploratory testing phase of an offshore oil and gas well, when the tubing string was pulled out after production testing, visible corrosion/pitting was observed on a few of the 3-1/2” API 5 CT L-80 grade tubings. The area of corrosion was at the same location on all the tubings, i.e., just above the pin end. Since the corrosion was observed within two months of installation, it was a matter of concern, as it could lead to premature failures resulting in leakage and production loss, thus affecting the integrity of the asset. Therefore, the tubing was analysed to ascertain the mechanism of the corrosion occurring on its surface. During visual inspection, it was observed that the corrosion was entirely external, located near the pin end, and no significant internal corrosion was observed. The chemical compositional analysis and mechanical properties (tensile and impact) showed that the tubing material conformed to API 5 CT L-80 specifications. The metallographic analysis of the tubing revealed a tempered martensitic microstructure. The grain size at the pin end was observed to be different from that of the base metal. The microstructures of the corroded area near the threads reveal an oriented microstructure. The clearly oriented microstructure of the cold-worked zone near the threads and the difference in microstructure indicate inappropriate heat treatment after cold working. This was substantiated by the hardness test results as well, which show higher hardness at the pin end in comparison to the hardness of the base metal. Scanning Electron Microscope (SEM) analysis revealed the presence of round, deep pits and cracks on the corroded surface of the tubing. The cracks were stress corrosion cracks formed in a corrosive environment under the residual stress that had not been relieved after cold working, as mentioned above. Energy Dispersive Spectroscopy (EDS) analysis indicated the presence of mainly Fe₂O₃, chlorides, sulphides, and silica in the corroded part, indicating interaction of the tubing with the well completion fluid and wellbore environment. Thus, it was concluded that residual stress from the cold working of the male pins during threading and the corrosive environment acted in synergy to cause this pitting corrosion attack on the highly stressed zone along the circumference of the tubing just below the threaded area. Accordingly, the following recommendations were given to avoid the recurrence of such corrosion problems in the wells: (i) after any kind of hot or cold work, tubing should be normalized along its full length to achieve a uniform microstructure throughout; (ii) heat treatment requirements (as per API 5 CT) should be part of the technical specifications at the procurement stage. Keywords: pin end, microstructure, grain size, stress corrosion cracks
590 Predictive Semi-Empirical NOx Model for Diesel Engine
Authors: Saurabh Sharma, Yong Sun, Bruce Vernham
Abstract:
Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for each conditions and scenario cost significant amount of money and man hours, therefore model-based development strategy has been implemented in order to solve that issue. NOx formation is highly dependent on the burn gas temperature and the O2 concentration inside the cylinder. The current empirical models are developed by calibrating the parameters representing the engine operating conditions with respect to the measured NOx. This makes the prediction of purely empirical models limited to the region where it has been calibrated. An alternative solution to that is presented in this paper, which focus on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is shown by developing a fast and predictive NOx model by using the physical parameters and empirical correlation. The model is developed based on the steady state data collected at entire operating region of the engine and the predictive combustion model, which is developed in Gamma Technology (GT)-Power by using Direct Injected (DI)-Pulse combustion object. In this approach, temperature in both burned and unburnt zone is considered during the combustion period i.e. from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). Also, the oxygen concentration consumed in burnt zone and trapped fuel mass is also considered while developing the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. Substantial numbers of cases are tested for different engine configurations over a large span of speed and load points. Different sweeps of operating conditions such as Exhaust Gas Recirculation (EGR), injection timing and Variable Valve Timing (VVT) are also considered for the validation. Model shows a very good predictability and robustness at both sea level and altitude condition with different ambient conditions. The various advantages such as high accuracy and robustness at different operating conditions, low computational time and lower number of data points requires for the calibration establishes the platform where the model-based approach can be used for the engine calibration and development process. Moreover, the focus of this work is towards establishing a framework for the future model development for other various targets such as soot, Combustion Noise Level (CNL), NO2/NOx ratio etc.Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical
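An illustrative sketch of the semi-empirical idea above: regress measured NOx on physics-based in-cylinder quantities (burned-zone temperature, O₂ concentration, trapped mass) with an ensemble learner. The synthetic data below only mimic the qualitative trends (NOx rising with burned-gas temperature and O₂); they are not engine measurements, and the model settings are assumptions.

```python
# Ensemble regression of NOx on physical in-cylinder features, on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 500

t_burn = rng.uniform(1800.0, 2600.0, n)     # burned-zone temperature, K
o2 = rng.uniform(0.04, 0.20, n)             # burned-zone O2 mass fraction
m_trap = rng.uniform(0.4, 1.2, n)           # trapped mass, g per cycle

# Synthetic "measured" NOx with an Arrhenius-like temperature dependence plus noise.
nox = m_trap * o2 * np.exp(-38000.0 / t_burn) * 1e9 + rng.normal(0.0, 5.0, n)

X = np.column_stack([t_burn, o2, m_trap])
X_train, X_test, y_train, y_test = train_test_split(X, nox, test_size=0.25, random_state=7)

model = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05, random_state=7)
model.fit(X_train, y_train)
print(f"R^2 on held-out synthetic points: {model.score(X_test, y_test):.3f}")
```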
589 Microglia Activation in Animal Model of Schizophrenia
Authors: Esshili Awatef, Manitz Marie-Pierre, Eßlinger Manuela, Gerhardt Alexandra, Plümper Jennifer, Wachholz Simone, Friebe Astrid, Juckel Georg
Abstract:
Maternal immune activation (MIA) resulting from maternal viral infection during pregnancy is a known risk factor for schizophrenia. The neural mechanisms by which maternal infections increase the risk for schizophrenia remain unknown, although the prevailing hypothesis argues that an activation of the maternal immune system induces changes in the maternal-fetal environment that might interact with fetal brain development. This may lead to an activation of fetal microglia, inducing long-lasting functional changes in these cells. Based on post-mortem analyses showing an increased number of activated microglial cells in patients with schizophrenia, it can be hypothesized that these cells contribute to disease pathogenesis and may be actively involved in the gray matter loss observed in such patients. In the present study, we hypothesize that prenatal treatment with the inflammatory agent Poly(I:C) during embryogenesis contributes to microglial activation in the offspring, which may therefore represent a contributing factor in the pathogenesis of schizophrenia and underlines the need for new pharmacological treatment options. Pregnant rats were treated with a single intraperitoneal injection of Poly(I:C) or saline on gestation day 17. Brains of control and Poly(I:C) offspring were removed and cut into 20-μm-thick coronal sections using a cryostat. Brain slices were fixed and immunostained with an Iba1 antibody. Subsequently, Iba1 immunoreactivity was detected using a secondary goat anti-rabbit antibody. The sections were viewed and photographed under a microscope. The immunohistochemical analysis revealed increases in microglia cell number in the prefrontal cortex in offspring of Poly(I:C)-treated rats compared to controls injected with NaCl. However, no significant differences in microglia activation were observed in the cerebellum between the groups. Prenatal immune challenge with Poly(I:C) was thus able to induce long-lasting changes in the offspring brains. This led to a higher activation of microglial cells in the prefrontal cortex, a brain region critical for many higher brain functions, including working memory and cognitive flexibility, which might be implicated in possible changes in cortical neuropil architecture in schizophrenia. Further studies will be needed to clarify the association between microglial cell activation and schizophrenia-related behavioral alterations. Keywords: microglia, neuroinflammation, Poly(I:C), schizophrenia
588 Flood Hazards and Emergency Response in Negara Brunei Darussalam
Authors: Hj Mohd Sidek bin Hj Mohd Yusof
Abstract:
More than 1.5 billion people around the world are adversely affected by floods. Floods account for about a third of all natural catastrophes, cause more than half of all fatalities, and are responsible for a third of the overall economic loss around the world. Giving advance warning of impending disasters can reduce or even avoid the number of deaths and the social and economic hardships that are so commonly reported after the event. Integrated catchment management recognizes that it is not practical or viable to provide structural measures that will keep all floodwater away from the community and their property. Non-structural measures are therefore required to assist the community to cope when flooding occurs that exceeds the capacity of the structural measures. Non-structural measures may be used to influence the way land is used or buildings are constructed, or to improve the community's preparedness and response to flooding. The development and implementation of non-structural measures may be guided and encouraged by policy and legislation, or through voluntary action by the community based on knowledge gained from public education programs. There is a range of non-structural measures that can be used for flood hazard mitigation. These include policies and rules applied by the government to regulate the kinds of activities carried out in various flood-prone areas, including minimum floor levels and the types of development approved, as well as voluntary actions taken by the authorities and by the community living and working on the flood plain to lessen the effects of flooding on themselves and their properties, including monitoring land use changes, monitoring and investigating the effects of bush/forest clearing in the catchment, and providing relevant flood-related information to the community. Response modification measures may include a flood warning system, flood education, community awareness and readiness, evacuation arrangements, and a recovery plan. A Civil Defense Emergency Management organization needs to be established for Brunei Darussalam in order to plan, coordinate, and undertake flood emergency management. This responsibility may be taken on by the Ministry of Home Affairs, Brunei Darussalam, which is already responsible for fire fighting and rescue services. Several pieces of legislation and planning instruments are in place to assist flood management, particularly the flood warning system, flood education, community awareness and readiness, evacuation arrangements, and the recovery plan. Keywords: RTB (Radio Television Brunei), DDMC (District Disaster Management Center), FIR (flood incidence report), PWD (Public Works Department)
Procedia PDF Downloads 256587 Countering the Bullwhip Effect by Absorbing It Downstream in the Supply Chain
Authors: Geng Cui, Naoto Imura, Katsuhiro Nishinari, Takahiro Ezaki
Abstract:
The bullwhip effect, which refers to the amplification of demand variance as one moves up the supply chain, has been observed in various industries and extensively studied through analytic approaches. Existing methods to mitigate the bullwhip effect, such as decentralized demand information, vendor-managed inventory, and the Collaborative Planning, Forecasting, and Replenishment System, rely on the willingness and ability of supply chain participants to share their information. However, in practice, information sharing is often difficult to realize due to privacy concerns. The purpose of this study is to explore new ways to mitigate the bullwhip effect without the need for information sharing. This paper proposes a 'bullwhip absorption strategy' (BAS) to alleviate the bullwhip effect by absorbing it downstream in the supply chain. To achieve this, a two-stage supply chain system was employed, consisting of a single retailer and a single manufacturer. In each time period, the retailer receives an order generated according to an autoregressive process. Upon receiving the order, the retailer depletes its inventory by the ordered amount, forecasts future demand based on past records, and places an order with the manufacturer using the order-up-to replenishment policy. The manufacturer follows a similar process. In essence, the mechanism of the model is similar to that of the beer game. The BAS is implemented at the retailer's level to counteract the bullwhip effect. This strategy requires the retailer to reduce the uncertainty in its orders, thereby absorbing the bullwhip effect downstream in the supply chain. The advantage of the BAS is that upstream participants can benefit from a reduced bullwhip effect. Although the retailer may incur additional costs, if the gain in the upstream segment can compensate for the retailer's loss, the entire supply chain will be better off. Two indicators, order variance and inventory variance, were used to quantify the bullwhip effect in relation to the strength of absorption. It was found that implementing the BAS at the retailer's level results in a reduction in both the retailer's and the manufacturer's order variances. However, when examining the impact on inventory variances, a trade-off relationship was observed. The manufacturer's inventory variance monotonically decreases with an increase in absorption strength, while the retailer's inventory variance does not always decrease as the absorption strength grows. This is especially true when the autoregression coefficient has a high value, causing the retailer's inventory variance to become a monotonically increasing function of the absorption strength. Finally, numerical simulations were conducted for verification, and the results were consistent with our theoretical analysis.Keywords: bullwhip effect, supply chain management, inventory management, demand forecasting, order-up-to policy
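The dynamics described in this abstract (autoregressive demand, an order-up-to policy with forecasting, and an absorption step that damps the retailer's order variability) can be illustrated with a toy simulation. The sketch below is a minimal Python illustration and not the authors' model: the moving-average forecast, the exponential smoothing used as the "absorption" step, the parameter values, and the omission of the replenishment lead time are all assumptions made for brevity.

```python
import numpy as np

def simulate(alpha, T=20000, phi=0.7, mu=20.0, sigma=2.0, L=2, window=10):
    """Toy two-stage chain: AR(1) demand -> retailer with an order-up-to policy
    plus an order-smoothing ('absorption') step of strength alpha."""
    rng = np.random.default_rng(0)          # same demand path for every alpha

    # AR(1) customer demand
    demand = np.empty(T)
    d = mu
    for t in range(T):
        d = mu + phi * (d - mu) + rng.normal(0.0, sigma)
        demand[t] = d

    inv = 0.0
    last_order = mu
    orders = np.empty(T)
    inventory = np.empty(T)
    for t in range(T):
        inv -= demand[t]                                     # ship the ordered amount
        forecast = demand[max(0, t - window + 1): t + 1].mean()
        target = (L + 1) * forecast                          # order-up-to level
        raw_order = max(0.0, target - inv)
        order = alpha * last_order + (1.0 - alpha) * raw_order  # absorb variability
        last_order = order
        orders[t] = order
        inv += order                                         # lead time ignored for brevity
        inventory[t] = inv

    return demand.var(), orders.var(), inventory.var()

for a in (0.0, 0.3, 0.6):
    dv, ov, iv = simulate(a)
    print(f"alpha={a:.1f}  demand var={dv:7.2f}  order var={ov:7.2f}  inventory var={iv:7.2f}")
```

With alpha = 0 the order variance exceeds the demand variance (the bullwhip effect), and increasing alpha reduces the order variance passed upstream while shifting variability into the retailer's inventory, which mirrors the trade-off reported in the abstract.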
Procedia PDF Downloads 74586 An Integrated Geophysical Investigation for Earthen Dam Inspection: A Case Study of Huai Phueng Dam, Udon Thani, Northeastern Thailand
Authors: Noppadol Poomvises, Prateep Pakdeerod, Anchalee Kongsuk
Abstract:
In the middle of September 2017, a tropical storm named 'DOKSURI' swept through Udon Thani, Northeastern Thailand. The storm dumped heavy rain for many hours and caused a large amount of water to flow into Huai Phueng reservoir. The level of impounded water increased rapidly, and the extra water flowed over the service spillway, a morning-glory type constructed of concrete about 50 years ago. Subsequently, a sinkhole formed on the dam crest, and five points of water piping were found on the downstream slope close to the spillway. Three geophysical techniques were carried out to inspect the cause of the failures: Electrical Resistivity Imaging (ERI), Multichannel Analysis of Surface Waves (MASW), and Ground Penetrating Radar (GPR). The ERI result clearly shows evidence of the overtopping event and heterogeneity around the spillway, implying the likely previous extent of the sinkhole around the pipe. The shear wave velocity of the subsurface soil measured by MASW can be numerically converted to the undrained shear strength of the impervious clay core. The GPR result clearly reveals partial settlement of the freeboard zone at the top of the dam and also the shape of the new refill material placed to plug the sinkhole and restore the crest to its intended condition. In addition, the GPR images confirm that there are no other sinkholes along the survey lines apart from the one found on top of the spillway. An integrated interpretation of the three results, together with several observations made during a field walk-through and data from drilled holes, points to four main causes. The first cause is too much water flowing over the spillway. Second, the water attacking the morning-glory spillway created cracks at the concrete contacts where the spillway cuts across to the center of the dam. Third, the high-velocity flow inside the concrete pipe sucked fine particles of embankment material down through those cracks and flushed them out to the river channel. Lastly, the loss of clay material from the dam into the concrete pipe created the sinkhole at the crest. However, in the case of failure by piping, it is possible that the pipes were formed both by backward erosion (internal erosion along or into the embedded spillway structure) and by excess saturation of the downstream material.Keywords: dam inspection, GPR, MASW, resistivity
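The MASW step mentioned above, converting shear wave velocity into an undrained shear strength estimate for the clay core, can be sketched as follows. This is a generic illustration and not the correlation used in the study: the small-strain shear modulus Gmax = ρ·Vs² is standard soil dynamics, but the bulk density and the rigidity-index ratio Gmax/Su used to back out Su are assumed, purely illustrative values.

```python
# Illustrative conversion of MASW shear wave velocity to an undrained
# shear strength estimate for a clay core (not the study's correlation).
def undrained_shear_strength(vs_m_per_s: float,
                             density_kg_m3: float = 1900.0,   # assumed bulk density
                             rigidity_index: float = 500.0):  # assumed Gmax/Su ratio
    """Estimate Su (kPa) from shear wave velocity Vs (m/s)."""
    g_max = density_kg_m3 * vs_m_per_s ** 2   # small-strain shear modulus, Pa
    su_pa = g_max / rigidity_index            # Su = Gmax / (Gmax/Su)
    return su_pa / 1000.0                     # kPa

for vs in (120, 180, 250):   # plausible Vs values for compacted clay, m/s
    print(f"Vs = {vs:3d} m/s  ->  Su ~ {undrained_shear_strength(vs):6.1f} kPa")
```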
Procedia PDF Downloads 242585 The Sustainable Design Approaches of Vernacular Architecture in Anatolia
Authors: Mine Tanaç Zeren
Abstract:
Traditional, or vernacular, architecture can be considered modern and enduring in the way it reflects the community's lifestyle, its reasonable interpretation of material and structure, and the integrity of the relationship between the building and its environment. When vernacular architecture is examined, it is seen that sustainable building design approaches were achieved from the very beginning by adapting to climatic conditions. The aim of the sustainable design approach is to adapt to the characteristics of the topography of the land and to the climatic conditions while minimizing energy use through the building materials and structural elements. The Traditional Turkish House, as one of the representatives of traditional and vernacular architecture in Anatolia, has a sustainable building design approach as well, which can be read in the space organization, the section, the volume, and the building components and details. The only effective factor that human beings cannot change, and to which they have to adapt their constructions and settlements, is climate. The vernacular settlements of Anatolia, the "Traditional Turkish Houses," are generally formed as concentric settlements in desert conditions and climates, or as separate and independent formations oriented according to the wind and the sun in moist areas. In this way, they meet the sustainable building design criteria. This paper aims to put forward the sustainable building design approaches of vernacular architecture in Anatolia. There are four main climatic conditions in Anatolia, depending on regional differences. Taking these different climatic and topographic conditions into account, it has been seen that vernacular housing features are shaped by, and differ with, the changing conditions. What differs is the space organization, the design of the shelter of the building, and the materials and structural systems used. In this paper, the sustainable building design approaches of Anatolian vernacular architecture are examined through four different vernacular settlements located in the Aegean Region, Marmara Region, Black Sea Region, and Eastern Region. These differentiated features, and how they differ in order to maintain the sustainability criteria, form the main discussion of the paper. The methodology of the paper briefly defines these differentiations and the sustainable design criteria. The sustainable design approaches and the differentiated items are read through the design criteria of the shelter of the building and the material selection criteria according to climatic conditions. The methods of preventing energy loss are also examined. At the end of this research, it is seen that houses located in different parts of Anatolia, in adapting to the environment and maintaining sustainability under their climatic and topographic conditions, differ from each other in terms of space organization, structural system, material use, and design of the shelter of the building.Keywords: sustainability of vernacular architecture, sustainable design criteria of traditional Turkish houses, Turkish houses, vernacular architecture
Procedia PDF Downloads 98584 Heat Transfer Performance of a Small Cold Plate with Uni-Directional Porous Copper for Cooling Power Electronics
Authors: K. Yuki, R. Tsuji, K. Takai, S. Aramaki, R. Kibushi, N. Unno, K. Suzuki
Abstract:
A small cold plate with uni-directional porous copper is proposed for cooling power electronics such as an on-vehicle inverter with a heat generation of approximately 500 W/cm2. The uni-directional porous copper, with its pores oriented perpendicular to the heat transfer surface, is soldered to a grooved heat transfer surface. This structure enables the cooling liquid to evaporate in the pores of the porous copper and the vapor to be discharged through the grooves. In order to minimize the size of the cold plate, a double flow channel concept is introduced for its design. The cold plate consists of a base plate, a spacer, and a vapor-discharging plate, 12 mm in total thickness. The base plate has multiple nozzles of 1.0 mm in diameter for the liquid supply and four slits of 2.0 mm in width for vapor discharge, and is attached onto the top surface of the porous copper plate, which is 20 mm in diameter and 5.0 mm in thickness. The pore size is 0.36 mm and the porosity is 36%. The cooling liquid flows into the porous copper as an impinging jet from the multiple nozzles, and the vapor generated in the pores is then discharged through the grooves and the vapor slits out of the cold plate. The heated test section consists of the cold plate described above and a copper heat transfer block with six cartridge heaters. The cross section of the heat transfer block is reduced in order to increase the heat flux. The top surface of the block is the grooved heat transfer surface, 10 mm in diameter, onto which the porous copper is soldered. The grooves are fabricated like latticework, and their width and depth are 1.0 mm and 0.5 mm, respectively. By embedding three thermocouples in the cylindrical part of the heat transfer block, the temperature of the heat transfer surface and the heat flux are extrapolated at steady state. In this experiment, the flow rate is 0.5 L/min and the flow velocity at each nozzle is 0.27 m/s. The liquid inlet temperature is 60 °C. The experimental results prove that, in the single-phase heat transfer regime, the heat transfer performance of the cold plate with the uni-directional porous copper is 2.1 times higher than that without the porous copper, though the pressure loss with the porous copper also becomes higher. As for the two-phase heat transfer regime, the critical heat flux increases by approximately 35% when the uni-directional porous copper is introduced, compared with the CHF of the multiple impinging jet flow. In addition, we confirmed that these heat transfer data were much higher than those of an ordinary single impinging jet flow. These results demonstrate the high potential of the cold plate with the uni-directional porous copper from the viewpoint of not only heat transfer performance but also energy saving.Keywords: cooling, cold plate, uni-porous media, heat transfer
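The extrapolation of the surface temperature and heat flux from the three embedded thermocouples relies on one-dimensional steady conduction (Fourier's law) along the reduced cylindrical part of the heat transfer block. The sketch below is an illustrative calculation only: the thermocouple depths and readings, and the use of a simple linear fit, are assumptions, and the copper conductivity is a nominal textbook value rather than a measured one.

```python
import numpy as np

k_cu = 390.0                          # thermal conductivity of copper, W/(m K), nominal
x = np.array([0.003, 0.008, 0.013])   # thermocouple depths below the surface, m (assumed)
T = np.array([148.0, 171.0, 194.0])   # steady-state readings, deg C (assumed)

# Linear fit T(x) = a*x + b for 1-D steady conduction in the neck of the block
a, b = np.polyfit(x, T, 1)

q = k_cu * a        # heat flux conducted toward the cooled surface, W/m^2 (Fourier's law)
T_surface = b       # extrapolated temperature at x = 0 (the heat transfer surface)

print(f"temperature gradient : {a:9.1f} K/m")
print(f"heat flux            : {q / 1e4:9.1f} W/cm^2")
print(f"surface temperature  : {T_surface:9.1f} deg C")
```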
Procedia PDF Downloads 295583 Robust Electrical Segmentation for Zone Coherency Delimitation Base on Multiplex Graph Community Detection
Authors: Noureddine Henka, Sami Tazi, Mohamad Assaad
Abstract:
The electrical grid is a highly intricate system designed to transfer electricity from production areas to consumption areas. The Transmission System Operator (TSO) is responsible for ensuring the efficient distribution of electricity and maintaining the grid's safety and quality. However, due to the increasing integration of intermittent renewable energy sources, there is a growing level of uncertainty, which requires a faster, more responsive approach. A potential solution is electrical segmentation, which consists of creating coherence zones where electrical disturbances mainly remain within the zone. Indeed, by means of coherent electrical zones, it becomes possible to focus solely on the sub-zone, reducing the range of possibilities and aiding in managing uncertainty. It allows faster execution of operational processes and easier learning for supervised machine learning algorithms. Electrical segmentation can be applied to various applications, such as electrical control, minimizing electrical loss, and ensuring voltage stability. Since the electrical grid can be modeled as a graph, where the vertices represent electrical buses and the edges represent electrical lines, identifying coherent electrical zones can be seen as a clustering task on graphs, generally called community detection. Nevertheless, a critical criterion for the zones is their ability to remain resilient to the electrical evolution of the grid over time. This evolution is due to the constant changes in electricity generation and consumption, which are reflected in graph structure variations as well as line flow changes. One approach to creating a resilient segmentation is to design robust zones under various circumstances. This issue can be represented through a multiplex graph, where each layer represents a specific situation that may arise on the grid. Consequently, resilient segmentation can be achieved by conducting community detection on this multiplex graph. The multiplex graph is composed of multiple graphs, and all the layers share the same set of vertices. Our proposal involves a model that utilizes a unified representation to compute a flattening of all layers. This unified situation can be penalized to obtain K connected components representing the robust electrical segmentation clusters. We compare our robust segmentation to the segmentation based on a single reference situation. The robust segmentation proves its relevance by producing clusters with high intra-zone electrical perturbation and low variance of electrical perturbation. The experiments show when robust electrical segmentation is beneficial and in which contexts.Keywords: community detection, electrical segmentation, multiplex graph, power grid
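The core idea of flattening a multiplex graph into a single weighted graph and then running community detection on it can be illustrated with a small sketch. This is a generic illustration using a simple weighted-sum flattening and off-the-shelf modularity-based community detection from networkx, not the penalized K-component model proposed by the authors; the toy layers, buses, and edge weights are assumptions.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy multiplex grid: each layer shares the same buses (vertices)
# but carries its own situation-dependent edge weights.
buses = range(8)
layer_edges = [
    # layer 1: reference grid situation
    [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 0.2), (3, 4, 1.0), (4, 5, 1.0),
     (5, 6, 0.3), (6, 7, 1.0), (7, 0, 0.1)],
    # layer 2: a different operating situation (changed flows)
    [(0, 1, 0.9), (1, 2, 1.1), (2, 3, 0.1), (3, 4, 0.8), (4, 5, 1.2),
     (5, 6, 0.2), (6, 7, 0.9), (7, 0, 0.2)],
]

# Flatten: sum edge weights across all layers into one weighted graph.
flat = nx.Graph()
flat.add_nodes_from(buses)
for edges in layer_edges:
    for u, v, w in edges:
        if flat.has_edge(u, v):
            flat[u][v]["weight"] += w
        else:
            flat.add_edge(u, v, weight=w)

# Community detection on the flattened graph -> candidate coherent zones.
zones = greedy_modularity_communities(flat, weight="weight")
for i, zone in enumerate(zones):
    print(f"zone {i}: buses {sorted(zone)}")
```

Because the flattened weights aggregate every situation, edges that are weak in all layers (here 2-3, 5-6, 7-0) become natural zone boundaries, which is the intuition behind segmenting on the multiplex rather than on a single reference situation.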
Procedia PDF Downloads 79582 Monte Carlo Simulation Study on Improving the Flatting Filter-Free Radiotherapy Beam Quality Using Filters from Low- z Material
Authors: H. M. Alfrihidi, H.A. Albarakaty
Abstract:
Flattening filter-free (FFF) photon beam radiotherapy has increased in the last decade, enabled by advancements in treatment planning systems and radiation delivery techniques such as multi-leaf collimators. FFF beams have higher dose rates, which reduces treatment time. On the other hand, FFF beams have a higher surface dose, due to the loss of the beam-hardening effect provided by the flattening filter (FF). The possibility of improving FFF beam quality using filters made from low-Z materials such as steel and aluminium (Al) was investigated using Monte Carlo (MC) simulations. The attenuation coefficient of low-Z materials is higher for low-energy photons than for high-energy photons, which leads to hardening of the FFF beam and, consequently, a reduction in the surface dose. The BEAMnrc user code, based on the Electron Gamma Shower (EGSnrc) MC code, was used to simulate the beam of a 6 MV TrueBeam linac. A phase-space (phsp) file provided by Varian Medical Systems was used as the radiation source in the simulation. This phsp file was scored just above the jaws, at 27.88 cm from the target. The linac geometry from the jaws downward was constructed, and the transmitted radiation was simulated and scored at 100 cm from the target. To study the effect of low-Z filters, steel and Al filters with a thickness of 1 cm were added below the jaws, and the phsp file was scored at 100 cm from the target. For comparison, the FF beam was simulated using a similar setup. The BEAM Data Processor (BEAMdp) was used to analyse the energy spectra in the phsp files. The dose distributions resulting from these beams were then simulated in a homogeneous water phantom using DOSXYZnrc. The dose profile was evaluated according to the surface dose, the lateral dose distribution, and the percentage depth dose (PDD). The energy spectra of the beams show that the FFF beam is softer than the FF beam. The energy peaks for the FFF and FF beams are 0.525 MeV and 1.52 MeV, respectively. The FFF beam's energy peak becomes 1.1 MeV with a steel filter, while the Al filter does not affect the peak position. The steel and Al filters reduced the surface dose by 5% and 1.7%, respectively. The dose at a depth of 10 cm (D10) rises by around 2% and 0.5% with the steel and Al filters, respectively. On the other hand, the steel and Al filters reduce the dose rate of the FFF beam by 34% and 14%, respectively. However, their effect on the dose rate is less than that of the tungsten FF, which reduces the dose rate by about 60%. In conclusion, filters made from low-Z materials decrease the surface dose and increase the D10 dose, allowing high-dose delivery to deep tumors with a low skin dose. Although using these filters affects the dose rate, this effect is much lower than that of the FF.Keywords: flattening filter free, monte carlo, radiotherapy, surface dose
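The beam-hardening argument, namely that a low-Z filter attenuates the low-energy part of the FFF spectrum more strongly than the high-energy part, can be checked with a simple exponential-attenuation calculation. The sketch below uses a toy two-bin spectrum and approximate mass attenuation coefficients for aluminium; the spectrum weights and coefficient values are illustrative assumptions, not data from the study or from the Monte Carlo codes named above.

```python
import math

# Toy FFF spectrum: two energy bins (MeV) with assumed relative fluences.
spectrum = {0.5: 0.6, 1.5: 0.4}

# Approximate mass attenuation coefficients of aluminium, cm^2/g (illustrative).
mu_rho_al = {0.5: 0.084, 1.5: 0.046}
rho_al = 2.70      # density of aluminium, g/cm^3
thickness = 1.0    # filter thickness, cm, as in the simulated setup

# Attenuate each bin by exp(-(mu/rho) * rho * t).
filtered = {E: w * math.exp(-mu_rho_al[E] * rho_al * thickness)
            for E, w in spectrum.items()}

def mean_energy(spec):
    total = sum(spec.values())
    return sum(E * w for E, w in spec.items()) / total

print(f"mean energy before filter: {mean_energy(spectrum):.3f} MeV")
print(f"mean energy after  filter: {mean_energy(filtered):.3f} MeV")
print(f"total transmission       : {sum(filtered.values()) / sum(spectrum.values()):.2f}")
```

The mean energy rises after the filter while the total transmission drops, which is the same qualitative trade-off reported in the abstract: a harder beam with a lower surface dose, at the cost of a reduced dose rate.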
Procedia PDF Downloads 73581 Structural Health Assessment of a Masonry Bridge Using Wireless
Authors: Nalluri Lakshmi Ramu, C. Venkat Nihit, Narayana Kumar, Dillep
Abstract:
Masonry bridges are iconic heritage transportation infrastructure throughout the world. Continuous increases in traffic loads and speeds have kept engineers in a dilemma about their structural performance and capacity. Hence, the research community has an urgent need to propose an effective methodology and validate it on real bridges. The presented research aims to assess the structural health of an eighty-year-old masonry railway bridge in India using wireless accelerometer sensors. The bridge consists of 44 spans of 24.2 m each, and each pier is 13 m tall, laid on a well foundation. To calculate the dynamic characteristics of the bridge, ambient vibrations induced by moving traffic at various speeds were recorded, and these were compared with a three-dimensional numerical model developed in finite-element software. Conclusions about the weaker or deteriorated piers are drawn from the comparison of the frequencies obtained from the experimental tests conducted on alternate spans. Masonry is a heterogeneous, anisotropic material made up of incoherent constituents (such as bricks, stones, and blocks). It is most likely the earliest widely used construction material. Masonry bridges, which were typically constructed of brick and stone, are still a key feature of the world's highway and railway networks. There are 147,523 railway bridges across India, and about 15% of these are built of masonry and are around 80 to 100 years old. The cultural significance of masonry bridges cannot be overstated. These bridges are considered complicated due to the presence of arches, spandrel walls, piers, foundations, and soils. Traffic loads and vibrations, wind, rain, frost attack, high/low temperature cycles, moisture, earthquakes, river overflows, floods, scour, and soil movement under the foundations may cause material deterioration, opening of joints and ring separation in arch barrels, cracks in piers, loss of bricks, stones, and mortar joints, and distortion of the arch profile. A few NDT methods, such as flat-jack tests, are employed to assess the homogeneity and durability of masonry structures; however, these tests have many drawbacks. A modern approach to the structural health assessment of masonry structures through vibration analysis, frequencies, and stiffness properties is explored in this paper.Keywords: masonry bridges, condition assessment, wireless sensors, numerical analysis, modal frequencies
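A common way to extract modal frequencies from ambient acceleration records of the kind described here is peak picking on the power spectral density. The sketch below is a generic illustration using a synthetic signal, not the processing chain used in the study; the sampling rate, modal frequencies, amplitudes, and noise level are all assumed values.

```python
import numpy as np
from scipy.signal import welch, find_peaks

fs = 200.0                       # sampling rate, Hz (assumed)
t = np.arange(0, 600, 1 / fs)    # 10 minutes of ambient record
rng = np.random.default_rng(1)

# Synthetic ambient response: two dominant harmonic components standing in
# for structural modes, buried in broadband noise.
acc = (0.5 * np.sin(2 * np.pi * 3.2 * t)
       + 0.3 * np.sin(2 * np.pi * 7.8 * t)
       + rng.normal(0.0, 1.0, t.size))

# Welch power spectral density and simple peak picking.
f, pxx = welch(acc, fs=fs, nperseg=4096)
peaks, _ = find_peaks(pxx, height=10 * np.median(pxx))

print("candidate modal frequencies (Hz):", np.round(f[peaks], 2))
```

Frequencies identified this way on alternate spans can then be compared with each other and with the finite-element model, which is the comparison the abstract uses to flag weaker or deteriorated piers.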
Procedia PDF Downloads 169580 Cartilage Mimicking Coatings to Increase the Life-Span of Bearing Surfaces in Joint Prosthesis
Authors: L. Sánchez-Abella, I. Loinaz, H-J. Grande, D. Dupin
Abstract:
Aseptic loosening remains the principal cause of revision in total hip arthroplasty (THA). In long-term implantations, submicron particles are generated in vivo due to the inherent wear of the prosthesis. When this occurs, macrophages phagocytose the particles and secrete bone-resorptive cytokines, inducing osteolysis and hence loosening of the implanted prosthesis. Therefore, new technologies are required to reduce the wear of the bearing materials and hence increase the life-span of the prosthesis. Our strategy focuses on surface modification of the bearing materials with a hydrophilic coating based on cross-linked water-soluble (meth)acrylic monomers to improve their tribological behavior. These coatings are biocompatible, with high swelling capacity and antifouling properties, mimicking the properties of natural cartilage, i.e., wear resistance with a permanent hydrated layer that prevents prosthesis damage. Cartilage-mimicking coatings may also be used to protect medical device surfaces from damage and scratches that would compromise their integrity and hence their safety. However, there are only a few reports on the mechanical and tribological characteristics of this type of coating. Clear beneficial advantages of this coating have been demonstrated under different conditions and on different materials, such as ultra-high-molecular-weight polyethylene (UHMWPE), cross-linked polyethylene (XLPE), carbon-fiber-reinforced polyetheretherketone (CFR-PEEK), cobalt-chromium (CoCr), stainless steel, zirconia-toughened alumina (ZTA), and alumina. In routine tribological experiments, the wear of the UHMWPE substrate was decreased by 75% against alumina, ZTA, and stainless steel. For the coated CFR-PEEK substrate, the amount of material lost against ZTA and CoCr was at least 40% lower. Hip simulator experiments allowed coated ZTA femoral heads and coated UHMWPE cups to be validated, with a decrease of 80% in material loss. Further hip simulator experiments, in which abrasive particles (1-micron alumina particles) were added for 3 million of a total of 6 million cycles, demonstrated a decrease of around 55% in wear compared to uncoated UHMWPE and uncoated XLPE. In conclusion, CIDETEC's hydrogel coating technology is versatile and can be adapted to protect a large range of surfaces, even in abrasive conditions.Keywords: cartilage, hydrogel, hydrophilic coating, joint
Procedia PDF Downloads 119579 Remote Criminal Proceedings as Implication to Rethink the Principles of Criminal Procedure
Authors: Inga Žukovaitė
Abstract:
This paper aims to present postdoctoral research on remote criminal proceedings in court. In a period when most countries have introduced the possibility of remote criminal proceedings into their procedural laws, it is possible not only to identify the weaknesses and strengths of the legal regulation but also to assess the effectiveness of the instrument used and to develop an approach to the process. The example of some countries (for example, Italy) shows, on the one hand, that criminal procedure, based on orality and immediacy, does not lend itself to easy modifications that pose even a slight threat of devaluing these principles in a society with well-established traditions of this procedure. On the other hand, such strong opposition and criticism make us ask whether we are facing the possibility of rethinking the traditional ways of understanding the safeguards in order to preserve their essence without devaluing their traditional package, while looking for new components to replace or compensate for the so-called 'loss' of safeguards. Reflection on technological progress in the field of criminal procedural law indicates the need to rethink, on the basis of fundamental procedural principles, the safeguards that can replace or compensate for those that are in crisis as a result of the intervention of technological progress. Discussions in academic doctrine on the impact of technological interventions on the proceedings as such, or on the limits of such interventions, refer to the principles of criminal procedure as a point of reference. In the context of the inferiority of technology, scholarly debate still addresses the issue of whether the court will gradually become a mere site for the exercise of penal power, with the resultant consequence of the deformation of the procedure itself as a physical ritual. In this context, this work seeks to illustrate the relationship between remote criminal proceedings in court and the principle of immediacy, the concept of which is based on the application of different models of criminal procedure (inquisitorial and adversarial). The aim is to assess the challenges posed for legal regulation by the interaction of technological progress with the principles of criminal procedure. The main hypothesis to be tested is that the acceptance of remote proceedings is directly linked to the prevailing model of criminal procedure: the more principles of the inquisitorial model are applied to the criminal process, the more acceptable a remote criminal trial is, and conversely, the more the criminal process is based on an adversarial model, the more the remote criminal process is seen as incompatible with the principle of immediacy. In order to achieve this goal, the following tasks are set: to identify whether the adversarial and inquisitorial models differ in how remote proceedings are assessed against the immediacy principle, and to analyse the main aspects of the regulation of remote criminal proceedings based on the examples of different countries (for example, Lithuania, Italy, etc.).Keywords: remote criminal proceedings, principle of orality, principle of immediacy, adversarial model, inquisitorial model
Procedia PDF Downloads 68578 A Minimally Invasive Approach Using Bio-Miniatures Implant System for Full Arch Rehabilitation
Authors: Omid Allan
Abstract:
The advent of ultra-narrow-diameter implants initially offered an alternative to wider conventional implants. However, their design limitations have restricted their applicability primarily to overdentures and cement-retained fixed prostheses, often with unpredictable long-term outcomes. The introduction of the new Miniature Implants has revolutionized the field of implant dentistry, leading to a more streamlined approach. The utilization of Miniature Implants has emerged as a promising alternative to the traditional approach, which entails traumatic sequential bone drilling procedures and the use of conventional implants for full and partial arch restorations. The innovative BioMiniatures Implant System serves as a groundbreaking bridge connecting mini implants with standard implant systems. This system allows practitioners to harness the advantages of ultra-small implants, enabling minimally invasive insertion and facilitating the application of fixed screw-retained prostheses, which were previously available only with conventional wider implant systems. This approach streamlines full and partial arch rehabilitation with minimal or even no bone drilling, significantly reducing surgical risks and complications for clinicians while minimizing patient morbidity. The ultra-narrow diameter and self-advancing features of these implants eliminate the need for invasive and technically complex procedures such as bone augmentation and guided bone regeneration (GBR), particularly in cases involving thin alveolar ridges. Furthermore, the absence of a micro-gap between the implant and abutment eliminates the potential for micro-leakage and micro-pumping effects, effectively mitigating the risk of marginal bone loss and future peri-implantitis. The cumulative experience of restoring over 50 full and partial arch edentulous cases with this system has yielded an outstanding success rate exceeding 97%. The long-term success, with stable marginal bone levels, firmly establishes these implants as a dependable alternative to conventional implants, especially for full arch rehabilitation cases. Full arch rehabilitation with these implants holds the promise of providing a simplified solution for edentulous patients, who typically present with atrophic narrow alveolar ridges, eliminating the need for extensive GBR and bone augmentation to restore their dentition with fixed prostheses.Keywords: mini-implant, biominiatures, miniature implants, minimally invasive dentistry, full arch rehabilitation
Procedia PDF Downloads 74