Search results for: draw solution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6165


4815 The Learning Loops in the Public Realm Project in South Verona: Air Quality and Noise Pollution Participatory Data Collection towards Co-Design, Planning and Construction of Mitigation Measures in Urban Areas

Authors: Massimiliano Condotta, Giovanni Borga, Chiara Scanagatta

Abstract:

Urban systems are places where the various actors involved interact and come into conflict, particularly over topics such as traffic congestion and security. Air and noise pollution are equally contentious topics because of their strong complexity. For air pollution, the complexity stems from the fact that atmospheric pollution is due to many factors, but above all because observing and measuring the amount of pollution in a transparent, mobile and ethereal element like air is very difficult. The conditions perceived by inhabitants often do not coincide with the real conditions, because perception is influenced, sometimes positively and sometimes negatively, by many other factors, such as the presence or absence of natural elements like trees or rivers. The same problems arise with noise pollution, which receives less attention as an issue even though it is just as problematic as air quality. Starting from these opposing positions, it is difficult to identify and implement mitigation solutions for urban air and noise pollution that are both valid and shared. The LOOPER (Learning Loops in the Public Realm) project described in this paper aims to build and test a methodology and a platform for a participatory co-design, planning and construction process embedded in a learning loop. This approach introduces several novelties; the three most relevant are the following. The first is that citizen participation starts from the identification of problems and the analysis of air quality through participatory data collection, and continues through all subsequent steps (design and construction). The second is that the methodology is organized as a learning loop: after a first cycle of (1) problem identification, (2) planning and definition of design solutions and (3) construction and implementation of mitigation measures, the effectiveness of the implemented solutions is measured and verified through a new participatory data collection campaign. In this way, it is possible to understand whether the policies and design solutions had a positive impact on the territory. As a result of the learning produced by the first loop, the design of the mitigation measures can be improved and a second loop started with new and more effective measures. The third relevant aspect is that citizen participation is carried out via Urban Living Labs that involve all stakeholders of the city (citizens, public administrators, associations and other urban stakeholders) and that last for the entire cycle of the design, planning and construction process. The paper describes in detail the LOOPER methodology and the technical solutions adopted for the participatory data collection, design and construction phases.

Keywords: air quality, co-design, learning loops, noise pollution, urban living labs

Procedia PDF Downloads 365
4814 An Analytical Approach of Computational Complexity for the Method of Multifluid Modelling

Authors: A. K. Borah, A. K. Singh

Abstract:

In this paper, we examine the building blocks of the computer simulation of multiphase flows. The whole simulation procedure can be viewed as two super-procedures: the implementation of the volume-of-fluid (VOF) method and the solution of the Navier-Stokes equations. Moreover, a sequential code for a Navier-Stokes solver has been studied.
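The keywords point to a Krylov-subspace linear solver (Bi-CGSTAB with an ILUT preconditioner) for the discretized Navier-Stokes system. As a rough illustration of the core iteration only, and not the authors' implementation, the following is a minimal, unpreconditioned Bi-CGSTAB sketch applied to a small dense system; in practice it would run on the large sparse matrices produced by the discretization, with ILUT preconditioning to accelerate convergence.

```python
# Minimal, unpreconditioned Bi-CGSTAB sketch on a small dense system.
# Illustrative only: the paper pairs Bi-CGSTAB with an ILUT preconditioner
# on sparse matrices arising from the Navier-Stokes discretization.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def bicgstab(A, b, tol=1e-10, max_iter=200):
    n = len(b)
    x = [0.0] * n
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
    r_hat = r[:]                      # fixed shadow residual
    rho = alpha = omega = 1.0
    v = [0.0] * n
    p = [0.0] * n
    for _ in range(max_iter):
        rho_new = dot(r_hat, r)
        beta = (rho_new / rho) * (alpha / omega)
        p = [ri + beta * (pi - omega * vi) for ri, pi, vi in zip(r, p, v)]
        v = matvec(A, p)
        alpha = rho_new / dot(r_hat, v)
        s = [ri - alpha * vi for ri, vi in zip(r, v)]
        if dot(s, s) ** 0.5 < tol:            # early exit: s already small
            x = [xi + alpha * pi for xi, pi in zip(x, p)]
            break
        t = matvec(A, s)
        omega = dot(t, s) / dot(t, t)         # stabilization step
        x = [xi + alpha * pi + omega * si for xi, pi, si in zip(x, p, s)]
        r = [si - omega * ti for si, ti in zip(s, t)]
        if dot(r, r) ** 0.5 < tol:
            break
        rho = rho_new
    return x

A = [[4.0, 1.0], [2.0, 3.0]]          # small nonsymmetric test system
b = [1.0, 2.0]
x = bicgstab(A, b)                    # exact solution: [0.1, 0.6]
```

For a 2x2 system the method converges within two iterations; the point of the sketch is only the structure of the iteration (shadow residual, BiCG step, stabilization step) that a production solver preconditions and applies matrix-free.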

Keywords: bi-conjugate gradient stabilized (Bi-CGSTAB), ILUT function, Krylov subspace, multifluid flows, preconditioner, SIMPLE algorithm

Procedia PDF Downloads 528
4813 Identifying Applicant Potential Through Admissions Testing

Authors: Belinda Brunner

Abstract:

Objectives: Communicate the common test constructs of well-known higher education admissions tests. Discuss influences on admissions test construct definition and design, and discuss research related to factors influencing success in academic study. Discuss how admissions tests can be used to identify relevant talent. Examine how admissions tests can be used to facilitate educational mobility and inform selection decisions when the prerequisite curriculum is not standardized. Observations: Generally speaking, the constructs of admissions tests can be placed along a continuum from curriculum-related knowledge to more general reasoning abilities. For example, subject-specific achievement tests are closely aligned to a prescribed curriculum, while reasoning tests are typically not associated with a specific curriculum. This session will draw on the test constructs of well-known international higher education admissions tests, such as the UK Clinical Aptitude Test (UKCAT), which is used for medicine and dentistry admissions. Conclusions: The purpose of academic admissions testing is to identify potential students with the prerequisite skill set needed to succeed in the academic environment, but how can the test construct help achieve this goal? The determination of the appropriate test construct for tests used in admissions selection decisions should be influenced by a number of factors, including the preceding academic curricula, other criteria influencing the admissions decision, and the principal purpose for testing. Attendees of this session will learn the types of aptitudes and knowledge that are assessed by higher education admissions tests and will gain insight into how careful and deliberate consideration of the desired test constructs can aid in identifying potential students with the greatest likelihood of success in medical school.

Keywords: admissions, measuring success, selection, identify skills

Procedia PDF Downloads 488
4812 Research on the Aeration Systems’ Efficiency of a Lab-Scale Wastewater Treatment Plant

Authors: Oliver Marunțălu, Elena Elisabeta Manea, Lăcrămioara Diana Robescu, Mihai Necșoiu, Gheorghe Lăzăroiu, Dana Andreya Bondrea

Abstract:

In order to obtain efficient pollutant removal in small-scale wastewater treatment plants, uniform water flow has to be achieved. The experimental setup, designed for treating high-load wastewater (leachate), consists of two aerobic biological reactors and a lamellar settler. Both biological tanks were aerated using three different types of aeration systems: perforated pipes, membrane air diffusers and ceramic tube diffusers. The capacity of each air diffusion system to homogenize the water mass was evaluated comparatively. The oxygen concentration was determined by optical sensors with data logging. The experimental data were analyzed comparatively for all three air dispersion systems in order to identify the variation of oxygen concentration under different operational conditions. The oxygenation capacity was calculated for each of the three systems and used as a performance and selection parameter. The global mass transfer coefficients were also evaluated, as they are important tools in designing the aeration system. Even though the tubular porous diffusers lead to a higher oxygen concentration than the perforated pipe system (which produces medium-sized bubbles in the aqueous solution), they do not reach the threshold of 80% oxygen saturation in less than 30 minutes. The study showed that the optimal solution for the studied configuration was the radial air diffusers, which ensure 80% oxygen saturation in 20 minutes. The values increased when the air flow was increased.
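The global mass transfer coefficient mentioned above is commonly estimated from non-steady-state re-aeration data via the standard model Cs - C(t) = (Cs - C0)exp(-kLa*t), i.e. ln((Cs - C0)/(Cs - C)) = kLa*t. The sketch below fits kLa as the least-squares slope through the origin; the dissolved-oxygen readings are hypothetical, not the authors' measurements.

```python
import math

# Hypothetical DO re-aeration data: time (min) vs dissolved oxygen (mg/L).
# Model: ln((Cs - C0) / (Cs - C)) = kLa * t, so kLa is the slope through
# the origin of the linearized data.
Cs = 9.0                               # saturation DO, mg/L (assumed)
times = [0.0, 5.0, 10.0, 15.0, 20.0]   # sampling times, min
conc = [1.0, 4.1, 6.0, 7.2, 7.9]       # measured DO, mg/L (invented)

C0 = conc[0]
y = [math.log((Cs - C0) / (Cs - c)) for c in conc]

# Least-squares slope through the origin: kLa = sum(t*y) / sum(t*t)
kla = sum(t * yi for t, yi in zip(times, y)) / sum(t * t for t in times)
# kla is in min^-1; multiply by 60 for the h^-1 value usually reported.
```

With these invented readings the fit gives kLa of roughly 0.1 min^-1; the same regression applied to each diffuser's logged DO curve is one standard way to compare the three systems quantitatively.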

Keywords: flow, aeration, bioreactor, oxygen concentration

Procedia PDF Downloads 389
4811 Synthesis and Characterization of Chiral Dopant Based on Schiff's Base Structure

Authors: Hong-Min Kim, Da-Som Han, Myong-Hoon Lee

Abstract:

Cholesteric liquid crystals (CLCs) draw tremendous interest due to their potential in various applications, such as cholesteric color filters in LCD devices. A CLC possesses a helical molecular orientation, which is induced by a chiral dopant mixed with nematic liquid crystals. The efficiency of a chiral dopant is quantified by its helical twisting power (HTP). In this work, we designed and synthesized a series of new chiral dopants having a Schiff's base imine structure with different alkyl chain lengths (butyl, hexyl and octyl), prepared from a chiral naphthyl amine by a two-step reaction. The structures of the new chiral dopants were confirmed by 1H-NMR and IR spectroscopy. Their properties were investigated by differential scanning calorimetry (DSC), polarized optical microscopy (POM) and UV-Vis spectrophotometry. These solid-state chiral dopants showed excellent solubility in the nematic LC (MLC-6845-000), higher than 17 wt%. We prepared CLC cells by mixing the nematic LC (MLC-6845-000) with different concentrations of the chiral dopants and injecting the mixtures into sandwich cells with a 5 μm cell gap and antiparallel alignment. The cholesteric liquid crystal phase was confirmed by POM, in which all the samples showed the planar texture typical of cholesteric liquid crystals. The HTP is one of the most important properties of a CLC. We measured the HTP values from the UV-Vis transmittance spectra of CLC cells with various chiral dopant concentrations. The HTP values for the different alkyl chains are as follows: butyl chiral dopant, 29.8 μm-1; hexyl chiral dopant, 31.8 μm-1; octyl chiral dopant, 27.7 μm-1. We obtained red, green and blue reflection colors from the CLC cells, which can be used as color filters in LCD applications.
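HTP values of this kind are typically extracted from the selective-reflection band via the standard relations lambda_c = n_avg * pitch and HTP = 1/(pitch * c), with c the dopant weight fraction. The sketch below shows the arithmetic with invented numbers, not the paper's measurements; note that a 10 wt% loading with a green reflection band lands near the ~30 μm-1 values reported above.

```python
# Illustrative HTP estimate from a CLC selective-reflection measurement.
# Relations used: lambda_c = n_avg * pitch  and  HTP = 1 / (pitch * c),
# where c is the chiral-dopant weight fraction. All numbers are
# hypothetical, not the paper's data.
n_avg = 1.6                 # average refractive index of the LC host (assumed)
lambda_c = 550e-9           # central reflection wavelength, m (green)
c = 0.10                    # dopant weight fraction (10 wt%)

pitch = lambda_c / n_avg    # helical pitch, m
htp = 1.0 / (pitch * c)     # helical twisting power, m^-1

htp_um = htp * 1e-6         # express in um^-1, the unit used in the abstract
```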

Keywords: cholesteric liquid crystal, color filter, display, HTP

Procedia PDF Downloads 267
4810 Examining the Cognitive Abilities and Financial Literacy Among Street Entrepreneurs: Evidence From North-East, India

Authors: Aayushi Lyngwa, Bimal Kishore Sahoo

Abstract:

The study discusses the relationship between the cognitive ability and educational attainment of tribal street entrepreneurs and their financial literacy. It is driven by the objective of examining the effect of cognitive ability on financial ability on the one hand, and determining the effect of the same on financial literacy on the other. A field experiment was conducted with 203 tribal street vendors in the north-eastern Indian state of Mizoram. Scores were assigned to each question, yielding a math score (cognitive ability) and financial and debt scores (financial ability). Categories for each variable, namely a math category (math score), financial category (financial score) and debt category (debt score), were then generated to run the regression model. Since the dependent variable is ordinal, an ordered logit regression model was applied. The study shows that street vendors' cognitive and financial abilities are highly correlated. It therefore confirms that cognitive ability positively affects the financial literacy of street vendors through increased educational attainment. It was also found that, with respect to the type of street vendor, regular street vendors are more likely to have better cognitive abilities than temporary street vendors. Additionally, street vendors with greater cognitive and financial abilities earned higher monthly profits and practiced bookkeeping. The study draws particular focus on a group that is economically and socially marginalized in the Indian economy. Its findings contribute to understanding financial literacy in an understudied area and provide policy implications for inclusive financial system solutions in an economy of tribal street vendors.
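An ordered logit model relates the probability of each ordinal category to cutpoints on a latent scale: P(Y <= j | x) = logistic(kappa_j - x'beta). The sketch below shows that probability calculation for a three-level outcome; the coefficients and cutpoints are invented for illustration and are not the study's estimates.

```python
import math

# Sketch of the ordered-logit probability model the study applies:
# P(Y <= j | x) = logistic(kappa_j - x'beta), with kappa_1 < kappa_2.
# Coefficients and cutpoints below are hypothetical, not fitted values.

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def ordered_logit_probs(xb, cutpoints):
    """Category probabilities for a 3-level ordinal outcome (e.g. a
    low/medium/high financial-literacy category) given linear predictor xb."""
    cum = [logistic(k - xb) for k in cutpoints] + [1.0]   # cumulative probs
    return [cum[0]] + [cum[i] - cum[i - 1] for i in range(1, len(cum))]

beta_math = 0.8             # hypothetical effect of math (cognitive) score
beta_edu = 0.5              # hypothetical effect of education level
cutpoints = [-0.5, 1.5]     # hypothetical thresholds on the latent scale

xb = beta_math * 1.0 + beta_edu * 2.0   # one vendor's covariate profile
probs = ordered_logit_probs(xb, cutpoints)  # [P(low), P(medium), P(high)]
```

With positive coefficients, higher math scores or education shift probability mass toward the top literacy category, which is the qualitative pattern the study reports.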

Keywords: financial literacy, education, street entrepreneurs, tribals, cognitive ability, financial ability, ordered logit regression

Procedia PDF Downloads 110
4809 Problem Solving in Chilean Higher Education: Figurations Prior in Interpretations of Cartesian Graphs

Authors: Verónica Díaz

Abstract:

A Cartesian graph, as a mathematical object, becomes a tool for representing change. It is best understood through everyday problem-solving associated with its representation. Despite this, the current educational framework favors generic graphs without consideration of their argumentation: students are required to find the mathematical function without associating it with the development of graphical language. This research describes students' use of figurations made prior to Cartesian graphs when working on an everyday problem involving a time and distance variation phenomenon. The theoretical framework describes the conditions for the study of functions and their modeling. This is a qualitative, descriptive study involving six undergraduate case studies carried out during the first term of 2016 at the University of Los Lagos. The research problem concerned the graphic modeling of a real person's movement, and two levels of analysis were identified. The first level aims to identify local and global graph interpretations; the second describes the degree of iconicity and referentiality of an image. According to the results, students drew figures before the Cartesian graph, highlighting their need to represent the context and the movement that causes the phenomenon of change. From this, they produced Cartesian graphs representing changes in position and thereby achieved a global view of the graph. The local view, however, only indicates specific events in the problem situation, using graphic and verbal expressions to represent movement. This view does not make it possible to identify what happens on the graph when the movement characteristics change, based on possible paths in the person's walking speed.

Keywords: cartesian graphs, higher education, movement modeling, problem solving

Procedia PDF Downloads 218
4808 LCA and Multi-Criteria Analysis of Fly Ash Concrete Pavements

Authors: Marcela Ondova, Adriana Estokova

Abstract:

Rapid industrialization results in the increased use of natural resources and brings along serious ecological and environmental imbalance due to the dumping of industrial wastes. Principles of sustainable construction have to be accepted with regard to the consumption of natural resources and the production of harmful emissions. Cement is a raw material of great importance in the building industry, and today large amounts of it are used in the construction of concrete pavements. Considering raw material costs and the CO2 emissions of cement production, replacing cement in concrete mixtures with more sustainable materials is necessary, and people all over the world are looking for solutions to reduce this environmental impact. Over the last ten years, the image of fly ash has completely changed from a polluting waste to a resource material that can solve the major problems of cement use. Fly ash concretes are proposed as a potential approach for achieving substantial reductions in cement. Fly ash is known to improve the workability of concrete, extend the life cycle of concrete roads, and reduce energy use and greenhouse gases, as well as the amount of coal combustion products that must be disposed of in landfills. Life cycle assessment also proved that a concrete pavement with partial fly ash replacement of cement is considerably more environmentally friendly than standard concrete roads. In addition, fly ash is a cheap raw material, so cost savings are guaranteed. The strength properties and the resistance to frost and de-icing salts, which are important characteristics in the construction of concrete pavements, also reached the required standards. In terms of human health, a concrete cover with fly ash cannot be considered more dangerous than a cover without fly ash. The final multi-criteria analysis also indicated that concrete with fly ash is clearly the proper solution.

Keywords: life cycle assessment, fly ash, waste, concrete pavements

Procedia PDF Downloads 406
4807 Quantitative Proteome Analysis and Bioactivity Testing of New Zealand Honeybee Venom

Authors: Maryam Ghamsari, Mitchell Nye-Wood, Kelvin Wang, Angela Juhasz, Michelle Colgrave, Don Otter, Jun Lu, Nazimah Hamid, Thao T. Le

Abstract:

Bee venom, a complex mixture of peptides, proteins, enzymes, and other bioactive compounds, has been widely studied for its therapeutic applications. This study investigated the proteins present in New Zealand (NZ) honeybee venom (BV) using bottom-up proteomics. Two sample digestion techniques, in-solution digestion and filter-aided sample preparation (FASP), were employed to determine the optimal method for protein digestion. Sequential Window Acquisition of All Theoretical Mass Spectra (SWATH-MS) analysis was conducted to quantify the protein composition of NZ BV and investigate variation across collection years. Our results revealed a high protein content (158.12 µg/mL), with the FASP method yielding a larger number of identified proteins (125) than in-solution digestion (95). SWATH-MS indicated melittin and phospholipase A2 as the most abundant proteins. Significant variations in protein composition were observed across samples from different years (2018, 2019, 2021), with implications for the venom's bioactivity. In vitro testing demonstrated immunomodulatory and antioxidant activities, with a viable range for cell growth established at 1.5-5 µg/mL. The study underscores the value of proteomic tools in characterizing bioactive compounds in bee venom, paving the way for deeper exploration of their therapeutic potential. Further research is needed to fractionate the venom and elucidate the mechanisms of action of the identified bioactive components.

Keywords: honeybee venom, proteomics, bioactivity, fractionation, swath-ms, melittin, phospholipase a2, new zealand, immunomodulatory, antioxidant

Procedia PDF Downloads 39
4806 Astragaloside IV Inhibits Type 2 Allergic Contact Dermatitis in Mice via the TLRs-NF-κB Pathway

Authors: Xiao Wei, Dandan Sheng, Xiaoyan Jiang, Lili Gui, Huizhu Wang, Xi Yu, Hailiang Liu, Min Hong

Abstract:

Objective: A mouse model of type 2 allergic contact dermatitis was used in this study to explore the effect of AS-IV on type 2 allergic inflammation. Methods: The mice were topically sensitized on the shaved abdomen with 1.5% FITC solution on days 1 and 2, and elicited on the right ear with 0.5% FITC solution on day 6. Mice were treated with either AS-IV or normal saline from day 1 to day 5 (induction phase). Auricle swelling was measured 24 h after the elicitation. Histopathological examination of the ears was carried out by HE staining. IL-4, IL-13 and IL-9 levels in ear tissue were detected by ELISA. In mice treated with AS-IV at the initial stage of the induction phase, ear tissue was taken on day 3; the TSLP level was detected by ELISA, and TSLP mRNA, NF-κB mRNA and TLR (TLR2, TLR3, TLR8, TLR9) mRNA levels were detected by PCR. Results: AS-IV administered during the induction phase evidently inhibited the auricle inflammation of the models; histopathological results indicated that it alleviated local edema and angiectasis and reduced lymphocytic infiltration. AS-IV during the induction phase markedly decreased IL-4, IL-13 and IL-9 levels in ear tissue. Moreover, at the initial stage of the induction phase, AS-IV significantly reduced TSLP, TSLP mRNA, NF-κB mRNA, TLR2 mRNA and TLR8 mRNA levels in ear tissue. Conclusion: Administration of AS-IV during the induction phase significantly inhibited type 2 allergic contact dermatitis in mice, and the mechanism may be related to the regulation of TSLP through the TLRs-NF-κB pathway.

Keywords: astragaloside IV, allergic contact dermatitis, TSLP, interleukin-4, interleukin-13, interleukin-9

Procedia PDF Downloads 431
4805 Increasing Photosynthetic H2 Production by in vivo Expression of Re-Engineered Ferredoxin-Hydrogenase Fusion Protein in the Green Alga Chlamydomonas reinhardtii

Authors: Dake Xiong, Ben Hankamer, Ian Ross

Abstract:

The most urgent challenge of our time is to replace depleting fossil fuel resources with sustainable, environmentally friendly alternatives. Hydrogen is a promising CO2-neutral fuel for a more sustainable future, especially when produced photobiologically. Hydrogen can be produced photosynthetically in unicellular green algae such as Chlamydomonas reinhardtii, catalysed by the inducible, highly active and bidirectional [FeFe]-hydrogenase enzymes (HydA). However, evolutionary and physiological constraints severely restrict the hydrogen yield of algae for industrial scale-up, mainly because hydrogen production competes with other metabolic pathways for photosynthetic electrons. A major challenge to be resolved is the inferior competitiveness of hydrogen production (catalysed by HydA) compared with NADPH production (catalysed by ferredoxin-NADP+ reductase (FNR)), which is essential for cell growth and takes up ~95% of photosynthetic electrons. In this work, the in vivo hydrogen production efficiency of mutants carrying a ferredoxin-hydrogenase (Fd*-HydA1*) fusion construct, in which the electron donor ferredoxin (Fd*) is fused to HydA1* and expressed in the model organism C. reinhardtii, was investigated. Once the Fd*-HydA1* fusion gene is expressed in algal cells, the fusion enzyme is able to draw redistributed photosynthetic electrons and use them for efficient hydrogen production. In preliminary data, mutants with the Fd*-HydA1* transgene showed a ~2-fold increase in the photosynthetic hydrogen production rate compared with their parental strain, which possesses only the native HydA in vivo. Therefore, more efficient hydrogen production in microalgae can be achieved through the expression of synthetic enzymes.

Keywords: Chlamydomonas reinhardtii, ferredoxin, fusion protein, hydrogen production, hydrogenase

Procedia PDF Downloads 262
4804 Physics-Informed Convolutional Neural Networks for Reservoir Simulation

Authors: Jiangxia Han, Liang Xue, Keda Chen

Abstract:

Despite the significant progress over the last decades in reservoir simulation using numerical discretization, meshing remains complex. Moreover, the high degree of freedom of the space-time flow field makes the solution process very time-consuming. We therefore present Physics-Informed Convolutional Neural Networks (PICNN) as a hybrid scientific-theory and data-driven method for reservoir modeling. Besides labeled data, the model is driven by the scientific theory of the underlying problem, such as the governing equations, boundary conditions, and initial conditions. PICNN integrates the governing equations and boundary conditions into the network architecture in the form of a customized convolution kernel. The loss function is composed of data matching, initial conditions, and other measurable prior knowledge. By customizing the convolution kernel and minimizing the loss function, the neural network parameters not only fit the data but also honor the governing equation. PICNN provides a methodology to model and history-match flow and transport problems in porous media. Numerical results demonstrate that the proposed PICNN can provide an accurate physical solution from a limited dataset. We show how this method can be applied in the context of a forward simulation for continuous problems. Furthermore, several complex scenarios are tested, including the presence of data noise, different work schedules, and different well patterns.
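The idea of encoding a governing equation as a fixed convolution kernel can be seen in a toy setting. The sketch below is schematic, not the authors' network: it encodes the 1D steady diffusion equation d2u/dx2 = 0 as the discrete Laplacian kernel [1, -2, 1]/h^2 and builds a physics loss from the squared PDE residual plus the boundary-condition mismatch, which is the structure of loss a PICNN minimizes alongside its data-matching terms.

```python
# Toy illustration of the PICNN idea: a governing equation expressed as a
# fixed convolution kernel, penalized through the loss function.
# "PDE": 1D steady diffusion d2u/dx2 = 0 on a uniform grid; the discrete
# Laplacian is the kernel [1, -2, 1] / h^2. Schematic sketch only.

def pde_residual(u, h):
    """Apply the Laplacian kernel at the interior grid points."""
    return [(u[i - 1] - 2 * u[i] + u[i + 1]) / h ** 2
            for i in range(1, len(u) - 1)]

def physics_loss(u, h, u_left, u_right):
    """Mean squared PDE residual plus squared boundary-condition mismatch."""
    r = pde_residual(u, h)
    loss_pde = sum(ri ** 2 for ri in r) / len(r)
    loss_bc = (u[0] - u_left) ** 2 + (u[-1] - u_right) ** 2
    return loss_pde + loss_bc

h = 0.25
u_linear = [0.0, 0.25, 0.5, 0.75, 1.0]   # exact solution: u(x) = x
u_wrong = [0.0, 0.1, 0.5, 0.9, 1.0]      # a field that violates the PDE

# The exact solution drives the physics loss to zero; a wrong field does not.
loss_exact = physics_loss(u_linear, h, 0.0, 1.0)
loss_wrong = physics_loss(u_wrong, h, 0.0, 1.0)
```

In an actual PICNN the field u is the network output and this loss is differentiated with respect to the network weights; here the fields are fixed only to show that the kernel-based loss discriminates physical from unphysical solutions.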

Keywords: convolutional neural networks, deep learning, flow and transport in porous media, physics-informed neural networks, reservoir simulation

Procedia PDF Downloads 143
4803 Making Permanent Supportive Housing Work for Vulnerable Populations

Authors: Olayinka Ariba, Abe Oudshoorn, Steve Rolfe, Carrie Anne Marshall, Deanna Befus, Jason Gilliland, Miranda Crockett, Susana Caxaj, Sarah McLean, Amy Van Berkum, Natasha Thuemler

Abstract:

Background: Secure housing is a platform for health and well-being. Those who struggle with housing stability have complex life and health histories and often require some support services such as the provision of permanent supportive housing. Poor access to supportive resources creates an exacerbation of chronic homelessness, particularly affecting individuals who need immediate access to mental health and addiction supports. This paper presents the first phase of a three-part study examining how on-site support impacts housing stability for recently-re-housed persons. Method: This study utilized a community-based participatory research methodology. Twenty in-depth interviews were conducted with permanent supportive housing residents from a single-site dwelling. Interpretative description analysis was used to draw common themes and understand the experiences and challenges of housing support. Results: Three interconnected themes were identified: 1) Available and timely supports; 2) Affordability; and 3) Community, but with independence as desired. These interconnected components are helping residents transition from homelessness or long-term mental health inpatient care to live in the community. Despite some participant concerns about resident conflicts, staff availability, and affordability, this has been a welcome and successful move for most. Conclusion: Supportive housing is essential for successful tenancies as a platform for health and well-being among Canada’s most vulnerable and, from the perspective of persons recently re-housed, permanent supportive housing is a worthwhile investment.

Keywords: homelessness, supportive housing, rehoused, housing stability

Procedia PDF Downloads 106
4802 Streamlining the Fuzzy Front-End and Improving the Usability of the Tools Involved

Authors: Michael N. O'Sullivan, Con Sheahan

Abstract:

Researchers have spent decades developing tools and techniques to aid teams in the new product development (NPD) process. Despite this, it is evident that there is a huge gap between their academic prevalence and their industry adoption. For the fuzzy front-end, in particular, there is a wide range of tools to choose from, including the Kano Model, the House of Quality, and many others. In fact, there are so many tools that it can often be difficult for teams to know which ones to use and how they interact with one another. Moreover, while the benefits of using these tools are obvious to industrialists, they are rarely used, as they carry a learning curve that is too steep and they become too complex to manage over time. In essence, it is commonly believed that they are simply not worth the effort required to learn and use them. This research explores a streamlined process for the fuzzy front-end, assembling the most effective tools and making them accessible to everyone. The process was developed iteratively over the course of three years, following over 80 final-year NPD teams from engineering, design, technology, and construction as they carried a product from concept through to production specification. Questionnaires, focus groups, and observations were used to understand the usability issues with the tools involved, and a human-centred design approach was adopted to produce a solution to these issues. The solution takes the form of a physical toolkit, similar to a board game, which allows a team to play through an example of a new product development in order to understand the process and the tools before using them for its own product development efforts. A complementary website enhances the physical toolkit, providing more examples of the tools in use, as well as deeper discussion of each topic, allowing teams to adapt the process to their skills, preferences and product type.
Teams found the solution very useful and intuitive and experienced significantly less confusion and mistakes with the process than teams who did not use it. Those with a design background found it especially useful for the engineering principles like Quality Function Deployment, while those with an engineering or technology background found it especially useful for design and customer requirements acquisition principles, like Voice of the Customer. Products developed using the toolkit are added to the website as more examples of how it can be used, creating a loop which helps future teams understand how the toolkit can be adapted to their project, whether it be a small consumer product or a large B2B service. The toolkit unlocks the potential of these beneficial tools to those in industry, both for large, experienced teams and for inexperienced start-ups. It allows users to assess the market potential of their product concept faster and more effectively, arriving at the product design stage with technical requirements prioritized according to their customers’ needs and wants.

Keywords: new product development, fuzzy front-end, usability, Kano model, quality function deployment, voice of customer

Procedia PDF Downloads 108
4801 Unpacking the Summarising Event in Trauma Emergencies: The Case of Pre-briefings

Authors: Jo Angouri, Polina Mesinioti, Chris Turner

Abstract:

In order for a group of ad hoc professionals to perform as a team, a shared understanding of the problem at hand and an agreed action plan are necessary components. This is particularly significant in complex, time-sensitive professional settings such as trauma emergencies. In this context, team briefings prior to the patient's arrival (pre-briefings) constitute a critical event for the performance of the team; they provide the necessary space for co-constructing a shared understanding of the situation by summarising the information available to the team. Yet the act of summarising is widely assumed in medical practice but not systematically researched. In the vast teamwork literature, terms such as 'shared mental model', 'mental space' and 'cognate labelling' are used extensively, and loosely, to denote the outcome of the summarising process, but how exactly this is done interactionally remains under-researched. This paper reports on the forms and functions of pre-briefings in a major trauma centre in the UK. Taking an interactional approach, we draw on 30 simulated and real-life trauma emergencies (15 from each dataset) and zoom in on the use of pre-briefings, which we consider focal points in the management of trauma emergencies. We show how ad hoc teams negotiate a shared future orientation by summarising, synthesising information, and establishing a common understanding of the situation. We illustrate the role, characteristics and structure of pre-briefing sequences that have been evaluated as 'efficient' in our data, and the impact that (in)effective pre-briefings have on teamwork. Our work shows that the key roles in the event own the act of summarising, and we problematise the implications for leadership in trauma emergencies. We close the paper with a model for pre-briefing and provide recommendations for clinical practice, arguing that effective pre-briefing practice is teachable.

Keywords: summarising, medical emergencies, interaction analysis, shared/mental models

Procedia PDF Downloads 94
4800 Development and Validation of a Green Analytical Method for the Analysis of Daptomycin Injectable by Fourier-Transform Infrared Spectroscopy (FTIR)

Authors: Eliane G. Tótoli, Hérida Regina N. Salgado

Abstract:

Daptomycin is an important antimicrobial agent used in clinical practice nowadays, since it is very active against some Gram-positive bacteria that are particularly challenges for the medicine, such as methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant Enterococci (VRE). The importance of environmental preservation has receiving special attention since last years. Considering the evident need to protect the natural environment and the introduction of strict quality requirements regarding analytical procedures used in pharmaceutical analysis, the industries must seek environmentally friendly alternatives in relation to the analytical methods and other processes that they follow in their routine. In view of these factors, green analytical chemistry is prevalent and encouraged nowadays. In this context, infrared spectroscopy stands out. This is a method that does not use organic solvents and, although it is formally accepted for the identification of individual compounds, also allows the quantitation of substances. Considering that there are few green analytical methods described in literature for the analysis of daptomycin, the aim of this work was the development and validation of a green analytical method for the quantification of this drug in lyophilized powder for injectable solution, by Fourier-transform infrared spectroscopy (FT-IR). Method: Translucent potassium bromide pellets containing predetermined amounts of the drug were prepared and subjected to spectrophotometric analysis in the mid-infrared region. After obtaining the infrared spectrum and with the assistance of the IR Solution software, quantitative analysis was carried out in the spectral region between 1575 and 1700 cm-1, related to a carbonyl band of the daptomycin molecule, and this band had its height analyzed in terms of absorbance. 
The method was validated according to ICH guidelines regarding linearity, precision (repeatability and intermediate precision), accuracy and robustness. Results and discussion: The method proved to be linear (r = 0.9999), precise (RSD% < 2.0), accurate and robust over a concentration range of 0.2 to 0.6 mg/pellet. In addition, the technique does not use organic solvents, a great advantage over the most common analytical methods: it helps minimize the generation of organic solvent waste by the industry and thereby reduces the environmental impact of its activities. Conclusion: The validated method proved adequate to quantify daptomycin in lyophilized powder for injectable solution and can be used for its routine analysis in quality control. In addition, the proposed method is environmentally friendly, in line with the global trend.
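The quantitation step amounts to a straight-line calibration of band height (absorbance) against drug mass per pellet, inverted to quantify unknowns. A minimal sketch follows; the data points, slope and the `predict_mg` helper are illustrative assumptions, not the paper's measurements:

```python
# Hypothetical calibration: drug mass per pellet (mg) vs. carbonyl band height
# (absorbance). Values are illustrative, not the paper's measurements.
conc = [0.2, 0.3, 0.4, 0.5, 0.6]
absb = [0.110, 0.165, 0.220, 0.275, 0.330]

n = len(conc)
mx, my = sum(conc) / n, sum(absb) / n
sxx = sum((x - mx) ** 2 for x in conc)
sxy = sum((x - mx) * (y - my) for x, y in zip(conc, absb))
syy = sum((y - my) ** 2 for y in absb)
slope = sxy / sxx                      # ordinary least squares
intercept = my - slope * mx
r = sxy / (sxx * syy) ** 0.5           # correlation coefficient

def predict_mg(absorbance):
    """Invert the calibration line to quantify an unknown pellet."""
    return (absorbance - intercept) / slope
```

The reported r = 0.9999 over 0.2-0.6 mg/pellet corresponds to exactly this kind of fit on the real spectra.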

Keywords: daptomycin, Fourier-transform infrared spectroscopy, green analytical chemistry, quality control, spectrometry in IR region

Procedia PDF Downloads 381
4799 Numerical Studies on 2D and 3D Boundary Layer Blockage and External Flow Choking at Wing in Ground Effect

Authors: K. Dhanalakshmi, N. Deepak, E. Manikandan, S. Kanagaraj, M. Sulthan Ariff Rahman, P. Chilambarasan, C. Abhimanyu, C. A. Akaash Emmanuel Raj, V. R. Sanal Kumar

Abstract:

In this paper, using a validated double-precision, density-based implicit standard k-ε model, detailed 2D and 3D numerical studies have been carried out to examine external flow choking in wing-in-ground (WIG) effect craft. The CFD code is calibrated using the exact solution based on the Sanal flow choking condition for adiabatic flows. We observed that, under identical WIG effect conditions, the numerically predicted 2D boundary layer blockage is significantly higher than in the 3D case; as a result, the airfoil exhibits earlier external flow choking than the corresponding wing, which is corroborated by the exact solution. We concluded that, in lieu of the conventional 2D numerical simulation, it is invariably beneficial to perform a realistic 3D simulation of the wing in ground effect, which captures the aspects of a real-time parametric flow. We inferred that, under identical flying conditions, the chance of external flow choking in WIG effect is higher for a conventional aircraft than for an aircraft facilitating a divergent channel effect at the bottom surface of the fuselage, as proposed herein. We concluded that integrated optimization of the fuselage and wing geometry can improve the overall aerodynamic performance of WIG craft. This study is a pointer for designers and/or pilots to perceive a priori the zone of danger due to anticipated external flow choking in WIG effect craft, for safe flying in close proximity to the terrain and the dynamic surface of the sea.

Keywords: boundary layer blockage, chord dominated ground effect, external flow choking, WIG effect

Procedia PDF Downloads 271
4798 Revenue Management of Perishable Products Considering Freshness and Price Sensitive Customers

Authors: Onur Kaya, Halit Bayer

Abstract:

Global grocery and supermarket sales are among the largest markets in the world, and perishable products such as fresh produce, dairy and meat constitute the biggest section of these markets. Due to their deterioration over time, the demand for these products depends highly on their freshness. They become totally obsolete after a certain amount of time, causing a high amount of wastage and decreased grocery profits. In addition, customers are asking for higher product variety in perishable product categories, leading to less predictable demand per product and to more out-dating. Effective management of these perishable products is an important issue, since billions of dollars' worth of food expires and is wasted every month. We consider coordinated inventory and pricing decisions for perishable products with a time- and price-dependent random demand function. We use stochastic dynamic programming to model this system for both periodically-reviewed and continuously-reviewed inventory systems and prove certain structural characteristics of the optimal solution. We prove that the optimal ordering decision has a monotone structure and that the optimal price decreases over time; however, the optimal price changes non-monotonically with respect to inventory size. We also analyze the effect of different parameters on the optimal solution through numerical experiments. In addition, we analyze simple-to-implement heuristics, investigate their effectiveness and extract managerial insights. This study gives valuable insights into the management of perishable products in order to decrease wastage and increase profits.
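The periodic-review version of such a model can be illustrated with a toy stochastic dynamic program over time and inventory; the horizon, price menu and the freshness-dependent demand function below are invented for illustration and are not the paper's formulation:

```python
from functools import lru_cache

T = 5              # shelf life in review periods; stock is worthless after period T
PRICES = [2.0, 4.0, 6.0]

def buy_prob(price, t):
    """Illustrative demand model: one arriving customer per period whose
    purchase probability falls with price and with product age t."""
    freshness = 1.0 - t / T
    return max(0.0, freshness * (1.0 - price / 8.0))

@lru_cache(maxsize=None)
def V(t, n):
    """Maximum expected revenue with n units left at the start of period t."""
    if t == T or n == 0:
        return 0.0
    return max(
        buy_prob(p, t) * (p + V(t + 1, n - 1))
        + (1.0 - buy_prob(p, t)) * V(t + 1, n)
        for p in PRICES
    )

def best_price(t, n):
    """Price attaining the maximum in the Bellman recursion above."""
    return max(PRICES, key=lambda p: buy_prob(p, t) * (p + V(t + 1, n - 1))
                                     + (1.0 - buy_prob(p, t)) * V(t + 1, n))
```

Tabulating `best_price` over (t, n) exposes the structural properties the abstract describes, such as how the optimal price moves with time and inventory.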

Keywords: age-dependent demand, dynamic programming, perishable inventory, pricing

Procedia PDF Downloads 247
4797 A Typology System to Diagnose and Evaluate Environmental Affordances

Authors: Falntina Ahmad Alata, Natheer Abu Obeid

Abstract:

This paper reports an experimental study of a proposed typology system to diagnose and evaluate the affordances of varying architectural environments. The study focused on architectural environments whose use has shifted through adaptive reuse. The novelty of the newly developed environments was tested in terms of human responsiveness and interaction using a variety of selected cases. The study follows up on previous research by the same authors, in which a typology of 16 categories of environmental affordances was developed and introduced. The current study introduces further new categories, which together with the previous ones establish what could be considered a basic language of affordance typology. The experiment was conducted on ten architectural environments using two processes: 1. a diagnostic process, in which the environments were interpreted in terms of their affordances using the previously developed affordance typology; 2. an evaluation process, in which the diagnosed environments were evaluated using measures of emotional experience and the architectural evaluation criteria of beauty, economy and function. The experimental study demonstrated that the typology system is capable of diagnosing different environments in terms of their affordances. It also introduced new categories of human interaction: "multiple affordances," "conflict affordances," and "mix affordances." The possible combinations and mixtures of categories proved capable of producing a large number of further categories. This research attempts to draw a roadmap for designers to diagnose and evaluate the affordances of different architectural environments. It is hoped to provide future guidance for developing the best possible adaptive reuse according to the best affordance category within proposed designs.

Keywords: affordance theory, affordance categories, architectural environments, architectural evaluation criteria, adaptive reuse environment, emotional experience, shift in use environment

Procedia PDF Downloads 193
4796 Immediate Life Support to a Wild Barn Owl (Tyto alba)

Authors: Bilge Kaan Tekelioglu, Mehmet Celik, Mahmut Ali Gokce, Ladine Celik, Yusuf Uzun

Abstract:

A mature male barn owl (Tyto alba) was brought to the Cukurova University Ceyhan Veterinary Medicine Faculty at the beginning of January 2017. The bird had been found in the garden of a local state elementary school, where it had been badly injured by metal wires. On clinical examination, the animal arrived in shock and in an atonic position, with feather damage and severe injuries. The ears, eyes, claws and wounded areas were checked, and no signs of viral, microbial or ectoparasitic infection were observed. The species has been declared endangered by the U.S. wildlife office. At first, the owl was kept in a quiet, warm and darkened cabinet against shock, and warmed fluid replacement was started with 5% dextrose solution given orally. On the second day, we started forced oral feeding with chicken flesh dipped in the dextrose solution. On the third day, the bird continued to be fed with fresh meat. From the fourth day, the owl was fed for the next three days with chicks that had died of natural causes, supplied by a local breeder: one chick per day for the first three days, and two chicks per day on the following days. On the tenth day, we safely started flying exercises in a small, windowless room. The rescued owl was kept in this room for 10 more days. Finally, the owl was released at the site where it had been found injured. This study has once more proved that if you save one, you can save more. Wildlife is in danger all over the world, and every living creature has rights and deserves a chance to live.

Keywords: wild life, barn owl, Tyto alba, rescue, life support, feeding

Procedia PDF Downloads 358
4795 Treatment of NMSC with Traditional Medicine Method

Authors: Aferdita Stroka Koka, Laver Stroka, Juna Musa, Samanda Celaj

Abstract:

Non-melanoma skin cancers (NMSCs) are the most common human malignancies. About 5.4 million basal and squamous cell skin cancers are diagnosed each year in the US, and the number of new cases continues to grow. About eight out of ten of these are basal cell cancers; squamous cell cancers occur less often. NMSC is usually treatable, but treatment is expensive and can leave scars. In 2019, 167 patients of both sexes suffering from NMSC were treated with traditional medicine: 122 had been diagnosed with basal cell carcinoma, 32 with squamous cell carcinoma, and 13 with both. Of these, 122 cases were ulcerated lesions and 45 were unulcerated lesions. All patients were treated with the herbal solution called NILS, which contains extracts of Albanian plants such as Allium sativum, Juglans regia and Laurus nobilis. The treatment is applied locally, on the surface of the tumor, until the tumor mass is destroyed; the wound is then given the necessary time for the regeneration that coincides with its complete healing. We prepared a collection of photos for each case. From the first sessions, shrinkage and reduction of the tumor mass were evident, up to the total disappearance of the lesion at the end of treatment. The normal period of treatment lasted 1 to 2 weeks, depending on the size of the tumor; wound care then continued until closure, the whole process taking 1 to 3 months. In 7 patients, the lesion could not be controlled by the treatment and they underwent standard treatment with radiotherapy or surgery, while in 10 patients the lesion recurred and was treated again. The aim of this survey was to put in evidence the good results obtained in the treatment of NMSC with Albanian traditional medicine methods.

Keywords: local treatment, nils, NMSC, traditional medicine

Procedia PDF Downloads 210
4794 Vibration Analysis of Stepped Nanoarches with Defects

Authors: Jaan Lellep, Shahid Mubasshar

Abstract:

A numerical solution is developed for simply supported nanoarches based on the non-local theory of elasticity. The nanoarch under consideration has a step-wise variable cross-section and is weakened by crack-like defects. It is assumed that the cracks are stationary and that the mechanical behaviour of the nanoarch can be modeled by Eringen's non-local theory of elasticity. Physical and thermal properties are sensitive to changes of dimensions at the nano level, and the classical theory of elasticity is unable to describe such changes in material properties because, during its development, the molecular structure of matter was not considered. Therefore, the non-local theory of elasticity is applied to study the vibration of nanostructures, and it has been accepted by many researchers. In the non-local theory of elasticity, it is assumed that the stress state of the body at a given point depends on the stress state at every point of the structure, whereas within the classical theory of elasticity the stress state depends only on the given point. The system of main equations consists of equilibrium equations, geometrical relations and constitutive equations with boundary and intermediate conditions. The system of equations is solved by using the method of separation of variables. Consequently, the governing differential equations are converted into a system of algebraic equations whose non-trivial solution exists if the determinant of the coefficient matrix vanishes. The influence of cracks and steps on the natural vibration of the nanoarches is described with the aid of an additional local compliance at the weakened cross-section. An algorithm to determine the eigenfrequencies of the nanoarches is developed with the help of computer software. The effects of various physical and geometrical parameters are recorded and presented graphically.
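The final step, locating the frequencies at which the characteristic determinant vanishes, is a scalar root-finding problem. As a stand-in, the sketch below brackets and bisects the roots of the classical clamped-free Euler-Bernoulli beam frequency equation cos(x)·cosh(x) + 1 = 0; the paper's actual determinant for the stepped, cracked nanoarch is of course a different function, but the same scheme applies:

```python
import math

def char_eq(x):
    # Characteristic equation of a classical clamped-free Euler-Bernoulli beam,
    # used here as a stand-in for the nanoarch determinant; roots give beta*L.
    return math.cos(x) * math.cosh(x) + 1.0

def bisect_root(f, a, b, tol=1e-10):
    """Refine a root of f in [a, b], assuming a sign change on the bracket."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# Scan for sign changes, then refine each bracket; the same scheme works for
# any characteristic determinant det(K(omega)) = 0.
roots = []
xs = [0.1 + 0.05 * i for i in range(300)]
for lo, hi in zip(xs, xs[1:]):
    if char_eq(lo) * char_eq(hi) < 0:
        roots.append(bisect_root(char_eq, lo, hi))
```

For the classical beam the first roots are approximately 1.875, 4.694, 7.855; for the nanoarch, each root of the determinant yields one eigenfrequency.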

Keywords: crack, nanoarches, natural frequency, step

Procedia PDF Downloads 128
4793 The Inverse Problem in Energy Beam Processes Using Discrete Adjoint Optimization

Authors: Aitor Bilbao, Dragos Axinte, John Billingham

Abstract:

The inverse problem in Energy Beam (EB) processes consists of defining the control parameters, in particular the 2D beam path (position and orientation of the beam as a function of time), to arrive at a prescribed solution (freeform surface). This inverse problem is well understood for conventional machining, because the cutting tool geometry is well defined and the material removal is a time-independent process. In contrast, EB machining is achieved through the local interaction of a beam of particular characteristics (e.g. energy distribution), which leads to a surface-dependent removal rate. Furthermore, EB machining is a time-dependent process in which not only does the beam vary with the dwell time, but any acceleration/deceleration of the machine/beam delivery system when performing raster paths will influence the actual geometry of the surface to be generated. Two different EB processes, Abrasive Waterjet Machining (AWJM) and Pulsed Laser Ablation (PLA), are studied. Even though they are considered independent technologies, both can be described as time-dependent processes. AWJM can be considered a continuous process, in which the etched material depends on the feed speed of the jet at each instant during the process. PLA processes, on the other hand, are usually defined as discrete systems, and the total removed material is calculated by summing the different pulses shot during the process. The overlap of these shots depends on the feed speed and the frequency between two consecutive shots. However, if the feed speed is sufficiently slow compared with the frequency, consecutive shots are close enough that the behaviour is similar to a continuous process. Using this approximation, a generic continuous model can be described for both processes.
The inverse problem is usually solved for this kind of process by simply controlling dwell time in proportion to the required depth of milling at each pixel on the surface, using a linear model of the process. However, this approach does not always lead to a good solution, since linear models are only valid when shallow surfaces are etched. The solution of the inverse problem is improved by using a discrete adjoint optimization algorithm. Moreover, the calculation of the Jacobian matrix consumes less computation time than finite difference approaches. The influence of the dynamics of the machine on the actual movement of the jet is also important and should be taken into account. When the parameters of the controller are not known or cannot be changed, a simple approximation is used for the choice of the slope of a step profile. Several experimental tests are performed for both technologies to show the usefulness of this approach.
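Under the shallow-etch assumption the forward model is linear: the etched depth is the beam footprint convolved with the dwell-time map. The sketch below inverts a hypothetical 1D version of that model by a projected fixed-point iteration; the footprint, target profile and solver are illustrative stand-ins, since the paper uses a discrete adjoint algorithm rather than this simple scheme:

```python
# Linear forward model (shallow-etch assumption): depth = footprint (*) dwell.
# The footprint and target profile are hypothetical 1D illustrations.
FOOT = {-1: 0.25, 0: 0.5, 1: 0.25}       # normalized footprint (depth per unit dwell)
target = [1.0, 2.0, 3.0, 3.0, 2.0, 1.0]  # desired depth at each pixel
n = len(target)

def etched(dwell):
    """Forward model: depth produced at each pixel by a dwell-time map."""
    d = [0.0] * n
    for j, t in enumerate(dwell):
        for off, w in FOOT.items():
            if 0 <= j + off < n:
                d[j + off] += w * t
    return d

# Projected Jacobi iteration on etched(dwell) = target, clamping dwell times
# to be non-negative.  The paper's discrete adjoint method replaces this
# fixed-point scheme with gradient information from the adjoint system.
dwell = [0.0] * n
for _ in range(500):
    d = etched(dwell)
    dwell = [max(0.0, t + (g - e) / FOOT[0])
             for t, g, e in zip(dwell, target, d)]
```

The clamp reflects the physical constraint that dwell times cannot be negative, one of the reasons the naive "dwell proportional to depth" rule breaks down on non-shallow targets.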

Keywords: abrasive waterjet machining, energy beam processes, inverse problem, pulsed laser ablation

Procedia PDF Downloads 275
4792 Seed Quality Aspects of Nightshade (Solanum nigrum) as Influenced by Gibberellins (GA3) on Seed

Authors: Muga Moses

Abstract:

Plant growth regulators are actively involved in the growth and yield of plants. However, limited information is available on the effect of gibberellic acid (GA3) on the growth attributes and yield of African nightshade. This experiment is designed to fill this gap by studying the performance of African nightshade under hormone application. Gibberellic acid is a plant growth hormone that promotes cell expansion and division. A greenhouse and laboratory experiment will be conducted at the University of Sussex biotechnology greenhouse and agriculture laboratory, using a growth chamber, to study the effect of GA3 on the growth and development attributes of African nightshade. The experiment consists of three replications and five treatments, laid out in a randomized complete block design with GA3 concentrations of 0, 50, 100, 150 and 200 ppm. Local farmer seed will be sown in plastic pots (six seeds per pot, thinned after hardening off to four plants per pot) in the greenhouse to maintain germplasm purity, with proper management until berry maturity; the berries will then be harvested and squeezed to extract the seeds, which will be sun-dried on paper for seven days. In the laboratory, five Whatman filter papers will be placed in glass Petri dishes, and for each GA3 concentration 50 certified, clean, healthy seeds will be counted, arranged on the moistened filter paper and labeled accordingly. The dishes will be sprayed with the stock solution twice a day; protrusion of the radicle will be recorded as germination, and counted seeds will be discarded to improve precision. Data will be collected to compare the effects of GA3 application on the growth, yield and nutrient content of African nightshade.

Keywords: African nightshade, growth, yield, shoot, gibberellins

Procedia PDF Downloads 88
4791 Preparation of Electrospun PLA/ENR Fibers

Authors: Jaqueline G. L. Cosme, Paulo H. S. Picciani, Regina C. R. Nunes

Abstract:

Electrospinning is a technique for the fabrication of nanoscale fibers. The general electrospinning system consists of a syringe filled with polymer solution, a syringe pump, a high-voltage source and a grounded counter electrode. During electrospinning, a volumetric flow is set by the syringe pump and an electric voltage is applied. This forms an electric potential between the needle and the counter electrode (collector plate), which results in the formation of a Taylor cone and a jet. The jet moves towards the lower potential, the counter electrode, while the solvent of the polymer solution evaporates and the polymer fiber is formed. On the way to the counter electrode, the fiber is accelerated by the electric field. The bending instabilities that occur form helical loop movements of the jet, which result from the Coulomb repulsion of the surface charge. Through these bending instabilities the jet is stretched, so that the fiber diameter decreases. In this study, a thermoplastic/elastomeric binary blend of non-vulcanized epoxidized natural rubber (ENR) and poly(lactic acid) (PLA) was electrospun using polymer solutions consisting of varying proportions of PLA and ENR. Specifically, 15% (w/v) PLA/ENR solutions were prepared in chloroform at proportions of 5, 10, 25, and 50% (w/w). The morphological and thermal properties of the electrospun mats were investigated by scanning electron microscopy (SEM) and differential scanning calorimetry. The SEM images demonstrated the production of micrometer- and sub-micrometer-sized fibers with no bead formation. Blend miscibility was evaluated by thermal analysis, which showed that blending did not improve the thermal stability of the systems.

Keywords: epoxidized natural rubber, poly(lactic acid), electrospinning, chemistry

Procedia PDF Downloads 410
4790 Sustainable Housing in Steel: Prospects for Future World of Developing Countries

Authors: Poorva Kulkarni

Abstract:

Developing countries are seeing significant additions to the existing populations of urban areas, with large numbers of migrants from rural areas. There is a tremendous need to provide accommodation for this rapidly growing urban population, and since temporary shelters are constructed with whatever material is available, this leads to unprecedented, unplanned growth in urban areas. Architecture in a broader sense serves humanity by making people's lives happy and comfortable through the provision of comfortable shelters. It is also the need of the time for architects to be extremely sensitive towards nature, providing design solutions for human shelters with minimum impact on the environment. A sensitive approach to the design of housing units and the provision of comfortable and affordable housing should go hand in hand for the future growth of developing countries. Steel has proved itself a versatile material in terms of strength, uniformity, ease of operation and many other advantages, and can be used as a most promising material for modern construction practices. The current research paper focuses on how effectively steel can be used, possibly in combination with other construction materials, to achieve the stated objectives for sustainable housing. The available research on sustainable housing in steel is studied, along with a few case studies of buildings that use steel efficiently, providing a solution that is affordable and does minimum harm to the environment. The research concludes with effective solutions exploring the possibilities of using steel for sustainable housing, showing how steel in combination with other materials for human shelters can promote sustainable housing for community living, which is the need of the time.

Keywords: community living, steel, sustainable housing, urban area

Procedia PDF Downloads 227
4789 Biosorption of Nickel by Penicillium simplicissimum SAU203 Isolated from Indian Metalliferous Mining Overburden

Authors: Suchhanda Ghosh, A. K. Paul

Abstract:

Nickel, an industrially important metal, is not mined in India due to the lack of primary mining resources. However, the chromite deposits occurring in the Sukinda and Baula-Nuasahi regions of Odisha, India, are reported to contain around 0.99% nickel entrapped in the goethite matrix of the lateritic, iron-rich ore. Weathering of the dumped chromite mining overburden often leads to contamination of the ground water as well as the surface water with toxic nickel. Microbes inherent to this metal-contaminated environment are reported to be capable of removal as well as detoxification of various metals, including nickel. Nickel-resistant fungal isolates obtained in pure form from the metal-rich overburden were evaluated for their potential to biosorb nickel using their dried biomass. Penicillium simplicissimum SAU203 was the best nickel biosorbent among the 20 fungi tested and was capable of sorbing 16.85 mg Ni/g biomass from a solution containing 50 mg/l of Ni. The identity of the isolate was confirmed using 18S rRNA gene analysis. The sorption capacity of the isolate was further characterized following the Langmuir and Freundlich adsorption isotherm models, and the results reflected energy-efficient sorption. Fourier-transform infrared spectroscopy studies comparing the nickel-loaded and control biomass revealed the involvement of hydroxyl, amine and carboxylic groups in Ni binding. The sorption process was also optimized for several standard parameters, such as initial metal ion concentration, initial sorbent concentration, incubation temperature and pH, presence of additional cations, and pre-treatment of the biomass with different chemicals. Optimisation led to significant improvements in nickel biosorption onto the fungal biomass: P. simplicissimum SAU203 could sorb 54.73 mg Ni/g biomass with an initial Ni concentration of 200 mg/l in solution, and 21.8 mg Ni/g biomass with an initial biomass concentration of 1 g/l of solution.
The optimum temperature and pH for biosorption were recorded as 30°C and 6.5, respectively. The presence of Zn and Fe ions improved the sorption of Ni(II), whereas cobalt had a negative impact. Pre-treatment of the biomass with various chemical and physical agents affected the proficiency of Ni sorption by P. simplicissimum SAU203 biomass: autoclaving, as well as treatment of the biomass with 0.5 M sulfuric acid or acetic acid, reduced sorption compared with the untreated biomass, whereas biomass treated with NaOH, Na₂CO₃ or Tween 80 (0.5 M) showed augmented metal sorption. Hence, on the basis of the present study, it can be concluded that P. simplicissimum SAU203 has the potential for the removal as well as detoxification of nickel from contaminated environments in general, and particularly from the chromite mining areas of Odisha, India.
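Fitting the Langmuir isotherm mentioned above is a short linear-regression exercise: the model q = q_max·b·Ce/(1 + b·Ce) linearizes to Ce/q = Ce/q_max + 1/(q_max·b). The sketch below recovers the constants from synthetic data; the q_max and b values are invented for illustration, not the constants fitted in the study:

```python
# Langmuir isotherm: q = (q_max * b * Ce) / (1 + b * Ce).
# Hypothetical equilibrium data generated with q_max = 60 mg/g, b = 0.05 L/mg;
# the study's fitted constants for P. simplicissimum SAU203 are not reproduced.
q_max_true, b_true = 60.0, 0.05
Ce = [10.0, 25.0, 50.0, 100.0, 200.0]                      # equilibrium Ni (mg/L)
q = [q_max_true * b_true * c / (1 + b_true * c) for c in Ce]  # uptake (mg/g)

# Linearized form: Ce/q = Ce/q_max + 1/(q_max*b)  ->  fit y = m*Ce + k.
y = [c / qi for c, qi in zip(Ce, q)]
n = len(Ce)
mx, my = sum(Ce) / n, sum(y) / n
m = sum((c - mx) * (yi - my) for c, yi in zip(Ce, y)) / \
    sum((c - mx) ** 2 for c in Ce)
k = my - m * mx
q_max_fit = 1.0 / m      # monolayer capacity (mg/g)
b_fit = m / k            # affinity constant (L/mg)
```

The Freundlich model is fitted the same way after taking logarithms of both q and Ce.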

Keywords: nickel, fungal biosorption, Penicillium simplicissimum SAU203, Indian chromite mines, mining overburden

Procedia PDF Downloads 191
4788 On the Well-Posedness of Darcy–Forchheimer Power Model Equation

Authors: Johnson Audu, Faisal Fairag

Abstract:

In a bounded subset of R^d, d = 2 or 3, we consider the Darcy-Forchheimer power model with exponent 1 < m ≤ 2 for a single-phase strong-inertia fluid flow in a porous medium. Under a necessary compatibility condition and some mild regularity assumptions on the interior and boundary data, we prove the existence and uniqueness of a solution (u, p) in L^(m+1)(Ω)^d × (W^(1,(m+1)/m)(Ω) ∩ L_0^2(Ω)), together with its stability.
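For readers unfamiliar with the model, one common statement of the Darcy-Forchheimer power law is sketched below; the exact form, scaling and boundary condition used by the authors may differ, and the notation (permeability K, Forchheimer coefficient β, forcing f, divergence data b) is assumed:

```latex
% Darcy–Forchheimer power-law model, 1 < m <= 2 (notation assumed):
\frac{\mu}{\rho}\,K^{-1}\mathbf{u}
  + \frac{\beta}{\rho}\,\lvert\mathbf{u}\rvert^{m-1}\mathbf{u}
  + \nabla p = \mathbf{f} \quad \text{in } \Omega,
\qquad
\nabla\cdot\mathbf{u} = b \quad \text{in } \Omega,
\qquad
\mathbf{u}\cdot\mathbf{n} = g \quad \text{on } \partial\Omega .
```

The nonlinear, monotone term |u|^{m-1}u is what places the velocity naturally in L^{m+1}(Ω)^d and motivates the monotone-operator techniques named in the keywords.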

Keywords: porous media, power law, strong inertia, nonlinear, monotone type

Procedia PDF Downloads 317
4787 Conservative and Surgical Treatment of Antiresorptive Drug-Related Osteonecrosis of the Jaw with Ultrasonic Piezoelectric Bone Surgery under Polyvinylpyrrolidone Iodine Irrigation: A Case Series of 13 Treated Sites

Authors: Esra Yuce, Isil D. S. Yamaner, Murude Yazan

Abstract:

Aims and objectives: Antiresorptive agents, including bisphosphonates and denosumab, are strong suppressors of osteoclasts and the most commonly used antiresorptive medications for the treatment of osteoporosis; they counteract the negative quantitative alteration of trabecular and cortical bone by inhibiting bone turnover. Oral bisphosphonate therapy for the treatment of osteopenia, osteoporosis or Paget's disease is associated with a low risk of osteonecrosis of the jaw, while a higher risk is associated with intravenous bisphosphonate therapy in the treatment of multiple myeloma and bone metastases. On the other hand, there has been a remarkable increase in the incidence of antiresorptive-related osteonecrosis of the jaw (ARONJ) in oral bisphosphonate users. This clinical presentation evaluates healing outcomes of piezoelectric bone surgery under irrigation with PVP-I solution in patients who had received bisphosphonate therapy. Materials and methods: The study involved 8 female and 5 male patients treated for ARONJ. Of the 13 necrotic sites, 9 were in the mandible and 4 in the maxilla. All 13 patients were treated by surgical debridement via piezoelectric bone surgery under irrigation with a 3% PVP-I solution, in combination with long-term antibiotic therapy; 5 also underwent removal of mobile segments of bony sequestrum. All removable prostheses in 8 patients were relined with soft liners during the healing periods in order to eliminate chronic minor trauma. Results: All patients had been on oral bisphosphonate therapy for at least 2 years, and 5 of them had received intravenous bisphosphonates up to 1 year before oral bisphosphonate therapy was started. According to the AAOMS staging system, four cases were stage II, eight were stage I, and one was stage III. The majority of lesions were identified at sites of dental prostheses (38%) and dental extractions (62%). All patients diagnosed with ARONJ stage I had used unadjusted removable prostheses. No recurrence of symptoms was observed during the present follow-up (9-37 months). Conclusion: Despite their confirmed effectiveness, the prevention and treatment of osteonecrosis of the jaw secondary to oral bisphosphonate therapy remain major medical challenges. Treatment with piezoelectric bone surgery under irrigation with povidone-iodine solution was effective for the management of bisphosphonate-related osteonecrosis of the jaw. Taking precautions for patients treated with oral bisphosphonates, especially denture users, may allow a reduction in the rate of osteonecrosis in the maxillofacial region.

Keywords: antiresorptive drug related osteonecrosis, bisphosphonate therapy, piezoelectric bone surgery, povidone iodine

Procedia PDF Downloads 267
4786 Causes for the Precession of the Perihelion in the Planetary Orbits

Authors: Kwan U. Kim, Jin Sim, Ryong Jin Jang, Sung Duk Kim

Abstract:

It was Leverrier who first discovered the precession of the perihelion in the planetary orbits, and Einstein who first explained this astronomical phenomenon. The amount of the precession of the perihelion in Einstein's theory of gravitation has been explained by means of the inverse fourth power force (inverse third power potential) introduced into the theory of gravitation through the Schwarzschild metric. However, this methodology has a serious shortcoming: it cannot explain the cause of the precession of the perihelion in the planetary orbits. According to our study, without identifying that cause, six methods can explain the amount of the precession discovered by Leverrier. Therefore, the problem of what causes the perihelion to precess in the planetary orbits must be solved for physics, because it is a profound scientific and technological problem and a basic test in the construction of a relativistic theory of gravitation. The scientific solution to the problem shows that Einstein's explanation of the planetary orbits is an artifice produced by the numerical expressions obtained from the fictitious gravitation introduced into the theory of gravitation and by a wrong definition of proper time. The problem of the precession of the perihelion seems to have been solved already by means of general relativity but, in essence, the cause of the astronomical phenomenon has not yet been successfully explained for astronomy. The right solution to the problem comes from a generalized theory of gravitation. Therefore, in this paper, it is shown by means of the Schwarzschild field and the physical quantities of the relativistic Lagrangian reflected in it that fictitious gravitation is not the main factor that can cause the perihelion to precess in the planetary orbits.
In addition, it is shown that the main factor that can cause the perihelion to precess in the planetary orbits is the inverse third power force existing really in the relativistic region of the Solar system.
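For reference, the standard expression whose numerical value all such methods must reproduce (e.g., the anomalous 43 arcseconds per century for Mercury reported since Leverrier) is the relativistic perihelion advance per orbit; the symbol conventions below are the usual ones and are assumed here:

```latex
% Perihelion advance per orbital revolution (standard result):
\Delta\varphi = \frac{6\pi G M}{c^{2}\,a\,(1 - e^{2})},
% M: central (solar) mass, a: semi-major axis, e: orbital eccentricity.
```

Any proposed cause for the precession, including the inverse third power force argued for in this paper, must recover this magnitude in the weak-field limit.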

Keywords: inverse third power force, precession of the perihelion, fictitious gravitation, planetary orbits

Procedia PDF Downloads 11