Search results for: Philippe Constant
364 Development of Optimized Eye Mascara Packages with Bioinspired Spiral Methodology
Authors: Daniela Brioschi, Rovilson Mafalda, Silvia Titotto
Abstract:
Today, packaging is considered a fundamental element in the commercialization of products and services. A good package can help attract new customers and increase a product’s purchase intent. In this scenario, packaging design emerges as an important tool, since products and the design of their packaging are so interconnected that they are no longer seen as separate elements. Packaging design is, in fact, capable of generating desire for a product. The packaging market for cosmetics, especially the makeup market, has also been experiencing an increasing level of sophistication and requirements. Considering that packaging represents an important link of communication with the final user and plays a significant role in the sales process, it is of great importance that packages satisfy not only functional requirements but also visual appeal. One of the possibilities for the design of packages and, in this context, packages for make-up, is bioinspired design, or biomimicry. Bio-inspired design presents a promising paradigm for innovation in both design and sustainable design, by using analogies to biological systems to develop solutions. It has gained importance as a widely diffused movement in design for environmentally conscious development and is responsible for several useful and innovative designs. As eye mascara packages are also part of the constant evolution of design in the cosmetics area, and traditional packages present the disadvantage of product drying over time, this project aims to develop a new and innovative package for this product, using a selected bioinspired design methodology during the development process together with suitable computational tools. To guide the development process of the package, the spiral methodology, conceived by the Biomimicry Institute, was chosen; it is a reliable tool, since it is based on traditional design methodologies. 
The spiral design comprises identification, translation, discovery, abstraction, emulation, and evaluation steps, which can be applied iteratively as the process develops in a spiral. As a support tool for packaging, 3D modelling is carried out in Autodesk Inventor 2018. Although this is ongoing research, first results showed that the spiral design methodology, together with Autodesk Inventor, constitutes a suitable instrument for the bio-inspired design process, and nature proved itself to be an amazing and inexhaustible source of inspiration.
Keywords: bio-inspired design, design methodology, packaging, cosmetics
Procedia PDF Downloads 188
363 Evaluation of the Irritation Potential of Three Topical Formulations of Minoxidil 5% Using Patch Test
Authors: Sule Pallavi, Shah Priyank, Thavkar Amit, Mehta Suyog, Rohira Poonam
Abstract:
Minoxidil is used topically to promote hair growth in the treatment of male androgenetic alopecia. The objective of this study was to compare the irritation potential of three conventional formulations of minoxidil 5% topical solution in a human patch test. The study was a single-centre, double-blind, non-randomized controlled study in 56 healthy adult Indian subjects. A 24-hour occlusive patch test was performed with the three formulations of minoxidil 5% topical solution. Products tested included aqueous-based minoxidil 5% (AnasureTM 5%, Sun Pharma, India – Brand A), alcohol-based minoxidil 5% (Brand B), and aqueous-based minoxidil 5% (Brand C). Isotonic saline 0.9% and 1% w/w sodium lauryl sulphate were included as negative and positive controls, respectively. Patches were applied and removed after 24 hours. The skin reaction was assessed and clinically scored 24 hours after the removal of the patches under a constant artificial daylight source using the Draize scale (a 0-4 point scale for erythema/wrinkles/dryness and for oedema). A combined mean score up to 2.0/8.0 indicates a product is “non-irritant”, a score between 2.0/8.0 and 4.0/8.0 indicates “mildly irritant”, and a score above 4.0/8.0 indicates “irritant”. Follow-up was scheduled after one week to confirm recovery from any reaction. The procedure of the patch test followed the principles outlined by the Bureau of Indian Standards (BIS) (IS 4011:2018; Methods of Test for Safety Evaluation of Cosmetics, 3rd revision). Fifty-six subjects with a mean age of 30.9 years (27 males and 29 females) participated in the study. The combined mean scores (± standard deviation) were: 0.13 ± 0.33 (Brand A), 0.39 ± 0.49 (Brand B), 0.22 ± 0.41 (Brand C), 2.91 ± 0.79 (positive control) and 0.02 ± 0.13 (negative control). The mean score of Brand A (Sun Pharma product) was significantly lower than that of Brand B (p=0.001) and was comparable with Brand C (p=0.21). 
The combined mean erythema scores (± standard deviation) were: 0.09 ± 0.29 (Brand A), 0.27 ± 0.5 (Brand B), 0.18 ± 0.39 (Brand C), 2.02 ± 0.49 (positive control) and 0.0 ± 0.0 (negative control). The mean erythema score of Brand A was significantly lower than that of Brand B (p=0.01) and was comparable with Brand C (p=0.16). Any reaction observed at 24 hours after patch removal subsided within a week. All three topical formulations of minoxidil 5% were non-irritant. Brand A of 5% minoxidil (Sun Pharma) was found to be the least irritant of the three brands based on the combined mean score and mean erythema score in the human patch test as per BIS IS 4011:2018.
Keywords: erythema, irritation, minoxidil, patch test
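The scoring rule above (combined Draize score out of a maximum of 8.0) can be expressed as a short classification sketch. The thresholds and scores come from the abstract; the function name and data layout are illustrative assumptions:

```python
def classify_draize(combined_mean_score):
    """Classify a combined mean Draize score (max 8.0) using the thresholds
    stated in the abstract: up to 2.0 non-irritant, 2.0-4.0 mildly irritant,
    above 4.0 irritant. Name and structure are illustrative, not from the study."""
    if combined_mean_score <= 2.0:
        return "non-irritant"
    if combined_mean_score <= 4.0:
        return "mildly irritant"
    return "irritant"

# Combined mean scores reported in the abstract
scores = {
    "Brand A": 0.13, "Brand B": 0.39, "Brand C": 0.22,
    "Positive control": 2.91, "Negative control": 0.02,
}
for product, s in scores.items():
    print(f"{product}: {s}/8.0 -> {classify_draize(s)}")
```

All three brands fall well below the 2.0/8.0 threshold, which is consistent with the study's "non-irritant" conclusion.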
Procedia PDF Downloads 96
362 Using Lean-Six Sigma Philosophy to Enhance Revenues and Improve Customer Satisfaction: Case Studies from Leading Telecommunications Service Providers in India
Authors: Senthil Kumar Anantharaman
Abstract:
Providing telecommunications-based network services that reach every individual in a developing country like India, with a population of 1.5 billion people, is one of the greatest challenges the country has been facing in its journey towards economic growth and development. With a growing number of telecommunications service providers in the country, a constant challenge faced by these providers is delivering not only a quality but a delightful customer experience while simultaneously generating enhanced revenues and profits. Thus, the role played by process improvement methodologies like Six Sigma cannot be overstated, and in telecom service provider operations specifically, it has provided substantial benefits. Its advantages are quite comparable to its applications and advantages in other sectors like manufacturing, financial services, information technology-based services, and healthcare. One of the key reasons this methodology has been able to reap great benefits in the telecommunications sector is that it has been combined with competing process improvement techniques like the Theory of Constraints, Lean, and Kaizen to give the maximum benefit to the service providers, thereby creating a winning combination of organized process improvement methods for operational excellence, leading to business excellence. This paper discusses some of the key projects and areas in the end-to-end ‘Quote to Cash’ process at three major Indian telecommunications companies that have been greatly assisted by applying Six Sigma along with other process improvement techniques. While the telecommunications companies considered here are primarily in India and run by both private operators and government-based setups, the methodology can be applied equally well in other developing countries around the world with a similar context. 
This study also compares the enhanced revenues that can arise from appropriate opportunities in emerging market scenarios, which Six Sigma as a philosophy and methodology can provide if applied with vigour and robustness. Finally, the paper proposes a winning framework combining the Six Sigma methodology with Kaizen, Lean, and the Theory of Constraints that will enhance both the top line and the bottom line while providing customers a delightful experience.
Keywords: emerging markets, lean, process improvement, six sigma, telecommunications, theory of constraints
Procedia PDF Downloads 164
361 Effect of Packing Ratio on Fire Spread across Discrete Fuel Beds: An Experimental Analysis
Authors: Qianqian He, Naian Liu, Xiaodong Xie, Linhe Zhang, Yang Zhang, Weidong Yan
Abstract:
In the wild, the vegetation layer, with its exceptionally complex fuel composition and heterogeneous spatial distribution, strongly affects the rate of fire spread (ROS) and fire intensity. Clarifying the influence of fuel bed structure on fire spread behavior is of great significance for wildland fire management and prediction. The packing ratio is one of the key physical parameters describing the properties of the fuel bed. There is a threshold value of the packing ratio for ROS, but little is known about the controlling mechanism. In this study, to address this deficiency, a series of fire spread experiments were performed across a discrete fuel bed composed of regularly arranged laser-cut cardboard pieces, with constant wind speed and different packing ratios (0.0125-0.0375). The experiments aim to explore the relative importance of internal and surface heat transfer as the packing ratio varies. The dependence of the measured ROS on the packing ratio was largely consistent with previous research. The radiative and total heat flux data show that internal heat transfer and surface heat transfer are both enhanced with increasing packing ratio (referred to as ‘Stage 1’). The trend agrees well with the variation of the flame length. The results extracted from the video show that the flame length markedly increases with increasing packing ratio in Stage 1. Combustion intensity is suggested to be increased, which, in turn, enhances the heat radiation. The heat flux data show that surface heat transfer appears to be more important than internal heat transfer (fuel preheating inside the fuel bed) in Stage 1. On the contrary, internal heat transfer dominates the fuel preheating mechanism when the packing ratio further increases (referred to as ‘Stage 2’), because the surface heat flux remains almost stable with the packing ratio in Stage 2. 
As for heat convection, the flow velocity was measured using Pitot tubes both inside and on the upper surface of the fuel bed during the fire spread. Based on the gas velocity distribution ahead of the flame front, it is found that the airflow inside the fuel bed is restricted in Stage 2, which can reduce the internal heat convection in theory. However, the analysis indicates that it is not the influence of the internal flow on convection and combustion, but the decreased internal radiation per unit fuel that is responsible for the decrease in ROS.
Keywords: discrete fuel bed, fire spread, packing ratio, wildfire
Procedia PDF Downloads 142
360 Incidence of Orphaned Neonatal Puppies Attended in a Veterinary Hospital – Causes, Consequences and Mortality
Authors: Maria L. G. Lourenço, Keylla H. N. P. Pereira, Viviane Y. Hibaru, Fabiana F. Souza, João C. P. Ferreira, Simone B. Chiacchio, Luiz H. A. Machado
Abstract:
Orphanhood is a risk factor for mortality in newborns, since it is a condition of total or partial absence of maternal care, which is essential for neonatal survival, including nursing (nutrition, transfer of passive immunity, and hydration), warmth, urination and defecation stimuli, and protection. The most common causes of mortality in orphans are related to lack of assistance, handling mistakes, and infections. This study aims to describe orphanhood rates in neonatal puppies, their main causes, and the associated mortality rates. The study included 735 neonates admitted to the Sao Paulo State University (UNESP) Veterinary Hospital, Botucatu, Sao Paulo, Brazil, between January 2018 and November 2019. The orphanhood rate was 43.4% (319/735) of all neonates included, and the main causes of orphanhood were maternal agalactia/hypogalactia (23.5%, 75/319), large litters (15.7%, 50/319), toxic milk syndrome due to maternal mastitis (14.4%, 46/319), absence of suction/weak neonate (12.2%, 39/319), maternal disease (9.4%, 30/319), cleft palate/lip (6.3%, 20/319), maternal death (5.9%, 19/319), prematurity (5.3%, 17/319), rejection/failure of maternal instinct (3.8%, 12/319) and abandonment by the owner/separation of mother and neonate (3.5%, 11/319). The main consequences of orphanhood observed in the admitted neonates were hypoglycemia, hypothermia, dehydration, aspiration pneumonia, wasting syndrome, failure of passive immunity transfer, infections, and sepsis, which occurred due to failure to identify the problem early, lack of adequate assistance, negligence, and handling mistakes by the owner. The total neonatal mortality rate was 8% (59/735), and the neonatal mortality rate among orphans was 18.5% (59/319). The orphanhood and mortality rates were considered high, but even higher rates may be observed in locations without adequate neonatal assistance and owner orientation. 
The survival of these patients depends on constant monitoring of the litter, early diagnosis and assistance, and the implementation of effective handling of orphans. Understanding correct neonatal handling and instructing owners in it are essential to minimize the consequences of orphanhood and the mortality rates.
Keywords: orphans, neonatal care, puppies, newborn dogs
Procedia PDF Downloads 258
359 A Differential Scanning Calorimetric Study of Frozen Liquid Egg Yolk Thawed by Different Thawing Methods
Authors: Karina I. Hidas, Csaba Németh, Anna Visy, Judit Csonka, László Friedrich, Ildikó Cs. Nyulas-Zeke
Abstract:
Egg yolk is a popular ingredient in the food industry due to its gelling, emulsifying, colouring, and coagulating properties. Because of the heat sensitivity of its proteins, egg yolk can only be heat treated at low temperatures, so its shelf life, even with the addition of a preservative, is only a few weeks. Freezing can increase the shelf life of liquid egg yolk up to 1 year, but the yolk undergoes gelling below -6 °C, which is an irreversible phenomenon. The degree of gelation depends on the time and temperature of freezing and is influenced by the thawing process. Therefore, in our experiment, we examined egg yolks thawed in different ways. In this study, unpasteurized, industrially broken, separated, and homogenized liquid egg yolk was used. Freshly produced samples were frozen in plastic containers at -18 °C in a laboratory freezer. Frozen storage was carried out for 90 days. Samples were analysed at day zero (unfrozen) and after frozen storage for 1, 7, 14, 30, 60 and 90 days. Samples were thawed in two ways (at 5 °C for 24 hours and at 30 °C for 3 hours) before testing. Calorimetric properties were examined by differential scanning calorimetry, where heat flow curves were recorded. Denaturation enthalpy values were calculated by fitting a linear baseline, and denaturation temperature values were evaluated. In addition, the dry matter content of the samples was measured by the oven method, drying at 105 °C to constant weight. For statistical analysis, two-way ANOVA (α = 0.05) was employed, with thawing mode and freezing time as the fixed factors. Denaturation enthalpy values decreased from 1.1 to 0.47 by the end of the storage experiment, which represents a reduction of about 60%. The effect of freezing time on these values was significant; the enthalpy of samples stored frozen for only 1 day was already significantly reduced. However, the mode of thawing did not significantly affect the denaturation enthalpy of the samples, and no interaction was seen between the two factors. 
The denaturation temperature and dry matter content did not change significantly with either freezing time or thawing mode. The results of our study show that slow freezing and frozen storage at -18 °C greatly reduce the amount of protein that can be denatured in egg yolk, indicating that the proteins were subjected to aggregation, denaturation, or other protein conversions regardless of how they were thawed.
Keywords: denaturation enthalpy, differential scanning calorimetry, liquid egg yolk, slow freezing
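The reported enthalpy drop can be checked with a one-line calculation using the values stated in the abstract (units as reported there):

```python
enthalpy_day0 = 1.10   # denaturation enthalpy at day zero (unfrozen), as reported
enthalpy_day90 = 0.47  # after 90 days of frozen storage at -18 °C, as reported

# Relative reduction over the storage experiment
relative_reduction = (enthalpy_day0 - enthalpy_day90) / enthalpy_day0
print(f"Relative reduction: {relative_reduction:.1%}")  # roughly the "about 60%" stated
```

The exact figure works out to about 57%, which the abstract rounds to "about 60%".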
Procedia PDF Downloads 129
358 An Inexhaustible Will of Infinite, or the Creative Will in the Psychophysiological Artistic Practice: An Analysis through Nietzsche's Will to Power
Authors: Filipa Cruz, Grecia P. Matos
Abstract:
An Inexhaustible Will of Infinite is ongoing practice-based research focused on a psychophysiological conception of the body and on the creative will, which seeks to examine the possibility of art being simultaneously a pacifier and an intensifier in physiological artistic production. This is a study where philosophy and art converge in a commentary on the influence of the concept of the will to power in the art world, through Nietzsche’s commentaries, the analysis of case studies, and reflection arising from artistic practice. Through Nietzsche, the study compares concepts that communicate with artistic practice, since creation is an intensification and engenders perspectives. It is also a practice highly embedded in the body, in the non-verbal, in the physiology of art, and in the coexistence between the sensorial and thought. It is questioned whether the physiology of art could be thought of as a thinking-feeling with no primacy of thought over the sensorial. Art as a manifestation of the will to power participates in a comprehension of the world. In this article, art is taken as a privileged way of communication, implicating the corporeal-sensorial-conceptual, and of connection between humans. The dream and drunkenness are problematized as intensifications and expressions of the comprehension of life. Therefore, art is perceived as suggestion and invention, where artistic intoxication breaks limits in the experience of life, and the artist, dominated by creative forces, claims, orders, obeys, and proclaims love for life. The intention is also to consider how one can start from pain to create and how one can generate new and endless artistic forms through nightmares, daydreams, impulses, intoxication, enhancement, and intensification in a plurality of subjects and matters. Consideration is given to the fact that artistic creation is something intensified corporeally, expanded, continuously generated, and acting on bodies. 
It is inextinguishable, a constant movement intertwining Apollonian and Dionysian instincts of destruction and creation of new forms. The concept of love also appears, associated with conquest, which, in a process of intensification and drunkenness, impels the artist to generate and to transform matter. Just like a love relationship, love in Nietzsche requires time, patience, effort, courage, conquest, seduction, obedience, and command, potentiating the amplification of knowledge of the other and the world. Interlacing Nietzsche's philosophy not with Modern Art but with Contemporary Art, it is argued that intoxication, the will to power (strongly connected with the creative will), and love still have a place in artistic production as creative agents.
Keywords: artistic creation, body, intensification, psychophysiology, will to power
Procedia PDF Downloads 119
357 Activated Carbon Content Influence in Mineral Barrier Performance
Authors: Raul Guerrero, Sandro Machado, Miriam Carvalho
Abstract:
Soil and aquifer pollution caused by hydrocarbon liquid spills results from misguided operational practices and inefficient safety guidelines. According to the Brazilian Environmental Institute (IBAMA), during 2013 alone, over 472.13 m³ of diesel oil leaked into the environment nationwide, counting reported cases only. In view of this, there is an indisputable need to adopt appropriate environmental safeguards, especially in areas intended for the production, treatment, transportation, and storage of hydrocarbon fluids. According to Brazilian norm ABNT-NBR 7505-1:2000, compacted soil or mineral barriers used in structural contingency levees, such as around storage tanks, are required to present a maximum water permeability coefficient, k, of 1×10⁻⁶ cm/s. However, as discussed by several authors, water cannot be adopted as the reference fluid to determine a site's containment performance against organic fluids, mainly due to the great discrepancy in polarity values (dielectric constant) between water and most organic fluids. Previous studies within this same research group proposed an optimal range of values for the soil's index properties for mineral barrier composition focused on organic fluid containment. Unfortunately, in some circumstances, it is not possible to find a soil with the required geotechnical characteristics near the containment site, increasing prevention and construction costs, as well as environmental risks. For these specific cases, the use of an organic product or material as an additive to enhance mineral barrier containment performance may be an attractive geotechnical solution. This paper evaluates the effect of activated carbon (AC) additions to a clayey soil on its permeability to hydrocarbon fluids. 
Variables such as compaction energy, carbon texture, and addition content (0%, 10% and 20%) were analyzed through laboratory falling-head permeability tests using distilled water and commercial diesel as percolating fluids. The results showed that the AC with the smaller particle size significantly reduced k values against diesel, indicating a direct relationship between the particle-size reduction (surface area increase) of the organic product and organic fluid containment.
Keywords: activated carbon, clayey soils, permeability, surface area
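The falling-head test mentioned above derives k from the head drop in a standpipe. A minimal sketch of the standard formula follows; the geometry and readings are hypothetical example values, not data from this study:

```python
import math

def falling_head_k(a, A, L, t, h1, h2):
    """Permeability coefficient from a falling-head test (standard formula):
    k = (a * L) / (A * t) * ln(h1 / h2)
    a: standpipe cross-section (cm^2), A: specimen cross-section (cm^2),
    L: specimen length (cm), t: elapsed time (s),
    h1, h2: initial and final heads (cm). Example values below are hypothetical."""
    return (a * L) / (A * t) * math.log(h1 / h2)

# Hypothetical specimen checked against the ABNT-NBR 7505-1:2000 water limit of 1e-6 cm/s
k = falling_head_k(a=1.0, A=50.0, L=10.0, t=3600.0, h1=100.0, h2=90.0)
print(f"k = {k:.2e} cm/s -> {'within' if k <= 1e-6 else 'above'} the 1e-6 cm/s limit")
```

Note that this limit is defined for water; as the abstract argues, water permeability alone does not predict barrier performance against organic fluids such as diesel.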
Procedia PDF Downloads 256
356 Structure and Magnetic Properties of M-Type Sr-Hexaferrite with Ca, La Substitutions
Authors: Eun-Soo Lim, Young-Min Kang
Abstract:
M-type Sr-hexaferrite (SrFe₁₂O₁₉) has been studied over the past decades because it is one of the most utilized materials in permanent magnets due to its low price, outstanding chemical stability, and appropriate hard magnetic properties. Many attempts have been made to improve the intrinsic magnetic properties of M-type Sr-hexaferrites (SrM), such as improving the saturation magnetization (MS) and crystalline anisotropy by cation substitution. It is well established that Ca-La-Co substitution is one of the most successful approaches, leading to a significant enhancement in crystalline anisotropy without reducing MS; thus, Ca-La-Co-doped SrM has been commercialized in high-grade magnet products. In this research, the effects of separately doping Ca and La into the SrM lattice were studied under the assumption that these elements can substitute at both the Fe and Sr sites. Hexaferrite samples of stoichiometric SrFe₁₂O₁₉ (SrM), Ca-substituted SrM with formulae Sr₁₋ₓCaₓFe₁₂Oₐ (x = 0.1, 0.2, 0.3, 0.4) and SrFe₁₂₋ₓCaₓOₐ (x = 0.1, 0.2, 0.3, 0.4), and La-substituted SrM with formulae Sr₁₋ₓLaₓFe₁₂Oₐ (x = 0.1, 0.2, 0.3, 0.4) and SrFe₁₂₋ₓLaₓOₐ (x = 0.1, 0.2, 0.3, 0.4) were prepared by conventional solid-state reaction processes. X-ray diffraction (XRD) with a Cu Kα radiation source (λ=0.154056 nm) was used for phase analysis. Microstructural observation was conducted with a field emission scanning electron microscope (FE-SEM). M-H measurements were performed using a vibrating sample magnetometer (VSM) at 300 K. An almost pure M-type phase could be obtained in all series of hexaferrites calcined at >1250 °C. Small amounts of Fe₂O₃ phase were detected in the XRD patterns of the Sr₁₋ₓCaₓFe₁₂Oₐ (x = 0.2, 0.3, 0.4) and Sr₁₋ₓLaₓFe₁₂Oₐ (x = 0.1, 0.2, 0.3, 0.4) samples. Small amounts of unidentified secondary phases, without the Fe₂O₃ phase, were also found in the SrFe₁₂₋ₓCaₓOₐ (x = 0.4) and SrFe₁₂₋ₓLaₓOₐ (x = 0.3, 0.4) samples. 
Although Ca substitution (x) into the SrM structure did not exhibit a clear tendency in the cell parameter changes in either series of samples, Sr₁₋ₓCaₓFe₁₂Oₐ and SrFe₁₂₋ₓCaₓOₐ, the cell volume slightly decreased with Ca doping in the Sr₁₋ₓCaₓFe₁₂Oₐ samples and increased in the SrFe₁₂₋ₓCaₓOₐ samples. Considering the relative ion sizes of Sr²⁺ (0.113 nm), Ca²⁺ (0.099 nm), and Fe³⁺ (0.064 nm), these results imply that Ca substitutes at both the Sr and Fe sites in SrM. A clear tendency in the cell parameter changes was observed in the case of La substitution into the Sr site of SrM (Sr₁₋ₓLaₓFe₁₂Oₐ): the cell volume decreased with increasing x, owing to the similar but smaller ion size of La³⁺ (0.106 nm) compared with Sr²⁺. In the case of SrFe₁₂₋ₓLaₓOₐ, the cell volume first decreased at x = 0.1 and then remained almost constant as x increased from 0.2 to 0.4. These results indicate that La substitutes only at the Sr site in the SrM structure. In addition, the microstructure and magnetic properties of these samples, and the correlations between them, will be presented.
Keywords: M-type hexaferrite, substitution, cell parameter, magnetic properties
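The cell volumes discussed above follow directly from the hexagonal lattice parameters. A brief sketch, using typical literature values for SrFe₁₂O₁₉ (a ≈ 5.88 Å, c ≈ 23.03 Å) that are illustrative rather than taken from this study:

```python
import math

def hex_cell_volume(a, c):
    """Unit-cell volume of a hexagonal lattice: V = (sqrt(3)/2) * a^2 * c."""
    return (math.sqrt(3) / 2.0) * a**2 * c

# Typical literature lattice parameters for SrFe12O19, in angstroms (illustrative)
a, c = 5.88, 23.03
V = hex_cell_volume(a, c)
print(f"V = {V:.1f} cubic angstroms")  # about 690 for these values
```

Small substitution-induced shifts in a and c translate directly into the cell-volume trends the abstract reports for the Ca- and La-doped series.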
Procedia PDF Downloads 211
355 Melaninic Discrimination among Primary School Children
Authors: Margherita Cardellini
Abstract:
To our knowledge, dark-skinned children are often victims of discrimination from adults and society, but few studies specifically focus on skin color discrimination directed at children by other children. Even today, the 'color blind children' ideology is widespread among adults, teachers, and educators, and perhaps also among scholars, who seem cautious about studying expressions of racism in childhood. This social and cultural belief leads people to think that all children, because of their age and their brief experience in the world, are uninterested in skin color. Sometimes adults think that children are even incapable of perceiving skin colors and that it could be dangerous to talk about melaninic differences with them, because they might then notice these differences, producing prejudice and racism. Psychology and neurology research has shown for many years that infants are already capable of perceiving skin color and ethnic differences by the age of 3 months. Starting from this theoretical framework, we conducted a research project to understand if and how primary school children talk about skin colors, picking up any stereotypes or prejudices. Using the focus group as a methodology to stimulate group interaction, several stories about episodes of skin color discrimination within the children's classrooms or schools emerged. Using the photo-elicitation technique, we stimulated talk about the research object, skin color, by asking the children for 'the first two things that come into your mind' when looking at the photographs presented during the focus group, which depicted dark- and light-skinned women and men. This paper presents some of these stories of discrimination, ordered by increasing proximity to the discriminatory act. 
Stories of discrimination that happened within the school, in an after-school daycare, and in the classroom will be presented, including episodes that children recounted during the focus groups in the presence of the discriminated child. If it is true that the Declaration of the Rights of the Child states that every child should be free from discrimination, it is also true that every adult should protect children from every form of discrimination. How, as adults, can we defend children against discrimination if we cannot admit that even children are potential actors of discrimination? Without awareness, we risk devaluing these episodes, implicitly confident that the only way to fight discrimination is to keep quiet about it. The right not to be discriminated against entails the right to talk about one's own experiences of discrimination and the right to perceive the unfairness of the constant depreciation of skin color or any element of physical diversity. Intercultural education could act as spokesperson for this mission, in the belief that difference and plurality can really become elements of potential enrichment for humanity, starting with children.
Keywords: colorism, experiences of discrimination, primary school children, skin color discrimination
Procedia PDF Downloads 196
354 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Partitioned Solution Approach and an Exponential Model
Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino
Abstract:
The solution of the nonlinear dynamic equilibrium equations of base-isolated structures using a conventional monolithic solution approach, i.e. an implicit single-step time integration method employed with an iteration procedure, together with existing nonlinear analytical models, such as differential equation models, to simulate the dynamic behavior of seismic isolators, can require significant computational effort. In order to reduce the numerical computations, a partitioned solution method and a one-dimensional nonlinear analytical model are presented in this paper. A partitioned solution approach can be easily applied to base-isolated structures in which the base isolation system is much more flexible than the superstructure. Thus, in this work, the explicit, conditionally stable central difference method is used to evaluate the nonlinear response of the base isolation system, and the implicit, unconditionally stable Newmark constant average acceleration method is adopted to predict the linear response of the superstructure, with the benefit of avoiding iterations within each time step of a nonlinear dynamic analysis. The proposed mathematical model is able to simulate the dynamic behavior of seismic isolators without requiring the solution of a nonlinear differential equation, as is the case for the widely used differential equation models. The proposed mixed explicit-implicit time integration method and nonlinear exponential model are adopted to analyze a three-dimensional seismically isolated structure with a lead rubber bearing system subjected to earthquake excitation. The numerical results show the good accuracy and significant computational efficiency of the proposed solution approach and analytical model compared to the conventional solution method and mathematical model adopted in this work. 
Furthermore, the low stiffness of the base isolation system with lead rubber bearings allows for a critical time step considerably larger than the imposed ground acceleration time step, thus avoiding stability problems in the proposed mixed method.
Keywords: base-isolated structures, earthquake engineering, mixed time integration, nonlinear exponential model
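The partitioned scheme described above pairs an explicit and an implicit integrator. The sketch below applies each to a separate single-degree-of-freedom system rather than to the coupled base-isolated model, so it illustrates the two halves of the scheme, not the authors' full formulation; the parameter values are illustrative:

```python
import numpy as np

def newmark_caa_linear(m, c, k, ag, dt):
    """Implicit Newmark constant-average-acceleration method (beta=1/4, gamma=1/2),
    unconditionally stable, for a linear SDOF: m*u'' + c*u' + k*u = -m*ag."""
    beta, gamma = 0.25, 0.5
    n = len(ag)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = -ag[0]  # zero initial displacement and velocity
    keff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    for i in range(n - 1):
        p = (-m * ag[i + 1]
             + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt) + (1 / (2 * beta) - 1) * a[i])
             + c * (gamma * u[i] / (beta * dt) + (gamma / beta - 1) * v[i]
                    + dt * (gamma / (2 * beta) - 1) * a[i]))
        u[i + 1] = p / keff
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2) - v[i] / (beta * dt)
                    - (1 / (2 * beta) - 1) * a[i])
        v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
    return u

def central_difference_nonlinear(m, c, restoring, ag, dt):
    """Explicit central difference method, conditionally stable (dt < T_n/pi), for a
    nonlinear SDOF: m*u'' + c*u' + f(u) = -m*ag. No iteration is needed because the
    restoring force is evaluated at the already-known displacement u[i]."""
    n = len(ag)
    u = np.zeros(n)
    a0 = (-m * ag[0] - restoring(0.0)) / m
    u_prev = 0.5 * dt**2 * a0  # fictitious u[-1] for zero initial conditions
    keff = m / dt**2 + c / (2 * dt)
    for i in range(n - 1):
        u_im1 = u[i - 1] if i > 0 else u_prev
        p = (-m * ag[i] - restoring(u[i])
             + (2 * m / dt**2) * u[i] - (m / dt**2 - c / (2 * dt)) * u_im1)
        u[i + 1] = p / keff
    return u

# Consistency check on a linear system under constant ground acceleration:
# both integrators should settle to the static offset u = -m*ag/k (illustrative values).
m, c, k, dt = 1.0, 2.0, 100.0, 0.005
ag = np.full(4001, 1.0)  # 20 s of constant ground acceleration
u_imp = newmark_caa_linear(m, c, k, ag, dt)
u_exp = central_difference_nonlinear(m, c, lambda u: k * u, ag, dt)
print(u_imp[-1], u_exp[-1])  # both close to -0.01
```

In the partitioned method, the explicit half would carry the nonlinear isolator restoring force (the exponential model of the paper) while the implicit half carries the linear superstructure, which is what removes the per-step iteration.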
Procedia PDF Downloads 280
353 Temperamental Determinants of Eye-Hand Coordination Formation in the Special Aerial Gymnastics Instruments (SAGI)
Authors: Zdzisław Kobos, Robert Jędrys, Zbigniew Wochyński
Abstract:
Motor activity and good health are sine qua non determinants of proper professional practice, especially in aviation. Therefore, candidates for aviation are selected according to their psychomotor ability by specialist medical commissions. Moreover, they must pass a physical fitness examination. During their studies at the air force academy, eye-hand coordination is formed in two stages. Besides general physical education, future aircraft pilots must undergo specialist training on SAGI. Training includes: looping, aerowheel, and gyroscope. The aim of training on the above-listed apparatuses is to form eye-hand coordination during tasks in the air. Such coordination is necessary to perform various figures in real flight. Therefore, during the education of future pilots, the determinants of the effective formation of this important parameter of human body functioning are sought. Several studies in sport psychology indicate an important role of temperament as a factor determining human behavior during task performance and the acquisition of operating skills. The Polish psychologist Jan Strelau refers to the basic, relatively constant personality features which manifest themselves in the formal characteristics of human behavior. Temperament, being initially determined by inborn physiological mechanisms, changes in the course of maturation and under some environmental factors, and concerns the energetic level and temporal characteristics of reactions. Objectives: This study aimed at examining the relationship between temperamental features and eye-hand coordination formation during training on SAGI. Material and Methods: A group of 30 pilotage students was examined in two situations. The first assessment of eye-hand coordination level was carried out before the beginning of a 30-hour training on SAGI. The second assessment was carried out after training completion. Training lasted for 2 hours once a week. 
Temperament was evaluated with the Formal Characteristics of Behavior − Temperament Inventory (FCB-TI) developed by Bogdan Zawadzki and Jan Strelau. Eye-hand coordination was assessed with a computer version of the Warsaw System of Psychological Tests. Results: The training on SAGI increased the level of eye-hand coordination in the examined students. Conclusions: A higher level of eye-hand coordination was obtained after completion of the training. Moreover, the relationship between eye-hand coordination level and selected temperament traits was statistically significant.
Keywords: temperament, eye-hand coordination, pilot, SAGI
Procedia PDF Downloads 440
352 Increasing Access to Upper Limb Reconstruction in Cervical Spinal Cord Injury
Authors: Michelle Jennett, Jana Dengler, Maytal Perlman
Abstract:
Background: Cervical spinal cord injury (SCI) is a devastating event that results in upper limb paralysis, loss of independence, and disability. People living with cervical SCI have identified improvement of upper limb function as a top priority. Nerve and tendon transfer surgery has successfully restored upper limb function in cervical SCI but is not universally used or available to all eligible individuals. This exploratory mixed-methods study used an implementation science approach to better understand the factors that influence access to upper limb reconstruction in the Canadian context and to design an intervention to increase access to care. Methods: Data from the Canadian Institute for Health Information's Discharge Abstract Database (CIHI-DAD) and the National Ambulatory Care Reporting System (NACRS) were used to determine the annual rate of nerve and tendon transfer surgeries performed in cervical SCI in Canada over the last 15 years. Semi-structured interviews informed by the Consolidated Framework for Implementation Research (CFIR) were used to explore Ontario healthcare providers' knowledge and practices around upper limb reconstruction. An inductive, iterative constant comparative process involving descriptive and interpretive analyses was used to identify themes that emerged from the data. Results: Healthcare providers (n = 10 upper extremity surgeons, n = 10 SCI physiatrists, n = 12 physical and occupational therapists working with individuals with SCI) were interviewed about their knowledge and perceptions of upper limb reconstruction and their current practices and discussions around it. Data analysis is currently underway and will be presented. Regional variation in rates of upper limb reconstruction and trends over time are also being analyzed. Conclusions: Utilization of nerve and tendon transfer surgery to improve upper limb function in Canada remains low.
There is a complex array of interrelated individual-, provider- and system-level barriers that prevent individuals with cervical SCI from accessing upper limb reconstruction. In order to offer equitable access to care, a multi-modal approach addressing current barriers is required.
Keywords: cervical spinal cord injury, nerve and tendon transfer surgery, spinal cord injury, upper extremity reconstruction
Procedia PDF Downloads 97
351 Comparison of Spiral Circular Coil and Helical Coil Structures for Wireless Power Transfer System
Authors: Zhang Kehan, Du Luona
Abstract:
Wireless power transfer (WPT) systems have been widely investigated for their convenience and safety compared to traditional plug-in charging systems. Research topics include impedance matching, circuit topology, and transfer distance, among others, all aimed at improving the efficiency of the WPT system, which is a decisive factor for practical application. Moreover, coil structures such as the spiral circular coil and the helical coil with variable distance between two turns also have an indispensable effect on the efficiency of WPT systems. This paper compares the efficiency of WPT systems utilizing spiral circular or helical coils with variable distance between two turns, and experimental results show that the efficiency of a spiral circular coil with an optimum distance between two turns is the highest. Following the efficiency formula of a resonant WPT system with series-series topology, we introduce M²/R₁ (mutual inductance squared over primary coil resistance) to measure the efficiency of spiral circular coil and helical coil WPT systems. If the distance s between two turns is too small, proximity effect theory shows that the current induced in a conductor by the varying flux created by the current flowing in the skin of the neighboring conductor opposes the source current and has an appreciable impact on coil resistance. Thus, in both coil structures, s affects coil resistance. At the same time, for a fixed distance between the primary and secondary coils, s also influences M to some degree. This shows that s plays an indispensable role in changing M²/R₁ and can therefore be adjusted to find the optimum value at which the WPT system achieves its highest efficiency. In practical applications of WPT systems, especially in underwater vehicles, miniaturization is a vital issue in designing the WPT structure. Limited by system size, the largest external radius of the spiral circular coil is 100 mm, and the largest height of the helical coil is 40 mm.
In other words, the number of turns N changes with s. In both structures, the distance between any two turns in the secondary coil is set to a constant 1 mm so that R₂ does not vary. Based on the analysis above, we set up spiral circular coil and helical coil models in COMSOL to analyze the value of M²/R₁ as the distance sₚ between two turns in the primary coil varies from 0 mm to 10 mm. In both models, the distance between the primary and secondary coils is 50 mm and the wire diameter is 1.5 mm. The number of turns in the secondary coil is 27 in the helical coil model and 20 in the spiral circular coil model. The best values of s are 1 mm in the helical structure and 2 mm in the spiral circular structure, at which M²/R₁ is largest. The spiral circular coil is clearly the first choice when designing a WPT system, since its M²/R₁ is larger than that of the helical coil under the same conditions.
Keywords: distance between two turns, helical coil, spiral circular coil, wireless power transfer
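A rough sketch of why M²/R₁ works as a figure of merit here, using the standard two-coil series-series circuit result at resonance rather than the authors' COMSOL models; all component values below are invented for illustration:

```python
def ss_wpt_efficiency(omega, M, R1, R2, RL):
    """Link efficiency of a two-coil series-series resonant WPT system at
    resonance (standard circuit-theory result; coil losses R1, R2, load RL):

        eta = (omega*M)^2 * RL / ((R2 + RL) * (R1*(R2 + RL) + (omega*M)^2))

    With R2, RL and omega fixed, eta depends only on, and increases with,
    M^2/R1 -- which is why M^2/R1 can rank candidate primary-coil geometries.
    """
    wm2 = (omega * M) ** 2
    return wm2 * RL / ((R2 + RL) * (R1 * (R2 + RL) + wm2))

# Invented values: same mutual inductance, but the coil with half the primary
# resistance (i.e. double M^2/R1) achieves the higher link efficiency.
omega = 6.283e5                                    # ~100 kHz in rad/s
eta_low_r1 = ss_wpt_efficiency(omega, 5e-6, 0.5, 0.5, 10.0)
eta_high_r1 = ss_wpt_efficiency(omega, 5e-6, 1.0, 0.5, 10.0)
```

Under these invented values the lower-R₁ coil wins, mirroring the paper's logic that the turn spacing should be tuned to maximize M²/R₁.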
Procedia PDF Downloads 345
350 Effects of Roasting as Preservative Method on Food Value of the Runner Groundnuts, Arachis hypogaea
Authors: M. Y. Maila, H. P. Makhubele
Abstract:
Roasting is one of the oldest preservation methods used for foods such as nuts and seeds. It is a process by which heat is applied to dry foodstuffs without the use of oil or water as a carrier. Groundnut seeds, also known as peanuts when sun-dried or roasted, are among the oldest oil crops and are mostly consumed as a snack, after roasting, in many parts of South Africa. However, roasting can denature proteins, destroy amino acids, decrease nutritive value and induce undesirable chemical changes in the final product. The aim of this study, therefore, was to evaluate the effect of various roasting times on the food value of runner groundnut seeds. A constant temperature of 160 °C and various time intervals (20, 30, 40, 50 and 60 min) were used for roasting the seeds in an oven. Roasted seeds were then cooled and milled to flour. Milled sun-dried raw groundnuts served as the reference. Proximate analysis (moisture, energy and crude fat) was performed and the results were determined using standard methods. Antioxidant content was determined using HPLC. Mineral (cobalt, chromium, silicon and iron) contents were determined by first digesting the ash of the sun-dried and roasted seed samples in 3 M hydrochloric acid and then measuring by atomic absorption spectrometry. All results were subjected to ANOVA in SAS software. Relative to the reference, roasting time significantly (p ≤ 0.05) reduced the moisture (71%–88%), energy (74%) and crude fat (5%–64%) of the seeds, whereas the antioxidant content was significantly (p ≤ 0.05) increased (35%–72%) with increasing roasting time. Similarly, the tested mineral contents of the roasted seeds were significantly (p ≤ 0.05) reduced at all roasting times: cobalt (21%–83%), chromium (48%–106%) and silicon (58%–77%). The iron content, however, was not significantly (p ≤ 0.05) affected.
Generally, the tested runner groundnut seeds had higher food value in the raw state than in the roasted state, except for antioxidant content. Moisture is a critical factor affecting the shelf life, texture and flavor of the final product; loss of moisture ensures prolonged shelf life, which contributes to the stability of roasted peanuts. The increased antioxidant content of roasted groundnuts is also valuable, as antioxidants are among the health-promoting compounds. In conclusion, the overall reduction in the proximate and mineral contents of the seeds due to roasting is sufficient to suggest that roasting time influences the food value and shelf life of the final product.
Keywords: dry roasting, legume, oil source, peanuts
Procedia PDF Downloads 287
349 Surge in U. S. Citizens Expatriation: Testing Structural Equation Modeling to Explain the Underlying Policy Rationale
Authors: Marco Sewald
Abstract:
Compared to the past, the number of Americans renouncing U. S. citizenship has risen. Even though these numbers are small compared to immigration figures, U. S. citizen expatriations have historically been much lower, making the uptick worrisome. In addition, the published lists and numbers from the U. S. government seem incomplete, with many renunciants not counted. Different branches of the U. S. government report different numbers, and no one seems to know exactly how large the real number is, even though the IRS and the FBI both track and/or publish numbers of Americans who renounce. Since there is no single explanation, anecdotal evidence suggests this uptick is caused by global tax law and the increased compliance burdens imposed by U. S. lawmakers on U. S. citizens abroad. Within a research project, the question arose of why a constantly growing number of U. S. citizens are expatriating; the answers are believed to help explain the underlying governmental policy rationale leading to such activity. Because it is impossible to locate former U. S. citizens to survey their reasons, and the U. S. government does not comment on the reasons given during the expatriation process, the chosen methodology is Structural Equation Modeling (SEM), in a first step re-using existing surveys conducted by different researchers in recent years among U. S. citizens residing abroad: surveys questioning their personal situation with regard to tax, compliance, citizenship, and likelihood of repatriating to the U. S. In general, SEM allows one to: (1) represent, estimate and validate a theoretical model with linear (unidirectional or not) relationships; (2) model causal relationships between multiple predictors (exogenous variables) and multiple dependent (endogenous) variables; (3) include unobservable latent variables; (4) model measurement error, i.e., the degree to which observable variables describe the latent variables.
Moreover, SEM is appealing because results can be represented either by matrix equations or graphically. Results: The observed variables (items) of the construct are caused by various latent variables. The given surveys delivered high correlations, and it is therefore impossible to identify the distinct effect of each indicator on the latent variable, which was one desired result. Since every SEM comprises two parts, (1) a measurement model (outer model) and (2) a structural model (inner model), it appears necessary to extend the given data by conducting additional research and surveys to validate the outer model and obtain the desired results.
Keywords: expatriation of U. S. citizens, SEM, structural equation modeling, validating
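The two model parts named above can be sketched in standard LISREL notation (a generic textbook form, not tied to the specific surveys re-used here):

```latex
% Measurement (outer) model: observed indicators x, y load on latent variables xi, eta
x = \Lambda_x \xi + \delta, \qquad y = \Lambda_y \eta + \varepsilon
% Structural (inner) model: causal relations among the latent variables
\eta = B \eta + \Gamma \xi + \zeta
```

Here Λₓ and Λ_y are the factor loadings of the outer model, δ and ε the measurement errors, and B and Γ the path coefficients of the inner model; validating the outer model means establishing that the loadings reliably link items to their latent constructs.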
Procedia PDF Downloads 221
348 An Experimental Study of Scalar Implicature Processing in Chinese
Authors: Liu Si, Wang Chunmei, Liu Huangmei
Abstract:
A prominent component of the semantics versus pragmatics debate, scalar implicature (SI) has been gaining great attention ever since it was proposed by Horn. The constant debate is between the structural and the pragmatic approach. The former claims that the generation of SI is costless, automatic, and dependent mostly on the structural properties of sentences, whereas the latter holds both that such generation is largely dependent upon context and that the process is costly. Many experiments, among which Katsos's text comprehension experiments are influential, have been designed and conducted to verify these views, but the results are not conclusive. Besides, most of the experiments were conducted with English language materials. Katsos conducted one off-line and three on-line text comprehension experiments, in which the previous shortcomings were addressed to a certain extent and the conclusion favored the pragmatic approach. We intend to test the results of Katsos's experiments on Chinese scalar implicature. Four experiments in both off-line and on-line conditions will be conducted to examine the generation and response time of SI for Chinese "yixie" (some) and "quanbu (dou)" (all), in order to find out whether the structural or the pragmatic approach is sustained. The study mainly aims to answer the following questions: (1) Can SI be generated in the upper- and lower-bound contexts, as Katsos confirmed, when Chinese language materials are used? (2) In a neutral context, is SI first generated and then cancelled, as the default view claims, or is it not generated at all when Chinese language materials are used? (3) Is SI generation costless or costly in terms of processing resources? (4) In line with the SI generation process, what conclusion can be drawn about the cognitive processing model of language meaning? Is it a parallel model, a linear model, or a dynamic and hierarchical model?
Based on previous theoretical debates and experimental conflicts, it may be presumed that SI in Chinese is generated in upper-bound contexts. Response times may be faster in the upper-bound context than in the lower-bound context, and SI generation in a neutral context may be the slowest. Finally, it may be concluded that the processing model of SI cannot be verified by either an absolute structural or an absolute pragmatic approach; it is, rather, a dynamic and complex processing mechanism involving the interaction of language forms, ad hoc context, mental context, background knowledge, speakers' interaction, etc.
Keywords: cognitive linguistics, pragmatics, scalar implicature, experimental study, Chinese language
Procedia PDF Downloads 361
347 The Impact of Task Type and Group Size on Dialogue Argumentation between Students
Authors: Nadia Soledad Peralta
Abstract:
Within the framework of socio-cognitive interaction, argumentation is understood as a psychological process that supports and induces reasoning and learning. Most authors emphasize the great potential of argumentation for negotiating contradictions and complex decisions, so argumentation is a focus for researchers who highlight the importance of social and cognitive processes in learning. In the context of social interaction among university students, different types of arguments were analyzed according to group size (dyads and triads) and type of task (reading frequency tables, causal explanation of physical phenomena, decisions regarding moral dilemma situations, and causal explanation of social phenomena). Eighty-nine first-year social sciences students of the National University of Rosario participated. Two groups were formed on the basis of a pre-test that ensured heterogeneity of points of view among participants. Group 1 consisted of 56 participants (working in dyads; 28 dyads in total), and Group 2 of 33 participants (working in triads; 11 triads in total). A quasi-experimental design was used in which the effects of the two variables (group size and type of task) on argumentation were analyzed. Three types of argumentation are described: authentically dialogical argumentative resolutions, individualistic argumentative resolutions, and non-argumentative resolutions. The results indicate that individualistic arguments prevail in dyads: although people express their own arguments, there is no authentic argumentative interaction, and there are accordingly few reciprocal evaluations and counter-arguments. By contrast, authentically dialogical argumentation prevails in triads, showing constant feedback between participants' points of view. It was observed that, in general, the type of task generates specific types of argumentative interactions.
However, it is worth emphasizing that authentically dialogical arguments predominate in the logical tasks, whereas individualistic or pseudo-dialogical ones are more frequent in opinion tasks. Nevertheless, these relationships between task type and argumentative mode are best clarified in an interactive analysis based on group size. Finally, it is important to stress the value of dialogical argumentation in educational domains. The argumentative function not only allows metacognitive reflection on one's own point of view but also allows people to benefit from exchanging points of view in interactive contexts.
Keywords: socio-cognitive interaction, argumentation, university students, group size
Procedia PDF Downloads 83
346 Syngas From Polypropylene Gasification in a Fluidized Bed
Authors: Sergio Rapagnà, Alessandro Antonio Papa, Armando Vitale, Andre Di Carlo
Abstract:
In recent years, the world population has enormously increased its use of plastic products, in particular for transporting and storing consumer goods such as food and beverages. Plastics are widely used in the automotive industry, in the construction of electronic equipment, in clothing and in home furnishings. Over the last 70 years, the annual production of plastic products has increased from 2 million tons to 460 million tons, and about 20% of the latter quantity is mismanaged as waste. The consequence of this mismanagement is the release of plastic waste into the terrestrial and marine environments, which represents a danger to human health and the ecosystem. Recycling all plastics is difficult because they are often made of mixtures of mutually incompatible polymers and contain different additives. The products obtained are always of lower quality, and after two or three recycling cycles they must be either eliminated by thermal treatment to produce heat or disposed of in landfill. An alternative to these current solutions is to obtain a gas mixture rich in H₂, CO and CO₂ suitable for the production of chemicals, with consequent savings of fossil resources. A hydrogen-rich syngas can be obtained by a gasification process in a fluidized bed reactor, with steam as the fluidization medium. The fluidized bed reactor allows the gasification of plastics to be carried out at constant temperature and allows the use of different plastics with different compositions and grain sizes. Furthermore, during gasification the use of steam increases the conversion of the char produced by the initial pyrolysis/devolatilization of the plastic particles.
The bed inventory can be made of particles with catalytic properties, such as olivine, capable of catalysing the steam reforming of the heavy hydrocarbons normally called tars, with a consequent increase in the quantity of gas produced. The plant comprises a fluidized bed reactor made of AISI 310 steel, with an internal diameter of 0.1 m, containing 3 kg of olivine particles as bed inventory. The reactor is externally heated by a furnace up to 1000 °C. The hot product gases exiting the reactor are cooled and then quantified using a mass flow meter. Gas analyzers measure the volumetric composition of H₂, CO, CO₂, CH₄ and NH₃ in real time. At the conference, the results obtained from the continuous gasification of polypropylene (PP) particles in a steam atmosphere at 840-860 °C will be presented.
Keywords: gasification, fluidized bed, hydrogen, olivine, polypropylene
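As a rough illustration of the overall chemistry, the steam gasification of the propylene repeat unit can be written as an idealized global stoichiometry (the real product gas also contains CO₂ and CH₄ through the water-gas shift and methanation reactions, which is why those species are measured by the analyzers):

```latex
\mathrm{C_3H_6 + 3\,H_2O \longrightarrow 3\,CO + 6\,H_2}
\qquad
\text{with the water-gas shift } \mathrm{CO + H_2O \rightleftharpoons CO_2 + H_2}
```

The high H₂ yield per carbon atom in this balance is what makes steam (rather than air or oxygen) gasification attractive for producing a hydrogen-rich syngas.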
Procedia PDF Downloads 27
345 Implications of Measuring the Progress towards Financial Risk Protection Using Varied Survey Instruments: A Case Study of Ghana
Authors: Jemima C. A. Sumboh
Abstract:
Given the urgency of, and consensus on, moving towards Universal Health Coverage (UHC), health financing systems need to be monitored accurately and consistently to provide valuable data for policy and practice. Most indicators for monitoring UHC, particularly catastrophe and impoverishment, are established from the impact of out-of-pocket health payments (OOPHP) on households' living standards, collected through varied household surveys. These surveys, however, differ substantially in methods such as the length of the recall period, the number of items included in the questionnaire, or the framing of questions, potentially influencing the measured level of OOPHP. Using different survey instruments can produce inaccurate, inconsistent, erroneous and misleading estimates of UHC, subsequently leading to wrong policy decisions. Using data from a household budget survey conducted by the Navrongo Health Research Center in Ghana from May 2017 to December 2018, this study explores the potential implications of using surveys with varied levels of disaggregation of OOPHP data for estimates of financial risk protection. The household budget survey, structured around food and non-food expenditure, compared three OOPHP measurement instruments: Version I (existing questions used to measure OOPHP in household budget surveys), Version II (new questions developed by benchmarking the existing Classification of Individual Consumption by Purpose (COICOP) OOPHP questions in household surveys) and Version III (existing questions used to measure OOPHP in health surveys, integrated into household budget surveys; here, the Demographic and Health Survey (DHS) was used). Versions I, II and III contained 11, 44 and 56 health items, respectively, while the choice of recall periods was held constant across versions. The sample sizes for Versions I, II and III were 930, 1032 and 1068 households, respectively.
Financial risk protection will be measured with the catastrophic payment and impoverishment methodologies using Stata 15 and ADePT software for each version. Findings from this study are expected to make valuable contributions to the repository of knowledge on standardizing survey instruments to obtain valid and consistent estimates of financial risk protection.
Keywords: Ghana, household budget surveys, measuring financial risk protection, out-of-pocket health payments, survey instruments, universal health coverage
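The catastrophic-payment methodology mentioned above can be sketched in its simplest form: flag a household when its OOPHP share of total consumption crosses a threshold, then report the flagged share. The 10%-of-total-expenditure threshold used below is one commonly applied definition (as in the SDG 3.8.2 indicator); the function name and sample data are illustrative, not the study's:

```python
def catastrophic_headcount(households, threshold=0.10):
    """Share of households whose out-of-pocket health payments (OOPHP) are at
    least `threshold` of total consumption expenditure.

    `households` is an iterable of (oophp, total_expenditure) pairs.  Other
    thresholds, or capacity-to-pay denominators, are also used in practice.
    """
    households = list(households)
    flagged = sum(1 for oop, total in households
                  if total > 0 and oop / total >= threshold)
    return flagged / len(households)

# Invented illustration: three households; only the second spends >= 10%
# of its total budget on health (200 / 1000 = 20%).
sample = [(50.0, 1000.0), (200.0, 1000.0), (5.0, 100.0)]
share = catastrophic_headcount(sample)
```

Because each survey version captures a different number of health items, the measured OOPHP numerator shifts between versions, and with it this headcount, which is exactly the comparability problem the study targets.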
Procedia PDF Downloads 137
344 Population Pharmacokinetics of Levofloxacin and Moxifloxacin, and the Probability of Target Attainment in Ethiopian Patients with Multi-Drug Resistant Tuberculosis
Authors: Temesgen Sidamo, Prakruti S. Rao, Eleni Akllilu, Workineh Shibeshi, Yumi Park, Yong-Soon Cho, Jae-Gook Shin, Scott K. Heysell, Stellah G. Mpagama, Ephrem Engidawork
Abstract:
The fluoroquinolones (FQs) are used off-label for the treatment of multidrug-resistant tuberculosis (MDR-TB) and are under evaluation for shortening the duration of treatment of drug-susceptible TB in recently prioritized regimens. Within the class, levofloxacin (LFX) and moxifloxacin (MXF) play a substantial role in ensuring successful treatment outcomes. However, sub-therapeutic plasma concentrations of either LFX or MXF may drive unfavorable treatment outcomes. To the best of our knowledge, the pharmacokinetics of LFX and MXF in Ethiopian patients with MDR-TB have not yet been investigated. Therefore, the aim of this study was to develop population pharmacokinetic (PopPK) models of LFX and MXF and assess the percent probability of target attainment (PTA), defined by the ratio of the area under the plasma concentration-time curve over 24 h (AUC0-24) to the in vitro minimum inhibitory concentration (MIC) (AUC0-24/MIC), in Ethiopian MDR-TB patients. Steady-state plasma was collected from 39 MDR-TB patients enrolled in the programmatic treatment course, and drug concentrations were determined using optimized liquid chromatography-tandem mass spectrometry. In addition, the in vitro MICs of the patients' pretreatment clinical isolates were determined. PopPK models and simulations were run at various doses, and PK parameters were estimated. The effect of covariates on the PK parameters and the PTA for maximum mycobacterial kill and resistance prevention was also investigated. LFX and MXF both fit one-compartment models with adjustments. The apparent volume of distribution (V) and clearance (CL) of LFX were influenced by serum creatinine (Scr), whereas the absorption constant (Ka) and V of MXF were influenced by Scr and BMI, respectively.
The PTA for LFX maximal mycobacterial kill at the critical MIC of 0.5 mg/L was 29%, 62%, and 95% with the simulated 750 mg, 1000 mg, and 1500 mg doses, respectively, whereas the PTA for resistance prevention at 1500 mg was only 4.8%, with none of the lower doses achieving this target. At the critical MIC of 0.25 mg/L, there was no difference in the PTA (94.4%) for maximum mycobacterial kill among the simulated doses of MXF (600 mg, 800 mg, and 1000 mg), but the PTA for resistance prevention improved proportionately with dose. Standard LFX and MXF doses may not provide adequate drug exposure. The LFX PopPK model is more predictable for maximum mycobacterial kill, whereas MXF's resistance prevention target attainment increases with dose. Scr and BMI are likely to be important covariates in dose optimization or therapeutic drug monitoring (TDM) studies in Ethiopian patients.
Keywords: population PK, PTA, moxifloxacin, levofloxacin, MDR-TB patients, Ethiopia
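The PTA logic above can be sketched with a simple Monte Carlo simulation. This is a generic illustration, not the study's model: it uses a one-compartment steady-state AUC approximation with normally distributed clearance, and every parameter value below (clearance, variability, the target ratio) is an invented assumption rather than one of the study's estimates:

```python
import random

def simulate_pta(daily_dose_mg, mic_mg_l, cl_mean_l_h, cl_cv, target_ratio,
                 n=5000, seed=42):
    """Monte Carlo probability of target attainment (PTA).

    Draws clearance (CL) for n virtual subjects from a normal distribution,
    computes steady-state AUC0-24 = daily dose / CL (assuming complete oral
    absorption), and returns the fraction attaining AUC0-24/MIC >= target_ratio.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        cl = max(rng.gauss(cl_mean_l_h, cl_cv * cl_mean_l_h), 1e-6)  # L/h, floored
        auc_0_24 = daily_dose_mg / cl                                # mg*h/L
        if auc_0_24 / mic_mg_l >= target_ratio:
            hits += 1
    return hits / n

# Illustrative run: at a fixed MIC and target ratio, PTA rises with dose,
# which is the pattern the simulated 750/1000/1500 mg LFX doses show above.
pta_750 = simulate_pta(750, 0.5, 7.0, 0.35, 146)
pta_1500 = simulate_pta(1500, 0.5, 7.0, 0.35, 146)
```

Real PopPK simulations additionally sample the full parameter distributions (V, Ka, covariate effects such as Scr and BMI) and simulate concentration-time profiles rather than using the dose/CL shortcut.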
Procedia PDF Downloads 120
343 Characterization and Modelling of Groundwater Flow towards a Public Drinking Water Well Field: A Case Study of Ter Kamerenbos Well Field
Authors: Buruk Kitachew Wossenyeleh
Abstract:
Groundwater is the largest freshwater reservoir in the world; like the other reservoirs of the hydrologic cycle, it is a finite resource. This study focused on groundwater modeling of the Ter Kamerenbos well field to understand the groundwater flow system and the impact of different scenarios. The study area covers 68.9 km² in the Brussels Capital Region and lies in two river catchments, those of the Zenne River and the Woluwe Stream. The aquifer system has three layers, but they are treated as a single layer in the modeling because of their hydrogeological properties. The catchment aquifer is replenished by direct recharge from rainfall. Groundwater recharge was determined with the spatially distributed water balance model WetSpass and varies annually from zero to 340 mm; it is used as the top boundary condition for the groundwater model. In the modeling with Processing MODFLOW, constant-head boundary conditions are used at the northern and southern boundaries of the study area, and head-dependent flow boundary conditions at the eastern and western boundaries. The model is calibrated manually and automatically against observed hydraulic heads in 12 observation wells. The model performance evaluation showed a root mean square error of 1.89 m and an NSE of 0.98. The contour map of simulated hydraulic heads indicates the flow direction in the catchment, mainly from the Woluwe to the Zenne catchment. The simulated head in the study area varies from 13 m to 78 m, with the highest heads in the southwest, where the land use is forest. This calibrated model was run for a climate change scenario and a well operation scenario.
Climate change may cause groundwater recharge in 2100 to increase by 43% under the high climate change scenario or decrease by 30% under the low scenario, relative to current conditions. The groundwater head varies from 13 m to 82 m under the high scenario and from 13 m to 76 m under the low scenario. If the pumping discharge is doubled, the head varies from 13 m to 76.5 m; if the pumps are shut down, it varies from 13 m to 79 m. It is concluded that the groundwater model performs satisfactorily, with some limitations, and that its output can be used to understand the aquifer system under steady-state conditions. Finally, some recommendations are made for future use and improvement of the model.
Keywords: Ter Kamerenbos, groundwater modelling, WetSpass, climate change, well operation
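The two performance measures quoted above (RMSE = 1.89 m, NSE = 0.98) are standard goodness-of-fit statistics computed from paired observed and simulated heads; a minimal sketch with generic helper functions (the example heads are invented, not the study's data):

```python
import math

def rmse(observed, simulated):
    """Root mean square error between paired observed and simulated values."""
    n = len(observed)
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n)

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations about
    their mean.  NSE = 1 is a perfect fit; NSE = 0 means the model is no
    better than predicting the observed mean everywhere."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    svar = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / svar

# Invented heads (metres) at three observation wells:
obs = [14.2, 35.0, 77.5]
sim = [13.9, 36.1, 76.8]
```

An NSE close to 1 together with a small RMSE relative to the 13-78 m head range is what justifies calling the calibration satisfactory.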
Procedia PDF Downloads 152
342 A Fine String between Weaving the Text and Patching It: Reading beyond the Hidden Symbols and Antithetical Relationships in the Classical and Modern Arabic Poetry
Authors: Rima Abu Jaber-Bransi, Rawya Jarjoura Burbara
Abstract:
This study reveals the extension and continuity between classical and modern Arabic poetry through an investigation of their ambiguity, symbolism, and antithetical relationships. Its significance lies in its exploration and discovery of a new method of reading classical and modern Arabic poetry. The study deals with Fatimid poetry and discovers a new method of reading it. It also deals with the relationship between the apparent and the hidden meanings of words, focusing on how paradoxical antithetical relationships change the meaning of a whole poem and give it a different dimension through the use of oxymorons. In our unprecedented research on the oxymoron, we found that words in modern Arabic poetry are used in unusual combinations that convey both apparent and hidden meanings. In some cases, the poet introduces an image with a symbol of a certain thing, but the reader soon discovers that the symbol includes its opposite, too. The question is: how does the reader find the hidden harmony in that apparent disharmony? The first and most important conclusion of this study is that Fatimid poetry was written for two types of readers: religious readers who know the religious symbols and the hidden secret meanings behind the words, and ordinary readers who understand only the apparent literal meaning of the words. Consequently, the interpretation of the poem is subject to the type of reading. In Fatimid poetry we found that the hunting journey is a journey of hidden esoteric knowledge; the Horse is al-Naqib, the religious rank of the investigator and missionary; and the Lion is Ali Ibn Abi Talib. The words black and white, day and night, bird, death and murder carry different meanings and indications.
Our study points out the importance of reading certain poems of certain periods in two different ways: the first depends on a doctrinal interpretation that transforms the external, apparent (ẓāher) meanings into internal, hidden, esoteric (bāṭen) ones; the second depends on interpreting the antithetical relationships between words in order to reveal meanings that the poet hid for a reader who participates in the creative process. The second conclusion is that the classical poem employed symbols, oxymora, and antonymous and antithetical forms to create two poetic texts in one mold and form. We can conclude that this study is pioneering in showing the constant paradoxical relationship between the apparent and the hidden meanings in classical and modern Arabic poetry.
Keywords: apparent, symbol, hidden, antithetical, oxymoron, Sufism, Fatimid poetry
Procedia PDF Downloads 262
341 Sustainable Production of Algae through Nutrient Recovery in the Biofuel Conversion Process
Authors: Bagnoud-Velásquez Mariluz, Damergi Eya, Grandjean Dominique, Frédéric Vogel, Ludwig Christian
Abstract:
The sustainability of algae-to-biofuel processes is seriously affected by the energy-intensive production of fertilizers. Large amounts of nitrogen and phosphorus are required for large-scale production, in many cases resulting in a negative impact on limited mineral resources. Meeting the algal bioenergy opportunity therefore makes it crucial to promote processes that recover nutrients and/or make use of renewable sources, including waste. Hydrothermal (HT) conversion is a promising and suitable technology for generating biofuels from microalgae. Besides the fact that water is used as a "green" reactant and solvent and that no biomass drying is required, the technology offers great potential for nutrient recycling. This study evaluated the possibility of treating the HT aqueous effluent by growing microalgae in it while producing renewable algal biomass. As already demonstrated in previous works by the authors, the HT aqueous product, besides containing N, P and other important nutrients, presents a small fraction of rarely studied organic compounds. Therefore, heteroaromatic compounds extracted from the HT effluent were the target of the present research; they were profiled using GC-MS and LC-MS-MS. The results indicate the presence of cyclic amides, piperazinediones, amines and their derivatives. The most prominent nitrogenous organic compounds (NOCs) in the extracts, namely 2-pyrrolidinone and β-phenylethylamine (β-PEA), were carefully examined for their effect on microalgae. These two substances were prepared at three different concentrations (10, 50 and 150 ppm). The toxicity bioassay used three different microalgae strains: Phaeodactylum tricornutum, Chlorella sorokiniana and Scenedesmus vacuolatus. The confirmed IC50 was in all cases ca. 75 ppm.
Experimental conditions were set up for the growth of microalgae in the aqueous phase by adjusting the nitrogen concentration (the key nutrient for algae) to match that of a known commercial medium. The concentrations of specific NOCs were lowered to 8.5 mg/L 2-pyrrolidinone, 1 mg/L δ-valerolactam and 0.5 mg/L β-PEA. Growth in the diluted HT solution remained constant, with no evidence of inhibition. An additional ongoing test is addressing the possibility of applying an integrated water clean-up step making use of the existing hydrothermal catalytic facility.
Keywords: hydrothermal process, microalgae, nitrogenous organic compounds, nutrient recovery, renewable biomass
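An IC50 from a small dose-response series like the one above (10, 50 and 150 ppm) is commonly estimated by interpolating the 50% crossing on a log-concentration scale. The sketch below is illustrative only: the percent-inhibition readings are hypothetical, since the abstract reports only the test concentrations and the resulting IC50, not the raw bioassay values.

```python
import math

def ic50_log_interp(concs_ppm, inhibition_pct):
    """Estimate IC50 by linear interpolation of % inhibition
    versus log10(concentration)."""
    pairs = list(zip(concs_ppm, inhibition_pct))
    for (c_lo, i_lo), (c_hi, i_hi) in zip(pairs, pairs[1:]):
        if i_lo <= 50.0 <= i_hi:  # the 50% crossing lies in this interval
            frac = (50.0 - i_lo) / (i_hi - i_lo)
            log_c = math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo))
            return 10 ** log_c
    raise ValueError("50% inhibition not bracketed by the data")

# Hypothetical inhibition values for illustration only
concs = [10.0, 50.0, 150.0]      # ppm, as tested in the bioassay
inhibition = [12.0, 38.0, 81.0]  # % growth inhibition (assumed)

print(f"IC50 ~ {ic50_log_interp(concs, inhibition):.1f} ppm")
```

With more dose levels, a four-parameter logistic fit would be the usual refinement, but log-linear interpolation is adequate for a three-point screen.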
Procedia PDF Downloads 410
340 Fabrication of Al/Al2O3 Functionally Graded Composites via Centrifugal Method by Using a Polymeric Suspension
Authors: Majid Eslami
Abstract:
Functionally graded materials (FGMs) exhibit heterogeneous microstructures in which the composition and properties change gradually in specified directions. The common type of FGM consists of a metal in which ceramic particles are distributed with a graded concentration. There are many processing routes for FGMs; an important group of these methods is casting techniques (gravity or centrifugal). However, the main problem of casting a molten metal slurry with dispersed ceramic particles is a destructive chemical reaction between the two phases, which deteriorates the properties of the material. In order to overcome this problem, in the present investigation a suspension of 6061 aluminum and alumina powders in a liquid polymer was used as the starting material and subjected to centrifugal force to make FGMs. The size ranges of these powders were 45-63 and 106-125 μm. The volume percent of alumina in the Al/Al2O3 powder mixture was in the range of 5 to 20%. PMMA (Plexiglas) in different concentrations (20-50 g/L) was dissolved in toluene and used as the suspension liquid. The glass mold containing the suspension of Al/Al2O3 powders in this liquid was rotated at 1700 rpm for different times (4-40 min), while the arm length was kept constant (10 cm) for all the experiments. After curing the polymer, burning out the binder, cold pressing and sintering, cylindrical samples (φ = 22 mm, h = 20 mm) were produced. The density of the samples before and after sintering was quantified by the Archimedes method. The results indicated that, using alumina and aluminum powders of the same size, an FGM sample can be produced at rotation times exceeding 7 min. However, using coarse alumina and fine aluminum powders, the sample exhibits a stepped concentration profile. On the other hand, using fine alumina and coarse aluminum results in a relatively uniform concentration of Al2O3 along the sample height.
These results are attributed to the effects of the size and density of the different powders on the centrifugal force induced on the particles during rotation. The PMMA concentration and the vol.% of alumina in the suspension did not have any considerable effect on the distribution of alumina particles in the samples. The hardness profiles along the height of the samples were affected by both the alumina vol.% and the porosity content: the presence of alumina particles increased the hardness, while increased porosity reduced it. Therefore, the hardness values did not show the expected gradient within the same sample. Sintering resulted in decreased porosity for all the samples investigated.
Keywords: FGM, powder metallurgy, centrifugal method, polymeric suspension
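The size and density effects invoked above can be illustrated with a Stokes-law estimate of particle migration speed in a centrifugal field, v = (ρp − ρf)·d²·ω²·r / (18·μ). The sketch below is a minimal illustration only: the suspension viscosity is assumed (a toluene-like value; the actual PMMA/toluene solution is more viscous), Stokes flow is assumed to hold, and the particle diameters are the midpoints of the sieve fractions in the abstract.

```python
import math

def migration_speed(d_m, rho_p, rho_f, omega, radius, mu):
    """Stokes-law particle migration speed (m/s) in a centrifugal
    field of acceleration omega**2 * radius."""
    return (rho_p - rho_f) * d_m**2 * omega**2 * radius / (18.0 * mu)

OMEGA = 1700 * 2 * math.pi / 60   # 1700 rpm -> rad/s (from the abstract)
R = 0.10                          # arm length, m (from the abstract)
MU = 0.6e-3                       # Pa*s, assumed toluene-like viscosity
RHO_F = 867.0                     # kg/m^3, toluene

# Midpoint diameters of the two sieve fractions used
d_fine, d_coarse = 54e-6, 115.5e-6          # m
rho_alumina, rho_aluminum = 3950.0, 2700.0  # kg/m^3, nominal densities

v_coarse_alumina = migration_speed(d_coarse, rho_alumina, RHO_F, OMEGA, R, MU)
v_fine_alumina = migration_speed(d_fine, rho_alumina, RHO_F, OMEGA, R, MU)
v_coarse_aluminum = migration_speed(d_coarse, rho_aluminum, RHO_F, OMEGA, R, MU)

# Coarser and denser particles migrate faster: pairing a coarse, dense
# ceramic with a fine, light metal exaggerates segregation, while the
# reverse pairing evens out the concentration profile.
print(v_coarse_alumina > v_fine_alumina)
print(v_coarse_alumina > v_coarse_aluminum)
```

The quadratic dependence on diameter is why the sieve fraction dominates over the roughly 1.5× density ratio between alumina and aluminum.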
Procedia PDF Downloads 211
339 Correlation Between the Toxicity Grade of the Adverse Effects in the Course of the Immunotherapy of Lung Cancer and Efficiency of the Treatment in Anti-PD-L1 and Anti-PD-1 Drugs - Own Clinical Experience
Authors: Anna Rudzińska, Katarzyna Szklener, Pola Juchaniuk, Anna Rodzajweska, Katarzyna Machulska-Ciuraj, Monika Rychlik-Grabowska, Michał Łoziński, Agnieszka Kolak-Bruks, Sławomir Mańdziuk
Abstract:
Introduction: Immune checkpoint inhibition (ICI) belongs to the modern forms of anti-cancer treatment. Due to the constant development and continuous research in the field of ICI, many aspects of the treatment are yet to be discovered. One of the less researched aspects of ICI treatment is the influence of adverse effects on the treatment success rate. It is suspected that adverse events in the course of ICI treatment indicate a better response rate and correlate with longer progression-free survival. Methodology: The research was conducted using the documentation of the Department of Clinical Oncology and Chemotherapy. Data of patients with a lung cancer diagnosis who were treated between 2019-2022 and received ICI treatment were analyzed. Results: Of the 133 patients whose data were analyzed, the vast majority were diagnosed with non-small cell lung cancer. The majority of the patients did not experience adverse effects. Most adverse effects reported were classified as grade 1 or grade 2 according to the CTCAE classification, and most involved skin, thyroid and liver toxicity. Statistical significance was found for the association of adverse-effect incidence with overall survival (OS) and progression-free survival (PFS) (p = 0.0263) and for the association of the time of toxicity onset with OS and PFS (p < 0.001). The number of toxicity sites was statistically significant for prolonged PFS (p = 0.0315). The highest OS was noted in the group presenting grade 1 and grade 2 adverse effects. Conclusions: The obtained results confirm prolonged OS and PFS in patients who experienced adverse effects, mostly in the group presenting mild to intermediate (grade 1 and grade 2) adverse effects and late toxicity onset. Simultaneously, our results suggest a correlation between the treatment response rate and both the toxicity grade of the adverse effects and the time of toxicity onset.
Similar results were obtained in several comparable studies, with a proven tendency toward better survival in mild and moderate toxicity; meanwhile, other studies in the area suggested an advantage in patients with any toxicity regardless of the grade. The contradictory results strongly suggest the need for further research on this topic, with a focus on additional factors influencing the course of the treatment.
Keywords: adverse effects, immunotherapy, lung cancer, PD-1/PD-L1 inhibitors
Procedia PDF Downloads 91
338 Production of Rhamnolipids from Different Resources and Estimating the Kinetic Parameters for Bioreactor Design
Authors: Olfat A. Mohamed
Abstract:
Rhamnolipid biosurfactants have distinct properties that give them importance in many industrial applications, especially promising future applications in the cosmetic and pharmaceutical industries. These applications have encouraged the search for diverse and renewable resources to control the cost of production. The experimental results were then applied to find a suitable mathematical model for obtaining the design criteria of a batch bioreactor. This research aims to produce rhamnolipids from different oily wastewater sources, such as petroleum crude oil (PO) and vegetable oil (VO), using Pseudomonas aeruginosa ATCC 9027. Different concentrations of PO and VO were added to the broth separately, at (0.5, 1, 1.5, 2, 2.5% v/v) and (2, 4, 6, 8 and 10% v/v), respectively. The effect of the initial concentration of oil residues, and of the addition of glycerol and palmitic acid as inducers, on rhamnolipid production and on the surface tension of the broth was investigated. It was found that 2% PO and 6% VO were the best initial substrate concentrations for the production of rhamnolipids (2.71 and 5.01 g rhamnolipid/L, respectively). Addition of glycerol (10-20% v glycerol/v PO) to the 2% PO fermentation broth increased rhamnolipid production about 1.8-2 fold. However, the addition of palmitic acid (5 and 10 g/L) to fermentation broth containing 6% VO hardly enhanced the production rate. The experimental data for 2% initial PO were used to estimate the various kinetic parameters.
The following results were obtained: maximum reaction rate Vmax = 0.06417 g/(L·h); yield of cell weight per unit weight of substrate utilized Yx/s = 0.324 g Cx/g Cs; maximum specific growth rate μmax = 0.05791 hr⁻¹; yield of rhamnolipid weight per unit weight of substrate utilized Yp/s = 0.2571 g Cp/g Cs; maintenance coefficient Ms = 0.002419; Michaelis-Menten constant Km = 6.1237 gmol/L; endogenous decay coefficient Kd = 0.002375 hr⁻¹. Predictive parameters and advanced mathematical models were applied to evaluate the batch bioreactor time. The results were 123.37, 129 and 139.3 hours with respect to microbial biomass, substrate and product concentration, respectively, compared with an experimental batch time of 120 hours in all cases. The proposed mathematical models are compatible with the laboratory results and can, therefore, be considered as tools for describing the actual system.
Keywords: batch bioreactor design, glycerol, kinetic parameters, petroleum crude oil, Pseudomonas aeruginosa, rhamnolipid biosurfactants, vegetable oil
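With the reported constants, a batch time of this order can be reproduced by integrating the standard Monod growth and substrate-consumption equations. The sketch below is a minimal illustration only: the initial biomass and substrate concentrations are assumed (the abstract does not state them), the reported Km is used as the Monod saturation constant Ks, and a simple forward-Euler step stands in for the authors' actual model.

```python
# Monod batch kinetics: dX/dt = (mu - Kd) * X, with mu = mu_max * S / (Ks + S)
#                       dS/dt = -(mu / Yxs + Ms) * X
MU_MAX = 0.05791   # 1/h, maximum specific growth rate (from the abstract)
KS = 6.1237        # g/L, saturation constant (reported as Km)
YXS = 0.324        # g biomass / g substrate
MS = 0.002419      # maintenance coefficient, assumed g substrate/(g biomass*h)
KD = 0.002375      # 1/h, endogenous decay coefficient

def batch_time(X0, S0, S_end, dt=0.01, t_max=500.0):
    """Forward-Euler integration until substrate falls to S_end.
    Returns (time in h, biomass g/L, substrate g/L)."""
    t, X, S = 0.0, X0, S0
    while S > S_end and t < t_max:
        mu = MU_MAX * S / (KS + S)
        X, S = X + (mu - KD) * X * dt, max(S - (mu / YXS + MS) * X * dt, 0.0)
        t += dt
    return t, X, S

# Assumed initial conditions for illustration
t, X, S = batch_time(X0=0.1, S0=20.0, S_end=1.0)
print(f"t = {t:.1f} h, X = {X:.2f} g/L, S = {S:.2f} g/L")
```

With these assumed inputs the integration lands in the hundred-hour range, the same order as the 120-139 h batch times quoted above; the exact figure depends entirely on the chosen X0 and S0.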
Procedia PDF Downloads 131
337 Breast Cancer Incidence Estimation in Castilla-La Mancha (CLM) from Mortality and Survival Data
Authors: C. Romero, R. Ortega, P. Sánchez-Camacho, P. Aguilar, V. Segur, J. Ruiz, G. Gutiérrez
Abstract:
Introduction: Breast cancer is a leading cause of death in CLM (2.8% of all deaths in women and 13.8% of deaths from tumors in women). It is the tumor with the highest incidence in the CLM region, accounting for 26.1% of all tumors excluding non-melanoma skin cancer (Cancer Incidence in Five Continents, Volume X, IARC). Cancer registries are a good information source for estimating cancer incidence; however, their data are usually available with a lag that makes them difficult for health managers to use. By contrast, mortality and survival statistics have less delay. In order to serve resource planning and respond to this problem, a method is presented to estimate incidence from mortality and survival data. Objectives: To estimate the incidence of breast cancer by age group in CLM in the period 1991-2013, and to compare the data obtained from the model with current incidence data. Sources: annual number of women by single year of age (National Statistics Institute); annual number of deaths from all causes and from breast cancer (Mortality Registry of CLM); breast cancer relative survival probability (EUROCARE, Spanish registries data). Methods: A parametric Weibull survival model was fitted to the EUROCARE data. From the survival model and the mortality and population data, the Mortality and Incidence Analysis MODel (MIAMOD) regression model was fitted to estimate incidence by age (1991-2013). Results: The resulting model is Ix,t = Logit[const + age1·x + age2·x² + coh1·(t − x) + coh2·(t − x)²], where Ix,t is the incidence at age x in year t; the parameter estimates are const (constant term) = -7.03, age1 = 3.31, age2 = -1.10, coh1 = 0.61 and coh2 = -0.12. It is estimated that 662 cases of breast cancer were diagnosed in CLM in 1991 (81.51 per 100,000 women), and an estimated 1,152 cases (112.41 per 100,000 women) were diagnosed in 2013, representing an increase of 40.7% in the crude incidence rate (1.9% per year).
The average annual increases in incidence by age were 2.07% in women aged 25-44 years, 1.01% (45-54 years), 1.11% (55-64 years) and 1.24% (65-74 years). The cancer registries in Spain that send data to the IARC reported an average annual incidence rate of 98.6 cases per 100,000 women for 2003-2007; our model yields an incidence of 100.7 cases per 100,000 women. Conclusions: A sharp and steady increase in the incidence of breast cancer is observed over the period 1991-2013. The increase was seen in all the age groups considered, although it seems more pronounced in young women (25-44 years). With this method, a good estimate of the incidence can be obtained.
Keywords: breast cancer, incidence, cancer registries, Castilla-La Mancha
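Given the fitted coefficients, the age-cohort incidence curve is evaluated through the inverse-logit link. A minimal sketch, assuming (as MIAMOD-style fits typically do) that age and cohort enter as scaled covariates, so the particular x and t values below are illustrative rather than the authors' actual scaling:

```python
import math

# Fitted MIAMOD coefficients reported in the abstract
CONST, AGE1, AGE2, COH1, COH2 = -7.03, 3.31, -1.10, 0.61, -0.12

def incidence_prob(x, t):
    """Inverse logit of the linear predictor; multiplied by 100,000
    this gives a rate per 100,000 women. x and t are assumed to be
    scaled age and calendar-period covariates."""
    eta = (CONST + AGE1 * x + AGE2 * x**2
           + COH1 * (t - x) + COH2 * (t - x) ** 2)
    return 1.0 / (1.0 + math.exp(-eta))

# Illustrative evaluation on a grid of scaled covariate values
for x in (0.4, 0.6, 0.8):
    p = incidence_prob(x, t=1.0)
    print(f"x = {x:.1f}: {p * 100_000:.1f} per 100,000")
```

Because the linear predictor is quadratic in both x and (t − x), the model captures the concave age profile and the cohort-driven drift that produce the rising rates reported above.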
Procedia PDF Downloads 311
336 Development of Mechanisms of Value Creation and Risk Management Organization in the Conditions of Transformation of the Economy of Russia
Authors: Mikhail V. Khachaturyan, Inga A. Koryagina, Eugenia V. Klicheva
Abstract:
In modern conditions, scientific treatment of the problems of developing mechanisms of value creation and risk management acquires special relevance. The formation of economic knowledge has resulted in constant analysis of consumer behavior by all players in national and world markets. The development of effective mechanisms for demand analysis, crucial for the consumer-facing characteristics of future production, and for managing the risks connected with the development of this production, is the main objective of control systems in modern conditions. The modern period of economic development is characterized by a high level of globalization of business and rigidity of competition. At the same time, a considerable share of the cost of new products and services is of a non-material, intellectual nature. In Russia, small innovative firms are currently developing most successfully. Such firms, through their unique technologies and new approaches to process management, which form the basis of their intellectual capital, can show flexibility and succeed in the market. As a rule, such enterprises should have a highly flexible structure, excluding rigid schemes of subordination and demanding essentially new incentives for including personnel in innovative activity. Such structures, as well as the new approach to management, can be constructed on the basis of value-oriented management, which is directed at gradually changing the consciousness of personnel and forming groups of adherents engaged in solving shared innovative tasks. Over time, value changes can gradually capture not only the staff of the innovative firm, but also the structure of its corporate partners. The introduction of new technologies is a significant factor contributing to the development of new value imperatives and accelerating change in the organization's value system.
This relates to the fact that new technologies change the internal environment of the organization in such a way that the old system of values becomes inefficient in the new conditions. The introduction of new technologies often demands changes in the structure of employee interaction and training in new principles of work. During the introduction of new technologies and the accompanying change in the value system, the structure of the management of the organization's values changes as well. This is due to the need to attract more staff to justify and consolidate the new value system and to bring their views into the motivational potential of the new value system of the organization.
Keywords: value, risk, creation, problems, organization
Procedia PDF Downloads 284
335 Designing Electrically Pumped Photonic Crystal Surface Emitting Lasers Based on a Honeycomb Nanowire Pattern
Authors: Balthazar Temu, Zhao Yan, Bogdan-Petrin Ratiu, Sang Soon Oh, Qiang Li
Abstract:
Photonic crystal surface emitting lasers (PCSELs) have recently become an area of active research because of the advantages they have over edge-emitting lasers and vertical cavity surface emitting lasers (VCSELs). PCSELs can emit laser beams with high power (from a few milliwatts to watts or even tens of watts) which scales with the emission area, while maintaining single-mode operation even at large emission areas. Most PCSELs reported in the literature are air-hole based, with only a few demonstrations of nanowire-based PCSELs. We previously reported an optically pumped, nanowire-based PCSEL operating in the O band using a honeycomb lattice. Nanowire-based PCSELs have the advantage of being able to grow on silicon platforms without threading dislocations. It is desirable to extend their operating wavelength to the C band to open up more applications, including eye-safe sensing, lidar and long-haul optical communications. In this work, we first analyze how the lattice constant, nanowire diameter, nanowire height and side length of the hexagon in the honeycomb pattern can be changed to increase the operating wavelength of honeycomb-based PCSELs to the C band. Then, as a step toward an electrically pumped device, we present finite-difference time-domain (FDTD) simulation results with metals on the nanowires. The results for different metals on the nanowire are presented in order to choose the metal which gives the device the best quality factor. The metals under consideration are those which form good ohmic contacts with p-type doped InGaAs, with low contact resistivity and a decent sticking coefficient to the semiconductor: tungsten, titanium, palladium and platinum. Using the chosen metal, we demonstrate the impact of the metal thickness on the quality factor of the device for a given nanowire height.
We also investigate how the height of the nanowire affects the quality factor for a fixed metal thickness. Finally, the main steps in fabricating a practical device are discussed.
Keywords: designing nanowire PCSEL, designing PCSEL on silicon substrates, low threshold nanowire laser, simulation of photonic crystal lasers
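The role of the lattice constant in moving the operating wavelength from the O band to the C band can be sketched with the band-edge condition of a Γ-point PCSEL, where the in-plane Bragg condition gives λ ≈ n_eff · a. The sketch below is a first-order estimate only: the effective index value is assumed, not taken from this work, and a real design requires a full photonic band-structure or FDTD calculation of the honeycomb lattice.

```python
def lattice_constant_for(wavelength_nm, n_eff):
    """First-order Gamma-point band-edge estimate: a = lambda / n_eff."""
    return wavelength_nm / n_eff

N_EFF = 2.6  # assumed effective index of the nanowire photonic-crystal slab

a_o_band = lattice_constant_for(1310.0, N_EFF)  # O-band design point
a_c_band = lattice_constant_for(1550.0, N_EFF)  # C-band design point

print(f"O band: a ~ {a_o_band:.0f} nm, C band: a ~ {a_c_band:.0f} nm")
# Moving from the O band to the C band stretches the lattice constant by
# the wavelength ratio 1550/1310 ~ 1.18, all else (n_eff) being equal.
```

In practice the nanowire diameter, height and hexagon side length all shift n_eff as well, which is why the abstract treats them as independent design knobs alongside the lattice constant.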
Procedia PDF Downloads 17