Search results for: growth behavior
628 Low-Temperature Poly-Si Nanowire Junctionless Thin Film Transistors with Nickel Silicide
Authors: Yu-Hsien Lin, Yu-Ru Lin, Yung-Chun Wu
Abstract:
This work demonstrates ultra-thin poly-Si (polycrystalline silicon) nanowire junctionless thin-film transistors (NW JL-TFTs) with nickel silicide contacts. A two-step annealing process is used to form an ultra-thin, uniform, low-sheet-resistance (Rs) Ni silicide film. The NW JL-TFT with nickel silicide contacts exhibits good electrical properties, including a high on/off current ratio (>10⁷), a subthreshold slope of 186 mV/dec., and low parasitic resistance. In addition, this work compares the electrical characteristics of NW JL-TFTs with nickel silicide and non-silicide contacts. Nickel silicide techniques are widely used in high-performance devices because source/drain sheet resistance becomes an issue as devices scale down. The self-aligned silicide (salicide) technique is therefore employed to reduce the series resistance of the device. Nickel silicide has several advantages, including a low-temperature process, low silicon consumption, no bridging failure, smaller mechanical stress, and smaller contact resistance. The junctionless thin-film transistor (JL-TFT) is fabricated simply by heavily doping the channel and source/drain (S/D) regions simultaneously. Owing to this special doping profile, the JL-TFT has advantages such as a lower thermal budget, which makes it easier to integrate with high-k/metal-gate stacks than conventional MOSFETs (Metal Oxide Semiconductor Field-Effect Transistors), a longer effective channel length than conventional MOSFETs, and avoidance of complicated source/drain engineering. To solve the turn-off problem of the JL-TFT, an ultra-thin body (UTB) structure is needed to fully deplete the channel region in the off-state. On the other hand, the drive current (Iᴅ) declines as transistor features are scaled. Therefore, this work demonstrates ultra-thin poly-Si nanowire junctionless thin-film transistors with nickel silicide contacts.
This work investigates the low-temperature formation of a nickel silicide layer by physical vapor deposition (PVD) of a 15 nm Ni layer on the poly-Si substrate. Notably, a two-step annealing process is used to form an ultra-thin, uniform, low-sheet-resistance (Rs) Ni silicide film. The first annealing step promoted Ni diffusion through a thin interfacial amorphous layer; the unreacted metal was then lifted off. The second annealing step lowered the sheet resistance and firmly merged the silicide phase. The ultra-thin poly-Si nanowire junctionless thin-film transistor (NW JL-TFT) with nickel silicide contacts is demonstrated, revealing a high on/off current ratio (>10⁷), a subthreshold slope of 186 mV/dec., and low parasitic resistance. In the silicide film analysis, the second annealing step was applied to form a silicide film with lower sheet resistance and a firmly merged phase. In short, the NW JL-TFT with nickel silicide contacts exhibits competitive short-channel behavior and improved drive current.
Keywords: poly-Si, nanowire, junctionless, thin-film transistors, nickel silicide
Procedia PDF Downloads 237
627 Predictors of Motor and Cognitive Domains of Functional Performance after Rehabilitation of Individuals with Acute Stroke
Authors: A. F. Jaber, E. Dean, M. Liu, J. He, D. Sabata, J. Radel
Abstract:
Background: Stroke is a serious health care concern and a major cause of disability in the United States. This condition impacts the individual’s functional ability to perform daily activities. Predicting the functional performance of people with stroke assists health care professionals in optimizing the delivery of health services to the affected individuals. The purpose of this study was to identify significant predictors of Motor FIM and Cognitive FIM subscores among individuals with stroke after discharge from inpatient rehabilitation (typically 4-6 weeks after stroke onset). A second purpose was to explore the relation among personal characteristics, health status, and functional performance of daily activities within 2 weeks of stroke onset. Methods: This study used a retrospective chart review to conduct a secondary analysis of data obtained from the Healthcare Enterprise Repository for Ontological Narration (HERON) database. The HERON database integrates de-identified clinical data from seven different regional sources, including the hospital electronic medical record systems of the University of Kansas Health System. The initial HERON data extract encompassed 1192 records, and the final sample consisted of 207 participants who were mostly white (74%) males (55%) with a diagnosis of ischemic stroke (77%). The outcome measures collected from HERON included performance scores on the National Institutes of Health Stroke Scale (NIHSS), the Glasgow Coma Scale (GCS), and the Functional Independence Measure (FIM). The data analysis plan included descriptive statistics, Pearson correlation analysis, and stepwise regression analysis. Results: Significant predictors of discharge Motor FIM subscores included age, baseline Motor FIM subscores, discharge NIHSS scores, and comorbid electrolyte disorder (R² = 0.57, p < 0.026).
Significant predictors of discharge Cognitive FIM subscores were age, baseline Cognitive FIM subscores, client cooperative behavior, comorbid obesity, and the total number of comorbidities (R² = 0.67, p < 0.020). Functional performance on admission was significantly associated with age (p < 0.01), stroke severity (p < 0.01), and length of hospital stay (p < 0.05). Conclusions: Our findings show that younger age, good motor and cognitive abilities on admission, mild stroke severity, fewer comorbidities, and a positive client attitude all predict favorable functional outcomes after inpatient stroke rehabilitation. This study provides health care professionals with evidence to evaluate predictors of favorable functional outcomes early in stroke rehabilitation, to tailor individualized interventions based on their clients’ anticipated prognosis, and to educate clients about the benefits of making lifestyle changes to improve their anticipated rate of functional recovery.
Keywords: functional performance, predictors, stroke, recovery
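The stepwise regression workflow described in the abstract can be sketched as a greedy forward-selection loop. This is an illustrative reconstruction on synthetic data, not the study's HERON dataset; the predictor names and the R² stopping threshold are assumptions made for the example:

```python
import numpy as np

def forward_stepwise(X, y, names, r2_gain=0.01):
    """Greedy forward selection: repeatedly add the predictor that most
    improves R², stopping when the gain falls below r2_gain."""
    n = len(y)
    selected, remaining = [], list(range(X.shape[1]))
    best_r2 = 0.0
    total_ss = (y - y.mean()) @ (y - y.mean())
    while remaining:
        gains = []
        for j in remaining:
            A = np.column_stack([np.ones(n), X[:, selected + [j]]])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ coef
            gains.append((1 - resid @ resid / total_ss, j))
        r2, j = max(gains)
        if r2 - best_r2 < r2_gain:
            break
        best_r2, selected = r2, selected + [j]
        remaining.remove(j)
    return [names[j] for j in selected], best_r2

# Synthetic illustration: discharge score driven by baseline score and age,
# plus one irrelevant predictor that the selection should reject.
rng = np.random.default_rng(0)
n = 200
age = rng.uniform(40, 90, n)
baseline = rng.uniform(20, 80, n)
noise_var = rng.normal(size=n)
y = 0.8 * baseline - 0.3 * age + rng.normal(scale=3, size=n)
X = np.column_stack([age, baseline, noise_var])
chosen, r2 = forward_stepwise(X, y, ["age", "baseline_motor_fim", "noise"])
```

On this synthetic data, the loop keeps the two genuine predictors and drops the noise column, mirroring how the reported models retained only a handful of significant predictors.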
Procedia PDF Downloads 144
626 Telogen Effluvium: A Modern Hair Loss Concern and the Interventional Strategies
Authors: Chettyparambil Lalchand Thejalakshmi, Sonal Sabu Edattukaran
Abstract:
Hair loss is one of the main issues that contemporary society is dealing with. It can be attributed to a wide range of factors, ranging from one's genetic composition to the anxiety we experience on a daily basis. Telogen effluvium (TE) is a condition that causes temporary hair loss after a stressor shocks the body and causes the hair follicles to rest temporarily, leading to shedding. Most frequently, women are the ones who raise these concerns. Extreme illness or trauma, an emotional or important life event, rapid weight loss and crash dieting, a severe scalp skin problem, a new medication, or ceasing hormone therapy are examples of potential causes. Men frequently do not notice hair thinning over time, but shedding in women with long hair is easily noticed, which can occasionally result in bias because women tend to be more concerned with aesthetics and the beauty standards of society, and frequently present with these concerns. The woman, who formerly possessed a full head of hair, is worried about the hair loss from her scalp. Several cases of hair loss are reported every day, and telogen effluvium is said to be the most prevalent of them all, without any hereditary risk factors. While the patient has a loss in hair volume, baldness is not the result of this problem. The rapidly growing dermatology and aesthetic medicine field has found this problem to be the most common and also the easiest to treat, since it is feasible for these patients to regrow their hair, unlike those with scarring alopecia, in which the follicle itself is damaged and non-viable. Telogen effluvium comes in two forms: acute and chronic. Acute TE occurs in all age groups, with hair loss lasting less than three months, while chronic TE is more common in those between the ages of 30 and 60, with hair loss lasting more than six months. Both kinds occur across all age groups, regardless of this predominance.
It takes between three and six months for the lost hair to grow back, although this condition is readily reversed by eliminating stressors. After shedding their hair, patients frequently describe noticeable fringes on the forehead. Current medical treatments for this condition include topical corticosteroids, systemic corticosteroids, minoxidil and finasteride, and CNDPA (caffeine, niacinamide, panthenol, dimethicone, and an acrylate polymer). Individual terminal hair growth was increased by 10% as a result of the novel CNDPA intervention. Botulinum toxin A, scalp micro-needling, platelet-rich plasma (PRP) therapy, and sessions of multivitamin mesotherapy injections are some recently refined techniques with partially or completely reversible hair loss. Supplements such as Nutrafol and biotin have also been shown to produce effective outcomes. There is virtually no evidence to support the claim that applying sulfur-rich ingredients to the scalp, such as onion juice, can help TE patients' hair regenerate.
Keywords: dermatology, telogen effluvium, hair loss, modern hair loss treatments
Procedia PDF Downloads 90
625 Empowering Leaders: Strategies for Effective Management in a Changing World
Authors: Shahid Ali
Abstract:
Leadership and management are essential components of running successful organizations. Both concepts are closely related but serve different purposes in the overall management of a company. Leadership focuses on inspiring and motivating employees towards a common goal, while management involves coordinating and directing resources to achieve organizational objectives efficiently. Objectives of Leadership and Management: Inspiring and motivating employees: A key objective of leadership is to inspire and motivate employees to work towards achieving the organization’s goals. Effective leaders create a vision that employees can align with and provide the necessary motivation to drive performance. Setting goals and objectives: Both leadership and management play a crucial role in setting goals and objectives for the organization. Leaders create a vision for the future, while managers develop plans to achieve specific objectives within the given timeframe. Implementing strategies: Leaders come up with innovative strategies to drive the organization forward, while managers are responsible for implementing these strategies effectively. Together, leadership and management ensure that the organization’s plans are executed efficiently. Contributions of Leadership and Management: Employee Engagement: Effective leadership and management can increase employee engagement and satisfaction. When employees feel motivated and inspired by their leaders, they are more likely to be engaged in their work and contribute to the organization’s success. Organizational Success: Good leadership and management are essential for navigating the challenges and changes that organizations face. By setting clear goals, inspiring employees, and making strategic decisions, leaders and managers can drive organizational success. Talent Development: Leaders and managers are responsible for identifying and developing talent within the organization. 
By providing feedback, training, and coaching, they can help employees reach their full potential and contribute effectively to the organization. Research Type: The research on leadership and management is typically quantitative and qualitative in nature. Quantitative research involves the collection and analysis of numerical data to understand the impact of leadership and management practices on organizational outcomes. This type of research often uses surveys, questionnaires, and statistical analysis to measure variables such as employee satisfaction, performance, and organizational success. Qualitative research, on the other hand, involves exploring the subjective experiences and perspectives of individuals related to leadership and management. This type of research may include interviews, observations, and case studies to gain a deeper understanding of how leadership and management practices influence organizational behavior and outcomes. In conclusion, leadership and management play a critical role in the success of organizations. Through effective leadership and management practices, organizations can inspire and motivate employees, set goals, and implement strategies to achieve their objectives. Research on leadership and management helps to understand the impact of these practices on organizational outcomes and provides valuable insights for improving leadership and management practices in the future.
Keywords: empowering, leadership, management, adaptability
Procedia PDF Downloads 50
624 Investigating the Key Success Factors of Supplier Collaboration Governance in the Aerospace Industry
Authors: Maria Jose Granero Paris, Ana Isabel Jimenez Zarco, Agustin Pablo Alvarez Herranz
Abstract:
In the industrial sector, collaboration with suppliers is key to the development of process innovations. Access to resources and expertise not available within the business, a cost advantage, or a reduction in the time needed to innovate are some of the benefits associated with the process. However, the success of this collaborative process is compromised when clear rules governing the relationship have not been established from the beginning. Abundant studies in the field of innovation emphasize the strategic importance of the concept of “governance”. Despite this, few papers have analyzed how the governance of the relationship must be designed and managed to ensure the success of the collaboration. The lack of literature in this area reflects the wide diversity of contexts in which collaborative innovation takes place. Thus, in sectors such as the car industry, there is a strong collaborative tradition between manufacturers and the suppliers forming part of the value chain. In this case, it is common to establish mechanisms and procedures that fix formal and clear objectives to regulate the relationship and establish the rights and obligations of each of the parties involved. By contrast, in other sectors, collaborative relationships for innovation are not a common way of working, particularly when the aim is the development of process improvements. It is in this case that the lack of mechanisms to establish and regulate the behavior of those involved can give rise to conflicts and the failure of the cooperative relationship.
Because of this, the present paper analyzes the similarities and differences in the processes of governance in collaboration with suppliers in the European aerospace industry. With these ideas in mind, the aim of our research is twofold: to understand the importance of governance as a key element in the success of collaboration in the development of product and process innovations, and to establish the mechanisms and procedures that ensure the proper management of collaboration processes. Following the case study methodology, we analyze the way manufacturers and suppliers cooperate in the development of new products and processes in two industries with different levels of technological intensity and collaborative tradition: automotive and aerospace. Identifying the elements that play a key role in establishing successful governance and relationship management, and understanding the mechanisms of regulation and control in place in the automotive sector, can be used to propose solutions to some of the conflicts that currently arise in the aerospace industry. The paper concludes by analyzing the strategic implications that the adoption of some of the practices traditionally used in other industrial sectors entails for the aerospace industry. Finally, it is important to highlight that this paper presents the first results of a research project currently in progress, describing a model of governance that explains how to manage services outsourced to suppliers in the European aerospace industry, through the analysis of companies in the sector located in Germany, France, and Spain.
Keywords: supplier collaboration, supplier relationship governance, innovation management, product innovation, process innovation
Procedia PDF Downloads 459
623 Transient Heat Transfer: Experimental Investigation near the Critical Point
Authors: Andreas Kohlhepp, Gerrit Schatte, Christoph Wieland, Hartmut Spliethoff
Abstract:
In recent years, research on heat transfer phenomena of water and other working fluids near the critical point has attracted growing interest for power engineering applications. To match the highly volatile characteristics of renewable energies, conventional power plants need to shift towards flexible operation. This requires speeding up the load-change dynamics of steam generators and their heating surfaces near the critical point. In dynamic load transients, both a high heat flux with an unfavorable ratio to the mass flux and a high difference between fluid and wall temperatures may cause problems. They may lead to deteriorated heat transfer (at supercritical pressures) or to dry-out or departure from nucleate boiling (at subcritical pressures), all cases leading to an extensive rise in temperatures. For relevant technical applications, the heat transfer coefficients need to be predicted correctly in transient scenarios to prevent damage to the heated surfaces (membrane walls, tube bundles, or fuel rods). In transient processes, the state-of-the-art method of calculating the heat transfer coefficients is to use a multitude of different steady-state correlations, evaluated with the momentary local parameters at each time step. This approach does not necessarily reflect the different cases that may lead to a significant variation of the heat transfer coefficients and shows gaps in the individual ranges of validity. An algorithm was implemented to calculate the transient behavior of steam generators during load changes. It is used to assess existing correlations for transient heat transfer calculations. It is also desirable to validate the calculation against experimental data. With a new full-scale supercritical thermo-hydraulic test rig, experimental data are obtained to describe the transient phenomena under the dynamic boundary conditions mentioned above and to serve for the validation of transient steam generator calculations.
Aiming to improve correlations for the prediction of the onset of deteriorated heat transfer in both stationary and transient cases, the test rig was specially designed for this task. It is a closed-loop design with a directly electrically heated evaporation tube; the total heating power of the evaporator tube and the preheater is 1 MW. To allow a wide range of parameters, including supercritical pressures, the maximum pressure rating is 380 bar. The measurements contain the most important extrinsic thermo-hydraulic parameters. Moreover, a high geometric resolution allows the local heat transfer coefficients and fluid enthalpies to be determined accurately.
Keywords: departure from nucleate boiling, deteriorated heat transfer, dry-out, supercritical working fluid, transient operation of steam generators
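The "steady-state correlations per time step" approach criticised above can be illustrated with a quasi-steady sketch. The classic Dittus-Boelter correlation stands in for the unspecified correlations, and all property and geometry values below are invented for illustration, not taken from the test rig:

```python
import math

def dittus_boelter_h(m_dot, d, mu, k, pr, heating=True):
    """Heat transfer coefficient from the Dittus-Boelter correlation,
    Nu = 0.023 Re^0.8 Pr^n (n = 0.4 for heating), valid only for
    turbulent single-phase flow well away from the critical point."""
    re = 4.0 * m_dot / (math.pi * d * mu)  # tube Reynolds number
    n = 0.4 if heating else 0.3
    nu = 0.023 * re**0.8 * pr**n
    return nu * k / d                      # h in W/(m^2 K)

# Quasi-steady transient evaluation: apply the steady-state correlation
# to the momentary local fluid properties at each time step (the state-
# of-the-art approach the abstract sets out to assess).
time_steps = [
    {"m_dot": 0.30, "mu": 9.0e-5, "k": 0.55, "pr": 1.2},
    {"m_dot": 0.25, "mu": 7.5e-5, "k": 0.45, "pr": 1.8},
    {"m_dot": 0.20, "mu": 6.0e-5, "k": 0.35, "pr": 2.6},
]
d = 0.02  # tube inner diameter in m (assumed)
h_series = [dittus_boelter_h(s["m_dot"], d, s["mu"], s["k"], s["pr"])
            for s in time_steps]
```

The sketch shows exactly the limitation the abstract criticises: each step is treated as an independent steady state, so history effects and validity-range gaps of the correlation are ignored.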
Procedia PDF Downloads 221
622 Improvement of Oxidative Stability of Edible Oil by Microencapsulation Using Plant Proteins
Authors: L. Le Priol, A. Nesterenko, K. El Kirat, K. Saleh
Abstract:
Introduction and objectives: Polyunsaturated fatty acids (PUFAs) omega-3 and omega-6 are widely recognized as being beneficial to health and normal growth. Unfortunately, due to their highly unsaturated nature, these molecules are sensitive to oxidation and thermal degradation, leading to the production of toxic compounds and unpleasant flavors and smells. Hence, it is necessary to find a suitable way to protect them. Microencapsulation by spray-drying is a low-cost encapsulation technology and the one most commonly used in the food industry. Many compounds can be used as wall materials, but there has been growing interest in the use of biopolymers, such as proteins and polysaccharides, over recent years. The objective of this study is to increase the oxidative stability of sunflower oil by microencapsulation in plant protein matrices using the spray-drying technique. Material and methods: Sunflower oil was used as a model substance for oxidizable food oils. Proteins from brown rice, hemp, pea, soy, and sunflower seeds were used as emulsifiers and microencapsulation wall materials. First, the proteins were solubilized in distilled water. Then, the emulsions were pre-homogenized using a high-speed homogenizer (Ultra-Turrax) and stabilized using a high-pressure homogenizer (HPH). Drying of the emulsion was performed in a mini spray dryer. The oxidative stability of the encapsulated oil was determined by performing accelerated oxidation tests with a Rancimat. The size of the microparticles was measured using a laser diffraction analyzer. The morphology of the spray-dried microparticles was acquired using environmental scanning electron microscopy. Results: Pure sunflower oil was used as a reference material. Its induction time was 9.5 ± 0.1 h. The microencapsulation of sunflower oil in pea and soy protein matrices significantly improved its oxidative stability, with induction times of 21.3 ± 0.4 h and 12.5 ± 0.4 h, respectively.
The encapsulation with hemp proteins did not significantly change the oxidative stability of the encapsulated oil. Sunflower and brown rice proteins were ineffective materials for this application, with induction times of 7.2 ± 0.2 h and 7.0 ± 0.1 h, respectively. The volume mean diameters of the microparticles formulated with soy and pea proteins were 8.9 ± 0.1 µm and 16.3 ± 1.2 µm, respectively. The values for hemp, sunflower, and brown rice proteins could not be obtained due to the agglomeration of the microparticles. ESEM images showed smooth and round microparticles with soy and pea proteins. The surfaces of the microparticles obtained with sunflower and hemp proteins were porous, and the surface was rough when brown rice proteins were used as the encapsulating agent. Conclusion: Soy and pea proteins appear to be efficient wall materials for the microencapsulation of sunflower oil by spray-drying. These results are partly explained by the higher solubility of soy and pea proteins in water compared to hemp, sunflower, and brown rice proteins. Acknowledgment: This work has been performed, in partnership with the SAS PIVERT, within the frame of the French Institute for the Energy Transition (Institut pour la Transition Energétique (ITE)) P.I.V.E.R.T. (www.institut-pivert.com), selected as an Investment for the Future (Investissements d’Avenir). This work was supported, as part of the Investments for the Future, by the French Government under the reference ANR-001-01.
Keywords: biopolymer, edible oil, microencapsulation, oxidative stability, release, spray-drying
Procedia PDF Downloads 137
621 Scalable Performance Testing: Facilitating the Assessment of Application Performance under Substantial Loads and Mitigating the Risk of System Failures
Authors: Solanki Ravirajsinh
Abstract:
In the software testing life cycle, failing to conduct thorough performance testing can result in significant losses for an organization due to application crashes and improper behavior under high user loads in production. Simulating large volumes of requests, such as 5 million within 5-10 minutes, is challenging without a scalable performance testing framework. Leveraging cloud services to implement a performance testing framework makes it feasible to handle 5-10 million requests in just 5-10 minutes, helping organizations ensure their applications perform reliably under peak conditions. Implementing a scalable performance testing framework using cloud services and tools such as JMeter, EC2 instances (virtual machines), CloudWatch logs (for monitoring errors and logs), EFS (a file storage system), and security groups offers several key benefits. Building a performance test framework with this approach helps optimize resource utilization, enables effective benchmarking, increases reliability, and saves costs by resolving performance issues before the application is released. In performance testing, a master-slave framework facilitates distributed testing across multiple EC2 instances to emulate many concurrent users and efficiently handle high loads. The master node orchestrates the test execution by coordinating with multiple slave nodes to distribute the workload. Slave nodes execute the test scripts provided by the master node, with each node handling a portion of the overall user load and generating requests to the target application or service. By leveraging JMeter's master-slave framework in conjunction with cloud services like EC2 instances, EFS, CloudWatch logs, security groups, and command-line tools, organizations can achieve superior scalability and flexibility in their performance testing efforts. In this master-slave framework, JMeter must be installed on both the master and each slave EC2 instance.
The master EC2 instance functions as the "brain," while the slave instances operate as the "body parts." The master directs each slave to execute a specified number of requests. Upon completion of the execution, the slave instances transmit their results back to the master. The master then consolidates these results into a comprehensive report detailing metrics such as the number of requests sent, encountered errors, network latency, response times, server capacity, throughput, and bandwidth. Leveraging cloud services, the framework benefits from automatic scaling based on the volume of requests. Notably, integrating cloud services allows organizations to handle 5-10 million or more requests within 5 minutes, depending on the server capacity of the hosted website or application.
Keywords: identifying application crashes under heavy load, JMeter with cloud services, scalable performance testing, JMeter master and slave using cloud services
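The master's distribute-and-consolidate role can be sketched independently of JMeter. The per-slave record layout below is a hypothetical summary format for illustration, not JMeter's actual .jtl result schema:

```python
import statistics

def distribute_load(total_requests, n_slaves):
    """Split the target request count as evenly as possible across slaves."""
    base, extra = divmod(total_requests, n_slaves)
    return [base + (1 if i < extra else 0) for i in range(n_slaves)]

def consolidate(slave_results):
    """Merge per-slave summaries into the master's report, mirroring the
    metrics listed in the abstract (requests sent, errors, response
    times, throughput). The p95 here is a crude nearest-rank estimate."""
    all_times = sorted(t for r in slave_results for t in r["response_times_ms"])
    total_req = sum(r["requests"] for r in slave_results)
    total_err = sum(r["errors"] for r in slave_results)
    return {
        "requests": total_req,
        "errors": total_err,
        "error_rate": total_err / total_req,
        "mean_ms": statistics.mean(all_times),
        "p95_ms": all_times[int(0.95 * len(all_times)) - 1],
        "throughput_rps": total_req / max(r["duration_s"] for r in slave_results),
    }

# 5 million requests spread over 8 slave instances.
plan = distribute_load(5_000_000, 8)

# Tiny mock slave summaries standing in for real result uploads.
results = [
    {"requests": 2, "errors": 0, "response_times_ms": [120, 180], "duration_s": 300},
    {"requests": 2, "errors": 1, "response_times_ms": [150, 900], "duration_s": 300},
]
report = consolidate(results)
```

In the real framework this aggregation happens inside JMeter itself; the sketch only makes explicit what the consolidated report contains.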
Procedia PDF Downloads 27
620 Structure Domains Tuning Magnetic Anisotropy and Motivating Novel Electric Behaviors in LaCoO₃ Films
Authors: Dechao Meng, Yongqi Dong, Qiyuan Feng, Zhangzhang Cui, Xiang Hu, Haoliang Huang, Genhao Liang, Huanhua Wang, Hua Zhou, Hawoong Hong, Jinghua Guo, Qingyou Lu, Xiaofang Zhai, Yalin Lu
Abstract:
Great efforts have been made to reveal the intrinsic origins of the emerging ferromagnetism (FM) in strained LaCoO₃ (LCO) films. However, some macroscopic magnetic properties of LCO are still not well understood and are even controversial, such as the magnetic anisotropy. Determining and understanding the magnetic anisotropy might in turn help to find the true causes of the FM. Perpendicular magnetic anisotropy (PMA) was directly observed for the first time in high-quality LCO films of different thicknesses. The ratio of in-plane (IP) and out-of-plane (OOP) remnant magnetic moments in 30 unit cell (u.c.) films is as large as 20. The easy axis lies in the OOP direction, with an IP/OOP coercive field ratio of 10. Moreover, the PMA can be simply tuned by changing the film thickness. As the thickness increases, the IP/OOP magnetic moment ratio decreases remarkably, and the magnetic easy axis changes from OOP to IP. Such a large and tunable PMA shows strong potential for fundamental research and applications. What causes the PMA is the first concern. Greater OOP orbital occupation may be one of the microscopic reasons for the PMA. A cluster-like magnetic domain pattern with no obvious color contrast was found in the 30 u.c. films, similar to that of LaAlO₃/SrTiO₃ films. The nanosized domains could not be switched completely even at a large OOP magnetic field of 23 T, indicating strong IP character or no OOP magnetism in some clusters. The IP magnetic domains might influence the magnetic performance and help to form the PMA. Meanwhile, some possibly nonmagnetic clusters might explain why the measured moments of LCO films are smaller than the calculated value of 2 μB/Co, one of the biggest puzzles in LCO films. What tunes the PMA seems even more interesting. Totally different magnetic domain patterns were found in 180 u.c. films, with cluster magnetic domains surrounded by <110> cross-hatch lines. These lines were identified as structure domain walls (DWs) by 3D reciprocal space mapping (RSM).
Two groups of in-plane features with fourfold symmetry were observed near the film diffraction peaks in the (002) 3D-RSM. One is along the <110> directions with a larger intensity, which matches well the lines on the surfaces. The other is much weaker and along the <100> directions, arising from the normal lattice tilting of films deposited on cubic substrates. The <110> domain features obtained from the (103) and (113) 3D-RSMs exhibit a similar evolution of the DW percentages and the magnetic behavior. Structure domains and domain walls are believed to tune the PMA by transforming more IP magnetic moments to OOP. Last but not least, thick films with many structure domains exhibit different electrical transport behaviors. A metal-to-insulator transition (MIT) and an angle-dependent negative magnetoresistance were observed near 150 K, higher than the FM transition temperature but similar to the behavior of the spin-orbit-coupling-related 1/4-order diffraction peaks.
Keywords: structure domain, magnetic anisotropy, magnetic domain, domain wall, 3D-RSM, strain
Procedia PDF Downloads 153
619 Training for Safe Tree Felling in the Forest with Symmetrical Collaborative Virtual Reality
Authors: Irene Capecchi, Tommaso Borghini, Iacopo Bernetti
Abstract:
The chainsaw is one of the most common pieces of equipment still used today in forestry for pruning, felling, and processing trees. However, chainsaw use entails serious dangers and one of the highest accident rates in both professional and non-professional work. Felling is proportionally the most dangerous phase, in both severity and frequency, because of the risk of being hit by the tree the operator wants to cut down. To avoid this, a correct sequence of chainsaw cuts must be taught for the different conditions of the tree. Virtual reality (VR) makes it possible to simulate chainsaw use virtually, without danger of injury. The limitations of existing applications are as follows. Existing platforms are not symmetrically collaborative: only the trainee is in virtual reality, while the trainer can see the virtual environment only on a laptop or PC, which results in an inefficient teacher-learner relationship. Moreover, most applications involve only the use of a virtual chainsaw, so the trainee cannot feel the real weight and inertia of an actual chainsaw. Finally, existing applications simulate only a few cases of tree felling. The objectives of this research were to implement and test a symmetrically collaborative training application based on VR and mixed reality (MR), with an overlap between the real and virtual chainsaw in MR. The research and training platform was developed for the Meta Quest 2 head-mounted display. It is based on the Unity 3D engine and the Presence Platform Interaction SDK (PPI-SDK) developed by Meta. The PPI-SDK avoids the use of controllers and enables hand tracking and MR. Combining these two technologies made it possible to overlay a virtual chainsaw on a real chainsaw in MR and synchronize their movements in VR.
This ensures that the user feels the weight of the actual chainsaw, engages the right muscles, and performs the appropriate movements during the exercise, allowing the user to learn the correct body posture. The chainsaw works only if the right sequence of cuts is made to fell the tree. Contact detection is handled by Unity's physics system, which allows objects to interact in a way that simulates real-world behavior. Each chainsaw cut is defined by a so-called collider, and the tree can fall only if the colliders are activated in the right order, simulating a safe felling technique. In this way, the user can learn how to use the chainsaw safely. The system is also multiplayer, so the student and the instructor can experience VR together in a symmetrical and collaborative way. The platform simulates the following tree-felling situations with safe techniques: cutting a tree tilted forward, cutting a medium-sized tree tilted backward, cutting a large tree tilted backward, sectioning the trunk on the ground, and cutting branches. The application is being evaluated on a sample of university students through a dedicated questionnaire. The results are expected to assess both the increase in learning compared to a theoretical lecture and the immersiveness and telepresence of the platform.
Keywords: chainsaw, collaborative symmetric virtual reality, mixed reality, operator training
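The ordered-collider logic described above can be sketched as a small state machine. This is a minimal illustration in Python rather than the application's actual Unity/C# code, and the cut names are hypothetical:

```python
# Safe felling sequence (names are illustrative, not from the application)
SAFE_SEQUENCE = ["top_notch_cut", "bottom_notch_cut", "back_cut"]

class FellingCheck:
    """Tracks which cut colliders have been triggered, in order."""

    def __init__(self, sequence):
        self.sequence = sequence
        self.progress = 0  # index of the next expected cut

    def on_collider_hit(self, cut_name):
        """Advance only when the expected cut is made; a wrong cut
        resets progress, mirroring an unsafe technique. Returns True
        when the full safe sequence is complete (the tree falls)."""
        if cut_name == self.sequence[self.progress]:
            self.progress += 1
        else:
            self.progress = 0
        return self.progress == len(self.sequence)
```

A wrong cut resets progress, so the tree only falls after an uninterrupted safe sequence of cuts.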
Procedia PDF Downloads 107
618 The Effectiveness of Using Dramatic Conventions as the Teaching Strategy on Self-Efficacy for Children With Autism Spectrum Disorder
Authors: Tso Sheng-Yang, Wang Tien-Ni
Abstract:
Introduction and Purpose: Previous researchers have documented that children with ASD (Autism Spectrum Disorder) tend to escape from internal and external stimuli when facing difficult conditions they cannot control or do not like. In particular, when children with ASD need to learn challenging subjects, such as the Chinese language, their inappropriate behaviors appear more markedly. Recently, researchers have applied positive behavior support strategies to enhance the self-efficacy of children with ASD and thereby reduce their adverse behaviors. Thus, the purpose of this research was to design a series of lectures based on art therapy and to evaluate their effectiveness on the child's self-efficacy. Method: This was a single-case design study that recruited a high school boy with ASD. The research comprised three conditions. First, in the baseline condition, the researcher collected the participant's self-efficacy scores in every session, before and after class. In the intervention condition, the researcher used dramatic conventions to teach the child Chinese language twice a week. When the data were stable across three data points, the study entered the maintenance condition. In the maintenance condition, the researcher only collected self-efficacy scores, with no further intervention, five times a month, to assess the maintenance of the effect. The timing and frequency of data collection were identical across the three conditions. In art therapy, the common approach is to use an art medium, e.g., music, drama, or painting, as the independent variable. Owing to the visual cues of the art medium, children with ASD can more easily achieve joint attention with teachers. In addition, children with ASD have difficulty understanding abstract concepts. Thus, dramatic conventions help them construct the setting and understand the context of Classical Chinese.
Through actual enactment, children with ASD can better understand the context and construct prior knowledge. Result: Based on a 10-point Likert scale, the following results were obtained. (a) In the baseline condition, the average self-efficacy score was 1.12 points, ranging from 1 to 2 points, and the level change was 0 points. (b) In the intervention condition, the average self-efficacy score was 7.66 points, ranging from 7 to 9 points, and the level change was 1 point. (c) In the maintenance condition, the average self-efficacy score was 6.66 points, ranging from 6 to 7 points, and the level change was 1 point. Concerning the immediacy of change between the baseline and intervention conditions, the difference was 5 points, and no overlaps were found between these two conditions. Conclusion: According to these results, using dramatic conventions as a teaching strategy for children with ASD is effective. Self-efficacy scores increased immediately when the dramatic conventions commenced. Thus, we suggest teachers can use this approach, adjusted to the student's traits, to teach children with ASD difficult tasks.
Keywords: dramatic conventions, autism spectrum disorder, self-efficacy, teaching strategy
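The "no overlaps" finding corresponds to a standard single-case metric, the percentage of non-overlapping data (PND). A minimal sketch, with illustrative numbers echoing the reported score ranges (not the study's raw data):

```python
def percent_non_overlapping(baseline, intervention):
    """PND: share of intervention points exceeding the highest baseline
    point (for a behavior where higher scores are better)."""
    best_baseline = max(baseline)
    above = sum(1 for score in intervention if score > best_baseline)
    return 100.0 * above / len(intervention)

# Illustrative scores on a 10-point scale: baseline ranged 1-2,
# intervention ranged 7-9, so no intervention point overlaps baseline.
pnd = percent_non_overlapping([1, 1, 2, 1], [7, 8, 9, 7, 8])  # 100.0
```

A PND of 100% is conventionally read as a highly effective intervention in single-case research.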
Procedia PDF Downloads 83
617 Experimental Field for the Study of Soil-Atmosphere Interaction in Soft Soils
Authors: Andres Mejia-Ortiz, Catalina Lozada, German R. Santos, Rafael Angulo-Jaramillo, Bernardo Caicedo
Abstract:
The interaction between atmospheric variables and soil properties is a determining factor when evaluating the flow of water through the soil. This interaction directly determines the behavior of the soil and greatly influences the changes that occur in it. Atmospheric variations, such as changes in relative humidity, air temperature, wind velocity, and precipitation, are the external variables with the greatest influence on the changes generated in the subsoil as a consequence of descending and ascending water flow. These environmental variations are of major importance in the study of the soil because the humidity and temperature conditions at the soil surface depend on them. In addition, these variations control the thickness of the unsaturated zone and the position of the water table with respect to the surface. However, understanding the relationship between the atmosphere and the soil is rather complex, mainly because of the difficulty of estimating the changes that occur in the soil from climatic changes, since this is a coupled process involving both mass and heat transfer. In this research, an experimental field was implemented to study in situ the interaction between the atmosphere and the soft soils of the city of Bogota, Colombia. The soil under study consists of a 60 cm surface layer composed of two silts of similar characteristics, and a deep soft clay deposit located under the silty material. It should be noted that the vegetal layer and organic matter were removed to avoid the evapotranspiration phenomenon. Instrumentation was carried out in situ through a field array of measuring devices, such as soil moisture sensors, thermocouples, relative humidity sensors, and a wind velocity sensor, among others, which allow registering the variations of both the atmospheric variables and the properties of the soil.
With the information collected through field monitoring, water balances were computed using the Hydrus-1D software to determine the flow conditions that developed in the soil during the study. The moisture profile for different periods and time intervals was also determined from the balance supplied by Hydrus-1D; this profile was validated by experimental measurements. As a boundary condition, the actual evaporation rate was included using semi-empirical equations proposed by different authors. For rainy periods, a descending flow governed by the infiltration capacity of the soil was obtained. During dry periods, on the other hand, an increase in the actual evaporation of the soil induced an upward flow of water, increasing suction due to the decrease in moisture content. Cracks also developed, accelerating the evaporation process. This work concerns the study of soil-atmosphere interaction through an experimental field, a very useful tool since it allows considering all the factors and parameters of the soil in its natural state together with real values of the different environmental conditions.
Keywords: field monitoring, soil-atmosphere, soft soils, soil-water balance
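Conceptually, the water balance couples precipitation, infiltration, runoff, and actual evaporation. A deliberately simplified single-layer "bucket" sketch illustrates the bookkeeping (far cruder than the Hydrus-1D model used in the study; all numbers and names are illustrative):

```python
def daily_water_balance(storage, rain, actual_evap, capacity):
    """One daily step of a minimal soil water balance (all in mm).
    Infiltration is limited by the remaining storage capacity;
    excess rain becomes runoff, and evaporation drains storage."""
    infiltration = min(rain, capacity - storage)
    runoff = rain - infiltration
    storage = max(storage + infiltration - actual_evap, 0.0)
    return storage, runoff

# Wet day: 30 mm of rain on a partly full 60 mm profile
storage, runoff = daily_water_balance(storage=40.0, rain=30.0,
                                      actual_evap=5.0, capacity=60.0)
# storage -> 55.0 mm, runoff -> 10.0 mm
```

The descending flow during rainy periods and the evaporation-driven drawdown during dry periods described above correspond to the infiltration and evaporation terms of such a balance.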
Procedia PDF Downloads 137
616 Law, Resistance, and Development in Georgia: A Case of Namakhvani HPP
Authors: Konstantine Eristavi
Abstract:
The paper will contribute to the discussion on the pitfalls, limits, and possibilities of legal and rights discourse in opposing large infrastructural projects in the context of neoliberal globalisation. To this end, the paper will analyse the struggle against the Namakhvani HPP project in Georgia. The latter has been hailed by the government as one of the largest energy projects in the history of the country, with an enormous potential impact on energy security, energy independence, economic growth, and development. This takes place against the backdrop of decades of a market-led (or neoliberal) model of development in Georgia, characterised by structural adjustments, deregulation, privatisation, and a laissez-faire approach to foreign investment. In this context, the Georgian state vies with other low- and middle-income countries for foreign capital by offering potential investors, on the one hand, exemptions from social and environmental regulations and, on the other hand, huge legal concessions and safeguards, thereby participating in what is often called a “race to the bottom.” The Namakhvani project is a good example of this. At every stage, the project has been marred by violations of laws and regulations concerning transparency, participation, social and environmental protection, and so on. Moreover, the leaked contract between the state and the developer reveals contractual safeguards which effectively insulate the investment, throughout the duration of the contract, from changes in national law that might adversely affect the investors' rights and returns. These clauses, aimed at preserving the investors' economic position, place the contract above national law in many respects and even conflict with fundamental constitutional rights.
In response to the perceived deficiencies of the project, one of the largest and most diverse social movements in the history of post-Soviet Georgia has assembled, consisting of the local population, conservative and leftist groups, human rights and environmental NGOs, etc. Crucially, the resistance movement is actively using legal tools. In order to analyse both the limitations and possibilities of legal discourse, the paper will distinguish between internal and immanent critiques. Law as internal critique, in the context of the struggles around the Namakhvani project, while potentially fruitful in hindering the project, risks neglecting and reproducing the very factors, e.g., the particular model of development, that made such contractual concessions, safeguards, and concomitant rights violations possible in the first place. On the other hand, the use of rights and law as part of an immanent critique articulates a certain incapacity on the part of the addressee government to uphold existing laws and rights due to structural factors, hence pointing to the need for fundamental change. This 'ruptural' form of legal discourse that the movement employs makes it possible to go beyond the discussion of breaches of law and enables critical deliberation on the development model within which these violations and extraordinary contractual safeguards become necessary. It will be argued that it is this form of immanent critique that expresses the emancipatory potential of legal discourse.
Keywords: law, resistance, development, rights
Procedia PDF Downloads 80
615 Decomposition of the Discount Function Into Impatience and Uncertainty Aversion. How Neurofinance Can Help to Understand Behavioral Anomalies
Authors: Roberta Martino, Viviana Ventre
Abstract:
Intertemporal choices are choices under conditions of uncertainty in which the consequences are distributed over time. The Discounted Utility Model is the essential reference for describing the individual in the context of intertemporal choice. The model is based on the idea that the individual selects the alternative with the highest utility, calculated by multiplying the cardinal utility of the outcome, as if its reception were instantaneous, by a discount function that decreases the utility value according to how far the actual reception of the outcome lies from the moment the choice is made. Initially, the discount function was assumed to have an exponential form, whose rate of decrease over time is constant, in line with the profile of the rational investor described by classical economics. Empirical evidence instead called for the formulation of alternative, hyperbolic models that better represent the actual actions of investors. Attitudes that do not comply with the principles of classical rationality are termed anomalous, i.e., difficult to rationalize and describe through normative models. The development of behavioral finance, which describes investor behavior through cognitive psychology, has shown that deviations from rationality are due to the bounded rationality of human beings. This means that when a choice is made in a very difficult and information-rich environment, the brain strikes a compromise between the cognitive effort required and the selection of an alternative. Moreover, the evaluation and selection of an alternative, and the collection and processing of information, are conditioned by systematic distortions of the decision-making process, namely behavioral biases involving the individual's emotional and cognitive system. In this paper, we present an original decomposition of the discount function to investigate the psychological principles of hyperbolic discounting.
It is possible to decompose the curve into two components: the first component is responsible for the decrease in the value of the outcome as time increases and is related to the individual's impatience; the second component relates to the change in direction of the tangent vector to the curve and indicates how strongly the individual perceives the indeterminacy of the future, indicating his or her aversion to uncertainty. This decomposition allows interesting conclusions to be drawn with respect to the concept of impatience and the emotional drives involved in decision-making. The contribution that neuroscience can make to decision theory and intertemporal choice theory is vast, as it would allow the decision-making process to be described as the relationship between the individual's emotional and cognitive factors. Neurofinance is a discipline that uses a multidisciplinary approach to investigate how the brain influences decision-making. Indeed, considering that the decision-making process is linked to the activity of the prefrontal cortex and amygdala, neurofinance can help determine the extent to which anomalous attitudes respect the principles of rationality.
Keywords: impatience, intertemporal choice, neurofinance, rationality, uncertainty
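For reference, using standard textbook forms (not necessarily the parameterization adopted by the authors), the contrast between exponential and hyperbolic discounting can be written as:

```latex
% Exponential: constant instantaneous discount rate (classical rationality)
D_{\mathrm{exp}}(t) = e^{-kt}, \qquad
-\frac{D_{\mathrm{exp}}'(t)}{D_{\mathrm{exp}}(t)} = k .

% Hyperbolic: the rate declines with delay, matching observed anomalies
D_{\mathrm{hyp}}(t) = \frac{1}{1+kt}, \qquad
-\frac{D_{\mathrm{hyp}}'(t)}{D_{\mathrm{hyp}}(t)} = \frac{k}{1+kt} .
```

The declining instantaneous rate of the hyperbolic form is precisely the behavior that a decomposition into an impatience component and an uncertainty-aversion component seeks to explain.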
Procedia PDF Downloads 129
614 Comparison of Cu Nanoparticle Formation and Properties with and without Surrounding Dielectric
Authors: P. Dubcek, B. Pivac, J. Dasovic, V. Janicki, S. Bernstorff
Abstract:
When grown only to nanometric sizes, metallic particles (e.g., Ag, Au and Cu) exhibit specific optical properties caused by the presence of a plasmon band. The plasmon band represents a collective oscillation of the conduction electrons and causes a narrow-band absorption of light in the visible range. When the nanoparticles are embedded in a dielectric, they also modify the dielectric's optical properties, which can be fine-tuned by tuning the particle size. We investigated Cu nanoparticle growth with and without a surrounding dielectric (SiO2 capping layer). The morphology and crystallinity were investigated by GISAXS and GIWAXS, respectively. Samples were produced by high-vacuum thermal evaporation of Cu onto a monocrystalline silicon substrate held at room temperature, 100°C or 180°C. One series was capped in situ by a 10 nm SiO2 layer. Additionally, samples were annealed at different temperatures up to 550°C, also in high vacuum. The room-temperature-deposited samples annealed at lower temperatures exhibit a continuous film structure: strong oscillations in the GISAXS intensity are present, especially in the capped samples. At higher temperatures, enhanced surface dewetting and Cu nanoparticle (nanoisland) formation partially destroy the flatness of the interface. Therefore, the particle type of scattering is enhanced, while the film fringes are depleted. However, the capping layer hinders particle formation, and the continuous film structure is preserved up to higher annealing temperatures (visible as strong and persistent fringes in GISAXS) compared to the non-capped samples. According to GISAXS, lateral particle sizes are reduced at higher temperatures, while particle height increases. This is ascribed to close packing of the formed particles at lower temperatures, where the GISAXS-deduced sizes partially reflect the dimensions of particle agglomerates.
Lateral maxima in GISAXS are an indication of good positional correlation, and the particle-to-particle distance increases as the particles grow with temperature elevation. This coordination is much stronger in the capped and lower-temperature-deposited samples. The dewetting is much more vigorous in the non-capped samples, and since the nanoparticles form in a range of sizes, the correlation recedes with both deposition and annealing temperature. Surface topology was checked by atomic force microscopy (AFM). The capped samples' surfaces were smoother, and the lateral size of the surface features was larger, compared to the non-capped samples. Altogether, the AFM results suggest somewhat larger particles and a wider size distribution, which can be attributed to the difference in probe size. Finally, the plasmonic effect was monitored by UV-Vis reflectance spectroscopy; the relatively weak plasmonic effect can be explained by incomplete dewetting or partial interconnection of the formed particles.
Keywords: copper, GISAXS, nanoparticles, plasmonics
Procedia PDF Downloads 123
613 Use of Socially Assistive Robots in Early Rehabilitation to Promote Mobility for Infants with Motor Delays
Authors: Elena Kokkoni, Prasanna Kannappan, Ashkan Zehfroosh, Effrosyni Mavroudi, Kristina Strother-Garcia, James C. Galloway, Jeffrey Heinz, Rene Vidal, Herbert G. Tanner
Abstract:
Early immobility affects motor, cognitive, and social development. Current pediatric rehabilitation lacks the technology to provide the dosage needed to promote mobility for young children at risk. The addition of socially assistive robots to early interventions may help increase the mobility dosage. The aim of this study is to examine the feasibility of an early intervention paradigm in which non-walking infants experience independent mobility while socially interacting with robots. A dynamic environment was developed in which both the child and the robot interact and learn from each other. The environment involves: 1) a range of physical activities that are goal-oriented, age-appropriate, and ability-matched for the child to perform, 2) automatic functions that perceive the child's actions through novel activity recognition algorithms and decide appropriate actions for the robot, and 3) a networked visual data acquisition system that enables real-time assessment and provides the means to connect child behavior with robot decision-making in real time. The environment was tested with a two-year-old boy with Down syndrome over eight sessions. The child presented delays throughout his motor development, currently working on the acquisition of walking. During the sessions, the child performed physical activities that required complex motor actions (e.g., climbing an inclined platform and/or staircase). During these activities, a (wheeled or humanoid) robot was either performing the action or waiting at its end point, 'signaling' for interaction. From these sessions, information was gathered to develop algorithms to automate the perception of the activities on which the robot bases its actions. A Markov Decision Process (MDP) is used to model the intentions of the child. A 'smoothing' technique is used to help identify the model's parameters, which is a critical step when dealing with small data sets such as in this paradigm.
The child engaged in all activities and socially interacted with the robot across sessions. Over time, the child's mobility increased, and the frequency and duration of complex and independent motor actions also increased (e.g., taking independent steps). Simulation results on the combination of the MDP and smoothing support the use of this model in human-robot interaction: smoothing facilitates learning MDP parameters from small data sets. This paradigm is feasible and provides insight into how social interaction may elicit mobility actions, suggesting a new early intervention paradigm for very young children with motor disabilities.
Acknowledgment: This work has been supported by NIH under grant #5R01HD87133.
Keywords: activity recognition, human-robot interaction, machine learning, pediatric rehabilitation
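The abstract does not specify which smoothing technique is used; one common choice when estimating MDP parameters from small data sets is additive (Laplace) smoothing of the observed transition counts. A minimal sketch, with hypothetical counts and dimensions:

```python
import numpy as np

def estimate_transitions(counts, alpha=1.0):
    """Estimate MDP transition probabilities from observed counts,
    adding a pseudo-count alpha so that unseen transitions keep a
    small nonzero probability (additive/Laplace smoothing)."""
    counts = np.asarray(counts, dtype=float)
    smoothed = counts + alpha
    return smoothed / smoothed.sum(axis=-1, keepdims=True)

# Hypothetical counts: rows = robot actions, columns = observed
# child responses, tallied across a handful of sessions
counts = [[4, 0, 1],
          [0, 2, 0]]
probs = estimate_transitions(counts)  # rows sum to 1; no zero entries
```

Without smoothing, a response never observed in the few available sessions would be assigned probability zero, which is exactly the small-sample pitfall the technique addresses.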
Procedia PDF Downloads 292
612 Concentrations of Leptin, C-Peptide and Insulin in Cord Blood as Fetal Origins of Insulin Resistance and Their Effect on the Birth Weight of the Newborn
Authors: R. P. Hewawasam, M. H. A. D. de Silva, M. A. G. Iresha
Abstract:
Obesity is associated with an increased risk of developing insulin resistance. Insulin resistance often progresses to type-2 diabetes mellitus and is linked to a wide variety of other pathophysiological features, including hypertension, hyperlipidemia, atherosclerosis (metabolic syndrome) and polycystic ovarian syndrome. Macrosomia is common in infants born not only to women with gestational diabetes mellitus but also to non-diabetic obese women. During the past two decades, obesity in children and adolescents has risen significantly in Asian populations, including Sri Lanka. There is increasing evidence that infants who are born large for gestational age (LGA) are more likely to be obese in childhood. Previous studies have also established that Asian populations have a higher percentage of body fat at a lower body mass index compared to Caucasians. High leptin levels in cord blood have been reported to correlate with fetal adiposity at birth, and previous studies have shown that cord blood C-peptide and insulin levels are significantly and positively correlated with birth weight. Therefore, the objective of this preliminary study was to determine the relationship between parameters of fetal insulin resistance, such as leptin, C-peptide and insulin, and the birth weight of the newborn in a study population in Southern Sri Lanka. Umbilical cord blood was collected from 90 newborns, and the concentrations of insulin, leptin, and C-peptide were measured by the ELISA technique. Birth weight, length, and occipitofrontal, chest, hip and calf circumferences of the newborns were measured, and characteristics of the mother, such as age, height, weight before pregnancy and weight gain, were collected. The relationships between insulin, leptin, C-peptide, and anthropometrics were assessed by Pearson's correlation, while the Mann-Whitney U test was used to assess the differences in cord blood leptin, C-peptide, and insulin levels between groups.
A significant difference (p < 0.001) was observed between the insulin levels of infants born LGA (18.73 ± 0.64 µIU/ml) and those born appropriate for gestational age (AGA) (13.08 ± 0.43 µIU/ml). Consistently, a significant increase (p < 0.001) was observed in the C-peptide levels of infants born LGA (9.32 ± 0.77 ng/ml) compared to AGA (5.44 ± 0.19 ng/ml). The cord blood leptin concentration of LGA infants (12.67 ± 1.62 ng/ml) was also significantly higher (p < 0.001) than that of AGA infants (7.10 ± 0.97 ng/ml). Significant positive correlations (p < 0.05) were observed between cord leptin levels and birth weight, pre-pregnancy maternal weight, and BMI in both AGA and LGA infants. Consistently, a significant positive correlation (p < 0.05) was observed between birth weight and C-peptide concentration. The significantly higher concentrations of leptin, C-peptide and insulin in the cord blood of LGA infants suggest that they may be involved in regulating fetal growth. Although previous studies suggest comparatively high levels of body fat in Asian populations, the values obtained in this study are not significantly different from values previously reported in Caucasian populations. According to this preliminary study, maternal pre-pregnancy BMI and weight may be significant indicators of cord blood parameters of insulin resistance and possibly of the birth weight of the newborn.
Keywords: large for gestational age, leptin, C-peptide, insulin
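The two statistics used in the study can be illustrated with a short NumPy sketch. The group values below are invented for illustration (the study's actual means were 13.08 vs. 18.73 µIU/ml), and only the U statistic is computed here, not the p-value:

```python
import numpy as np

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for group x versus group y: the number
    of (x_i, y_j) pairs with x_i > y_j, counting ties as 0.5."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    diff = x[:, None] - y[None, :]
    return float(np.sum(diff > 0) + 0.5 * np.sum(diff == 0))

# Hypothetical cord-blood insulin values (µIU/ml) for the two groups
aga = [12.1, 13.5, 12.8, 13.9, 12.5, 13.2]
lga = [18.2, 19.1, 18.6, 19.4, 18.0, 18.9]
u = mann_whitney_u(aga, lga)  # 0.0: complete group separation

# Pearson correlation, e.g. cord leptin (ng/ml) vs birth weight (kg)
leptin = [5.2, 7.1, 8.3, 10.4, 12.9, 14.0]
birth_weight = [2.9, 3.1, 3.3, 3.6, 3.9, 4.1]
r = np.corrcoef(leptin, birth_weight)[0, 1]  # strongly positive
```

A U of zero (every value in one group below every value in the other) yields the smallest possible p-value for the given sample sizes, which is the pattern behind the reported p < 0.001 differences.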
Procedia PDF Downloads 157
611 Phage Display-Derived Vaccine Candidates for Control of Bovine Anaplasmosis
Authors: Itzel Amaro-Estrada, Eduardo Vergara-Rivera, Virginia Juarez-Flores, Mayra Cobaxin-Cardenas, Rosa Estela Quiroz, Jesus F. Preciado, Sergio Rodriguez-Camarillo
Abstract:
Bovine anaplasmosis is an infectious, tick-borne disease caused mainly by Anaplasma marginale; typical signs include anemia, fever, abortion, weight loss, decreased milk production, jaundice, and potentially death. Sick cattle can recover when antibiotics are administered; however, they usually remain carriers for life, posing a risk of infection to susceptible cattle. Anaplasma marginale is an obligate intracellular Gram-negative bacterium whose genetic composition is highly diverse among geographical isolates. There are currently no fully effective vaccines against bovine anaplasmosis; therefore, economic losses due to the disease persist. Vaccine formulation is a hard task for several pathogens, such as Anaplasma marginale, but peptide-based vaccines are a promising way to induce specific responses. Phage-displayed peptide libraries have proved to be one of the most powerful technologies for identifying specific ligands. Screening these peptide libraries is also a tool for studying interactions between proteins or peptides. It has thus allowed the identification of ligands recognized by polyclonal antisera, and it has been successful in the identification of relevant epitopes in chronic diseases and toxicological conditions. The protective immune response to bovine anaplasmosis includes high levels of immunoglobulin subclass G2 (IgG2) but not subclass IgG1. Therefore, IgG2 from the serum of protected cattle can be useful for identifying ligands that can become part of an immunogen for cattle. In this work, the phage display random peptide library Ph.D.™-12 was incubated with IgG2 or blood sera of bovines immunized against A. marginale as targets. After three rounds of biopanning, several candidates were selected for additional analysis. Subsequently, their reactivity with sera immunized against A. marginale, as well as with sera positive and negative for A. marginale, was evaluated by immunoassays.
A collection of recognized peptides tested by ELISA was generated. More than three hundred phage-peptides were separately evaluated against the molecules used during panning. At least ten different peptide sequences were determined from their nucleotide composition. In this approach, three phage-peptides were selected for their binding and affinity properties. For the development of vaccines or diagnostic reagents, it is important to evaluate the immunogenic and antigenic properties of the peptides. The in vitro and in vivo immunogenic behavior of the peptides, both as synthetic peptides and as phage-peptides, will be assayed to determine their vaccine potential.
Acknowledgment: This work was supported by grant SEP-CONACYT 252577 given to I. Amaro-Estrada.
Keywords: bovine anaplasmosis, peptides, phage display, veterinary vaccines
Procedia PDF Downloads 141
610 Highly Selective Phosgene Free Synthesis of Methylphenylcarbamate from Aniline and Dimethyl Carbonate over Heterogeneous Catalyst
Authors: Nayana T. Nivangune, Vivek V. Ranade, Ashutosh A. Kelkar
Abstract:
Organic carbamates are versatile compounds widely employed as pesticides, fungicides, herbicides, dyes, pharmaceuticals, cosmetics and in the synthesis of polyurethanes. Carbamates can be easily transformed into isocyanates by thermal cracking. Isocyanates are used as precursors for manufacturing agrochemicals, adhesives and polyurethane elastomers. The manufacture of polyurethane foams is a major application of aromatic isocyanates; in 2007, the global consumption of polyurethane was about 12 million metric tons/year, with an average annual growth rate of about 5%. Presently, isocyanates/carbamates are manufactured by a phosgene-based process. However, because of the high toxicity of phosgene and the formation of large quantities of waste products, there is a need to develop an alternative and safer process for the synthesis of isocyanates/carbamates. Recently, many alternative processes have been investigated, and carbamate synthesis by methoxycarbonylation of aromatic amines using dimethyl carbonate (DMC) as a green reagent has emerged as a promising alternative route. In this reaction, methanol is formed as a by-product, which can be converted back to DMC either by oxidative carbonylation of methanol or by reaction with urea. Thus, the route based on DMC has the potential to provide an atom-efficient and safer route for the synthesis of carbamates from DMC and amines. Much work is being carried out on the development of catalysts for this reaction, and homogeneous zinc salts were found to be good catalysts. However, catalyst/product separation is challenging with these catalysts. There are a few reports on the use of supported Zn catalysts; however, deactivation of the catalyst is the major problem. We wish to report here the methoxycarbonylation of aniline to methylphenylcarbamate (MPC) using amino acid complexes of Zn as highly active and selective catalysts. The catalysts were characterized by XRD, IR, solid-state NMR and XPS analysis.
Methoxycarbonylation of aniline was carried out at 170 °C using 2.5 wt% of the catalyst to achieve >98% conversion of aniline with 97-99% selectivity to MPC. Formation of N-methylated products in small quantities (1-2%) was also observed. Optimization of the reaction conditions was carried out using the zinc-proline complex as the catalyst. Selectivity was strongly dependent on the temperature and the aniline:DMC ratio used. At lower aniline:DMC ratios and higher temperatures, selectivity to MPC decreased (85-89%, respectively), with the formation of N-methylaniline (NMA), N-methyl methylphenylcarbamate (MMPC) and N,N-dimethylaniline (NNDMA) as by-products. The best results (98% aniline conversion with 99% selectivity to MPC in 4 h) were observed at 170 °C and an aniline:DMC ratio of 1:20. Catalyst stability was verified by carrying out recycle experiments. Methoxycarbonylation proceeded smoothly with various amine derivatives, indicating the versatility of the catalyst. The catalyst is inexpensive and can be easily prepared from a zinc salt and naturally occurring amino acids. These results are important and provide an environmentally benign route for MPC synthesis with high activity and selectivity.
Keywords: aniline, heterogeneous catalyst, methoxycarbonylation, methylphenyl carbamate
Procedia PDF Downloads 274
609 Conservation Detection Dogs to Protect Europe's Native Biodiversity from Invasive Species
Authors: Helga Heylen
Abstract:
With dogs saving wildlife in New Zealand since 1890, and governments in Africa, Australia and Canada trusting them to give the best results, Conservation Dogs Ireland wants to introduce more detection dogs to protect Europe's native wildlife. Conservation detection dogs are fast, portable and endlessly trainable. They are a cost-effective, highly sensitive and non-invasive way to detect protected and invasive species and wildlife disease. Conservation dogs find targets up to 40 times faster than any other method. They give results instantly, with near-perfect accuracy, and they can search for multiple targets simultaneously with no reduction in efficacy. The European Red List indicates that the decline in biodiversity has been most rapid in the past 50 years, and the risk of extinction has never been higher. Two examples of major threats dogs are trained to tackle are: (I) Japanese Knotweed (Fallopia japonica), which is not only a serious threat to ecosystems, crops, and structures like bridges and roads; it can wipe out the entire value of a house. The property industry and homeowners are only just waking up to the full extent of the nightmare. When road construction workers move topsoil containing a trace of Japanese Knotweed, that suffices to start a new colony. Japanese Knotweed grows up to 7 cm a day, and it can stay dormant and resprout after 20 years. In the UK, the cost of removing Japanese Knotweed from the London Olympic site in 2012 was around £70m (€83m). UK banks already no longer lend on a house that has Japanese Knotweed on site. Legally, landowners are now obliged to excavate Japanese Knotweed and have it removed to a landfill. More and more, we see Japanese Knotweed grow where a new house has been constructed and topsoil has been brought in. Conservation dogs are trained to detect small fragments of any part of the plant on sites and in topsoil. (II) Zebra mussels (Dreissena polymorpha) are a threat to many waterways in the world.
They colonize rivers, canals, docks, lakes, reservoirs, water pipes and cooling systems. They live up to 3 years and release up to one million eggs each year. Zebra mussels attach to surfaces like rocks, anchors, boat hulls, intake pipes and boat engines. They cause changes in nutrient cycles, reduction of plankton and increased plant growth around lake edges, leading to the decline of Europe's native mussel and fish populations. There is no solution, only costly measures to keep them at bay. With many interconnected networks of waterways, they have spread uncontrollably. Conservation detection dogs detect the Zebra mussel from its early larval stage, which is still invisible to the human eye. Detection dogs are more thorough and cost-effective than any other conservation method, and will greatly complement and speed up the work of biologists, surveyors, developers, ecologists and researchers.
Keywords: native biodiversity, conservation detection dogs, invasive species, Japanese Knotweed, zebra mussel
Procedia PDF Downloads 196
608 Enhancement of Radiosensitization by Aptamer 5TR1-Functionalized AgNCs for Triple-Negative Breast Cancer
Authors: Xuechun Kan, Dongdong Li, Fan Li, Peidang Liu
Abstract:
Triple-negative breast cancer (TNBC) is the most malignant subtype of breast cancer, with a poor prognosis, and radiotherapy is one of its main treatments. However, because tumor cells are markedly resistant to radiotherapy, high doses of ionizing radiation are required, which causes serious damage to normal tissues near the tumor. How to overcome radiotherapy resistance and enhance the specific killing of tumor cells by radiation is therefore a pressing clinical problem. Recent studies have shown that silver-based nanoparticles have strong radiosensitizing activity, and silver nanoclusters (AgNCs) also offer broad prospects for tumor-targeted radiosensitization therapy due to their ultra-small size, low or negligible toxicity, self-fluorescence and strong photostability. Aptamer 5TR1 is a 25-base oligonucleotide aptamer that specifically binds mucin-1, which is highly expressed on the membrane surface of TNBC 4T1 cells, and can serve as a highly efficient tumor-targeting molecule. In this study, AgNCs were synthesized on a DNA template based on the 5TR1 aptamer (NC-T5-5TR1), and their role as a targeted radiosensitizer in TNBC radiotherapy was investigated. The optimal DNA template was first screened by fluorescence emission spectroscopy, and NC-T5-5TR1 was prepared. NC-T5-5TR1 was characterized by transmission electron microscopy, ultraviolet-visible spectroscopy and dynamic light scattering. The inhibitory effect of NC-T5-5TR1 on cell activity was evaluated using the MTT method. Laser confocal microscopy was employed to observe NC-T5-5TR1 targeting 4T1 cells and to verify its self-fluorescence. The uptake of NC-T5-5TR1 by 4T1 cells was observed by dark-field imaging, and the uptake peak was determined by inductively coupled plasma mass spectrometry. The radiosensitizing effect of NC-T5-5TR1 was evaluated through cell cloning and in vivo anti-tumor experiments.
Annexin V-FITC/PI double-staining flow cytometry was used to detect the impact of the nanomaterials combined with radiotherapy on apoptosis. The results showed that the particle size of NC-T5-5TR1 is about 2 nm; UV-visible absorption spectroscopy verified its successful construction, and it showed good dispersion. NC-T5-5TR1 significantly inhibited the activity of 4T1 cells and effectively targeted and fluoresced within them. Uptake of NC-T5-5TR1 in the tumor area peaked at 3 h. Compared with AgNCs without aptamer modification, NC-T5-5TR1 exhibited superior radiosensitization, and combined radiotherapy significantly inhibited the activity of 4T1 cells and tumor growth in 4T1-bearing mice. The apoptosis level under NC-T5-5TR1 combined with radiation was significantly increased. These findings provide important theoretical and experimental support for NC-T5-5TR1 as a radiosensitizer for TNBC.
Keywords: 5TR1 aptamer, silver nanoclusters, radiosensitization, triple-negative breast cancer
Procedia PDF Downloads 60
607 Effect of Timing and Contributing Factors for Early Language Intervention in Toddlers with Repaired Cleft Lip and Palate
Authors: Pushpavathi M., Kavya V., Akshatha V.
Abstract:
Introduction: Cleft lip and palate (CLP) is a congenital condition which hinders effective communication due to associated speech and language difficulties. Expressive language delay (ELD) is a feature of this population, influenced by factors such as the type and severity of CLP, age at surgical and linguistic intervention, and the type and intensity of speech and language therapy (SLT). Since CLP is the most common congenital abnormality seen in Indian children, early intervention is a necessity and plays a critical role in enhancing speech and language skills. The interaction between the timing of intervention and the factors which contribute to effective intervention by caregivers is an area which needs to be explored. Objectives: The present study attempts to determine the effect of timing of intervention on the contributing maternal factors for effective linguistic intervention in toddlers with repaired CLP, with respect to the awareness, home training patterns, and speech and non-speech behaviors of the mothers. Participants: Thirty-six toddlers in the age range of 1 to 4 years, diagnosed with ELD secondary to repaired CLP, along with their mothers, served as participants. Group I (Early Intervention Group, EIG) included 19 mother-child pairs who came to seek SLT soon after corrective surgery, and group II (Delayed Intervention Group, DIG) included 16 mother-child pairs who received SLT after the age of 3 years. Further, the groups were divided into groups A and B. Group A received 60 sessions of SLT from a Speech-Language Pathologist (SLP), while group B received 30 sessions from an SLP and 30 sessions from the mother alone, without SLP supervision. Method: The mothers were enrolled in the Early Language Intervention Program, and following this, their awareness about CLP was assessed through the Parental Awareness Questionnaire. The quality of home training was assessed through Mohite's Inventory.
Subsequently, the speech and non-speech behaviors of the mothers were assessed using a Mother's Behavioral Checklist. Detailed counseling and orientation were provided to the mothers, and SLT was initiated for the toddlers. After 60 sessions of intensive SLT, the questionnaire and checklists were re-administered to find the changes in scores between the pre- and post-test measurements. Results: The scores obtained under the different domains of the awareness questionnaire, Mohite's Inventory and the Mother's Behavioral Checklist were tabulated and subjected to statistical analysis. Since the data did not follow a normal distribution (i.e., p < 0.05 on the test of normality), the Mann-Whitney U test was conducted, which revealed no significant difference between groups I and II or between groups A and B. Further, the Wilcoxon Signed Rank test revealed that mothers had better awareness regarding issues related to CLP and improved home-training abilities post-orientation (p ≤ 0.05). A statistically significant difference was also noted for speech and non-speech behaviors of the mothers (p ≤ 0.05). Conclusions: Extensive orientation and counseling helped mothers of both the EI and DI groups to improve their knowledge about CLP. Intensive SLT using focused stimulation and a parent-implemented approach enabled them to carry out the intervention effectively.
Keywords: awareness, cleft lip and palate, early language intervention program, home training, orientation, timing of intervention
Procedia PDF Downloads 122
606 Characterization of Extra Virgin Olive Oil from Olive Cultivars Grown in Pothwar, Pakistan
Authors: Abida Mariam, Anwaar Ahmed, Asif Ahmad, Muhammad Sheeraz Ahmad, Muhammad Akram Khan, Muhammad Mazahir
Abstract:
The plant olive (Olea europaea L.) is known for its commercial significance due to its nutritional and health benefits. Pakistan ranks 4th among olive oil importing countries, and 70% of its edible oil is imported to meet domestic needs. There exists great potential for Olea europaea cultivation in Pakistan. The popularity and cultivation of the olive fruit have increased in the recent past due to its high socio-economic and health significance. Almost no data exist on the chemical composition of extra virgin olive oil extracted from cultivars grown in Pothwar, an area with an arid climate conducive to the growth of olive trees. Keeping these factors in view, a study was conducted to characterize the olive oil extracted from olive cultivars collected from the Pothwar regions of Pakistan for their nutritional potential and value addition. Ten olive cultivars (Gemlik, Coratina, Sevillano, Manzanilla, Leccino, Koroneiki, Frantoio, Arbiquina, Earlik and Ottobratica) were collected from the Barani Agriculture Research Institute, Chakwal. Extra virgin olive oil (EVOO) was extracted by cold pressing and centrifuging of olive fruits. The highest oil yield was obtained from Coratina (23.9%), followed by Frantoio (23.7%), Koroneiki (22.8%), Sevillano (22%), Ottobratica (22%), Leccino (20.5%), Arbiquina (19.2%), Manzanilla (17.2%), Earlik (14.4%) and Gemlik (13.1%). The extracted virgin olive oil was studied for various physico-chemical properties and fatty acid profile.
The physical and chemical properties, i.e., characteristic odor and taste, light yellow color with no foreign matter, insoluble impurities (≤0.08), free fatty acid (0.1 to 0.8), acidity (0.5 to 1.6 mg/g acid), peroxide value (1.5 to 5.2 meqO2/kg), iodine value (82 to 90), saponification value (186 to 192 mg/g), unsaponifiable matter (4 to 8 g/kg) and ultraviolet spectrophotometric analysis (k232 and k270), showed values in the acceptable ranges established by PSQCA and IOOC for extra virgin olive oil. The olive oil was analyzed by near infra-red spectrophotometry (NIR) for fatty acids, which were found to be: palmitic, palmitoleic, stearic, oleic, linoleic and alpha-linolenic. The major fatty acid was oleic acid, in the highest percentage ranging from 55 to 66.1%, followed by linoleic (10.4 to 20.4%), palmitic (13.8 to 19.5%), stearic (3.9 to 4.4%), palmitoleic (0.3 to 1.7%) and alpha-linolenic (0.9 to 1.7%). Significant differences were found in the parameters analyzed across all ten cultivars, confirming that genetic factors are important contributors to the physico-chemical characteristics of the oil. The olive oil showed superior physical and chemical properties and is recommended as one of the healthiest forms of edible oil. This study will help consumers become more aware of, and make better choices among, the healthy oils available locally, thus contributing to their better health.
Keywords: characterization, extra virgin olive oil, oil yield, fatty acids
Procedia PDF Downloads 97
605 Developing Geriatric Oral Health Network is a Public Health Necessity for Older Adults
Authors: Maryam Tabrizi, Shahrzad Aarup
Abstract:
Objectives: To understand the close association between oral health and overall health for older adults and to deliver person-focused treatment at the right time and in the right place through Project ECHO telementoring. Methodology: Data from monthly ECHO telementoring sessions were collected over three years. Sessions included case presentations covering overall health conditions and considering medications, limitations of organ function, and level of cognition. Contributions: Providing specialist-level care to all elderly regardless of their location and other health conditions, and decreasing oral health inequity by increasing the workforce via the Project ECHO telementoring program worldwide. By 2030, the number of adults in the USA over the age of 65 will increase by more than 60% (approx. 46 million), and over 22 million (30%) of 74 million older Americans will need specialized geriatrician care. In 2025, the national shortage of medical geriatricians will be close to 27,000. Most individuals 65 and older do not receive oral health care due to lack of access, availability, or affordability. One of the main reasons is a significant shortage of oral health (OH) education and resources for the elderly, particularly in rural areas. Poor OH is a social stigma and a threat to the quality and safety of the overall health of the elderly with physical and cognitive decline. Poor OH conditions may be costly and sometimes life-threatening. Non-traumatic dental-related emergency department use in Texas alone cost over $250M in 2016. Most elderly over the age of 65 present with one or multiple chronic diseases such as arthritis, diabetes, heart disease and chronic obstructive pulmonary disease (COPD), are at higher risk of developing gum (periodontal) disease, and yet are less likely to get dental care. In addition, most older adults take both prescription and over-the-counter drugs; according to scientific studies, many of these medications cause dry mouth.
Reduced saliva flow due to aging and medications may increase the risk of cavities and other oral conditions. Most dental schools have already increased geriatric OH content in their curricula, but the aging population worldwide is growing faster than the number of geriatric dentists. Without the use of advanced technology and the creation of a network between specialists and primary care providers, it is impossible to increase the workforce and provide equitable oral health care to the elderly. Project ECHO is a guided-practice model that revolutionizes health education and increases the workforce to provide best-practice specialty care and reduce health disparities. Training oral health providers to use the Project ECHO model is a logical response to the shortage and increases oral health access for the elderly. Project ECHO trains general dentists and hygienists to provide specialty care services. This means more elderly can get the care they need, in the right place, at the right time, with better treatment outcomes and reduced costs.
Keywords: geriatric, oral health, Project ECHO, chronic disease
Procedia PDF Downloads 174
604 Benefits of The ALIAmide Palmitoyl-Glucosamine Co-Micronized with Curcumin for Osteoarthritis Pain: A Preclinical Study
Authors: Enrico Gugliandolo, Salvatore Cuzzocrea, Rosalia Crupi
Abstract:
Osteoarthritis (OA) is one of the most common chronic pain conditions in dogs and cats. OA pain is currently viewed as a mixed phenomenon involving both inflammatory and neuropathic mechanisms at the peripheral (joint) and central (spinal and supraspinal) levels. Oxidative stress has been implicated in OA pain. Although nonsteroidal anti-inflammatory drugs are commonly prescribed for OA pain, they should be used with caution in pets because of long-term adverse effects and controversial efficacy on neuropathic pain. An unmet need remains for safe and effective long-term treatments for OA pain. Palmitoyl-glucosamine (PGA) is an analogue of the ALIAmide palmitoylethanolamide, i.e., a body's own endocannabinoid-like compound playing a sentinel role in nociception. PGA, especially in the micronized formulation, has been shown to be safe and effective in OA pain. The aim of this study was to investigate the effect of a co-micronized formulation of PGA with the natural antioxidant curcumin (PGA-cur) on OA pain. Ten Sprague-Dawley male rats were used for each treatment group. The University of Messina Review Board for the care and use of animals authorized the study. On day 0, rats were anesthetized (5.0% isoflurane in 100% O2) and received an intra-articular injection of MIA (3 mg in 25 μl saline) in the right knee joint, with the left knee injected with an equal volume of saline. Starting on the third day after MIA injection, treatments were administered orally three times per week for 21 days at the following doses: PGA 20 mg/kg, curcumin 10 mg/kg, PGA-cur (2:1 ratio) 30 mg/kg. On day 0 and on days 3, 7, 14 and 21 post-injection, mechanical allodynia was measured using a dynamic plantar von Frey hair aesthesiometer and expressed as paw withdrawal threshold (PWT) and latency (PWL). Motor functional recovery of the rear limb was evaluated at the same time points by walking track analysis using the sciatic functional index.
On day 21 post-MIA injection, the concentrations of the following inflammatory and nociceptive mediators were measured in serum using commercial ELISA kits: tumor necrosis factor alpha (TNF-α), interleukin-1 beta (IL-1β), nerve growth factor (NGF) and matrix metalloproteinases 1, 3 and 9 (MMP-1, MMP-3, MMP-9). The results were analyzed by ANOVA followed by the Bonferroni post-hoc test for multiple comparisons. Micronized PGA reduced neuropathic pain, as shown by significantly higher PWT and PWL values compared to the vehicle group (p < 0.0001 for all the evaluated time points). The effect of PGA-cur was superior at all time points (p < 0.005). PGA-cur restored motor function as early as day 14 (p < 0.005), while micronized PGA was effective a week later (day 21). The MIA-induced increase in the serum levels of all the investigated mediators was inhibited by PGA-cur (p < 0.01). PGA was also effective, except on IL-1β and MMP-3. Curcumin alone was inactive in all the experiments at any time point. These encouraging results suggest that PGA-cur may represent a valuable option in OA pain management and warrant further confirmation in well-powered clinical trials.
Keywords: ALIAmides, curcumin, osteoarthritis, palmitoyl-glucosamine
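The Bonferroni post-hoc correction mentioned above can be sketched in a few lines: with m comparisons, each raw p-value is tested against alpha/m (equivalently, multiplied by m and capped at 1). The p-values below are illustrative only, not the study's data.

```python
# Bonferroni correction sketch: control the family-wise error rate across
# m comparisons by testing each raw p-value against alpha / m.

def bonferroni(p_values, alpha=0.05):
    """Return Bonferroni-adjusted p-values and per-comparison significance."""
    m = len(p_values)
    adjusted = [min(1.0, p * m) for p in p_values]      # p * m, capped at 1
    significant = [p < alpha / m for p in p_values]     # compare to alpha / m
    return adjusted, significant

raw = [0.001, 0.012, 0.030]      # three hypothetical pairwise comparisons
adj, sig = bonferroni(raw)       # with alpha/m = 0.0167, the last is not significant
```

This is why a raw p of 0.03, nominally significant at alpha = 0.05, fails after correction when three comparisons are made.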
Procedia PDF Downloads 115
603 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration
Authors: Matthew Yeager, Christopher Willy, John Bischoff
Abstract:
The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements oftentimes fail to consider the full array of feasible systems or product designs for a variety of reasons, including, but not limited to: initial conceptualization that oftentimes incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally-, but not globally-, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk of suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (i.e., sensors, CPUs, modular/auxiliary access, etc.) as well as recognition, data fusion and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs.
Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas previous work focused on aerospace systems and was conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g., hardware components) as well as popular multi-sensor data fusion models and techniques. Furthermore, adding statistical performance features to this model-based MATE approach will enable non-deterministic treatment of various commercial systems that range in application, complexity and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design
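The core MATE computation, scoring each candidate design by a weighted sum of single-attribute utilities, can be sketched as follows. The attribute names, weights, utility ranges and design values below are all hypothetical illustrations, not the authors' model.

```python
# Minimal multi-attribute tradespace sketch: each candidate sensor-system
# design is scored by a weighted sum of single-attribute utilities in [0, 1].
# All attributes, weights and design values here are hypothetical.

def linear_utility(value, worst, best):
    """Map a raw attribute value onto [0, 1] between worst and best levels."""
    u = (value - worst) / (best - worst)
    return max(0.0, min(1.0, u))

def mau(design, weights, ranges):
    """Multi-attribute utility: weighted sum of single-attribute utilities."""
    return sum(w * linear_utility(design[a], *ranges[a]) for a, w in weights.items())

# Hypothetical attributes: detection accuracy (higher better), cost in k$ (lower better).
weights = {"accuracy": 0.7, "cost": 0.3}
ranges = {"accuracy": (0.80, 0.99), "cost": (50.0, 10.0)}   # (worst, best)

designs = {
    "A": {"accuracy": 0.95, "cost": 30.0},
    "B": {"accuracy": 0.88, "cost": 15.0},
}
scores = {name: mau(d, weights, ranges) for name, d in designs.items()}
best = max(scores, key=scores.get)
```

A non-deterministic extension of the kind the abstract proposes would replace the fixed design values with distributions and score each design by, e.g., its expected utility over Monte Carlo draws.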
Procedia PDF Downloads 183
602 Impact of Customer Experience Quality on Loyalty of Mobile and Fixed Broadband Services: Case Study of Telecom Egypt Group
Authors: Nawal Alawad, Passent Ibrahim Tantawi, Mohamed Abdel Salam Ragheb
Abstract:
Providing customers with quality experiences has been confirmed to be a sustainable, competitive advantage with a distinct financial impact for companies. The success of service providers now relies on their ability to provide customer-centric services. The importance of perceived service quality and customer experience is widely recognized. The focus of this research is in the area of mobile and fixed broadband services. This study is of dual importance, both academically and practically. Academically, this research applies a new model investigating the impact of customer experience quality on loyalty, based on modifying the multiple-item scale for measuring customers' service experience and applying it in a new area rather than depending on traditional models. The integrated scale embraces four dimensions: service experience, outcome focus, moments of truth and peace of mind. In addition, it gives a scientific explanation for this relationship, filling a gap: no previous work has correlated or explained these relations using such an integrated model, and this is the first time this modified, integrated model has been applied in the telecom field. Practically, this research gives insights to marketers and practitioners on improving customer loyalty by evolving the experience quality of broadband customers, which translates into suggested outcomes: purchase, commitment, repeat purchase and word-of-mouth; this approach is one of the emerging topics in service marketing. Data were collected through 412 questionnaires and analyzed using structural equation modeling. Findings revealed that both outcome focus and moments of truth have a significant impact on loyalty, while both service experience and peace of mind have an insignificant impact on loyalty. In addition, it was found that 72% of the variation occurring in loyalty is explained by the model. The researcher also measured the net promoter score and gave an explanation for the results.
Furthermore, customers' priorities for broadband services were assessed. The researcher recommends that the findings of this research be considered in the future plans of Telecom Egypt Group. The model can also be applied in the same industry, especially in developing countries with similar circumstances and service settings. This research is a positive contribution to service marketing, particularly in the telecom industry, making marketing more reliable as managers can relate investments in service experience directly with the performance outcomes closest to income, for instance repurchase behavior, positive word of mouth and commitment. Finally, the researcher recommends that future studies consider this model to explain significant marketing outcomes such as share of wallet and, ultimately, profitability.
Keywords: broadband services, customer experience quality, loyalty, net promoter score
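The net promoter score mentioned above follows a standard formula: on a 0-10 "would you recommend?" scale, promoters rate 9-10, detractors 0-6, and NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch, with ratings that are illustrative rather than the study's survey data:

```python
# Net promoter score sketch: NPS = %promoters (9-10) - %detractors (0-6)
# on a 0-10 recommendation scale. Passives (7-8) count only in the denominator.

def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

ratings = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]   # hypothetical survey answers
nps = net_promoter_score(ratings)            # 4 promoters, 3 detractors, n = 10
```

NPS can range from -100 (all detractors) to +100 (all promoters), which is why it is reported as a score rather than a percentage.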
Procedia PDF Downloads 266
601 Effect of Juvenile Hormone on Respiratory Metabolism during Non-Diapausing Sesamia cretica Wandering Larvae (Lepidoptera: Noctuidae)
Authors: E. A. Abdel-Hakim
Abstract:
The corn stemborer Sesamia cretica (Lederer) has been viewed in many parts of the world as a major pest of cultivated maize, graminaceous crops and sugarcane. Its life cycle comprises two different phases: one is the growth and developmental (non-diapause) phase, and the other is the diapause phase, which takes place at the last larval instar. Several problems associated with the use of conventional insecticides have strongly demonstrated the need for alternative safe compounds. Prominent among the prototypes of such prospective chemicals are the juvenoids, i.e., the insect juvenile hormone (JH) mimics. In fact, the hormonal effect on metabolism has long been viewed as a secondary consequence of its direct action on specific energy-requiring biosynthetic mechanisms. Therefore, the present study was undertaken in a systematic fashion as a contribution towards clarifying the metabolic and energetic changes taking place in non-diapause wandering larvae as regulated by a JH mimic. For this purpose, we topically applied the JH mimic (Ro 11-0111) at one of two single doses, a standard dose of 100 µg or a dose of 20 µg/g bw, in 1 µl acetone at the onset of the non-diapause wandering larval (WL) stage. Energetic data were obtained by indirect calorimetry, converting volumetric respiratory gas exchange data, measured manometrically using a Warburg constant-volume respirometer, into caloric units (g-cal/g fw/h). In brief, the treated larvae underwent supernumerary larval moults, demonstrating that the wandering larvae retain the potential to restore larval programming, enabling S. cretica to overcome stresses even at this critical developmental period. The results obtained, particularly with the high dose, show that 98% of wandering larvae were rescued, surviving up to one month (vs. 5 days for normal controls) and finally forming larval-adult intermediates.
Also, the solvent controls resulted in about 22% additional, but stationary, moultings. The basal respiratory metabolism (O2 uptake and CO2 output) of the WL, whether treated or not, followed reciprocal U-shaped curves throughout their developmental duration. The lowest points occurred near the day of prepupal formation (571±187 µl O2/g fw/h and 553±181 µl CO2/g fw/h) in un-treated larvae, in contrast to larvae treated with JH (210±48 µl O2/g fw/h and 335±81 µl CO2/g fw/h). Un-treated (normal) larvae proved to utilize carbohydrates as the principal source of energy supply, being fully oxidised without sparing any appreciable amount for endergonic conversion to fats. In the juvenoid-treated larvae, compared with their acetone-treated control equivalents, there were no distinguishable differences: both utilised carbohydrates as the sole source of energy and converted almost identical percentages endergonically to fats. In the overall profile, treated and un-treated WL utilized carbohydrates as the principal source of energy demand during this stage.
Keywords: juvenile hormone, respiratory metabolism, Sesamia cretica, wandering phase
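The indirect-calorimetry conversion described above, from volumetric gas exchange to caloric units, rests on the respiratory quotient RQ = VCO2/VO2 and a caloric equivalent of oxygen. A minimal sketch using the abstract's un-treated prepupal figures; the linear interpolation between Lusk's standard values (4.686 kcal/L O2 at RQ 0.707, 5.047 kcal/L O2 at RQ 1.0) is a textbook assumption, as the authors' exact constants are not stated.

```python
# Indirect calorimetry sketch: RQ = VCO2 / VO2, then heat output from the
# caloric equivalent of O2, interpolated linearly in RQ (Lusk's table).
# Constants are standard textbook values, not taken from the abstract.

def respiratory_quotient(vco2, vo2):
    return vco2 / vo2

def caloric_equivalent(rq):
    """kcal released per litre of O2 consumed, linear between RQ 0.707 and 1.0."""
    rq = max(0.707, min(1.0, rq))
    return 4.686 + (rq - 0.707) * (5.047 - 4.686) / (1.0 - 0.707)

# Un-treated prepupal low point from the abstract: 571 ul O2 and 553 ul CO2
# per g fresh weight per hour.
vo2_ul, vco2_ul = 571.0, 553.0
rq = respiratory_quotient(vco2_ul, vo2_ul)            # ~0.97, carbohydrate-leaning
# kcal/L O2 is numerically equal to g-cal/ml O2, so convert ul -> ml:
heat_gcal = (vo2_ul / 1000.0) * caloric_equivalent(rq)  # g-cal/g fw/h
```

An RQ near 1.0, as here, is the signature of carbohydrate oxidation reported for the un-treated larvae; net fat synthesis from carbohydrate can push RQ above 1.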
Procedia PDF Downloads 293
600 Howard Mold Count of Tomato Pulp Commercialized in the State of São Paulo, Brazil
Authors: M. B. Atui, A. M. Silva, M. A. M. Marciano, M. I. Fioravanti, V. A. Franco, L. B. Chasin, A. R. Ferreira, M. D. Nogueira
Abstract:
Fungi attack large quantities of fruit, and fruits that have suffered a surface injury are more susceptible to fungal growth, as fungi have pectinolytic enzymes that destroy the edible portion, forming an amorphous, soft mass. The spores can reach the plant via wind, rain and insects, and the fruit may carry on its surface, besides contaminants from the fruit trees, land and water, a flora composed mainly of yeasts and molds. Further contamination can come from harvesting equipment, from contaminated boxes and washing water, and from storage in dirty places. Hyphae in tomato products indicate the use of contaminated raw materials or unsuitable hygiene conditions during processing. Although fungi are inactivated in the heat processing step, their hyphae remain in the final product, and their detection and quantification is an indicator of raw material quality. The Howard method, which counts fungal mycelia in industrialized pulps, evaluates the amount of decayed fruit in the raw material. The Brazilian legislation governing processed and packaged products sets the limit at 40% positive fields in tomato pulp. The aim of this study was to evaluate the quality of the tomato pulp sold in greater São Paulo through monitoring across the four seasons of the year. Throughout 2010, 110 samples were examined: 21 taken in spring, 31 in summer, 31 in fall and 27 in winter, all from different lots and trademarks. Samples were purchased in several stores located in the city of São Paulo. The Howard method was used, as recommended by the AOAC (19th ed., 2011, 16.19.02, method 965.41). One hundred percent of the samples contained fungal mycelia. The average mycelial count per season was 23%, 28%, 8.2% and 9.9% in spring, summer, fall and winter, respectively. Of the 21 spring samples analyzed, 14.3% exceeded the limit proposed by the legislation.
All fall and winter samples complied with the legislation, and the average mycelial filament count did not exceed 20%, which can be explained by the low temperatures at that time of year. The spring and summer samples showed a high percentage of fungal mycelium in the final product, related to the high temperatures in these seasons. Considering that the limit of 40% positive fields is accepted by the Brazilian legislation (RDC nº 14/2014), 3 spring samples (14%) and 6 summer samples (19%) were over this limit and subject to legal penalties. According to the gathered data, 82% of manufacturers of this product manage to keep acceptable levels of fungal mycelia in their product. In conclusion, only 9.2% of samples exceeded the limits established by Resolution RDC 14/2014, showing that the 40% limit is feasible and can be used by this industry segment. The mycelial filament count by the Howard method is an important microscopic analysis tool, since it measures the quality of the raw material used in the production of tomato products.
Keywords: fungi, Howard method, tomato pulp
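The Howard result itself is a simple proportion: each microscope field is scored positive or negative for mold hyphae, and the count is the percentage of positive fields, compared against the 40% limit. A minimal sketch, with illustrative field readings rather than the survey's raw data:

```python
# Howard mold count sketch: score a set of microscope fields as positive
# (mold hyphae present above the method's threshold) or negative; the
# result is the percentage of positive fields. The 40% limit is the one
# cited in the abstract (RDC 14/2014); the readings below are made up.

BRAZIL_LIMIT_PCT = 40.0

def howard_mold_count(field_positive_flags):
    """Percentage of positive microscope fields."""
    positives = sum(field_positive_flags)
    return 100.0 * positives / len(field_positive_flags)

# 25 hypothetical field readings (True = mycelium present above threshold).
fields = [True] * 6 + [False] * 19
pct = howard_mold_count(fields)        # 6 of 25 fields positive -> 24.0%
compliant = pct <= BRAZIL_LIMIT_PCT    # within the 40% limit
```

In practice, analysts read a prescribed number of fields per sample and the thresholds for scoring a field positive are fixed by the official method, not chosen by the analyst.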
Procedia PDF Downloads 374
599 Analysis of the Relationship between Micro-Regional Human Development and Brazil's Greenhouse Gases Emission
Authors: Geanderson Eduardo Ambrósio, Dênis Antônio Da Cunha, Marcel Viana Pires
Abstract:
Historically, human development has been based on economic gains associated with energy-intensive activities, which are often exhaustive in the emission of Greenhouse Gases (GHGs). This requires the establishment of GHG mitigation targets in order to decouple human development from emissions and prevent further climate change. Brazil is one of the largest GHG emitters, and it is of critical importance to discuss such reductions in an intra-national framework with the objective of distributional equity, exploring its full mitigation potential without compromising the development of less developed societies. This research presents some initial considerations about which of Brazil's micro-regions should reduce emissions, when the reductions should begin and what their magnitude should be. We started with the methodological assumption that human development and GHG emissions will evolve in the future as their behavior was observed in the past. Furthermore, we assume that once a micro-region becomes developed, it is able to maintain gains in human development without the need to keep growing its GHG emission rates. The human development index and carbon dioxide equivalent (CO2e) emissions were extrapolated to the year 2050, which allowed us to calculate when the micro-regions will become developed and the mass of GHGs emitted. The results indicate that Brazil will emit 300 Gt CO2e into the atmosphere between 2011 and 2050, of which only 50 Gt will be emitted by micro-regions before they develop and 250 Gt will be released after development. We also determined national mitigation targets and structured reduction schemes in which only the developed micro-regions would be required to reduce. The micro-region of São Paulo, the most developed in the country, should also be the one that reduces emissions the most, emitting, in 2050, 90% less than the value observed in 2010.
On the other hand, less developed micro-regions will be responsible for less impactful reductions; i.e., Vale do Ipanema will emit in 2050 only 10% below the value observed in 2010. Under this methodological assumption, the country would emit, in 2050, 56.5% less than observed in 2010, so that cumulative emissions between 2011 and 2050 would fall by 130 Gt CO2e relative to the initial projection. Tying the magnitude of the reductions to the micro-regions' level of human development encourages the adoption of policies that favor both variables, as the governmental planner will have to deal both with the increasing demand for higher standards of living and with the increasing magnitude of emission reductions. However, if economic agents do not act proactively at the local and national levels, the country is closer to the scenario in which it emits more than to the one in which it mitigates emissions. The research highlighted the importance of considering heterogeneity when determining individual mitigation targets, and also ratified the theoretical and methodological feasibility of allocating a larger share of the contribution to those who historically emitted more. The proposals and discussions presented should be considered in mitigation policy formulation in Brazil regardless of the adopted reduction target.
Keywords: greenhouse gases, human development, mitigation, intensive energy activities
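The distributional idea above, the most developed micro-region cutting 90% and the least developed only 10%, can be sketched as a linear map from development level to reduction share. The HDI values and 2010 emission figures below are placeholders for illustration, not the study's data; only the 10%-90% endpoints and the São Paulo / Vale do Ipanema extremes come from the abstract.

```python
# Sketch of HDI-scaled mitigation targets: each micro-region's 2050 cut
# (relative to its 2010 emissions) scales linearly with its human
# development level, from 10% (least developed) to 90% (most developed).
# HDI and emission numbers below are hypothetical placeholders.

def reduction_target(hdi, hdi_min, hdi_max, low=0.10, high=0.90):
    """Linear map of HDI onto a fractional emission cut in [low, high]."""
    share = (hdi - hdi_min) / (hdi_max - hdi_min)
    return low + share * (high - low)

regions = {                       # hypothetical (HDI, 2010 emissions in Mt CO2e)
    "Sao Paulo": (0.85, 100.0),
    "Mid region": (0.70, 40.0),
    "Vale do Ipanema": (0.55, 10.0),
}
hdis = [h for h, _ in regions.values()]
lo, hi = min(hdis), max(hdis)
targets_2050 = {                  # allowed 2050 emissions after the cut
    name: e2010 * (1.0 - reduction_target(h, lo, hi))
    for name, (h, e2010) in regions.items()
}
```

With these placeholder inputs, São Paulo's 2050 allowance is 10% of its 2010 emissions and Vale do Ipanema's is 90%, mirroring the 90%/10% cuts described in the abstract.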
Procedia PDF Downloads 320