Search results for: self stability concept
1028 Predicting Success and Failure in Drug Development Using Text Analysis
Authors: Zhi Hao Chow, Cian Mulligan, Jack Walsh, Antonio Garzon Vico, Dimitar Krastev
Abstract:
Drug development is resource-intensive, time-consuming, and increasingly expensive with each developmental stage. Success rates in drug development are also relatively low, and the resources committed are wasted with each failed candidate. As such, a reliable method of predicting the success of drug development is in demand. The hypothesis was that some failed drug candidates are pushed through developmental pipelines on false confidence and may share common linguistic features identifiable through sentiment analysis. Here, the concept of using text analysis of research publications and investor reports to discover such features as predictors of success was explored. RStudio was used to perform text mining and lexicon-based sentiment analysis to identify affective phrases and determine their frequency in each document; SPSS was then used to determine the relationship between the defined variables and the accuracy of outcome prediction. A total of 161 publications were collected and categorised into 4 groups: (i) cancer treatments, (ii) neurodegenerative disease treatments, (iii) vaccines, and (iv) others (all drugs that do not fit the first 3 categories). Text analysis was performed on each document in R using 2 separate lexicons (BING and AFINN), within each drug category, to determine the frequency of positive and negative phrases in each document. Relative positivity and negativity values were then calculated by dividing the phrase frequencies by the word count of each document. Regression analysis was then performed with SPSS statistical software on each dataset (values from the BING or AFINN lexicon) using a random selection of 61 documents to construct a model. The remaining documents were used to determine the predictive power of the models. The model constructed from BING predicted the outcome of drug performance in clinical trials with an overall accuracy of 65.3%.
The AFINN model had a lower overall accuracy (62.5%) than the BING model and was not effective at predicting the failure of drugs in clinical trials. Overall, the study did not show significant efficacy of the models at predicting the outcomes of drugs in development; many improvements will be needed in later iterations to sufficiently increase accuracy.
Keywords: data analysis, drug development, sentiment analysis, text-mining
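The relative positivity/negativity measure described in the abstract (lexicon hits divided by document word count) can be sketched as follows; this is a minimal illustration using a tiny hypothetical lexicon, not the actual BING or AFINN word lists:

```python
# Minimal sketch of lexicon-based scoring as described in the abstract:
# count positive/negative lexicon hits and normalize by document word count.
# The tiny lexicon here is hypothetical; the study used the BING and AFINN lists.
POSITIVE = {"promising", "significant", "effective", "robust"}
NEGATIVE = {"failed", "adverse", "inconclusive", "toxic"}

def relative_sentiment(text: str) -> tuple[float, float]:
    words = text.lower().split()
    pos = sum(w.strip(".,;:") in POSITIVE for w in words)
    neg = sum(w.strip(".,;:") in NEGATIVE for w in words)
    n = len(words)
    return pos / n, neg / n  # relative positivity, relative negativity

doc = "The candidate showed promising and significant efficacy; no adverse events."
rel_pos, rel_neg = relative_sentiment(doc)
```

These per-document values would then serve as the regressors in the SPSS models described above.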
Procedia PDF Downloads 156
1027 An Optimal Hybrid EMS System for a Hyperloop Prototype Vehicle
Authors: J. F. Gonzalez-Rojo, Federico Lluesma-Rodriguez, Temoatzin Gonzalez
Abstract:
Hyperloop, a new mode of transport, is gaining significance. It consists of a ground-based transport system with a levitation system, which avoids rolling friction forces, enclosed in a tube whose inner atmosphere is controlled to lower the aerodynamic drag forces. Hyperloop is thus proposed as a solution to the current limitations of ground transportation: the rolling and aerodynamic problems that limit the speed of traditional high-speed rail, and even of maglev systems, are overcome by a hyperloop solution. Zeleros is one of the companies developing technology for hyperloop application worldwide. It is working on a concept that reduces infrastructure cost and minimizes power consumption as well as the losses associated with magnetic drag forces. For this purpose, Zeleros proposes a hybrid electromagnetic suspension (EMS) for its prototype. In the present manuscript, an active and optimal electromagnetic suspension levitation method based on individual modules with nearly zero power consumption is presented. The system consists of several hybrid permanent magnet-coil levitation units that can be arranged along the vehicle. The proposed unit redirects the magnetic field along a defined direction, forming a magnetic circuit and minimizing the losses due to field dispersion. This is achieved using an electrical steel core. Each module can stabilize the gap distance using the coil current and either linear or non-linear control methods. The ratio between weight and levitation force for each unit is 1/10. In addition, the quotient between the lifted weight and the power consumption at the target gap distance is 1/3 [kg/W]. One degree of freedom (DoF) (along the gap direction) is controlled by a single unit. However, when several units are present, 5-DoF control (2 translational and 3 rotational) can be achieved, leading to full attitude control of the vehicle.
The proposed system has been successfully tested in a laboratory test bench, reaching TRL-4, and is currently at TRL-5 if the association of modules to control 5 DoF is considered.
Keywords: active optimal control, electromagnetic levitation, HEMS, high-speed transport, hyperloop
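The two figures of merit quoted above (unit weight to levitation force of 1/10, and lifted weight to power consumption of 1/3 kg/W, i.e. 3 W per kilogram lifted) imply simple sizing estimates; the vehicle mass below is a hypothetical illustration value, not a Zeleros specification:

```python
# Sizing sketch from the stated ratios: each module lifts 10x its own weight,
# and each kilogram of lifted weight costs 3 W at the target gap distance.
# The 1000 kg vehicle mass is a hypothetical illustration value.
WEIGHT_TO_FORCE = 1 / 10  # module weight / levitated weight
WATT_PER_KG = 3.0         # inverse of the 1/3 kg/W quotient

vehicle_mass_kg = 1000.0
actuator_mass_kg = vehicle_mass_kg * WEIGHT_TO_FORCE  # mass of levitation modules
power_w = vehicle_mass_kg * WATT_PER_KG               # hover power at target gap
```

So under these assumptions a 1000 kg vehicle would carry roughly 100 kg of levitation modules and draw about 3 kW to hover.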
Procedia PDF Downloads 144
1026 Acrylic Microspheres-Based Microbial Bio-Optode for Nitrite Ion Detection
Authors: Siti Nur Syazni Mohd Zuki, Tan Ling Ling, Nina Suhaity Azmi, Chong Kwok Feng, Lee Yook Heng
Abstract:
Nitrite (NO2-) ion is used prevalently as a preservative in processed meat. Elevated levels of nitrite are also found in edible bird's nests (EBNs). Consumption of NO2- ion at levels above the health-based risk threshold may cause cancer in humans. The spectrophotometric Griess test is the simplest established standard method for NO2- ion detection; however, it requires careful control of the pH at each reaction step and is susceptible to interference from strong oxidants and dyes. Other traditional methods rely on laboratory-scale instruments such as GC-MS, HPLC, and ion chromatography, which cannot give a real-time response. There is therefore a significant need for devices capable of measuring nitrite concentration in situ and rapidly, without reagents, sample pre-treatment, or extraction steps. Herein, we constructed a microspheres-based microbial optode for the visual quantitation of NO2- ion. Raoultella planticola, a bacterium expressing NAD(P)H nitrite reductase (NiR), was successfully isolated by microbial techniques from EBN collected from a local birdhouse. The whole cells and the lipophilic Nile Blue chromoionophore were physically adsorbed on photocurable poly(n-butyl acrylate-N-acryloxysuccinimide) [poly(nBA-NAS)] microspheres, whilst the reduced coenzyme NAD(P)H was covalently immobilized on the succinimide-functionalized acrylic microspheres to produce a reagentless biosensing system. As the NiR enzyme catalyzes the oxidation of NAD(P)H to NAD(P)+, the NO2- ion is reduced to ammonium hydroxide, and a colour change of the immobilized Nile Blue chromoionophore from blue to pink is perceived as a result of the deprotonation reaction increasing the local pH in the microsphere membrane. The microspheres-based optosensor was optimized with a reflectance spectrophotometer at 639 nm and pH 8. The resulting microbial bio-optode membrane could quantify NO2- ion from 0.1 ppm and had a linear response up to 400 ppm.
Due to the large surface-area-to-mass ratio of the acrylic microspheres, efficient solid-state diffusional mass transfer of the substrate to the bio-recognition phase is possible, achieving a steady-state response in as little as 5 min. The proposed optical microbial biosensor requires no sample pre-treatment step and possesses high stability, as the whole-cell biocatalyst protects the enzymes from interfering substances; hence, it is suitable for measurements in contaminated samples.
Keywords: acrylic microspheres, microbial bio-optode, nitrite ion, reflectometric
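A reflectometric sensor of this kind is typically read out through a linear calibration over its working range (here 0.1-400 ppm): fit signal versus concentration for the standards, then invert the fit for unknowns. A minimal sketch with hypothetical calibration readings, not data from the study:

```python
# Linear calibration sketch for a reflectometric sensor: fit signal = a + b*c
# by ordinary least squares, then invert to read an unknown concentration.
# The calibration points below are hypothetical illustration values.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

conc_ppm = [0.1, 50, 100, 200, 400]      # nitrite standards
signal = [0.02, 10.0, 20.0, 40.0, 80.0]  # relative reflectance change
a, b = fit_line(conc_ppm, signal)
unknown_ppm = (15.0 - a) / b             # concentration giving signal 15.0
```
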
Procedia PDF Downloads 445
1025 Psychological Consultation of Married Couples at Various Stages of Formation of the Young Family
Authors: Gulden Aykinbaeva, Assem Umirzakova, Assel Makhadiyeva
Abstract:
The study of young married couples, in connection with changes in the social institutions of family and marriage, is highly relevant to family counselling, given the role the family plays in the development of modern society. Numerous studies indicate that the young-family period is one of the most difficult stages in the formation and stabilization of a marriage. This period is characterized by processes of integration, adaptation, and the development of emotional compatibility between spouses. During it, the young family experiences its first normative crisis, which leaves an imprint on the further development of the family scenario. The emergence of new, previously non-existent value systems strongly influences the formation of the young family and of each spouse individually. The family's developmental tasks can be met through a unified system of family relations in which socially mature persons, capable of viewing the family as their joint creation, act as its subjects. In line with the research objective, the following techniques were used: V. V. Stolin's marriage satisfaction questionnaire, A. N. Volkova's technique for detecting the coherence of family values and role expectations in a married couple, and content analysis. Developing an internal basis for the family through the mutual clarification of values is important when working with married couples. A 'mature view' of the partner in the marriage union provides coherence between the expected and actual behavior of the partner, which is important for achieving adaptation within the family. To examine the relationships among the data obtained with A. N. Volkova's and V. V. Stolin's techniques and the content analysis, correlation analysis using Spearman's criterion was applied. Analysis of the results of the study allowed us to identify a number of consistent patterns: 1.
The nature of changes in marital satisfaction indicates that the matrimonial relationship undergoes qualitative changes at different stages of the formation of a young family. 2. In the course of their development and functioning in a young marriage, matrimonial relations undergo considerable changes at the psychological and socio-psychological levels, and only insignificant changes at the psychophysiological and sociocultural levels. The material obtained allows us to plan further detailed research on the development of matrimonial relations, not only in young marriages but also at later stages of matrimony. We believe the results of this research can be practically applied in creating algorithms for the selection of marriage partners, in diagnosing the character and content of matrimonial disharmonies, and in forecasting the stability of marriage and family.
Keywords: married couples, formation of the young family, psychological consultation, matrimony
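Spearman's rank correlation, applied above, is computed by ranking both variables and taking Pearson's correlation on the ranks; a minimal self-contained sketch on made-up paired scores (not the study's data):

```python
# Spearman's rank correlation: rank each variable (average ranks for ties),
# then compute Pearson's correlation on the ranks.
# The paired scores below are made-up illustration values.
def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied group (1-based)
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

satisfaction = [12, 18, 25, 30, 41]  # e.g. marriage-satisfaction scores
coherence = [2, 3, 5, 4, 6]          # e.g. value-coherence scores
rho = spearman(satisfaction, coherence)
```
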
Procedia PDF Downloads 394
1024 Towards Sustainable African Urban Design Concepts
Authors: Gerald Steyn
Abstract:
Sub-Saharan Africa is the world's fastest-urbanizing region, but approximately 60 to 70 percent of urban African households are poor and living in slums. Although influential global institutions such as the World Bank propagate a new approach to housing and land policies, sustainable African urban concepts have yet to be applied significantly, or even convincingly conceptualized. Most African city planners, urban designers, architects, policymakers, and developers have been trained in Western curricula and continue to practice and plan according to such formal paradigms. Only a few activists promote post-colonial Afrocentric urbanism, recognizing the imperative of foregrounding the needs of low-income people. There is a vast body of authoritative literature analyzing poverty and slums in sub-Saharan Africa and promoting the need for land and city-planning reform; of the latter, however, only a few works venture beyond advising on, and occasionally outlining, policy changes. The current study moves beyond a purely theoretical discourse into the realm of practice by designing replicable diagrammatic concepts at different urban scales. The guiding philosophy was that land-use concepts and urban requirements favoring low-income households must be fully integrated into the larger conurbation. Information was derived from intensive research over two decades, involving literature surveys and observations during regular travels in East and Southern Africa. Appropriate existing urban patterns, particularly vernacular and informal ones, were then analyzed and reimagined as precedents to inform and underpin the design concepts presented. Five interrelated concepts are proposed, ranging in scale from (1) regions to (2) cities and (3) urban villages to (4) neighborhoods and (5) streets.
Each concept is described first in terms of its context and associated issues of concern, followed by a discussion of the patterns available to inform a possible solution, and finally an explanation and graphic illustration of the proposal. Since each of the five concepts is unfolded from existing informal and vernacular practices studied in situ, the approach is entirely bottom-up. Contrary to an idealized vision of the African city, this study proposes actual concepts for critical assessment by peers, in the tradition of architectural research by design.
Keywords: african urban concepts, post-colonial afrocentric urbanism, sub-saharan africa, sustainable african urban design
Procedia PDF Downloads 48
1023 Integration of a Microbial Electrolysis Cell and an Oxy-Combustion Boiler
Authors: Ruth Diego, Luis M. Romeo, Antonio Morán
Abstract:
In the present work, a study of the coupling of a bioelectrochemical system with an oxy-combustion boiler is carried out; specifically, it proposes connecting the flue-gas outlet of a boiler to a microbial electrolysis cell (MEC), where the CO2 in the gases is transformed into methane in the cathode chamber and the oxygen produced in the anode chamber is recirculated to the oxy-combustion boiler. The MEC mainly consists of two electrodes (anode and cathode) immersed in an aqueous electrolyte and separated by a proton exchange membrane (PEM). In this case, the anode is abiotic (where oxygen is produced), and it is at the cathode that an electroactive biofilm forms, with microorganisms that catalyze the CO2 reduction reactions. Real data from an oxy-combustion process in a boiler of around 20 MW thermal have been used for this study and are combined with data obtained at a smaller (laboratory-pilot) scale to determine the yields that could be obtained if the system is considered as environmentally sustainable energy storage. In this way, an attempt is made to integrate a relatively conventional energy production system (oxy-combustion) with a biological system (a microbial electrolysis cell), which is a challenge to be addressed in this type of new hybrid scheme. A novel concept is thus presented, along with the basic dimensioning of the necessary equipment and the efficiency of the overall process. It has been calculated that the efficiency of this power-to-gas system based on MEC cells, when coupled to industrial processes, is of the same order of magnitude as that of the most promising equivalent routes. The proposed process has two main limitations: the overpotentials at the electrodes, which penalize the overall efficiency, and the need for storage tanks for the process gases. The calculations carried out in this work show that acceptable performance is achieved at realistic electrode potentials.
Regarding the tanks, with adequate dimensioning, complete autonomy can be achieved. The proposed system, called OxyMES, provides energy storage without energetically penalizing the process compared to an oxy-combustion plant with conventional CO2 capture. According to the results obtained, the system could be applied as a measure to decarbonize an industry, switching the original fuel of the oxy-combustion boiler to the biogas generated in the MEC. It could also be used to neutralize CO2 emissions from industry by converting them to methane that is then injected into the natural gas grid.
Keywords: microbial electrolysis cells, oxy-combustion, CO2, power-to-gas
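The cathodic reaction underlying electromethanogenesis (CO2 + 8H+ + 8e- → CH4 + 2H2O) ties the methane output of a MEC to the electric charge passed, via Faraday's law; a back-of-the-envelope sketch with hypothetical current, duration, and efficiency values, not figures from the study:

```python
# Faraday's-law sketch for electromethanogenesis: CO2 + 8 H+ + 8 e- -> CH4 + 2 H2O,
# so every 8 faradays of charge can yield at most one mole of methane.
# Current, duration, and coulombic efficiency below are hypothetical values.
F = 96485.0             # C/mol, Faraday constant
ELECTRONS_PER_CH4 = 8

current_a = 10.0        # cell current
hours = 24.0
coulombic_eff = 0.8     # fraction of charge ending up in CH4

charge_c = current_a * hours * 3600
mol_ch4 = charge_c * coulombic_eff / (ELECTRONS_PER_CH4 * F)
liters_stp = mol_ch4 * 22.414  # ideal-gas volume at 0 degC, 1 atm
```

Scaling such per-cell yields against the boiler's flue-gas CO2 flow is what drives the dimensioning of the MEC stack and the storage tanks discussed above.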
Procedia PDF Downloads 105
1022 Functional Neurocognitive Imaging (fNCI): A Diagnostic Tool for Assessing Concussion Neuromarker Abnormalities and Treating Post-Concussion Syndrome in Mild Traumatic Brain Injury Patients
Authors: Parker Murray, Marci Johnson, Tyson S. Burnham, Alina K. Fong, Mark D. Allen, Bruce McIff
Abstract:
Purpose: Pathological dysregulation of neurovascular coupling (NVC) caused by mild traumatic brain injury (mTBI) is the predominant source of chronic post-concussion syndrome (PCS) symptomology. fNCI can localize dysregulation in NVC by measuring blood-oxygen-level-dependent (BOLD) signaling during the performance of fMRI-adapted neuropsychological evaluations. With fNCI, 57 brain areas consistently affected by concussion were identified as PCS neural markers, which were validated on large samples of concussion patients and healthy controls. These neuromarkers provide the basis for a computation of PCS severity referred to as the Severity Index Score (SIS). The SIS has proven valuable in making pre-treatment decisions, monitoring treatment efficiency, and assessing the long-term stability of outcomes. Methods and Materials: After being scanned while performing various cognitive tasks, 476 concussed patients received an SIS score based on the neural dysregulation of the 57 previously identified brain regions. These scans provide an objective measurement of attentional, subcortical, visual-processing, language-processing, and executive-functioning abilities, which were used as biomarkers for post-concussive neural dysregulation. Initial SIS scores were used to develop individualized therapy incorporating cognitive, occupational, and neuromuscular modalities. These scores were also used to establish pre-treatment benchmarks and measure post-treatment improvement. Results: Changes in SIS were calculated as percent change from pre- to post-treatment. Patients showed a mean improvement of 76.5 percent (σ = 23.3), and 75.7 percent of patients showed at least 60 percent improvement. Longitudinal reassessment of 24 of the patients, performed an average of 7.6 months post-treatment, shows that the SIS improvement is maintained or extended, with an average improvement of 90.6 percent from the original scan.
Conclusions: fNCI provides a reliable measurement of NVC, allowing for the identification of concussion pathology. Additionally, fNCI-derived SIS scores direct tailored therapy to restore NVC, subsequently resolving the chronic PCS resulting from mTBI.
Keywords: concussion, functional magnetic resonance imaging (fMRI), neurovascular coupling (NVC), post-concussion syndrome (PCS)
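The percent-change outcome reported above is a simple pre/post computation; the sketch below assumes, purely for illustration (the abstract does not state the scale's direction), that a lower SIS means less neural dysregulation:

```python
# Percent improvement from pre- to post-treatment severity scores.
# Assumption for this sketch (not stated in the abstract): lower SIS = less
# dysregulation, so improvement is the fractional drop from the pre score.
def percent_improvement(sis_pre: float, sis_post: float) -> float:
    return (sis_pre - sis_post) / sis_pre * 100.0

# Hypothetical patient: severity 40 before therapy, 10 after.
improvement = percent_improvement(40.0, 10.0)  # 75.0 percent
```
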
Procedia PDF Downloads 353
1021 Using Daily Light Integral Concept to Construct the Ecological Plant Design Strategy of Urban Landscape
Authors: Chuang-Hung Lin, Cheng-Yuan Hsu, Jia-Yan Lin
Abstract:
It is an indispensable strategy to adopt a greenery approach on architectural sites so as to improve ecological habitats, decrease the heat-island effect, purify air quality, and relieve surface runoff as well as noise pollution, all in an attempt to achieve a sustainable environment. Whether plant design attains the best visual quality and ideal carbon dioxide fixation depends on whether greenery is used appropriately according to the nature of the architectural site. To achieve this goal, architects and landscape architects need to be provided with sufficient local references. Current greenery studies focus mainly on the urban heat-island effect at a large scale; at the microclimate scale, most architects still rely on practitioners with years of expertise for the selection and disposition of planting. Therefore, environmental design, which integrates science and aesthetics, requires fundamental research on landscape environment technology as distinct from building environment technology. By doing so, we can create mutual benefits between green buildings and the environment. This issue is extremely important for the greening design of green-building sites in cities and of various open spaces. The purpose of this study is to establish plant selection and allocation strategies under different levels of building shading. Initially, with the shading of sunshine on the greened sites as the starting point, the effects of the shade produced by different building types on greening strategies were analyzed. Then, by measuring the PAR (photosynthetically active radiation), the relative DLI (daily light integral) was calculated, and a DLI map was established in order to evaluate the effects of building shading on the established environmental greening, thereby serving as a reference for plant selection and allocation.
The results were applied in evaluating the environmental greening of green buildings and in establishing a 'right plant, right place' design strategy of multi-level ecological greening for use in urban design and landscape development, as well as greening criteria to feed back into eco-city green buildings.
Keywords: daily light integral, plant design, urban open space
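Deriving DLI from PAR measurements follows a standard conversion: integrate the photosynthetic photon flux density (in µmol/m²/s) over the photoperiod and convert micromoles to moles; a minimal sketch with an illustrative reading:

```python
# DLI (mol/m2/day) from an average PPFD reading (umol/m2/s):
# multiply by the photoperiod in seconds, then convert umol -> mol.
# The 500 umol/m2/s over 12 h reading is an illustration value.
def daily_light_integral(ppfd_umol: float, photoperiod_h: float) -> float:
    return ppfd_umol * photoperiod_h * 3600 / 1_000_000

dli = daily_light_integral(500, 12)  # 21.6 mol/m2/day
```

Repeating this per grid point over a site yields the DLI map described above, from which shade-tolerant or sun-demanding species can be assigned to each zone.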
Procedia PDF Downloads 508
1020 Simscape Library for Large-Signal Physical Network Modeling of Inertial Microelectromechanical Devices
Authors: S. Srinivasan, E. Cretu
Abstract:
The information-flow (e.g., block-diagram or signal-flow-graph) paradigm for the design and simulation of microelectromechanical systems (MEMS)-based systems makes it easy to model MEMS devices using causal transfer functions and to interface them with electronic subsystems for fast system-level exploration of design alternatives and optimization. Nevertheless, the physical bi-directional coupling between different energy domains is not easily captured in causal signal-flow modeling. Moreover, models of fundamental components acting as building blocks (e.g., gap-varying MEMS capacitor structures) depend not only on the component but also on the specific excitation mode (e.g., voltage or charge actuation). In contrast, the energy-flow modeling paradigm, expressed in terms of generalized across-through variables, offers an acausal perspective that clearly separates the physical model from the boundary conditions. This promotes reusability and the use of primitive physical models for assembling MEMS devices from primitive structures, based on the interconnection topology of generalized circuits. The physical modeling capabilities of Simscape have been used in the present work to develop a MEMS library containing parameterized fundamental building blocks (area- and gap-varying MEMS capacitors, nonlinear springs, displacement stoppers, etc.) for the design, simulation, and optimization of MEMS inertial sensors. The models capture both the nonlinear electromechanical interactions and the geometrical nonlinearities, and can be used for both small- and large-signal analyses, including the numerical computation of pull-in voltages (stability loss). The Simscape behavioral modeling language was used to implement reduced-order macro models, which have the advantage of a seamless interface with Simulink blocks for creating hybrid information/energy-flow system models.
Test-bench simulations of the library models compare favorably with both analytical results and more in-depth finite element simulations performed in ANSYS. Separate MEMS-electronics integration tests were done on closed-loop MEMS accelerometers, with Simscape used for modeling the MEMS device and Simulink for the electronic subsystem.
Keywords: across-through variables, electromechanical coupling, energy flow, information flow, Matlab/Simulink, MEMS, nonlinear, pull-in instability, reduced order macro models, Simscape
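For the canonical 1-DoF case of a parallel-plate electrostatic actuator with a linear suspension, the pull-in voltage mentioned above has a closed-form expression, V_pi = sqrt(8·k·g0³ / (27·ε0·A)), with stability lost once the plate has travelled one third of the gap. A sketch with hypothetical device parameters (the library itself computes pull-in numerically for the general nonlinear case):

```python
# Pull-in voltage of a 1-DoF parallel-plate actuator with linear spring k:
# V_pi = sqrt(8 k g0^3 / (27 eps0 A)); the movable plate becomes unstable
# once it has travelled g0/3. Parameter values below are hypothetical.
import math

EPS0 = 8.854e-12  # F/m, vacuum permittivity

def pull_in_voltage(k: float, gap: float, area: float) -> float:
    return math.sqrt(8 * k * gap**3 / (27 * EPS0 * area))

v_pi = pull_in_voltage(k=1.0, gap=2e-6, area=1e-8)  # ~5.2 V
critical_travel = 2e-6 / 3                          # instability at g0/3
```
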
Procedia PDF Downloads 133
1019 Cost-Effectiveness of a Certified Service or Hearing Dog Compared to a Regular Companion Dog
Authors: Lundqvist M., Alwin J., Levin L-A.
Abstract:
Background: Assistance dogs are dogs trained to assist persons with functional impairment or chronic diseases. The assistance dog concept includes different types: guide dogs, hearing dogs, and service dogs. Service dogs can be further divided into the subgroups of physical service dogs, diabetes alert dogs, and seizure alert dogs. To examine the long-term effects of health care interventions, in terms of both resource use and health outcomes, cost-effectiveness analyses can be conducted; such analyses provide important input to decision-makers when setting priorities. Little is known about the cost-effectiveness of assistance dogs. The study aimed to assess the cost-effectiveness of certified service or hearing dogs in comparison with regular companion dogs. Methods: The main data source for the analysis was the "service and hearing dog project", a longitudinal interventional study with a pre-post design that incorporated fifty-five owners and their dogs. Data were collected on all relevant costs affected by the use of a service dog, such as municipal services, health care costs, costs of sick leave, and costs of informal care. Health-related quality of life was measured with the standardized instrument EQ-5D-3L. A decision-analytic Markov model was constructed to conduct the cost-effectiveness analysis, with outcomes estimated over a 10-year time horizon. The incremental cost-effectiveness ratio, expressed as cost per quality-adjusted life-year gained, was the primary outcome. The analysis employed a societal perspective. Results: The cost-effectiveness analysis showed that, compared to a regular companion dog, a certified dog is cost-effective, with both lower total costs [−32,000 USD] and more quality-adjusted life-years [0.17]. We will also present subgroup results analyzing the cost-effectiveness of physical service dogs and diabetes alert dogs.
Conclusions: The study shows that a certified dog is cost-effective in comparison with a regular companion dog for individuals with functional impairments or chronic diseases. Analyses of uncertainty imply that further studies are needed.
Keywords: service dogs, hearing dogs, health economics, Markov model, quality-adjusted life years
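The primary outcome above, the incremental cost-effectiveness ratio (ICER = ΔC/ΔQALY), is not quoted as a ratio when one option both costs less and yields more QALYs: the option is then simply "dominant". A sketch using the reported point estimates:

```python
# ICER = (cost_new - cost_old) / (qaly_new - qaly_old).
# When the new option is cheaper AND more effective, it dominates and
# no ratio needs to be quoted. Deltas below are the reported point estimates.
def appraise(delta_cost: float, delta_qaly: float) -> str:
    if delta_cost < 0 and delta_qaly > 0:
        return "dominant"   # cheaper and more effective
    if delta_cost > 0 and delta_qaly < 0:
        return "dominated"  # costlier and less effective
    return f"ICER = {delta_cost / delta_qaly:.0f} per QALY"

verdict = appraise(delta_cost=-32_000, delta_qaly=0.17)  # "dominant"
```
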
Procedia PDF Downloads 149
1018 Enhancing Photocatalytic Hydrogen Production: Modification of TiO₂ by Coupling with Semiconductor Nanoparticles
Authors: Saud Hamdan Alshammari
Abstract:
Photocatalytic water splitting to produce hydrogen (H₂) has attracted significant attention as an environmentally friendly technology. This process, which produces hydrogen from water and sunlight, represents a renewable energy source. Titanium dioxide (TiO₂) plays a critical role in photocatalytic hydrogen production due to its chemical stability, availability, and low cost. Nevertheless, TiO₂'s wide band gap (3.2 eV) limits its visible-light absorption and thus its photocatalytic effectiveness. Coupling TiO₂ with other semiconductors is a strategy that can enhance TiO₂ by narrowing its band gap and improving visible-light absorption. This paper studies the modification of TiO₂ by coupling it with another semiconductor, CdS nanoparticles, using reflux and autoclave reactors to form a core-shell structure. Characterization techniques, including TEM and UV-Vis spectroscopy, confirmed the successful coating of TiO₂ on the CdS core, a reduction of the band gap from 3.28 eV to 3.1 eV, and enhanced light absorption in the visible region. These modifications are attributed to the heterojunction structure between TiO₂ and CdS. The essential goal of this study is to improve TiO₂ for use in photocatalytic water splitting to enhance hydrogen production. The core-shell TiO₂@CdS nanoparticles exhibited promising results due to band-gap narrowing and improved light absorption. Future work will involve adding Pt as a co-catalyst, which is known to increase surface reaction activity by enhancing proton adsorption. Evaluation of the TiO₂@CdS@Pt catalyst will include performance assessments and hydrogen productivity tests, considering factors such as effective shapes and material ratios. Moreover, the study could be extended by investigating further modifications to the catalyst and presenting additional performance evaluations.
For instance, doping TiO₂ with metals such as nickel (Ni), iron (Fe), and cobalt (Co), or with non-metals such as nitrogen (N), carbon (C), and sulfur (S), could positively influence the catalyst by reducing the band gap, enhancing the separation of photogenerated electron-hole pairs, and increasing the surface area. Additionally, to further improve catalytic performance, examining different catalyst morphologies, such as nanorods, nanowires, and nanosheets, in hydrogen production could be highly beneficial. Optimizing the photoreactor design for efficient photon delivery and illumination would further enhance the photocatalytic process. These strategies collectively aim to overcome current challenges and improve the efficiency of hydrogen production via photocatalysis.
Keywords: hydrogen production, photocatalysis, water splitting, semiconductor, nanoparticles
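The band-gap narrowing reported above (3.28 eV to 3.1 eV) maps directly onto a red shift of the absorption edge via λ = hc/E_g ≈ 1239.8/E_g(eV) nm; a small worked check:

```python
# Absorption edge from the band gap: lambda(nm) = h*c / E_g ~ 1239.84 / E_g(eV).
# A smaller gap pushes the edge to longer wavelengths, i.e. toward visible light.
HC_EV_NM = 1239.84  # h*c expressed in eV*nm

def absorption_edge_nm(band_gap_ev: float) -> float:
    return HC_EV_NM / band_gap_ev

edge_before = absorption_edge_nm(3.28)  # ~378 nm (UV)
edge_after = absorption_edge_nm(3.10)   # ~400 nm, edging into the visible
```
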
Procedia PDF Downloads 19
1017 A Re-Evaluation of Green Architecture and Its Contributions to Environmental Sustainability
Authors: Po-Ching Wang
Abstract:
Considering the notable consumption of natural resources and the impacts on fragile ecosystems, reflection on contemporary sustainable design is critical. Nevertheless, the idea of 'green' has been misapplied and even abused, and, in fact, much damage has been done to the environment in its name. In the popular 1996 science fiction film Independence Day, an alien species, having exhausted the natural resources of one planet, moves on to another: a fairly obvious ironic commentary on modern human beings' irresponsible use of the Earth's natural resources. Indeed, the human ambition to master nature and freely access the world's resources has long been inherent in the manifestos produced by the environmental design professions. Ron Herron's Walking City, an experimental architectural piece of 1964, is one example: for this design concept, the architect imagined a gigantic nomadic urban aggregate that, by way of an insect-like robotic carrier, would move all over the world, on land and sea, to wherever its inhabitants wanted. Given the contemporary crisis regarding natural resources, ideas pertinent to structuring a sustainable environment have recently been attracting much interest in architecture, a field that has been accused of contributing significantly to ecosystem degradation. Great architecture, such as the Fallingwater building, has been regarded as nature-friendly, but its notion of 'green' might be inadequate in the face of the resource demands made by human populations today. This research suggests a more conservative and scrupulous attitude toward modifying nature for architectural settings. Designs that pursue spiritual or metaphysical interconnections through anthropocentric aesthetics are not sufficient to benefit ecosystem integrity; though high-tech energy-saving processes may contribute to fine-scale sustainability, they may ultimately cause catastrophe at the global scale.
Design with frugality is proposed in order to actively reduce environmental load. The aesthetic tastes and ecological sensibilities of the design professions and the public alike may have to be reshaped in order to make the goals of environmental sustainability viable.
Keywords: anthropocentric aesthetic, aquarium sustainability, biosphere 2, ecological aesthetic, ecological footprint, frugal design
Procedia PDF Downloads 208
1016 Eco-Design of Multifunctional System Based on a Shape Memory Polymer and ZnO Nanoparticles for Sportswear
Authors: Inês Boticas, Diana P. Ferreira, Ana Eusébio, Carlos Silva, Pedro Magalhães, Ricardo Silva, Raul Fangueiro
Abstract:
Since the beginning of the 20th century, sportswear has made a major contribution to the impact of fashion on our lives. Nowadays, the embrace of sportswear fashion/looks is undoubtedly noticeable, as the modern consumer searches for high comfort and linear aesthetics in clothing. This compromise led to the rise of the athleisure trend. Athleisure emerged as a new style area that combines both wearability and fashion sense, differentiated from archetypal sportswear, usually associated with “gym clothes”. Additionally, the possibility of functionalizing clothing and implementing new technologies has progressively strengthened the connection between the concepts of physical activity and well-being, allowing clothing to be more interactive and responsive to its surroundings. In this study, a design inspired by retro and urban lifestyles was envisioned, engineering textile structures that can respond to external stimuli. These structures are made responsive to heat, water vapor, and humidity by integrating shape memory polymers (SMPs) to improve the breathability and heat-responsive behavior of the textiles, and zinc oxide nanoparticles (ZnO NPs) to heighten the surface hydrophobic properties. The best hydrophobic results exhibited superhydrophobic behavior, with a water contact angle (WCA) of more than 150 degrees. For the breathability and heat-responsive properties, SMP-coated samples showed an increase in water vapour permeability values of about 50% when compared with non-SMP-coated samples. These technological approaches were employed to design innovative clothing, in line with circular economy and eco-design principles, by assigning a substantial degree of mutability and versatility to the clothing. The development of a coat and shirt, in which different parts can be purchased separately to create multiple products, aims to combine the technicality of both the fabrics used and the making of the garments.
This concept translates into a real constructive mechanism through the symbiosis of high-tech functionalities and a timeless design that follows the athleisure aesthetic.
Keywords: breathability, sportswear and casual clothing, sustainable design, superhydrophobicity
Procedia PDF Downloads 134
1015 Characterization of Polymorphic Forms of Rifaximin
Authors: Ana Carolina Kogawa, Selma Gutierrez Antonio, Hérida Regina Nunes Salgado
Abstract:
Rifaximin is an oral, gut-selective, non-systemic antimicrobial with adverse effects comparable to placebo. It is used for the treatment of hepatic encephalopathy, travelers’ diarrhea, irritable bowel syndrome, Clostridium difficile infection, ulcerative colitis, and acute diarrhea. The crystalline form of rifaximin with minimal systemic absorption is α; the amorphous form behaves significantly differently. Regulators are paying increasing attention to polymorphism, since polymorphs can alter drug characteristics and thereby compromise the effectiveness and safety of the finished product. The International Conference on Harmonisation issued Guidance Q6A, which aims to improve the control of polymorphism in new and existing pharmaceuticals. The objective of this study was to obtain polymorphic forms of rifaximin employing recrystallization processes and to characterize them by thermal analysis (thermogravimetry, TG, and differential scanning calorimetry, DSC), X-ray diffraction, scanning electron microscopy, and solubility testing. Six polymorphic forms of rifaximin, designated I to VI, were obtained by crystallization through evaporation of the solvent. The profiles of the TG curves of the polymorphic forms are similar to that of rifaximin and to each other; however, the DTG curves are different, indicating different thermal behaviors. The melting temperatures of all the polymorphic forms were higher than that of rifaximin, indicating the higher thermal stability of the obtained forms. Comparison of the diffractograms of the polymorphic forms with those of rifaximin α, β, and γ described in the patent indicates that forms III, V, and VI consist of a mixture of polymorphs β and α, while form IV consists of polymorph β. Polymorphic form I consists of polymorph β with a significant amount of amorphous material, whereas polymorphic form II consists of the amorphous polymorph γ.
Scanning electron microscopy revealed the heterogeneous morphological characteristics of the crystals of the polymorphic forms, both among themselves and relative to rifaximin. The solubility of forms I and II was greater than that of rifaximin, whereas forms III, IV, and V presented lower solubility than rifaximin. Since the bioavailability of the amorphous form of rifaximin is considered significantly higher than that of form α, the polymorphic forms obtained in this work cannot guarantee the excellent tolerability of the reference medicine. Therefore, studies like these are extremely important, and they point to the need for stricter requirements from the competent regulatory agencies regarding the polymorph analysis of raw materials used in the manufacture of medicines marketed globally. These analyses are not required in the majority of official compendia. Partnerships between industries, research centers, and universities would be a viable way to consolidate research in this area and contribute to improving the quality of solid drugs.
Keywords: electronic microscopy, polymorphism, rifaximin, solubility, X-ray diffraction
Procedia PDF Downloads 662
1014 Braille Code Matrix
Authors: Mohammed E. A. Brixi Nigassa, Nassima Labdelli, Ahmed Slami, Arnaud Pothier, Sofiane Soulimane
Abstract:
According to the World Health Organization (WHO), there are almost 285 million people with visual disability, 39 million of whom are blind. Nevertheless, there is a code for these people that makes their life easier and allows them to access information more easily: the Braille code. There are several commercial devices allowing Braille reading; unfortunately, most of these devices are not ergonomic and are too expensive. Moreover, we know that 90% of blind people in the world live in low-income countries. Our aim is to design an original microactuator for Braille reading that is ergonomic, inexpensive, and has the lowest possible energy consumption. Piezoelectric devices currently give the best actuation at low actuation voltages. In this study, we focus on a piezoelectric (PZT) material, which can satisfy all these conditions. Here, we propose to use a matrix composed of six actuators to form the 63 basic combinations of the Braille code, containing the letters, numbers, and special characters, in compliance with the standards of the Braille code. In this work, we use a finite element model in the Comsol Multiphysics software to design and model this type of miniature actuator in order to integrate it into a test device. To define the geometry and the design of our actuator, we used the physiological limits of human perception. Our results demonstrate that the piezoelectric actuator can produce a large out-of-plane deflection. We also show that the microactuators can exhibit non-uniform compression; this deformation depends on the thin-film thickness and the design of the membrane arms. The actuator composed of four arms gives the highest deflection and always produces a domed deformation at the center of the device, as in the case of the Braille system. The maximal deflection can be estimated at around ten microns per volt (~10 µm/V).
We noticed that the deflection is a linear function of the voltage, and that it depends not only on the voltage but also on the thickness of the film used and the design of the anchoring arms. We were then able to simulate the behavior of the entire matrix and thus display different characters in Braille code. We used these simulation results to build our demonstrator, which is composed of a layer of PDMS on which we placed our piezoelectric material, with another layer of PDMS added to isolate the actuator. In this contribution, we compare our results in order to optimize the final demonstrator.
Keywords: Braille code, Comsol software, microactuators, piezoelectric
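As a quick illustration of the cell arithmetic behind the matrix (our own sketch, not part of the study), the 63 basic combinations come directly from the six binary dot states of a Braille cell, and Unicode encodes each dot mask directly from U+2800:

```python
# Illustrative sketch (not from the paper): enumerate the 63 non-empty dot
# patterns of a 6-dot Braille cell, i.e. the states a 6-actuator matrix must
# render. Unicode's Braille block encodes dot n of the cell as bit (n - 1)
# of the offset from U+2800.

def braille_patterns():
    """Return all 63 non-empty 6-dot patterns as (raised_dots, char) pairs."""
    patterns = []
    for mask in range(1, 64):  # 2**6 - 1 = 63 non-empty combinations
        dots = tuple(d + 1 for d in range(6) if mask & (1 << d))
        patterns.append((dots, chr(0x2800 + mask)))
    return patterns

pats = braille_patterns()
print(len(pats))   # 63 characters, matching the code's basic combinations
print(pats[0])     # dot 1 alone: the Braille letter 'a'
```

Each tuple of raised dots maps one-to-one onto an actuation state of the six-element matrix.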
Procedia PDF Downloads 353
1013 Development of a Process Method to Manufacture Spreads from Powder Hardstock
Authors: Phakamani Xaba, Robert Huberts, Bilainu Oboirien
Abstract:
It has been over 150 years since margarine was discovered, and it has since been manufactured using liquid oils, liquefied hardstock oils, and other oil-phase and aqueous-phase ingredients. Henry W. Bradley first used vegetable oils in the liquid state around 1871, and since then spreads have traditionally been manufactured using liquefied oils. The main objective of this study was to develop a process method to produce spreads using spray-dried hardstock fat powders as structuring fats in place of the current liquid structuring fats. A high-shear mixing system was used to condition the fat phase, and the aqueous phase was prepared separately. Using a single scraped surface heat exchanger and a pin stirrer, margarine was produced. The process method was developed to produce spreads with 40%, 50%, and 60% fat. The developed method was divided into three steps. In the first step, fat powders were conditioned by melting and dissolving them in liquid oils. The liquefied portion of the oils was at 65 °C, whilst the spray-dried fat powder was at 25 °C. The two were mixed in a mixing vessel at 900 rpm for 4 minutes. The rest of the ingredients, i.e., lecithin, colorant, vitamins, and flavours, were added at ambient conditions to complete the fat/oil phase. The water phase was prepared separately by mixing salt, water, preservative, and acidifier in the mixing tank. Milk was also prepared separately, by pasteurizing it at 79 °C prior to feeding it into the aqueous phase. All the water-phase contents were chilled to 8 °C. The oil phase and water phase were mixed in a tank, then fed into a single scraped surface heat exchanger. After the scraped surface heat exchanger, the emulsion was fed into a pin stirrer to work the formed crystals and produce margarine. The margarine produced using the developed process had fat levels of 40%, 50%, and 60%. The margarine passed all the qualitative, stability, and taste assessments, with scores of 6/10, 7/10, and 7.5/10 for the 40%, 50%, and 60% fat spreads, respectively.
The success of the trials brought about differentiated knowledge of how to manufacture spreads using non-micronized spray-dried fat powders as hardstock. Manufacturers no longer need to store structuring fats at 80-90 °C (or even higher in winter); instead, they can adapt their processes to use fat powders, which need to be stored at only 25 °C. The developed process method used one scraped surface heat exchanger instead of the four to five currently used in votator-based plants. The use of a single scraped surface heat exchanger translated to about 61% energy savings, i.e., 23 kW per ton of product. Furthermore, the energy saved by implementing separate pasteurization was calculated to be 6.5 kW per ton of product produced.
Keywords: margarine emulsion, votator technology, margarine processing, scraped sur, fat powders
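The reported energy figures can be cross-checked with a line of arithmetic (our reading of the numbers, offered as a sanity check rather than as data from the study): if the 23 kW-per-ton saving corresponds to roughly 61% of the baseline demand, the implied votator-line baseline and the new-process demand follow directly.

```python
# Sanity check of the reported savings (our arithmetic, hedged): a saving of
# 23 kW per ton said to equal ~61% of the baseline implies the figures below.

saving_kw_per_ton = 23.0      # reported saving
saving_fraction = 0.61        # reported ~61% of baseline

baseline = saving_kw_per_ton / saving_fraction  # implied conventional demand
remaining = baseline - saving_kw_per_ton        # implied new-process demand
print(round(baseline, 1), round(remaining, 1))  # ~37.7 and ~14.7 kW per ton
```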
Procedia PDF Downloads 89
1012 Exploring the Relationship Between Life Experiences and Early Relapse Among Imprisoned Users of Illegal Drugs in Oman: A Focused Ethnography
Authors: Hamida Hamed Said Al Harthi
Abstract:
Background: Illegal drug use is a rising problem that affects Omani youth. This research aimed to study a group of young Omani men who were imprisoned more than once for illegal drug use, focusing on exploring their lifestyle experiences inside and outside the prison and whether these contributed to their early relapse and re-imprisonment. This is the first study of its kind from Oman conducted in a prison setting. Methods: 19 Omani males aged 18–35 years imprisoned in Oman Central Prison were recruited using purposive sampling. Focused ethnography was conducted over 8 months to explore the drug-related experiences outside the prison and during imprisonment. Face-to-face semi-structured interviews with the participants yielded detailed transcripts and field notes. These were thematically analyzed, and the results were compared with the existing literature. Results: The participants’ voices yielded new insights into the lives of young Omani men imprisoned for illegal drug use, including their sufferings and challenges in prison. These included: entry shock, timing and boredom, drug trafficking in prison, as well as physical and psychological health issues. Overall, imprisonment was reported to have negatively impacted the participants’ health, personality, self-concept, emotions, attitudes, behavior and life expectations. The participants reported how their efforts to reintegrate into the Omani community after release from prison were rebuffed due to stigmatization and rejection from society and family. They also experienced frequent unemployment, police surveillance, accommodation problems and a lack of rehabilitation facilities. The immensity of the accumulated psychophysiological trauma contributed to their early relapse and re-imprisonment. Conclusion: This thesis concludes that imprisonment is largely ineffective in controlling drug use in Oman. 
Urgent action is required across multiple sectors to improve the lives and prospects of users of illegal drugs within and outside the prison, to minimize factors contributing to early relapse. Key words: illegal drugs, drug users, Oman, addiction, Omani culture, prisoners, relapse, re-imprisonment, qualitative research, ethnography.
Keywords: illegal drugs, prison, Omani culture lifestyle, post-prison life
Procedia PDF Downloads 79
1011 Quantification of Dispersion Effects in Arterial Spin Labelling Perfusion MRI
Authors: Rutej R. Mehta, Michael A. Chappell
Abstract:
Introduction: Arterial spin labelling (ASL) is an increasingly popular perfusion MRI technique in which arterial blood water is magnetically labelled in the neck before flowing into the brain, providing a non-invasive measure of cerebral blood flow (CBF). The accuracy of ASL CBF measurements, however, is hampered by dispersion effects: the distortion of the ASL labelled bolus during its transit through the vasculature. In spite of this, the current recommended implementation of ASL, the white paper (Alsop et al., MRM, 73.1 (2015): 102-116), does not account for dispersion, which leads to the introduction of errors in CBF. Given that the transport time from the labelling region to the tissue, the arterial transit time (ATT), depends on the region of the brain and the condition of the patient, it is likely that these errors will also vary with the ATT. In this study, various dispersion models are assessed in comparison with the white paper (WP) formula for CBF quantification, enabling the errors introduced by the WP to be quantified. Additionally, this study examines the relationship between the errors associated with the WP and the ATT, and how this relationship is influenced by dispersion. Methods: Data were simulated using the standard model for pseudo-continuous ASL along with various dispersion models, and then quantified using the formula in the WP. The ATT was varied from 0.5 s to 1.3 s, and the errors associated with noise artefacts were computed in order to define the concept of significant error. The instantaneous slope of the error was also computed as an indicator of the sensitivity of the error to fluctuations in ATT. Finally, a regression analysis was performed to obtain the mean error against ATT. Results: An error of 20.9% was found to be comparable to that introduced by typical measurement noise. The WP formula was shown to introduce errors exceeding 20.9% for ATTs beyond 1.25 s even when dispersion effects were ignored.
Using a Gaussian dispersion model, a mean error of 16% was introduced by using the WP, and a dispersion threshold of σ=0.6 was determined, beyond which the error was found to increase considerably with ATT. The mean error ranged from 44.5% to 73.5% when other physiologically plausible dispersion models were implemented, and the instantaneous slope varied from 35 to 75 as dispersion levels were varied. Conclusion: It has been shown that the WP quantification formula holds only within an ATT window of 0.5 to 1.25 s, and that this window gets narrower as dispersion occurs. Provided that the dispersion levels fall below the threshold evaluated in this study, however, the WP can measure CBF with reasonable accuracy if dispersion is correctly modelled by the Gaussian model. However, substantial errors were observed with other common models for dispersion at dispersion levels similar to those that have been observed in the literature.
Keywords: arterial spin labelling, dispersion, MRI, perfusion
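To make the dispersion mechanism concrete, a minimal numerical sketch (our illustration with assumed parameter values, not the study's code) convolves an idealised pCASL bolus with a unit-area Gaussian kernel: the label's total area is conserved, but its edges are smeared, which is the behavior that biases the WP formula at long ATTs.

```python
import numpy as np

# Hedged sketch: rectangular labelled bolus (assumed ATT 0.7 s and label
# duration 1.8 s) dispersed by a truncated, renormalised Gaussian kernel.

dt = 0.01                        # time step (s)
t = np.arange(-2.0, 8.0, dt)     # simulation grid (s)
att, tau = 0.7, 1.8              # assumed transit time / label duration (s)
bolus = ((t >= att) & (t < att + tau)).astype(float)

def disperse(signal, sigma):
    """Convolve the signal with a unit-area Gaussian of width sigma (s)."""
    k = np.arange(-3 * sigma, 3 * sigma + dt, dt)
    kernel = np.exp(-k**2 / (2 * sigma**2))
    kernel /= kernel.sum()       # unit area: dispersion conserves the label
    return np.convolve(signal, kernel, mode="same")

dispersed = disperse(bolus, sigma=0.5)
print(dispersed.max() < bolus.max())   # True: the peak is lowered, edges smeared
```

Fitting a kinetic model that ignores this smearing is what produces the ATT-dependent CBF errors described above.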
Procedia PDF Downloads 368
1010 Stretchable and Flexible Thermoelectric Polymer Composites for Self-Powered Volatile Organic Compound Vapors Detection
Authors: Petr Slobodian, Pavel Riha, Jiri Matyas, Robert Olejnik, Nuri Karakurt
Abstract:
Thermoelectric devices generate an electrical current when there is a temperature gradient between the hot and cold junctions of two dissimilar conductive materials, typically n-type and p-type semiconductors. Consequently, polymeric semiconductors, composed of a polymer matrix filled with different forms of carbon nanotubes in a proper structural hierarchy, can also have thermoelectric properties that convert a temperature difference into electricity. In spite of the lower thermoelectric efficiency of polymeric thermoelectrics in terms of the figure of merit, properties such as stretchability, flexibility, light weight, low thermal conductivity, easy processing, and low manufacturing cost are advantages in many technological and ecological applications. Highly elastic composites based on a polyethylene-octene copolymer and filled with multi-walled carbon nanotubes (MWCNTs) were prepared by sonication of a nanotube dispersion in a copolymer solution, followed by precipitation on pouring the mixture into a non-solvent. The electronic properties of the MWCNTs were moderated by different treatment techniques, such as chemical oxidation, decoration with Ag clusters, or the addition of low-molecular-weight dopants. In this concept, for example, the oxygenated functional groups attached to the MWCNT surface by HNO₃ oxidation increase the number of p-type charge carriers, which can be increased further by doping with molecules of triphenylphosphine. To partially alter p-type MWCNTs into less p-type ones, Ag nanoparticles were deposited on the MWCNT surface and then doped with 7,7,8,8-tetracyanoquinodimethane. The two types of MWCNTs with the highest difference in generated thermoelectric power were combined to manufacture a polymer-based thermoelectric module that generates a thermoelectric voltage when a temperature difference is applied between the hot and cold ends of the module.
Moreover, it was found that the voltage generated by the thermoelectric module at a constant temperature gradient was significantly affected when the module was exposed to vapors of different volatile organic compounds, making it a self-powered thermoelectric sensor for chemical vapor detection.
Keywords: carbon nanotubes, polymer composites, thermoelectric materials, self-powered gas sensor
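The module's working principle reduces to the Seebeck relation; the sketch below uses hypothetical Seebeck coefficients and couple count (none of these values come from the paper) purely to show how the open-circuit voltage scales with the mismatch between the two kinds of legs.

```python
# Hedged back-of-envelope: open-circuit voltage of a thermoelectric module of
# N couples, V = N * (S_a - S_b) * dT, with assumed Seebeck coefficients for
# the two differently treated MWCNT/copolymer legs (hypothetical values).

def module_voltage(n_couples, s_a, s_b, delta_t):
    """Open-circuit voltage (V) for n_couples legs with Seebeck S_a, S_b (V/K)."""
    return n_couples * (s_a - s_b) * delta_t

# e.g. 10 couples, 40 uV/K vs 10 uV/K legs, 50 K temperature difference:
v = module_voltage(10, 40e-6, 10e-6, 50)
print(v)   # about 0.015 V, i.e. 15 mV
```

A vapor that shifts either leg's effective Seebeck coefficient shifts this voltage at a fixed gradient, which is the sensing mechanism the abstract describes.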
Procedia PDF Downloads 152
1009 Enhanced Stability of Piezoelectric Crystalline Phase of Poly(Vinylidene Fluoride) (PVDF) and Its Copolymer upon Epitaxial Relationships
Authors: Devi Eka Septiyani Arifin, Jrjeng Ruan
Abstract:
As an approach to manipulating the performance of polymer thin films, epitaxial crystallization within polymer blends of poly(vinylidene fluoride) (PVDF) and its copolymer poly(vinylidene fluoride-trifluoroethylene) (P(VDF-TrFE)) was studied in this research, which involves the competition between phase separation and crystal growth of the constitutive semicrystalline polymers. The unique piezoelectric feature of the poly(vinylidene fluoride) crystalline phase derives from the packing of molecular chains in the all-trans conformation, which spatially arranges all the substituted fluorine atoms on one side of the molecular chain and the hydrogen atoms on the other side. A net dipole moment is therefore induced across the lateral packing of molecular chains. Nevertheless, due to the mutual repulsion among the fluorine atoms, this all-trans molecular conformation is not stable and readily changes above the Curie temperature, where thermal energy is sufficient to cause segmental rotation. This research explores whether the epitaxial interactions between the piezoelectric crystals and the crystal lattice of hexamethylbenzene (HMB) crystalline platelets are able to stabilize this metastable all-trans molecular conformation. As an aromatic crystalline compound, molten HMB was surprisingly found to be able to dissolve poly(vinylidene fluoride), resulting in a homogeneous eutectic solution. Thus, after quenching this binary eutectic mixture to room temperature, subsequent heating or annealing processes were designed to explore the involved phase separation and crystallization behavior. The phase transition behaviors were observed in situ by X-ray diffraction and differential scanning calorimetry (DSC). The molecular packing was observed via transmission electron microscopy (TEM), and the principles of electron diffraction were applied to study the internal crystal structure epitaxially developed within the thin films.
The obtained results clearly indicated the occurrence of heteroepitaxy of PVDF/P(VDF-TrFE) on the HMB crystalline platelets. Both the concentration of poly(vinylidene fluoride) and the mixing ratio of the two constitutive polymers were adopted as influential factors for studying the competition between the epitaxial crystallization of PVDF and P(VDF-TrFE) on the HMB crystals. Furthermore, the involved epitaxial relationship is to be deciphered and studied as a potential factor capable of guiding the widespread growth of the piezoelectric crystalline form.
Keywords: epitaxy, crystallization, crystalline platelet, thin film and mixing ratio
Procedia PDF Downloads 221
1008 Applications of Artificial Intelligence (AI) in Cardiac imaging
Authors: Angelis P. Barlampas
Abstract:
The purpose of this study is to inform the reader about the various applications of artificial intelligence (AI) in cardiac imaging. AI is growing fast, and its role is crucial in medical specialties that use large amounts of digital data, which are very difficult or even impossible for human beings, especially doctors, to manage. Artificial intelligence (AI) refers to the ability of computers to mimic human cognitive function, performing tasks such as learning, problem-solving, and autonomous decision-making based on digital data. Whereas AI describes the concept of using computers to mimic human cognitive tasks, machine learning (ML) describes the category of algorithms that enables most current applications described as AI. Some of the current applications of AI in cardiac imaging are the following. Ultrasound: automated segmentation of the cardiac chambers across five common views, with consequent quantification of chamber volumes/mass, ascertainment of ejection fraction, and determination of longitudinal strain through speckle tracking; determination of the severity of mitral regurgitation (accuracy > 99% for every degree of severity); identification of myocardial infarction; distinguishing between athlete’s heart and hypertrophic cardiomyopathy, as well as between restrictive cardiomyopathy and constrictive pericarditis; and prediction of all-cause mortality. CT: reduction of radiation doses; calculation of the calcium score; diagnosis of coronary artery disease (CAD); prediction of all-cause 5-year mortality; and prediction of major cardiovascular events in patients with suspected CAD. MRI: segmentation of cardiac structures and infarct tissue; calculation of cardiac mass and function parameters; and distinguishing between patients with myocardial infarction and control subjects, which could potentially reduce costs since it would preclude the need for gadolinium-enhanced CMR; it can also predict 4-year survival in patients with pulmonary hypertension. Nuclear imaging: classification of normal and abnormal myocardium in CAD; detection of locations with abnormal myocardium; and prediction of cardiac death.
ML was comparable to or better than two experienced readers in predicting the need for revascularization. AI is emerging as a helpful tool in cardiac imaging and for doctors who cannot manage the ever-increasing demand for examinations such as ultrasound, computed tomography, MRI, or nuclear imaging studies.
Keywords: artificial intelligence, cardiac imaging, ultrasound, MRI, CT, nuclear medicine
Procedia PDF Downloads 77
1007 Mathematical Modeling on Capturing of Magnetic Nanoparticles in an Implant Assisted Channel for Magnetic Drug Targeting
Authors: Shashi Sharma, V. K. Katiyar, Uaday Singh
Abstract:
The ability to manipulate magnetic particles in fluid flows by means of inhomogeneous magnetic fields is used in a wide range of biomedical applications, including magnetic drug targeting (MDT). In MDT, magnetic drug carrier particles (MDCPs) bound with drug molecules are injected into the vascular system upstream from the malignant tissue and attracted or retained at a specific region in the body with the help of an external magnetic field. Although the concept of MDT has been around for many years, widespread acceptance of the technique is still looming, despite the fact that it has shown some promise in both in vivo and clinical studies. This is because traditional MDT has some inherent limitations: typically, the magnetic force is not very strong, and it is also very short-ranged. Since the magnetic force must overcome rather large hydrodynamic forces in the body, MDT applications have been limited to sites located close to the surface of the skin. Even in this most favorable situation, studies have shown that it is difficult to collect appreciable amounts of the MDCPs at the target site. To overcome these limitations of the traditional MDT approach, Ritter and co-workers reported implant-assisted magnetic drug targeting (IA-MDT). In IA-MDT, magnetic implants are placed strategically at the target site to greatly and locally increase the magnetic force on the MDCPs and help to attract and retain them at the targeted region. In the present work, we develop a mathematical model to study the capture of magnetic nanoparticles flowing in a fluid in an implant-assisted cylindrical channel under a magnetic field. A coil of ferromagnetic SS 430 was implanted inside the cylindrical channel to enhance the capture of magnetic nanoparticles under the magnetic field. The dominant magnetic and drag forces, which significantly affect the capture of nanoparticles, are incorporated in the model.
The model results show that the capture efficiency increases from 23% to 51% as the magnetic field is increased from 0.1 T to 0.5 T. This is because, as the magnetic field increases, the attractive magnetization force responsible for capturing the magnetic particles also increases, resulting in the capture of a larger number of particles.
Keywords: capture efficiency, implant assisted-magnetic drug targeting (IA-MDT), magnetic nanoparticles, modelling
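The qualitative argument can be sketched with a simple force balance (our illustration with assumed property values, not the authors' model): the magnetophoretic force on a spherical particle grows with both the field and its gradient, while Stokes drag is what the field must overcome for capture.

```python
import math

# Hedged sketch, hypothetical values: magnetophoretic force on a spherical
# magnetic nanoparticle, F_m = V * (chi / mu0) * B * dB/dx, versus the
# Stokes drag F_d = 6 * pi * eta * r * v opposing its capture.

MU0 = 4 * math.pi * 1e-7          # vacuum permeability (T*m/A)

def magnetic_force(radius, chi, b, db_dx):
    """Magnetophoretic force (N) on a sphere of given radius, susceptibility chi."""
    volume = (4.0 / 3.0) * math.pi * radius**3
    return volume * (chi / MU0) * b * db_dx

def stokes_drag(radius, eta, velocity):
    """Stokes drag (N) on a sphere in creeping flow."""
    return 6 * math.pi * eta * radius * velocity

r = 100e-9                        # assumed particle radius (m)
f_lo = magnetic_force(r, chi=1.0, b=0.1, db_dx=40.0)
f_hi = magnetic_force(r, chi=1.0, b=0.5, db_dx=40.0)
drag = stokes_drag(r, eta=3.5e-3, velocity=0.01)  # blood-like viscosity assumed
print(f_hi > f_lo)                # True: the capturing force rises with B
```

For a fixed gradient the force scales linearly with B in this expression, consistent with the trend of rising capture efficiency at stronger fields.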
Procedia PDF Downloads 460
1006 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets
Authors: Ece Cigdem Mutlu, Burak Alakent
Abstract:
Maintaining the quality of manufactured products at a desired level depends on the stability of the process dispersion and location parameters and on detecting perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor the quality of products and control the process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of the process dispersion and location parameters, respectively, under the assumption of independent and normally distributed data. On the other hand, there is no guarantee that real-world data will be normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of the traditional estimators is significantly reduced and the performance of Xbar charts is undesirably low; e.g., occasional outliers in the rational subgroups of the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in the detection of inferior products in Phase II. For a more efficient application of control charts, estimators that are robust against the contaminations that may exist in Phase I are required. In the current study, we present a simple approach to constructing robust Xbar control charts using the average distance to the median, the Qn estimator of scale, and the M-estimator of scale with logistic psi-function for estimating the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator, and the M-estimator of location with Huber and logistic psi-functions for estimating the process location parameter.
The Phase I efficiency of the proposed estimators and the Phase II performance of the Xbar charts constructed from them are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. It is found that the robust estimators yield parameter estimates with higher efficiency against all types of contamination, and that Xbar charts constructed using robust estimators have higher power in detecting disturbances than conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups, and employing different combinations of dispersion and location estimators on subgroups and individual observations, are found to improve the performance of the Xbar charts.
Keywords: average run length, M-estimators, quality control, robust estimators
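The core intuition can be shown in a few lines (a minimal sketch of estimator behavior under contamination; the study itself used MATLAB and a much larger estimator set): one diffuse outlier in a Phase I subgroup drags the sample mean far off-target, while the median and the Hodges-Lehmann estimator barely move.

```python
import statistics

# Illustrative sketch (our example data, not the paper's simulations) of why
# robust location estimators keep Phase I limits usable under contamination.

def hodges_lehmann(data):
    """Hodges-Lehmann location estimate: median of all pairwise (Walsh) averages."""
    walsh = [(data[i] + data[j]) / 2
             for i in range(len(data)) for j in range(i, len(data))]
    return statistics.median(walsh)

clean = [4.8, 5.1, 4.9, 5.2, 5.0]       # an in-control subgroup near 5.0
contaminated = clean + [50.0]           # one diffuse outlier

print(statistics.mean(contaminated))    # inflated to about 12.5
print(statistics.median(contaminated))  # stays near 5.05
print(hodges_lehmann(contaminated))     # also stays near 5.05
```

Control limits derived from the mean of such a subgroup would be badly miscentered, whereas limits from the robust estimates remain close to the in-control level.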
Procedia PDF Downloads 189
1005 [Keynote Speech]: Risk Management during the Rendition Process: Use of Screen-Voice Recordings in Translator Training
Authors: Maggie Hui
Abstract:
Risk management is not a new concept; however, it is an uncharted area as applied to the translation process and translator training. Serving as one of the self-discovery activities in their practicum course, a two-cycle experiment was carried out with a class of 13 MA translation students in an attempt to explore their risk management while translating in a simulated setting involving translator-client relations. To test the effects of the main variable, the translators’ interaction with the simulated clients, the researcher employed control-group translators and two experiment groups (Group A being the translator in Cycle 1 and the client in Cycle 2, and Group B taking the client position in Cycle 1 and the translator position in Cycle 2). Experiment Cycle 1 aims to explore whether there would be any behavioral difference in risk management between translators who interacted with the simulated clients, i.e. experiment group A, and their counterparts without such interaction, i.e. the control group. The design of Cycle 2 concerns the order in which the roles of translator and client were played in the experiment, and it provides information for comparing the behavior of the translators of the two experiment groups. Since this is process-oriented research, it is necessary to hypothesize what was happening in the translators’ minds. The researcher made use of user-friendly screen-voice recording freeware to record the subjects’ screen activities, including every word the translators typed and every change they made to the rendition, the websites they browsed and the reference tools they used, in addition to the verbalization of their thoughts throughout the process. The research observes the translation procedures the subjects considered and finally adopted, and looks into the justifications for their procedures, in order to interpret their risk management.
The qualitative and quantitative results of this study have several implications for translator training: (a) the experience of being a client seems to reinforce the translator's risk aversion; (b) role-playing simulation can empower students' learning by enhancing their attitudinal or psycho-physiological competence, interpersonal competence and strategic competence; and (c) the screen-voice recordings serve as a helpful tool for learners to reflect on their rendition processes, i.e. what they performed satisfactorily and unsatisfactorily while translating and what they could do to improve in future translation tasks.
Keywords: risk management, screen-voice recordings, simulated translator-client relations, translation pedagogy, translation process-oriented research
Procedia PDF Downloads 261
1004 Walking across the Government of Egypt: A Single Country Comparative Study of the Past and Current Condition of the Government of Egypt
Authors: Homyr L. Garcia, Jr., Anne Margaret A. Rendon, Carla Michaela B. Taguinod
Abstract:
Nothing is constant in this world but change, a reality that many people fail to recognize, perhaps because they attach little value to what is happening until it is gone. For years, Egypt was known for its stable government, which withstood problems and crises that challenged the country in ways that could hardly be imagined. In recent times, that stability seemed to vanish in a snap of a finger, replaced by a crisis that led to failures in parts of the government; this problem has continued to worsen, and Egypt's current situation is a reflection of it. By studying the reasons why the government of Egypt is unstable, the researchers may be able to propose ways in which the country could be helped or improved. The instability of the government of Egypt is the product of the combined problems affecting the lives of the people. Some of the reasons the researchers found are the following: 1) the people's unending doubts about the ruling capacity of elected presidents, 2) the removal of President Mohamed Morsi from office, 3) economic crisis, 4) numerous protests and revolutions, 5) the resignation of the long-term President Hosni Mubarak, and 6) the office of the President being most likely available only to a chosen successor. According to previous research, there are also two plausible scenarios for the instability of Egypt: 1) a military intervention, specifically by the Supreme Council of the Armed Forces (SCAF), resulting from a contested succession, and 2) an Islamist push for political power, which highlights the claim that religion is a hindrance to the development of the country and its government.
From the eight possible reasons, the researchers decided to focus on the economic crisis, since the instability is most clearly seen in the country's economy, which directly affects the people and the government itself. They hypothesize that a stable economy is a prerequisite for a stable government. If they can show that this claim is true using the Social Autopsy Research Design for the qualitative method and Pearson's correlation coefficient for the quantitative method, the researchers may be able to produce a proposal on how Egypt can stabilize its government and avoid such problems. The hypothesis is grounded in Rational Action Theory, a theory for understanding and modeling social and economic as well as individual behavior.
Keywords: Pearson's correlation coefficient, rational action theory, social autopsy research design, supreme council of the armed forces (SCAF)
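Pearson's correlation coefficient, the quantitative tool the study names, measures the linear association between two paired samples. A minimal sketch of its computation, using hypothetical yearly indicators rather than the study's data (the variable names and values below are illustrative assumptions only):

```python
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical yearly indicators (not taken from the study):
economy = [3.2, 2.1, 1.8, 1.2, 2.2, 4.4]    # e.g. GDP growth, %
stability = [70, 55, 48, 30, 42, 68]        # e.g. a stability index
print(round(pearson_r(economy, stability), 3))
```

A value of r close to +1 would be consistent with the hypothesis that a stable economy accompanies a stable government, though correlation alone does not establish the direction of causation.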
Procedia PDF Downloads 408
1003 Implementing Lesson Study in Qatari Mathematics Classroom: A Case Study of a New Experience for Teachers through IMPULS-QU Lesson Study Program
Authors: Areej Isam Barham
Abstract:
The implementation of the Japanese lesson study approach in the mathematics classroom has grown worldwide as a model of professional development for teachers. In Qatar, the IMPULS-QU lesson study program aimed to establish a robust organizational improvement model of professional development for mathematics teachers in Qatari schools. This study describes the implementation of a lesson study model at Al-Markhyia Independent Primary School through its different stages, and discusses how the planning process, the research lesson, and the post-lesson discussion contribute to providing teachers and researchers with a successful research lesson for teacher professional development. The research followed a case study approach in one mathematics classroom. Two teachers and one professional development specialist participated in the planning process. One teacher conducted the research lesson by introducing a problem-solving task related to the concept of the 'mean' in a mathematics class; 21 grade 6 students participated in solving the problem, while 11 teachers, 4 professional development specialists, and 4 mathematics professors observed the research lesson. All of these participants except the students took part in pre- and post-lesson discussions within this research. The study followed a qualitative research approach, analyzing the data collected across the different stages of the research lesson. Observation, field notes, and semi-structured interviews were conducted to collect data for the research aims. One feature of this research is that it describes the implementation of a lesson study as a new experience for one mathematics teacher and 21 students after 3 years of conducting the IMPULS-QU project in Al-Markhyia school. The research describes the various stages of this lesson study model, starting with the planning process and ending with the post-lesson discussion.
Findings of the study also address the impact of the lesson study approach on teaching mathematics and on the development of teachers from their points of view. Results show the benefits of using lesson study from the perspective of the participating teachers, their perceptions of the essential features of lesson study, and their needs for future development. The discussion addresses different features and issues related to the implementation of the IMPULS-QU lesson study model in the mathematics classroom. In light of the study, the research presents recommendations and suggestions for future professional development.
Keywords: lesson study, mathematics education, mathematics teaching experience, teacher professional development
Procedia PDF Downloads 184
1002 Calibration of 2D and 3D Optical Measuring Instruments in Industrial Environments at Submillimeter Range
Authors: Alberto Mínguez-Martínez, Jesús de Vicente y Oliva
Abstract:
Modern manufacturing processes have led to the miniaturization of systems and, as a result, parts at the micro- and nanoscale are produced. This trend seems set to become increasingly important in the near future. Besides, as a requirement of Industry 4.0, the digitalization of production models and processes makes it very important to ensure that the dimensions of newly manufactured parts meet the specifications of the models. It is thereby possible to reduce scrap and the cost of non-conformities while ensuring the stability of production. To ensure the quality of manufactured parts, it becomes necessary to carry out traceable measurements at scales below one millimeter. Providing adequate traceability to the SI unit of length (the meter) for 2D and 3D measurements at this scale is a problem without a unique solution in industrial environments. Researchers in the field of dimensional metrology all around the world are working on this issue. A solution for industrial environments, even an incomplete one, would enable working with some traceability. At this point, we believe that the study of surfaces could provide a first approximation to a solution. Among the different options proposed in the literature, areal topography methods may be the most relevant because they can be compared to measurements performed using Coordinate Measuring Machines (CMMs). These measuring methods give (x, y, z) coordinates for each point, expressed in two different ways: either the z coordinate as a function of x, denoted z(x), for each Y-axis coordinate, or as a function of the x and y coordinates, denoted z(x, y). Among others, optical measuring instruments, mainly microscopes, are extensively used to carry out measurements at scales below one millimeter because they are non-destructive.
In this paper, the authors propose a calibration procedure for the scales of optical measuring instruments, particularized for a confocal microscope, using material standards that are easy to find and calibrate in metrology and quality laboratories in industrial environments. Confocal microscopes are measuring instruments capable of filtering out the out-of-focus reflected light, so that only the in-focus part of the surface reaches the detector and can be imaged. By taking pictures at different Z levels of focus, specialized software interpolates between the planes and reconstructs the surface geometry as a 3D model. As is easy to deduce, it is necessary to give traceability to each axis. As a complementary result, the roughness parameter Ra will be traced to the reference. Although the solution is designed for a confocal microscope, it may be used for the calibration of other optical measuring instruments with minor changes.
Keywords: industrial environment, confocal microscope, optical measuring instrument, traceability
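The roughness parameter Ra mentioned above is conventionally defined as the arithmetic mean of the absolute deviations of the profile heights z(x) from the mean line. A minimal sketch of that computation, assuming a sampled profile (the numeric values below are purely illustrative, not measurement data from the paper):

```python
def roughness_ra(profile):
    """Arithmetic mean roughness Ra: mean absolute deviation of the
    sampled profile heights z(x) from the profile's mean line."""
    n = len(profile)
    mean_line = sum(profile) / n
    return sum(abs(z - mean_line) for z in profile) / n

# Hypothetical profile heights in micrometres (illustrative only):
z = [0.12, -0.08, 0.05, -0.11, 0.09, -0.07]
print(roughness_ra(z))
```

In practice the profile would come from the calibrated confocal measurement, with many more sample points, and the Ra value would be compared against a calibrated roughness standard.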
Procedia PDF Downloads 152
1001 Poly(ε-caprolactone)/Halloysite Nanotube Nanocomposites Scaffolds for Tissue Engineering
Authors: Z. Terzopoulou, I. Koliakou, D. Bikiaris
Abstract:
Tissue engineering offers a new approach to regenerating diseased or damaged tissues such as bone. Great effort is devoted to eliminating the need to remove non-degradable implants at the end of their life span, with biodegradable polymers playing a major part. Poly(ε-caprolactone) (PCL) is one of the best candidates for this purpose due to its high permeability, good biodegradability, and exceptional biocompatibility, which has stimulated extensive research into its potential application in the biomedical field. However, PCL degrades much more slowly than other known biodegradable polymers, with a total degradation time of 2-4 years depending on the initial molecular weight of the device, owing to its relatively hydrophobic character and high crystallinity. Consequently, much attention has been given to tuning the degradation of PCL to meet the diverse requirements of biomedicine. Moreover, because PCL is a biodegradable polyester that lacks bioactivity, new bone tissue cannot bond tightly to the polymeric surface when it is used in bone tissue engineering. It is therefore important to incorporate reinforcing fillers into the PCL matrix to obtain a promising combination of bioactivity, biodegradability, and strength. Natural clay halloysite nanotubes (HNTs) were incorporated into the PCL matrix via in situ ring-opening polymerization of caprolactone, at concentrations of 0.5, 1, and 2.5 wt%. Both unmodified HNTs and HNTs modified with aminopropyltrimethoxysilane (APTES) were used in this study. The effects of nanofiller concentration and of functionalization with end-amino groups on the physicochemical properties of the prepared nanocomposites were studied. Mechanical properties were enhanced after the incorporation of the nanofillers, and the modification further increased the tensile and impact strength values.
The thermal stability of PCL was not affected by the presence of the nanofillers, while the crystallization rate, studied by Differential Scanning Calorimetry (DSC) and Polarized Light Optical Microscopy (POM), increased. All materials were subjected to enzymatic hydrolysis in phosphate buffer in the presence of lipases. Due to the hydrophilic nature of HNTs, the biodegradation rate of the nanocomposites was higher than that of neat PCL. To confirm the effect of hydrophilicity, contact angle measurements were also performed. An in vitro biomineralization test confirmed that all samples were bioactive, as mineral deposits were detected by X-ray diffractometry after incubation in simulated body fluid (SBF). All scaffolds were tested in relevant cell culture using osteoblast-like cells (MG-63) to demonstrate their biocompatibility.
Keywords: biomaterials, nanocomposites, scaffolds, tissue engineering
Procedia PDF Downloads 315
1000 Transverse Behavior of Frictional Flat Belt Driven by Tapered Pulley - Change of Transverse Force under Driving State
Authors: Satoko Fujiwara, Kiyotaka Obunai, Kazuya Okubo
Abstract:
Skew is one of the important problems in designing conveyors and transmissions with frictional flat belts: the running belt deviates in the width direction due to a transverse force applied to it. Skew often not only degrades the stability of the belt path but also causes damage to the belt and auxiliary machines. However, transverse behavior such as skew has not been discussed quantitatively in detail for frictional belts. The objective of this study is to clarify the transverse behavior of a frictional flat belt driven by a tapered pulley. A commercially available rubber flat belt reinforced with polyamide film was prepared as the test belt, with a thickness of 1.25 mm and a length of 630 mm. The test belt was driven between two pulleys made of aluminum alloy, with a diameter of 50 mm and an inter-axial length of 150 mm. Several tapered pulleys were used, with taper angles of 0 deg (for comparison), 2 deg, 4 deg, and 6 deg. To investigate the transverse behavior, the transverse force applied to the belt was measured while the skew was constrained under the driving state. The transverse force was measured by a load cell with free rollers contacting the side surface of the belt while the displacement in the belt width direction was constrained. The in-plane bending stiffness of the belt was varied by preparing three types of belts with widths of 20, 30, and 40 mm. The contributions of the in-plane bending stiffness of the belt and of the initial inter-axial force to the transverse force were discussed in the experiments. The inter-axial force was also changed by adjusting the distance (about 240 mm) between the two pulleys. The influence of the observed in-plane bending stiffness of the belt and of the initial inter-axial force on the transverse force was investigated.
The experimental results showed that the transverse force increased with an increase in the observed in-plane bending stiffness of the belt and in the initial inter-axial force. The transverse force acting on the belt running on the tapered pulley was classified into multiple components: the force applied through the deflection of the inter-axial force according to the change in taper angle, the resultant force from the bending moment applied to the belt winding around the tapered pulley, and the reaction force due to shearing deformation. When these components were formulated, the calculated transverse force almost agreed with the experimental data. It was also shown that the largest contribution came from the shearing deformation, regardless of the test conditions. This study found that the transverse behavior of a frictional flat belt driven by a tapered pulley can be explained by the summation of these force components.
Keywords: skew, frictional flat belt, transverse force, tapered pulley
Procedia PDF Downloads 145
999 Glutamine Supplementation and Resistance Training on Anthropometric Indices, Immunoglobulins, and Cortisol Levels
Authors: Alireza Barari, Saeed Shirali, Ahmad Abdi
Abstract:
Introduction: Exercise has contradictory effects on the immune system, and glutamine supplementation may increase the resistance of the immune system in athletes. Glutamine is one of the most recognized immune nutrients: it serves as a fuel source and as a substrate in the synthesis of nucleotides and amino acids, and it is also known to be part of the antioxidant defense. Several studies have shown that improving glutamine levels in plasma and tissues can have beneficial effects on the function of immune cells such as lymphocytes and neutrophils. This study aimed to investigate the effects of resistance training, alone and combined with glutamine supplementation, on cortisol and immunoglobulin levels in untrained young men. Research shows that physical training can increase cytokines in the athlete's body; glutamine, in turn, may counteract the negative effects of resistance training on immune function and on the stability of the mast cell membrane. Materials and methods: This semi-experimental study was conducted on 30 non-athlete males, randomly divided into three groups: control (no exercise), resistance training, and resistance training with glutamine supplementation. Resistance training was applied for 4 weeks, with glutamine supplementation at 0.3 g/kg/day after practice. The resistance training program consisted of the following exercises: leg press, lat pull, chest press, squat, seated row, abdominal crunch, shoulder press, biceps curl, and triceps press-down, performed four times per week. Participants performed 3 sets of 10 repetitions at 60-75% of 1-RM. Anthropometric indices (weight, body mass index, and body fat percentage), maximal oxygen uptake (VO2max), cortisol, and immunoglobulin (IgA, IgG, IgM) levels were evaluated pre- and post-test. Results: Four weeks of resistance training, with or without glutamine, caused a significant increase in body weight and BMI and a significant decrease (P < 0.001) in BF.
VO2max also increased in both the exercise group (P < 0.05) and the exercise-with-glutamine group (P < 0.001), and in both groups a significant reduction in IgG (P < 0.05) was observed. No significant difference was observed in cortisol, IgA, or IgM levels in any of the groups, no significant change was observed in either parameter in the control group, and no significant difference was observed between the groups. Discussion: Alterations in hormonal and immunological parameters can be used to assess the effect of overload on the body, whether acute or chronic. The plasma concentration of glutamine has been associated with the functionality of the immune system in individuals submitted to intense physical training. Resistance training has destructive effects on the immune system, and glutamine supplementation cannot neutralize the damaging effects of power exercise on the immune system.
Keywords: glutamine, resistance training, immunoglobulins, cortisol
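The pre-/post-test significance values reported in abstracts like the one above are typically obtained from a paired test on the same subjects. A minimal sketch of the paired t statistic, using hypothetical pre/post body weights rather than the study's data (the numbers and group below are illustrative assumptions only):

```python
import math

def paired_t(pre, post):
    """Paired t statistic for pre/post measurements on the same subjects:
    mean of the differences divided by its standard error."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

# Hypothetical pre/post body weights (kg) for one training group:
pre  = [70.1, 82.4, 65.0, 77.3, 90.2, 68.8]
post = [71.0, 83.1, 65.9, 78.0, 91.4, 69.5]
t = paired_t(pre, post)
print(round(t, 2))  # compare against the t distribution with n-1 df
```

The resulting statistic would then be compared against the critical value of the t distribution with n-1 degrees of freedom to decide significance at, e.g., P < 0.05.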
Procedia PDF Downloads 478