Bias-Corrected Estimation Methods for Receiver Operating Characteristic Surface
Authors: Khanh To Duc, Monica Chiogna, Gianfranco Adimari
Abstract:
With three diagnostic categories, assessment of the performance of diagnostic tests is achieved by the analysis of the receiver operating characteristic (ROC) surface, which generalizes the ROC curve for binary diagnostic outcomes. The volume under the ROC surface (VUS) is a summary index usually employed for measuring the overall diagnostic accuracy. When the true disease status can be exactly assessed by means of a gold standard (GS) test, unbiased nonparametric estimators of the ROC surface and VUS are easily obtained. In practice, unfortunately, disease status verification via the GS test could be unavailable for all study subjects, due to the expensiveness or invasiveness of the GS test. Thus, often only a subset of patients undergoes disease verification. Statistical evaluations of diagnostic accuracy based only on data from subjects with verified disease status are typically biased. This bias is known as verification bias. Here, we consider the problem of correcting for verification bias when continuous diagnostic tests for three-class disease status are considered. We assume that selection for disease verification does not depend on disease status, given test results and other observed covariates, i.e., we assume that the true disease status, when missing, is missing at random. Under this assumption, we discuss several solutions for ROC surface analysis based on imputation and re-weighting methods. In particular, verification bias-corrected estimators of the ROC surface and of VUS are proposed, namely, full imputation, mean score imputation, inverse probability weighting and semiparametric efficient estimators. Consistency and asymptotic normality of the proposed estimators are established, and their finite sample behavior is investigated by means of Monte Carlo simulation studies. Two illustrations using real datasets are also given.Keywords: imputation, missing at random, inverse probability weighting, ROC surface analysis
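For readers unfamiliar with the VUS summary index, the sketch below illustrates the standard nonparametric (empirical) estimator of the volume under the ROC surface under complete gold-standard verification; this is the quantity that the bias-corrected estimators in the abstract generalize to partially verified data. The class labels, sample sizes and toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from itertools import product

def empirical_vus(x1, x2, x3):
    """Nonparametric estimate of VUS = P(X1 < X2 < X3) for a continuous test,
    where x1, x2, x3 hold test results from the three ordered disease classes.
    Ties get the usual 1/2 and 1/6 corrections."""
    total = 0.0
    for a, b, c in product(x1, x2, x3):
        if a < b < c:
            total += 1.0
        elif a == b == c:
            total += 1.0 / 6.0
        elif a == b < c or a < b == c:
            total += 0.5
    return total / (len(x1) * len(x2) * len(x3))

# Toy example: higher test values indicate more advanced disease.
rng = np.random.default_rng(0)
x1 = rng.normal(0.0, 1.0, 40)   # non-diseased
x2 = rng.normal(1.0, 1.0, 35)   # intermediate class
x3 = rng.normal(2.0, 1.0, 30)   # diseased
print(f"Empirical VUS: {empirical_vus(x1, x2, x3):.3f}")  # 1/6 would be chance level
```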
Developing Manufacturing Process for the Graphene Sensors
Authors: Abdullah Faqihi, John Hedley
Abstract:
Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing. They can be applied extensively to various analytical tasks such as drug discovery, food safety, medical diagnostics, process control, security and defence, in addition to environmental monitoring. Biosensor development aims to create high-performance electrochemical electrodes for diagnostics and biosensing. A biosensor is a device that inspects the biological and chemical reactions generated by a biological sample; it carries out biological detection via a linked transducer and converts the biological response into an electrical signal. Stability, selectivity, and sensitivity are the dynamic and static characteristics that affect and dictate the quality and performance of biosensors. In this research, an experimental study of a laser scribing technique for processing graphene oxide inside a vacuum chamber is presented. The processing of graphene oxide (GO) was achieved using the laser scribing technique. The effect of laser scribing on the reduction of GO was investigated under two conditions: atmosphere and vacuum. A GO solution was coated onto a LightScribe DVD, and the laser scribing technique was applied to reduce the GO layers and generate rGO. The morphological micro-structures of rGO and GO were visualised and examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode made under normal atmospheric conditions, whereas the second was a graphene electrode fabricated under vacuum using a vacuum chamber. The purpose was to control the vacuum conditions, such as air pressure and temperature, during the fabrication process. The parameters assessed include the layer thickness and the surrounding environment. The results show high accuracy and repeatability, achieved at low production cost.
Keywords: laser scribing, LightScribe DVD, graphene oxide, scanning electron microscopy
Fuzzy Climate Control System for Hydroponic Green Forage Production
Authors: Germán Díaz Flórez, Carlos Alberto Olvera Olvera, Domingo José Gómez Meléndez, Francisco Eneldo López Monteagudo
Abstract:
In recent decades, population growth has exerted great pressure on natural resources. Two of the scarcest and most difficult resources to obtain, arable land and water, are closely interrelated in satisfying the demand for food production. In Mexico, the agricultural sector accounts for more than 70% of water consumption. Therefore, maximizing the efficiency of current production systems is unavoidable, and it is essential to use techniques and tools that enable significant savings of water, labor and fertilizer. In this study, we present a production module for hydroponic green forage (HGF), which is a viable alternative for producing livestock feed in semi-arid and arid zones. In addition to the forage production module, the equipment has a climate and irrigation control system powered by photovoltaics. The climate control, irrigation and power management are based on fuzzy control techniques. Fuzzy control provides an accurate method for designing controllers for nonlinear dynamic physical phenomena such as temperature and humidity, as well as for lighting level, aeration and irrigation control, using heuristic information. This work first addresses the production of hydroponic green forage, suitable weather conditions and fertigation, and subsequently presents the design of the production module and of the controller. A simulation of the behavior of the production module and the results of actual operation of the equipment are presented, demonstrating the simple design, flexibility, robustness and low cost that this equipment represents for the primary sector.
Keywords: fuzzy, climate control system, hydroponic green forage, forage production module
A Three-Dimensional TLM Simulation Method for Thermal Effect in PV-Solar Cells
Authors: R. Hocine, A. Boudjemai, A. Amrani, K. Belkacemi
Abstract:
Temperature rise is a negative factor in almost all systems. It can be caused by self-heating or by the ambient temperature. In solar photovoltaic cells, this temperature rise affects the behavior of the cells. The ability of a PV module to withstand the effects of periodic hot-spot heating that occurs when cells are operated under reverse-biased conditions is closely related to the properties of the cell semiconductor material. In addition, the thermal effect also influences the estimation of the maximum power point (MPP) and the electrical parameters of PV modules, such as maximum output power, maximum conversion efficiency, internal efficiency, reliability, and lifetime. The cell junction temperature is a critical parameter that significantly affects the electrical characteristics of PV modules. For practical applications of PV modules, it is very important to accurately estimate the junction temperature and to analyze the thermal characteristics of the modules. Once the temperature variation is taken into account, a more accurate MPP can be obtained for the PV modules, and the maximum utilization efficiency of the PV modules can be further achieved. In this paper, the three-dimensional Transmission Line Matrix (3D-TLM) method was used to map the surface temperature distribution of solar cells operating in reverse-bias mode. It was observed that some cells exhibited an inhomogeneous surface temperature resulting in localized heating (hot spots). This hot-spot heating causes irreversible destruction of the solar cell structure. Hot spots can have a deleterious impact on entire solar modules if individual solar cells are heated. The results show clearly that solar cells are capable of self-generating considerable amounts of heat, which should be dissipated very quickly to increase the PV module's lifetime.
Keywords: thermal effect, conduction, heat dissipation, thermal conductivity, solar cell, PV module, nodes, 3D-TLM
Determination of Optimum Conditions for the Leaching of Oxidized Copper Ores with Ammonium Nitrate
Authors: Javier Paul Montalvo Andia, Adriana Larrea Valdivia, Adolfo Pillihuaman Zambrano
Abstract:
The most common lixiviant in the leaching process of copper minerals is H₂SO₄, however, the current situation requires more environmentally friendly reagents and in certain situations that have a lower consumption due to the presence of undesirable gangue as muscovite or kaolinite that can make the process unfeasible. The present work studied the leaching of an oxidized copper mineral in an aqueous solution of ammonium nitrate, in order to obtain the optimum leaching conditions of the copper contained in the malachite mineral from Peru. The copper ore studied comes from a deposit in southern Peru and was characterized by X-ray diffractometer, inductively coupled-plasma emission spectrometer (ICP-OES) and atomic absorption spectrophotometry (AAS). The experiments were developed in batch reactor of 600 mL where the parameters as; temperature, pH, ammonium nitrate concentration, particle size and stirring speed were controlled according to experimental planning. The sample solution was analyzed for copper by atomic absorption spectrophotometry (AAS). A simulation in the HSC Chemistry 6.0 program showed that the predominance of the copper compounds of a Cu-H₂O aqueous system is altered by the presence in the system of ammonium complexes, the compound being thermodynamically more stable Cu(NH3)₄²⁺, which predominates in pH ranges from 8.5 to 10 at a temperature of 25 °C. The optimum conditions for copper leaching of the malachite mineral were a stirring speed of 600 rpm, an ammonium nitrate concentration of 4M, a particle diameter of 53 um and temperature of 62 °C. These results showed that the leaching of copper increases with increasing concentration of the ammonium solution, increasing the stirring rate, increasing the temperature and decreasing the particle diameter. Finally, the recovery of copper in optimum conditions was above 80%.Keywords: ammonium nitrate, malachite, copper oxide, leaching
The MoEDAL-MAPP* Experiment - Expanding the Discovery Horizon of the Large Hadron Collider
Authors: James Pinfold
Abstract:
The MoEDAL (Monopole and Exotics Detector at the LHC) experiment, deployed at IP8 on the Large Hadron Collider ring, was the first dedicated search experiment to take data at the Large Hadron Collider (LHC) in 2010. It was designed to search for Highly Ionizing Particle (HIP) avatars of new physics such as magnetic monopoles, dyons, Q-balls, multiply charged particles, massive slowly moving charged particles and long-lived massive charged SUSY particles. We shall report on our search at the LHC’s Run-2 for magnetic monopoles and dyons produced in p-p collisions and photon-fusion. In more detail, we will report our most recent result in this arena: the search for magnetic monopoles produced via the Schwinger mechanism in Pb-Pb collisions. The MoEDAL detector, originally the first dedicated search detector at the LHC, is being reinstalled for the LHC’s Run-3 to continue the search for electrically and magnetically charged HIPs with enhanced instantaneous luminosity, improved detector efficiency and a factor of ten lower thresholds for HIPs. As part of this effort, we will search for massive long-lived, singly and multiply charged particles from various scenarios for which MoEDAL has a competitive sensitivity. An upgrade to MoEDAL, the MoEDAL Apparatus for Penetrating Particles (MAPP), is now the LHC’s newest detector. The MAPP detector, positioned in UA83, expands the physics reach of MoEDAL to include sensitivity to feebly charged particles with charge, or effective charge, as low as 10⁻³ e (where e is the electron charge). Also, in conjunction with MoEDAL’s trapping detector, the MAPP detector gives us a unique sensitivity to extremely long-lived charged particles, as well as some sensitivity to long-lived neutral particles. The addition of an Outrigger detector for MAPP-1, to increase its acceptance for more massive milli-charged particles, is currently at the Technical Proposal stage. Additionally, we will briefly report on the plans for the MAPP-2 upgrade to the MoEDAL-MAPP experiment for the High Luminosity LHC (HL-LHC). This phase of the experiment is designed to maximize MoEDAL-MAPP’s sensitivity to very long-lived neutral messengers of physics beyond the Standard Model. We envisage this detector being deployed in the UGC1 gallery near IP8.
Keywords: LHC, beyond the standard model, dedicated search experiment, highly ionizing particles, long-lived particles, milli-charged particles
Numerical Study on the Static Characteristics of Novel Aerostatic Thrust Bearings Possessing Elastomer Capillary Restrictor and Bearing Surface
Authors: S. W. Lo, S.-H. Lu, Y. H. Guo, L. C. Hsu
Abstract:
In this paper, a novel design of aerostatic thrust bearing is proposed and analyzed numerically. The capillary restrictor and bearing disk are made of elastomers such as silicone and PU. The viscoelasticity of the elastomer helps the capillary expand to admit more air flux and, at the same time, allows a conicity of the bearing surface to form when the air pressure is increased. Therefore, the bearing has a better capability for passive compensation. In the present example, compared with the typical model, the new designs can nearly double the load capacity and offer four times the static stiffness.
Keywords: aerostatic, bearing, elastomer, static stiffness
Corpus-Based Neural Machine Translation: Empirical Study Multilingual Corpus for Machine Translation of Opaque Idioms - Cloud AutoML Platform
Authors: Khadija Refouh
Abstract:
Culture-bound expressions have been a bottleneck for Natural Language Processing (NLP) and comprehension, especially in the case of machine translation (MT). In the last decade, the field of machine translation has greatly advanced. Neural machine translation (NMT) has recently achieved considerable improvements in translation quality, outperforming previous traditional translation systems in many language pairs. NMT applies artificial intelligence (AI) and deep neural networks to language processing. Despite this development, there remain serious challenges facing NMT when translating culture-bound expressions, especially for low-resource language pairs such as Arabic-English and Arabic-French, which is not the case with well-established language pairs such as English-French. Machine translation of opaque idioms from English into French is likely to be more accurate than translating them from English into Arabic. For example, the Google Translate application translated the sentence “What a bad weather! It rains cats and dogs.” into Arabic as “يا له من طقس سيء! تمطر القطط والكلاب”, an inaccurate literal translation. The translation of the same sentence into French was “Quel mauvais temps! Il pleut des cordes.”, where the application used the corresponding French idiom. This paper aims to perform NMT experiments towards better translation of opaque idioms using a high-quality, clean multilingual corpus. This corpus will be collected analytically from human-generated idiom translations. AutoML Translation, a Google neural machine translation platform, is used as a custom translation model to improve the translation of opaque idioms. The automatic evaluation of the custom model will be compared to Google NMT using the Bilingual Evaluation Understudy (BLEU) score. BLEU is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Human evaluation is integrated to test the reliability of the BLEU score. The researcher will examine syntactical, lexical, and semantic features using Halliday's functional theory.
Keywords: multilingual corpora, natural language processing (NLP), neural machine translation (NMT), opaque idioms
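The abstract evaluates the custom model with BLEU. The snippet below is a minimal illustration of how a sentence-level BLEU score can be computed with NLTK, using invented reference/candidate pairs for the idiom example; it is not the evaluation pipeline used in the study.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Hypothetical reference (human) and candidate (machine) translations of an
# opaque idiom; tokenisation is simplified to whitespace splitting.
reference = "quel mauvais temps il pleut des cordes".split()
candidate_literal = "quel mauvais temps il pleut des chats et des chiens".split()
candidate_idiomatic = "quel mauvais temps il pleut des cordes".split()

smooth = SmoothingFunction().method1  # avoids zero scores for short sentences
score_literal = sentence_bleu([reference], candidate_literal, smoothing_function=smooth)
score_idiomatic = sentence_bleu([reference], candidate_idiomatic, smoothing_function=smooth)

print(f"BLEU, literal rendering:   {score_literal:.3f}")
print(f"BLEU, idiomatic rendering: {score_idiomatic:.3f}")  # 1.0 (exact match)
```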
Microbial Effects of Iron Elution from Hematite into Seawater Mediated via Dissolved Organic Matter
Authors: Apichaya Aneksampant, Xuefei Tu, Masami Fukushima, Mitsuo Yamamoto
Abstract:
The restoration of seaweed beds has been pursued using a fertilization technique that supplies dissolved iron to barren coastal areas. The fertilizer is composed of iron oxides as a source of iron and compost as a source of humic substances (HS), which can serve as chelators that stabilize the dissolved iron species under oxic seawater conditions. However, the mechanisms of iron elution from iron oxide surfaces have not been sufficiently elucidated. In particular, the role of microbial activity in the elution of iron from the fertilizer is not well understood. In the present study, a fertilizer (iron oxide/compost = 1/1, v/v) was incubated in a water tank at the Mashike coast, Hokkaido, Japan. Microorganisms in the fertilizer after 6 months were isolated and identified as Exiguobacterium oxidotolerans sp. (T-2-2). The identified bacteria were inoculated into a Postgate B medium prepared in artificial seawater for iron elution tests. Hematite was used as a model iron oxide and anthraquinone-2,7-disulfonate (AQDS) as a model for HS. The elution tests were performed in the presence and absence of bacterial inoculation. ICP-AES was used to analyze total iron, and a colorimetric technique using ferrozine was employed for the determination of ferrous ion. During the incubation period, the samples containing hematite and T-2-2, both with and without AQDS, continuously showed iron elution, which reached its highest concentration after 9 days of incubation and then decreased slightly to stabilize within 20 days. In comparison, only a trace amount of iron was observed in the sample without T-2-2, suggesting that the iron elution to seawater can be attributed to bacterial activity. The level of total organic carbon (TOC) in the culture solution containing hematite decreased, which may be due to the adsorption of the organic compound AQDS onto the hematite surfaces. The decrease in the UV-vis absorption of AQDS in the culture solution also supports the TOC results indicating that AQDS was adsorbed onto the hematite surfaces. AQDS can enhance iron elution, while the adsorption of organic matter suppresses iron elution from hematite.
Keywords: anthraquinone-2,7-disulfonate, barren ground, E. oxidotolerans sp., hematite, humic substances, iron elution
Numerical Assessment of Fire Characteristics with Bodies Engulfed in Hydrocarbon Pool Fire
Authors: Siva Kumar Bathina, Sudheer Siddapureddy
Abstract:
Fire accidents become even worse when hazardous equipment such as reactors or radioactive waste packages are engulfed in fire. In this work, large-eddy numerical fire simulations are performed using a fire dynamics simulator to predict the thermal behavior of such bodies engulfed in hydrocarbon pool fires. A radiatively dominated 0.3 m circular burner with n-heptane as the fuel is considered. The fire simulation results without any body inside the fire are validated against the reported experimental data. The comparison shows good agreement for different flame properties, such as the predicted mass burning rate, flame height, time-averaged centerline temperature, time-averaged centerline velocity, puffing frequency, irradiance to the surroundings, and radiative heat feedback to the pool surface. Casks of different sizes are simulated with SS304L material. The results are independent of the material of the simulated cask, as the adiabatic surface temperature concept is employed in this study. It is observed that the mass burning rate increases with the blockage ratio (3% ≤ B ≤ 32%). However, this increase is reduced at higher blockage ratios (B > 14%). This is because the radiative heat feedback to the fuel surface comes not only from the flame but also from the cask volume. As B increases, the volume of the cask increases, thereby increasing the radiative contribution to the fuel surface. The radiative heat feedback in the case of a cask engulfed in fire is increased by 2.5% to 31% compared to the fire without a cask.
Keywords: adiabatic surface temperature, fire accidents, fire dynamic simulator, radiative heat feedback
Cognitive Dissonance in Robots: A Computational Architecture for Emotional Influence on the Belief System
Authors: Nicolas M. Beleski, Gustavo A. G. Lugo
Abstract:
Robotic agents are taking more and increasingly important roles in society. In order to make these robots and agents more autonomous and efficient, their systems have grown to be considerably complex and convoluted. This growth in complexity has led recent researchers to investigate forms to explain the AI behavior behind these systems in search for more trustworthy interactions. A current problem in explainable AI is the inner workings with the logic inference process and how to conduct a sensibility analysis of the process of valuation and alteration of beliefs. In a social HRI (human-robot interaction) setup, theory of mind is crucial to ease the intentionality gap and to achieve that we should be able to infer over observed human behaviors, such as cases of cognitive dissonance. One specific case inspired in human cognition is the role emotions play on our belief system and the effects caused when observed behavior does not match the expected outcome. In such scenarios emotions can make a person wrongly assume the antecedent P for an observed consequent Q, and as a result, incorrectly assert that P is true. This form of cognitive dissonance where an unproven cause is taken as truth induces changes in the belief base which can directly affect future decisions and actions. If we aim to be inspired by human thoughts in order to apply levels of theory of mind to these artificial agents, we must find the conditions to replicate these observable cognitive mechanisms. To achieve this, a computational architecture is proposed to model the modulation effect emotions have on the belief system and how it affects logic inference process and consequently the decision making of an agent. To validate the model, an experiment based on the prisoner's dilemma is currently under development. The hypothesis to be tested involves two main points: how emotions, modeled as internal argument strength modulators, can alter inference outcomes, and how can explainable outcomes be produced under specific forms of cognitive dissonance.Keywords: cognitive architecture, cognitive dissonance, explainable ai, sensitivity analysis, theory of mind
Isolation and Transplantation of Hepatocytes in an Experimental Model
Authors: Inas Raafat, Azza El Bassiouny, Waldemar L. Olszewsky, Nagui E. Mikhail, Mona Nossier, Nora E. I. El-Bassiouni, Mona Zoheiry, Houda Abou Taleb, Noha Abd El-Aal, Ali Baioumy, Shimaa Attia
Abstract:
Background: Orthotopic liver transplantation is an established treatment for patients with severe acute and end-stage chronic liver disease. The shortage of donor organs continues to be the rate-limiting factor for liver transplantation throughout the world. Hepatocyte transplantation is a promising treatment for several liver diseases and can also be used as a "bridge" to liver transplantation in cases of liver failure. Aim of the work: This study was designed to develop a highly efficient protocol for the isolation and transplantation of hepatocytes in an experimental Lewis rat model, to provide satisfactory guidelines for future application in humans. Materials and Methods: Hepatocytes were isolated from the liver by a double perfusion technique, and bone marrow cells were isolated by centrifugation of the shafts of the tibia and femur of donor Lewis rats. Recipient rats were subjected to a sub-lethal dose of irradiation 2 days before transplantation. In a laparotomy operation, the spleen was injected with freshly isolated hepatocytes, and bone marrow cells were injected intravenously. The animals were sacrificed 45 days later, and splenic sections were prepared and stained with H&E, PAS, AFP and Prox1. Results: The data obtained from this study showed that the double perfusion technique is successful in isolating hepatocytes with respect to cell number and viability. The method used for bone marrow cell separation also gave excellent results regarding cell number and viability. Intrasplenic engraftment of hepatocytes and liver tissue formation within the splenic tissue were found in 70% of cases. Hematoxylin and eosin stained splenic sections from 7 rats showed sheets and clusters of cells among the splenic tissue. Periodic Acid-Schiff stained splenic sections from 7 rats showed clusters of hepatocytes with intensely stained pink cytoplasmic granules, denoting the presence of glycogen. Splenic sections from 7 rats stained with anti-α-fetoprotein antibody showed brownish cytoplasmic staining of the hepatocytes, denoting positive expression of AFP. Splenic sections from 7 rats stained with anti-Prox1 showed brownish nuclear staining of the hepatocytes, denoting positive expression of the Prox1 gene in these cells. Positive expression of the Prox1 gene was also detected in lymphocyte aggregations in the spleens. Conclusions: Isolation of liver cells by the double perfusion technique using a collagenase buffer is a reliable method with a very satisfactory yield regarding cell number and viability. The intrasplenic route of transplantation of freshly isolated liver cells in an immunocompromised model was found to give good results regarding cell engraftment and tissue formation. Further studies are needed to assess the function of the engrafted hepatocytes by measuring prothrombin time, serum albumin and bilirubin levels.
Keywords: Lewis rats, hepatocytes, BMCs, transplantation, AFP, Prox1
Clean Sky 2 Project LiBAT: Light Battery Pack for High Power Applications in Aviation – Simulation Methods in Early Stage Design
Authors: Jan Dahlhaus, Alejandro Cardenas Miranda, Frederik Scholer, Maximilian Leonhardt, Matthias Moullion, Frank Beutenmuller, Julia Eckhardt, Josef Wasner, Frank Nittel, Sebastian Stoll, Devin Atukalp, Daniel Folgmann, Tobias Mayer, Obrad Dordevic, Paul Riley, Jean-Marc Le Peuvedic
Abstract:
Electrical and hybrid aerospace technologies pose very challenging demands on the battery pack, especially with respect to weight and power. In the Clean Sky 2 research project LiBAT (funded by the EU), the consortium is currently building an ambitious prototype with state-of-the-art cells that shows the potential of an intelligent pack design with a high level of integration, especially with respect to thermal management and power electronics. For the latter, innovative multi-level inverter technology is used to realize the required power-converting functions with reduced equipment. In this talk, the key approaches and methods of the LiBAT project will be presented and central results shown. Special focus will be placed on the simulation methods used to support the early design and development stages from an overall system perspective. The applied methods can efficiently handle multiple domains and deal with different time and length scales, thus allowing the analysis and optimization of overall- or sub-system behavior. It will be shown how these simulations provide valuable information and insights for the efficient evaluation of concepts. As a result, the construction and iteration of hardware prototypes has been reduced and development cycles shortened.
Keywords: electric aircraft, battery, Li-ion, multi-level-inverter, Novec
Liposome Loaded Polysaccharide Based Hydrogels: Promising Delayed Release Biomaterials
Authors: J. Desbrieres, M. Popa, C. Peptu, S. Bacaita
Abstract:
Because of their favorable properties (non-toxicity, biodegradability, mucoadhesivity etc.), polysaccharides were studied as biomaterials and as pharmaceutical excipients in drug formulations. These formulations may be produced in a wide variety of forms including hydrogels, hydrogel based particles (or capsules), films etc. In these formulations, the polysaccharide based materials are able to provide local delivery of loaded therapeutic agents but their delivery can be rapid and not easily time-controllable due to, particularly, the burst effect. This leads to a loss in drug efficiency and lifetime. To overcome the consequences of burst effect, systems involving liposomes incorporated into polysaccharide hydrogels may appear as a promising material in tissue engineering, regenerative medicine and drug loading systems. Liposomes are spherical self-closed structures, composed of curved lipid bilayers, which enclose part of the surrounding solvent into their structure. The simplicity of production, their biocompatibility, the size and similar composition of cells, the possibility of size adjustment for specific applications, the ability of hydrophilic or/and hydrophobic drug loading make them a revolutionary tool in nanomedicine and biomedical domain. Drug delivery systems were developed as hydrogels containing chitosan or carboxymethylcellulose (CMC) as polysaccharides and gelatin (GEL) as polypeptide, and phosphatidylcholine or phosphatidylcholine/cholesterol liposomes able to accurately control this delivery, without any burst effect. Hydrogels based on CMC were covalently crosslinked using glutaraldehyde, whereas chitosan based hydrogels were double crosslinked (ionically using sodium tripolyphosphate or sodium sulphate and covalently using glutaraldehyde). It has been proven that the liposome integrity is highly protected during the crosslinking procedure for the formation of the film network. Calcein was used as model active matter for delivery experiments. Multi-Lamellar vesicles (MLV) and Small Uni-Lamellar Vesicles (SUV) were prepared and compared. The liposomes are well distributed throughout the whole area of the film, and the vesicle distribution is equivalent (for both types of liposomes evaluated) on the film surface as well as deeper (100 microns) in the film matrix. An obvious decrease of the burst effect was observed in presence of liposomes as well as a uniform increase of calcein release that continues even at large time scales. Liposomes act as an extra barrier for calcein release. Systems containing MLVs release higher amounts of calcein compared to systems containing SUVs, although these liposomes are more stable in the matrix and diffuse with difficulty. This difference comes from the higher quantity of calcein present within the MLV in relation with their size. Modeling of release kinetics curves was performed and the release of hydrophilic drugs may be described by a multi-scale mechanism characterized by four distinct phases, each of them being characterized by a different kinetics model (Higuchi equation, Korsmeyer-Peppas model etc.). Knowledge of such models will be a very interesting tool for designing new formulations for tissue engineering, regenerative medicine and drug delivery systems.Keywords: controlled and delayed release, hydrogels, liposomes, polysaccharides
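The abstract mentions that the release curves were modelled with, among others, the Higuchi and Korsmeyer-Peppas equations. The sketch below shows how the Korsmeyer-Peppas power law, Mt/M∞ = k·tⁿ, can be fitted to a fractional-release series with SciPy; the data points are invented for illustration and do not come from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def korsmeyer_peppas(t, k, n):
    """Fractional release Mt/Minf = k * t**n (valid for the early part of the curve)."""
    return k * np.power(t, n)

# Hypothetical time points (hours) and cumulative fractional calcein release.
t = np.array([0.5, 1, 2, 4, 8, 12, 24], dtype=float)
release = np.array([0.05, 0.08, 0.12, 0.18, 0.27, 0.33, 0.45])

(k, n), _ = curve_fit(korsmeyer_peppas, t, release, p0=(0.1, 0.5))
print(f"k = {k:.3f}, n = {n:.3f}")
# An exponent n near 0.5 for a thin film suggests Fickian diffusion; larger n
# points to anomalous (coupled diffusion/relaxation) transport.
```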
Using Linear Logistic Regression to Evaluate the Patient and System Delay and Effective Factors in Mortality of Patients with Acute Myocardial Infarction
Authors: Firouz Amani, Adalat Hoseinian, Sajjad Hakimian
Abstract:
Background: Mortality due to Myocardial Infarction (MI) often occurs during the first hours after the onset of symptoms. Therefore, reaching the hospital promptly to receive the necessary treatment can be effective in decreasing the mortality rate. The aim of this study was to investigate the impact of effective factors on the mortality of MI patients using linear logistic regression. Materials and Methods: In this case-control study, all patients with acute MI referred to the Ardabil city hospital were studied. All deceased patients were considered the case group (n=27), and 27 matched patients without acute MI were selected as a control group. Data were collected for all patients in the two groups using the same checklist and then analyzed with SPSS version 24 software using statistical methods. We used the linear logistic regression model to determine the factors affecting the mortality of MI patients. Results: The mean age of patients in the case group was significantly higher than in the control group (75.1±11.7 vs. 63.1±11.6, p=0.001). The history of non-cardiac diseases in the case group, at 44.4%, was significantly higher than in the control group, at 7.4% (p=0.002). The number of performed PCIs in the case group, at 40.7%, was significantly lower than in the control group, at 74.1% (p=0.013). The time between hospital admission and the performed PCI in the case group, at 110.9 min, was significantly longer than in the control group, at 56 min (p=0.001). The mean delay from onset of symptoms to hospital admission (patient delay) and the mean delay from hospital admission to receiving treatment (system delay) were similar between the two groups. Using the logistic regression model, we found that a history of non-cardiac diseases (OR=283) and the number of performed PCIs (OR=24.5) had a significant impact on the mortality of MI patients compared to the other factors. Conclusion: The results of this study showed that, of all the studied factors, the number of performed PCIs, a history of non-cardiac illness, and the interval between onset of symptoms and the performed PCI have a significant relation with the mortality of MI patients, while the other factors were not significant. Therefore, conducting further studies with larger samples and investigating other factors such as smoking, weather, etc. is recommended.
Keywords: acute MI, mortality, heart failure, arrhythmia
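As a schematic companion to the analysis described above, the sketch below fits a binary logistic regression with statsmodels and converts the coefficients into odds ratios with confidence intervals. The variable names and simulated data are placeholders, not the Ardabil dataset, and the coefficients used to generate the outcome are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "age": rng.normal(68, 12, n),
    "non_cardiac_disease": rng.integers(0, 2, n),   # history of non-cardiac illness
    "pci_performed": rng.integers(0, 2, n),
    "door_to_pci_min": rng.normal(80, 30, n),
})
# Simulated outcome loosely tied to the predictors (illustration only).
logit = -6 + 0.05 * df["age"] + 1.2 * df["non_cardiac_disease"] - 1.0 * df["pci_performed"]
df["died"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["age", "non_cardiac_disease", "pci_performed", "door_to_pci_min"]])
model = sm.Logit(df["died"], X).fit(disp=False)

odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "2.5%": np.exp(model.conf_int()[0]),
    "97.5%": np.exp(model.conf_int()[1]),
})
print(odds_ratios.round(2))
```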
Predicting College Students’ Happiness During COVID-19 Pandemic; Be optimistic and Well in College!
Authors: Michiko Iwasaki, Jane M. Endres, Julia Y. Richards, Andrew Futterman
Abstract:
The present study aimed to examine college students’ happiness during the COVID-19 pandemic. Using online survey data from 96 college students in the U.S., a regression analysis was conducted to predict college students’ happiness. The results indicated that a four-predictor model (optimism, college students’ subjective wellbeing, coronavirus stress, and spirituality) explained 57.9% of the variance in students’ subjective happiness, F(4,77)=26.428, p<.001, R²=.579, 95% CI [.41, .66]. The study suggests the importance of learned optimism among college students.
Keywords: COVID-19, optimism, spirituality, well-being
A Qualitative Study Exploring Factors Influencing the Uptake of and Engagement with Health and Wellbeing Smartphone Apps
Authors: D. Szinay, O. Perski, A. Jones, T. Chadborn, J. Brown, F. Naughton
Abstract:
Background: The uptake of health and wellbeing smartphone apps is largely influenced by popularity indicators (e.g., rankings), rather than evidence-based content. Rapid disengagement is common. This study aims to explore how and why potential users 1) select and 2) engage with such apps, and 3) how increased engagement could be promoted. Methods: Semi-structured interviews and a think-aloud approach were used to allow participants to verbalise their thoughts whilst searching for a health or wellbeing app online, followed by a guided search in the UK National Health Service (NHS) 'Apps Library' and Public Health England’s (PHE) 'One You' website. Recruitment took place between June and August 2019. Adults interested in using an app for behaviour change were recruited through social media. Data were analysed using the framework approach. The analysis is both inductive and deductive, with the coding framework being informed by the Theoretical Domains Framework. The results are further mapped onto the COM-B (Capability, Opportunity, Motivation - Behaviour) model. The study protocol is registered on the Open Science Framework (https://osf.io/jrkd3/). Results: The following targets were identified as playing a key role in increasing the uptake of and engagement with health and wellbeing apps: 1) psychological capability (e.g., reduced cognitive load); 2) physical opportunity (e.g., low financial cost); 3) social opportunity (e.g., embedded social media); 4) automatic motivation (e.g., positive feedback). Participants believed that the promotion of evidence-based apps on NHS-related websites could be enhanced through active promotion on social media, adverts on the internet, and in general practitioner practices. Future Implications: These results can inform the development of interventions aiming to promote the uptake of and engagement with evidence-based health and wellbeing apps, a priority within the UK NHS Long Term Plan ('digital first'). The targets identified across the COM-B domains could help organisations that provide platforms for such apps to increase impact through better selection of apps.Keywords: behaviour change, COM-B model, digital health, mhealth
Estimation of Dynamic Characteristics of a Middle Rise Steel Reinforced Concrete Building Using Long-Term
Authors: Fumiya Sugino, Naohiro Nakamura, Yuji Miyazu
Abstract:
In the earthquake-resistant design of buildings, the evaluation of vibration characteristics is important. In recent years, with the increasing number of super high-rise buildings, evaluating the response has become important not only for the first mode but also for higher modes. However, knowledge of the vibration characteristics of buildings is mostly limited to the first mode, and knowledge of the higher modes is still insufficient. In this paper, the characteristics of the first and second modes were studied using earthquake observation records of an SRC building, by applying a frequency filter to an ARX model. First, we studied the change of the eigenfrequency and the damping ratio during the 3.11 earthquake. The eigenfrequency gradually decreases from the onset of the earthquake and becomes almost stable after about 150 seconds. At this point, the decreasing rates of the 1st and 2nd eigenfrequencies are both about 0.7. Although the damping ratio has a larger error than the eigenfrequency, both the 1st and 2nd damping ratios are 3 to 5%. There is also a strong correlation between the 1st and 2nd eigenfrequencies, with a regression line of y=3.17x. For the damping ratio, the regression line is y=0.90x, so the 1st and 2nd damping ratios are approximately the same. Next, we studied the eigenfrequency and damping ratio over the long term, from 1998 to 2014, including the period after the 3.11 earthquake, with all considered earthquakes connected in order of occurrence. The eigenfrequency slowly declined from immediately after completion of the building and tended to stabilize after several years, although it declined greatly after the 3.11 earthquake. The decreasing rates of both the 1st and 2nd eigenfrequencies up to about 7 years after completion are about 0.8. For the damping ratio, both the 1st and 2nd are about 1 to 6%. After the 3.11 earthquake, the 1st increases by about 1% and the 2nd by less than 1%. For the eigenfrequency, there is a strong correlation between the 1st and 2nd, with a regression line of y=3.17x; for the damping ratio, the regression line is y=1.01x, so the 1st and 2nd damping ratios are approximately the same. Based on the above results, the changes in eigenfrequency and damping ratio are summarized as follows. In the long-term study of the eigenfrequency, both the 1st and 2nd gradually declined from immediately after completion, tended to stabilize after a few years, and declined further after the 3.11 earthquake; there is a strong correlation between the 1st and 2nd, and the timing and rate of decline are similar. In the long-term study of the damping ratio, both the 1st and 2nd are about 1 to 6%; after the 3.11 earthquake, the 1st increases by about 1% and the 2nd by less than 1%, and the two remain approximately the same.
Keywords: eigenfrequency, damping ratio, ARX model, earthquake observation records
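The modal parameters discussed above are extracted from an ARX model fitted to recorded input and output motions. The sketch below shows one common way to do this: fit the ARX coefficients by least squares, then convert the roots of the AR polynomial into natural frequencies and damping ratios. The model orders, sampling interval and synthetic signals are illustrative assumptions, not the building records or the filtering procedure used in the paper.

```python
import numpy as np

def fit_arx(y, u, na, nb, dt):
    """Least-squares ARX(na, nb) fit; returns (frequency [Hz], damping ratio) per mode."""
    n = len(y)
    k0 = max(na, nb)
    # Regressor matrix: past outputs and past inputs.
    phi = np.column_stack(
        [-y[k0 - i:n - i] for i in range(1, na + 1)]
        + [u[k0 - j:n - j] for j in range(1, nb + 1)]
    )
    theta, *_ = np.linalg.lstsq(phi, y[k0:], rcond=None)
    a = np.concatenate(([1.0], theta[:na]))   # AR polynomial coefficients
    poles_z = np.roots(a)                     # discrete-time poles
    poles_s = np.log(poles_z) / dt            # map to continuous time
    modes = []
    for s in poles_s:
        if s.imag > 0:                        # keep one of each conjugate pair
            wn = abs(s)
            modes.append((wn / (2 * np.pi), -s.real / wn))
    return sorted(modes)

# Synthetic two-mode response driven by white-noise "ground motion" (illustration only).
dt, n = 0.02, 6000
rng = np.random.default_rng(2)
u = rng.normal(size=n)
t = np.arange(n) * dt
y = np.zeros(n)
for f, zeta, amp in [(1.2, 0.03, 1.0), (3.8, 0.04, 0.4)]:   # assumed 1st/2nd modes
    wd = 2 * np.pi * f * np.sqrt(1 - zeta**2)
    h = amp * np.exp(-zeta * 2 * np.pi * f * t) * np.sin(wd * t)
    y += np.convolve(u, h)[:n] * dt
y += 0.01 * rng.normal(size=n)

for f, zeta in fit_arx(y, u, na=4, nb=4, dt=dt):
    print(f"f = {f:.2f} Hz, damping = {zeta:.3f}")
```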
Competitiveness of Animation Industry: The Case of Thailand
Authors: T. Niracharapa
Abstract:
This research examined the competitiveness of the animation industry in Thailand. Data were collected from articles, related reports and websites, news, research, and interviews with key persons from both the public and private sectors. The diamond model was used for the analysis. The major factors driving the Thai animation industry forward include a quality workforce, creativity and strong associations. However, discontinuity in government support, infrastructure, marketing, IP creation and financial constraints were factors keeping the Thai animation industry less competitive in the global market.
Keywords: animation, competitiveness, government, Thailand, market
Automatic Vowel and Consonant's Target Formant Frequency Detection
Authors: Othmane Bouferroum, Malika Boudraa
Abstract:
In this study, a dual exponential model for CV formant transitions is derived from the locus theory of speech perception. An algorithm for the automatic detection of vowel and consonant target formant frequencies is then developed and tested on real speech. The results show that vowels and consonants are detected through their transitions rather than through their short stable portions. Vowel reduction is also clearly observed in our data. These results are confirmed by observations made in perceptual experiments in the literature.
Keywords: acoustic invariance, coarticulation, formant transition, locus equation
School Students’ Career Guidance in the Context of Inclusive Education in Kazakhstan: Experience and Perspectives
Authors: Laura Butabayeva, Svetlana Ismagulova, Gulbarshin Nogaibayeva, Maiya Temirbayeva, Aidana Zhussip
Abstract:
The article presents the main results of the study conducted within the grant project «Organizational and methodological foundations for ensuring the inclusiveness of school students’ career guidance» (2022-2024). The main aim of the project is to study the issue of the absence of developed mechanisms, coordinating the activities of all stakeholders in preparing school students for conscious career choice, taking into account their individual opportunities and special educational needs. To achieve the aim of the project, according to the implementation plan, the analysis of foreign and national literature on the studied problem, as well as the study of the state of school students’ career guidance and their socialization in the context of inclusive education were conducted, the international experience on this issue was explored. The analysis of the national literature conducted by the authors has shown the State’s annual increase in the number of students with special educational needs as well as the rapid demand of labour market, influencing their professional self-determination in modern society. The participants from 5 State’s regions, including students, their parents, general secondary schools administration and educators, as well as employers, took part in the study, taking into account the geographical location: south, north, west, centre, and the cities of republican significance. To ensure the validity of the study’s results, the triangulation method was utilised, including both qualitative and quantitative methods. The data were analysed independently and compared with each other. Ethical principles were considered during all stages of the study. The characteristics of the system of career guidance in the modern school, the role and the involvement of stakeholders in the system of career guidance, the opinions of educators on school students’ preparedness for career choice, and the factors impeding the effectiveness of career guidance in schools were examined. The problem of stakeholders’ disunity and inconsistency, causing the systemic labor market distortions, the growth of low-skilled labor, and the unemployed, including people with special educational needs, were revealed. The other issue identified by the researchers was educators’ insufficient readiness for students’ career choice preparation in the context of inclusive education. To study cutting-edge experience in organizing a system of career guidance for young people and develop mechanisms coordinating the actions of all stakeholders in preparing students for career choice, the institutions of career guidance in France, Japan, and Germany were explored by the researchers. To achieve the aim of the project, the systemic contemporary model of school students’ professional self-determination, considering their individual opportunities and special educational needs, has been developed based on the study results and international experience. The main principles of this model are consistency, accessibility, inclusiveness, openness, coherence, continuity. The perspectives of students’ career guidance development in the context of inclusive education have been suggested.Keywords: career guidance, inclusive education, model of school students’ professional self-determination, psychological and pedagogical support, special educational needs
Performance Evaluation of Routing Protocol in Cognitive Radio with Multi Technological Environment
Authors: M. Yosra, A. Mohamed, T. Sami
Abstract:
Over the past few years, mobile communication technologies have evolved significantly, which has promoted the implementation of many systems in a multi-technological setting. From one system to another, the Quality of Service (QoS) provided to mobile consumers gets better. The growing number of standards extends the services available to each consumer; moreover, most of the available radio frequencies have already been allocated, for example to 3G, WiFi, WiMAX, and LTE. A study by the Federal Communications Commission (FCC) found that certain frequency bands are only partially occupied in particular locations and at particular times. The idea of Cognitive Radio (CR) is therefore to share the spectrum between a primary user (PU) and a secondary user (SU). The main objective of this spectrum management is to achieve a maximum rate of exploitation of the radio spectrum. In general, CR can greatly improve the quality of service (QoS) and the reliability of the link. The problem addressed here is how to propose a technique that improves the reliability of the wireless link by using CR with several routing protocols, given that users have reported unreliable links and incompatibility with QoS requirements. In our case, we choose the QoS parameter "bandwidth" to perform a supervised classification. In this paper, we propose a comparative study of several routing protocols, taking into account the variation of different technologies over the existing spectral bandwidth, namely 3G, WiFi, WiMAX, and LTE. From the simulation results, we observe that LTE has significantly higher available bandwidth compared with the other technologies. The performance of the OLSR protocol is better than that of the other evaluated routing protocols (DSR, AODV and DSDV) with LTE technology, because of the proper reception of packets, lower packet drop and higher throughput. Numerous simulations of the routing protocols were carried out using simulators such as NS3.
Keywords: cognitive radio, multi technology, network simulator (NS3), routing protocol
O-LEACH: The Problem of Orphan Nodes in the LEACH Routing Protocol for Wireless Sensor Networks
Authors: Wassim Jerbi, Abderrahmen Guermazi, Hafedh Trabelsi
Abstract:
The optimum use of coverage in wireless sensor networks (WSNs) is very important. The LEACH protocol, Low Energy Adaptive Clustering Hierarchy, presents a hierarchical clustering algorithm for wireless sensor networks. LEACH is a protocol that allows the formation of distributed clusters. In each cluster, LEACH randomly selects some sensor nodes called cluster heads (CHs). The selection of CHs is made with a probabilistic calculation. It is assumed that each non-CH node joins a cluster and becomes a cluster member. Nevertheless, the CHs can be concentrated in a specific part of the network, so that several sensor nodes cannot reach any CH. To solve this problem, we created the O-LEACH (Orphan nodes LEACH) protocol, whose role is to reduce the number of sensor nodes that do not belong to any cluster. A cluster member called a gateway receives messages from neighboring orphan nodes. The gateway informs the CH that there are neighboring nodes that do not belong to any group; the gateway, acting as a CH', then attaches the orphaned nodes to the cluster and collects their data. O-LEACH enables a new form of cluster formation that leads to a long network lifetime and minimal energy consumption. Orphan nodes possess enough energy and seek to be covered by the network. The principal novel contribution of the proposed work is the O-LEACH protocol, which provides coverage of the whole network with a minimum number of orphaned nodes and a very high connectivity rate. As a result, the WSN application receives data from the entire network, including the orphan nodes. Proper functioning of the application therefore requires intelligent management of the resources present within each sensor node. The simulation results show that O-LEACH performs better than LEACH in terms of coverage, connectivity rate, energy and scalability.
Keywords: WSNs, routing, LEACH, O-LEACH, orphan nodes, sub-cluster, gateway, CH'
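To make the cluster-formation step concrete, the sketch below reproduces the standard LEACH cluster-head election threshold and then flags nodes that are out of range of every elected cluster head as orphans, attaching each one to its nearest in-range cluster member acting as a gateway, in the spirit of O-LEACH. The field size, radio range and probabilities are invented parameters, not the authors' simulation settings.

```python
import numpy as np

rng = np.random.default_rng(3)
P, round_no = 0.05, 0                       # CH probability and current round
n_nodes, field, radio_range = 100, 100.0, 20.0
nodes = rng.uniform(0, field, size=(n_nodes, 2))

# Standard LEACH threshold: a node that has not yet been CH in the current epoch
# becomes CH when a uniform draw falls below T(n).
threshold = P / (1 - P * (round_no % int(1 / P)))
is_ch = rng.uniform(size=n_nodes) < threshold
ch_idx = np.flatnonzero(is_ch)

def nearest_in_range(i, candidates):
    """Index of the closest candidate within radio range of node i, or None."""
    if len(candidates) == 0:
        return None
    d = np.linalg.norm(nodes[candidates] - nodes[i], axis=1)
    j = int(np.argmin(d))
    return candidates[j] if d[j] <= radio_range else None

members, orphans = {}, []
for i in range(n_nodes):
    if is_ch[i]:
        continue
    ch = nearest_in_range(i, ch_idx)
    if ch is not None:
        members[i] = ch                      # ordinary LEACH cluster member
    else:
        orphans.append(i)                    # cannot reach any CH

# O-LEACH step: attach each orphan to the nearest reachable cluster member,
# which then acts as a gateway (CH') and forwards the orphan's data to its CH.
member_idx = np.array(list(members))
adopted = {o: g for o in orphans if (g := nearest_in_range(o, member_idx)) is not None}

print(f"CHs: {len(ch_idx)}, members: {len(members)}, "
      f"orphans: {len(orphans)}, adopted via gateways: {len(adopted)}")
```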
Assessment of Mortgage Applications Using Fuzzy Logic
Authors: Swathi Sampath, V. Kalaichelvi
Abstract:
The assessment of the risk posed by a borrower to a lender is one of the common problems that financial institutions have to deal with. Consumers vying for a mortgage are generally compared to each other by means of a number called the credit score, which is generated by applying a mathematical algorithm to the information in the applicant's credit report. The higher the credit score, the lower the risk posed by the candidate and the more willing the lender is to take them on. The objective of the present work is to use fuzzy logic and linguistic rules to create a model that generates credit scores.
Keywords: credit scoring, fuzzy logic, mortgage, risk assessment
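As an illustration of the approach described above, the sketch below builds a tiny Mamdani-style fuzzy system in plain NumPy: two inputs (payment history and debt-to-income ratio) are fuzzified with triangular membership functions, a handful of linguistic rules are evaluated with min/max operators, and the credit score is recovered by centroid defuzzification. The membership breakpoints and rules are invented for illustration and are not the model proposed in the paper.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b (shoulders allowed)."""
    x = np.asarray(x, dtype=float)
    left = np.ones_like(x) if a == b else (x - a) / (b - a)
    right = np.ones_like(x) if b == c else (c - x) / (c - b)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

score_axis = np.linspace(300, 850, 551)            # output universe (credit score)
score_sets = {                                      # output fuzzy sets
    "poor": trimf(score_axis, 300, 300, 580),
    "fair": trimf(score_axis, 500, 650, 750),
    "good": trimf(score_axis, 680, 850, 850),
}

def credit_score(payment_history, debt_ratio):
    """payment_history in [0, 100] (higher = better), debt_ratio in [0, 1]."""
    hist = {"bad":  trimf(payment_history, 0, 0, 60),
            "good": trimf(payment_history, 40, 100, 100)}
    debt = {"low":  trimf(debt_ratio, 0.0, 0.0, 0.45),
            "high": trimf(debt_ratio, 0.35, 1.0, 1.0)}

    # Linguistic rules, evaluated with min (AND) and aggregated with max (OR).
    aggregated = np.maximum.reduce([
        np.minimum(min(hist["good"], debt["low"]),  score_sets["good"]),
        np.minimum(min(hist["good"], debt["high"]), score_sets["fair"]),
        np.minimum(min(hist["bad"],  debt["low"]),  score_sets["fair"]),
        np.minimum(min(hist["bad"],  debt["high"]), score_sets["poor"]),
    ])
    if aggregated.sum() == 0:
        return float(score_axis.mean())
    return float((score_axis * aggregated).sum() / aggregated.sum())   # centroid

print(round(credit_score(payment_history=85, debt_ratio=0.25)))   # strong applicant
print(round(credit_score(payment_history=30, debt_ratio=0.70)))   # risky applicant
```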
Design of a 4-DOF Robot Manipulator with Optimized Algorithm for Inverse Kinematics
Authors: S. Gómez, G. Sánchez, J. Zarama, M. Castañeda Ramos, J. Escoto Alcántar, J. Torres, A. Núñez, S. Santana, F. Nájera, J. A. Lopez
Abstract:
This paper presents in detail the mathematical model of the direct and inverse kinematics of a robot manipulator (welding type) with four degrees of freedom. Using the D-H parameters, screw theory, and numerical, geometric and interpolation methods, the theoretical and practical values of the robot position were determined with an optimized algorithm for inverse kinematics, obtaining the values of the individual joints in order to determine the virtual paths in a relatively short time.
Keywords: kinematics, degree of freedom, optimization, robot manipulator
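To illustrate the D-H-based modelling mentioned above, the sketch below chains standard Denavit-Hartenberg transforms for a 4-DOF arm and recovers joint angles for a target position with a generic numerical optimizer. The link lengths and D-H table are invented placeholders, not the welding manipulator's actual parameters, and the paper's own optimized inverse-kinematics algorithm is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def dh(theta, d, a, alpha):
    """Homogeneous transform for one joint from classic D-H parameters."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

# Assumed D-H table (d, a, alpha) for a 4-DOF articulated arm; theta are the joint variables.
DH_TABLE = [(0.30, 0.00, np.pi / 2),
            (0.00, 0.40, 0.0),
            (0.00, 0.35, 0.0),
            (0.00, 0.10, 0.0)]

def forward_kinematics(q):
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(q, DH_TABLE):
        T = T @ dh(theta, d, a, alpha)
    return T[:3, 3]                      # end-effector position

def inverse_kinematics(target, q0=None):
    """Position-only IK by minimizing the squared end-effector error."""
    q0 = np.zeros(4) if q0 is None else q0
    res = minimize(lambda q: np.sum((forward_kinematics(q) - target) ** 2),
                   q0, method="BFGS")
    return res.x

target = np.array([0.45, 0.20, 0.40])
q = inverse_kinematics(target)
print("joint angles [rad]:", np.round(q, 3))
print("reached position:  ", np.round(forward_kinematics(q), 3))
```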
Decreased Non-Communicable Disease by Surveillance, Control, Prevention Systems, and Community Engagement Process in Phayao, Thailand
Authors: Vichai Tienthavorn
Abstract:
Background: The number of patients with non-communicable diseases (NCDs), especially hypertension and diabetes, is increasing in Thailand; hypertension and diabetes patients were found to number 3.7 million in 2008. Human health behaviors have also changed extensively. Hence, the Thai government has a policy to reduce NCDs. Generally, primary care plays an important role in treatment using the medical process; however, the number of NCD patients has not decreased. Objectives: This study aimed not only to reduce the number of patients and the mortality rate but also to increase quality of life, to be applicable in different areas, and to be proposed as a national policy for effective long-term operation. Methods: Primary health care (PHC) was used as the primary screening process to rapidly identify each person's risk. The screening tool of the study was Vichai's 7 color balls model, a medical education tool for transferring knowledge from the student health team to the community through health volunteers, creating community engagement in terms of social participation. It was found that people in the community became aware of their health and could evaluate their level of risk using this model. Results: The project was implemented in 2015 at the Nong Lom Health Center in Phayao (target group aged 15-65 years, 2,529 people). Hypertension screening coverage reached 99.01%; the risk group (light green) decreased in favor of the normal group (white), which grew from 1806 to 1893, and severe patients (red) decreased to moderate (orange), from 10 to 5. A health program for behavior change based on the best practices of the 3Es (Eating, Exercise, Emotion) and 3Rs (Reducing tobacco, alcohol, obesity) was applied to the risk group, and strict medication and investigation were encouraged for severe patients (red). Conclusion: This is the first demonstration of knowledge transfer to community engagement by students, which constitutes sustainable education in PHC.
Keywords: non-communicable disease, surveillance control and prevention systems, community engagement, primary health care
In Vitro and in Vivo Evaluation of Nano Collagen Molecules to Enhance Mesenchymal Stem Cell Differentiation into Insulin Producing Cells
Authors: Chin-Tsu Ma, Yi-Jhen Wu, Hsia Ying Cheng, Han Hsiang Huang, Shyh Ming Kuo
Abstract:
The use of specific molecules, including nutrients and pharmacological agents, has been tried for modulating the differentiation of mesenchymal stem cells (MSCs) into insulin-producing cells. The aim of this study is to investigate the ability of nano collagen molecules (as nutrient or scaffold) to enhance MSC differentiation into insulin-producing cells in combination with nicotinamide and exendin-4 (pharmacological agents), in vitro and in vivo. The results demonstrated that the cells exhibit morphologically islet-like clusters after treatment with nano collagen molecules, nicotinamide and exendin-4. MSCs additionally treated with nano collagen molecules showed significant increases in Nkx6.1 and insulin mRNA expression at 14-day and 21-day culture compared with those treated with nicotinamide and exendin-4 alone. An early 7-day elevation in PDX-1 mRNA expression was observed. Furthermore, the MSCs exposed to nano collagen molecules produced the highest secretion of insulin (p < 0.05). A type-2 diabetes rat model induced by a high-fat diet and a low dose of streptozotocin was established in this study. These rats exhibited higher food intake, higher water intake, lower glucose tolerance, lower insulin tolerance, and higher HbA1c (significant increases, p < 0.01) compared with normal rats, demonstrating that the type-2 diabetes model was successfully established. Biopsy examinations also showed obvious destruction of the islets. After injection of the differentiated MSCs into the damaged pancreas of diabetic rats, more regenerated islets were observed in the rats treated with nano collagen molecules, and these rats exhibited much lower HbA1c compared with the normal and diabetic rats after 4 weeks (significant decreases, p < 0.001). These results indicate that culturing MSCs with nano collagen molecules, nicotinamide, and exendin-4 is beneficial for MSC differentiation into islet-like cells. The nano collagen molecules may lead to alterations or up-regulation of gene expression and influence the differentiation outcomes induced by nicotinamide and exendin-4.
Keywords: nano collagen molecules, nicotinamide, MSCs, diabetes
Multi-Criteria Optimal Management Strategy for in-situ Bioremediation of LNAPL Contaminated Aquifer Using Particle Swarm Optimization
Authors: Deepak Kumar, Jahangeer, Brijesh Kumar Yadav, Shashi Mathur
Abstract:
In-situ remediation is a technique that can remediate either surface water or groundwater at the site of contamination. In the present study, a simulation-optimization approach has been used to develop a management strategy for remediating LNAPL (Light Non-Aqueous Phase Liquid) contaminated aquifers. Benzene, toluene, ethylbenzene, and xylene, collectively known as BTEX, are the main components of the LNAPL contaminant. In the in-situ bioremediation process, a set of injection and extraction wells is installed. Injection wells supply oxygen and other nutrients which, with the help of indigenous soil bacteria, convert BTEX into carbon dioxide and water. Extraction wells, on the other hand, check the downstream movement of the plume. In this study, the optimal design of the system has been carried out using the particle swarm optimization (PSO) algorithm. A comprehensive pumping management strategy for the injection and extraction wells has been developed to attain maximum allowable concentrations of 5 ppm and 4.5 ppm. The management strategy comprises determining the pumping rates, the total pumping volume, and the total running cost incurred for each potential injection and extraction well. The results indicate a high pumping rate for the injection wells during the initial management period, since this facilitates the availability of oxygen and other nutrients necessary for biodegradation; the rate is low during the third year because sufficient oxygen is already available and the contaminant is assumed to have biodegraded by the end of the third year, when the concentration drops to a permissible level.Keywords: groundwater, in-situ bioremediation, light non-aqueous phase liquid, BTEX, particle swarm optimization
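As a purely illustrative aside for readers unfamiliar with particle swarm optimization, the short Python sketch below shows the general shape of a PSO search over candidate pumping rates, using a placeholder linear pumping-cost objective and a surrogate concentration model enforced through a penalty term. The well count, rate bounds, unit cost, and concentration surrogate are assumptions made for this example only; they do not reproduce the study's flow-and-transport simulation or its actual cost data.

```python
import numpy as np

# Minimal PSO sketch: search for pumping rates (per well) that minimize a
# placeholder pumping cost while keeping a surrogate BTEX concentration
# below a permissible limit, handled via a quadratic penalty.
rng = np.random.default_rng(0)

N_WELLS = 6        # assumed: e.g. 3 injection + 3 extraction wells
RATE_MAX = 500.0   # assumed upper bound on pumping rate per well
C_LIMIT = 5.0      # permissible concentration in ppm (from the abstract)
UNIT_COST = 1.2    # assumed cost per unit pumped volume

def concentration(rates):
    # Placeholder surrogate: residual concentration falls as total pumping rises.
    return 60.0 * np.exp(-np.sum(rates) / 800.0)

def objective(rates):
    cost = UNIT_COST * np.sum(rates)
    penalty = 1e4 * max(0.0, concentration(rates) - C_LIMIT) ** 2
    return cost + penalty

# Standard PSO loop with inertia, cognitive, and social terms
n_particles, n_iter = 30, 200
w, c1, c2 = 0.7, 1.5, 1.5

x = rng.uniform(0.0, RATE_MAX, (n_particles, N_WELLS))  # positions = rate vectors
v = np.zeros_like(x)                                     # velocities
pbest = x.copy()
pbest_f = np.array([objective(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, RATE_MAX)
    f = np.array([objective(p) for p in x])
    improved = f < pbest_f
    pbest[improved] = x[improved]
    pbest_f[improved] = f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best rates:", np.round(gbest, 1), "ppm:", round(concentration(gbest), 2))
```

In a real application, the surrogate concentration function would be replaced by calls to a groundwater flow and contaminant transport model evaluated for each candidate set of pumping rates.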
Procedia PDF Downloads 4498190 Towards a Better Understanding of Planning for Urban Intensification: Case Study of Auckland, New Zealand
Authors: Wen Liu, Errol Haarhoff, Lee Beattie
Abstract:
In 2010, New Zealand’s central government re-organised local government arrangements in Auckland, New Zealand by amalgamating the previous regional council and seven supporting local government units into a single unitary council, the Auckland Council. The Auckland Council is charged with providing local government services to approximately 1.5 million people (a third of New Zealand’s total population). This includes addressing Auckland’s strategic urban growth management and setting its urban planning policy directions for the next 40 years, as expressed in the first ever spatial plan in the region – the Auckland Plan (2012). The Auckland Plan supports a compact city model by concentrating the larger part of future urban growth and development in, and around, existing and proposed transit centres, with the intention of making Auckland a globally competitive city and achieving ‘the most liveable city in the world’. Turning that vision into reality is operationalised through the statutory land use plan, the Auckland Unitary Plan. The Unitary Plan replaced the previous regional and local statutory plans when it became operative in 2016, becoming the ‘rule book’ on how to manage and develop the natural and built environment, using land use zones and zone standards. Across the broad literature on urban growth management, one significant issue stands out with respect to intensification: the ‘gap’ between strategic planning and what has actually been achieved is evident in the argument for the ‘compact’ urban form. Although the compact city model may have a wide range of merits, the extent to which these are realized largely relies on how intensification is actually delivered. The transformation of the rhetoric of the residential intensification model into reality is profoundly influential, yet has received limited empirical analysis. In Auckland, the Auckland Plan established strategies to deliver intensification across diverse arenas. Nonetheless, planning policy by itself does not necessarily achieve the envisaged objectives; providing a planning system with the capacity to enhance and sustain plan implementation is another demanding agenda. Though the Auckland Plan provides a wide-ranging strategic context, its actual delivery depends on the Unitary Plan. However, questions have been raised about whether the Unitary Plan has the necessary statutory tools to deliver the Auckland Plan’s policy outcomes. In Auckland, there is likely to be continuing tension between the strategies for intensification and their envisaged objectives, making it doubtful whether the main principles of the intensification strategies can be realized. This raises questions over whether the Auckland Plan’s policy goals can be achieved in practice, including delivering a ‘quality compact city’ and residential intensification. Taking Auckland as an example of a traditionally sprawling city, this article investigates the efficacy of plan making and implementation directed towards higher-density development. It explores the process of plan development, plan making, and the implementation frameworks of the first ever spatial plan in Auckland, so as to explicate the objectives and processes involved and to consider whether these will facilitate decision-making processes that realize the anticipated intensive urban development.Keywords: urban intensification, sustainable development, plan making, governance and implementation
Procedia PDF Downloads 5598189 Robotic Exoskeleton Response During Infant Physiological Knee Kinematics
Authors: Breanna Macumber, Victor A. Huayamave, Emir A. Vela, Wangdo Kim, Tamara T. Chamber, Esteban Centeno
Abstract:
Spina bifida is a type of neural tube defect that affects the nervous system and can lead to problems such as total leg paralysis. Treatment requires physical therapy and rehabilitation. Robotic exoskeletons have been used in rehabilitation to train muscle movement and assist in injury recovery; however, current models focus on adult populations rather than on infants. The proposed framework aims to couple a musculoskeletal infant model with a robotic exoskeleton using vacuum-powered artificial muscles to provide rehabilitation to infants affected by spina bifida. The study that drove the input values for the robotic exoskeleton used motion capture technology to collect data from the spontaneous kicking movement of a 2.4-month-old infant lying supine. OpenSim was used to develop the musculoskeletal model, and inverse kinematics was used to estimate hip joint angles. A total of 4 kicks (A, B, C, D) were selected; the selection was based on range, transient response, and stable response. Selected kicks had at least 5° of range of motion with a smooth transient response and a stable period. The robotic exoskeleton used a Vacuum-Powered Artificial Muscle (VPAM), whose structure comprised cells that were clipped in a collapsed state and unclipped when desired to simulate the infant’s age. The artificial muscle works with vacuum pressure: when air is removed, the muscle contracts, and when air is added, the muscle relaxes. Bench testing was performed using a 6-month-old infant mannequin. The previously developed exoskeleton performed well with controlled ranges of motion and frequencies, which are typical of rehabilitation protocols for infants with spina bifida. However, the random kicking motion in this study contained high-frequency kicks, and the exoskeleton was not able to accurately replicate all the investigated kicks. Kick 'A' had a greater error than the other kicks. This study has the potential to advance the infant rehabilitation field.Keywords: musculoskeletal modeling, soft robotics, rehabilitation, pediatrics
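As a minimal illustration of the kick-selection criterion described above (at least 5° of range of motion with a smooth response), the Python sketch below flags candidate kicks from a hip-angle trace. The smoothness threshold, sampling rate, and synthetic signal are assumptions introduced for the example; they are not the authors' actual OpenSim processing pipeline.

```python
import numpy as np

def range_of_motion(angles_deg):
    # Total angular excursion of the segment, in degrees.
    return float(np.max(angles_deg) - np.min(angles_deg))

def is_candidate_kick(angles_deg, min_range_deg=5.0, max_step_deg=2.0):
    # Reject segments with frame-to-frame jumps too large to be a smooth kick
    # (max_step_deg is an assumed, illustrative smoothness threshold).
    smooth = np.max(np.abs(np.diff(angles_deg))) <= max_step_deg
    return range_of_motion(angles_deg) >= min_range_deg and smooth

# Example: synthetic 2-second hip-flexion trace sampled at 100 Hz
t = np.linspace(0.0, 2.0, 200)
hip_angle = 20.0 + 8.0 * np.sin(2 * np.pi * 1.5 * t)  # ~16 degrees of excursion
print(is_candidate_kick(hip_angle))                    # True under these assumptions
```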
Procedia PDF Downloads 124