Search results for: open queueing network
1446 Urban Compactness and Sustainability: Beijing Experience
Authors: Xilu Liu, Ameen Farooq
Abstract:
Beijing has several compact residential housing settings in many of its urban districts. The study in this paper reveals that urban compactness, as a predictor of density, may carry an altogether different meaning in the developing world when compared to the U.S. for achieving objectives of urban sustainability. Recent urban design studies in the U.S. advocate compact, mixed-use, higher-density housing to achieve sustainable and energy-efficient living environments. While the concept of urban compactness is widely accepted as an approach in modern architectural and urban design fields, this belief may not carry over directly to all areas within cities of developing countries. Beijing’s technology-driven economy, with its rich historic and cultural heritage and a highly speculative real-estate market, extends its urban boundaries into multiple compact urban settings of varying scales and densities. The accelerated pace of migration from the countryside in search of better opportunities has led to unsustainable and uncontrolled buildups, within and outside of the urban center, to meet growing population demand. This unwarranted compactness in certain urban zones has produced an unhealthy physical density that poses serious environmental and ecological challenges to basic living conditions. In addition, the crowding, traffic congestion, pollution and limited housing surrounding this compactness are a threat to public health. Several residential blocks in close proximity to each other were found to be quite compacted, or ill-planned, due to a lack of proper planning in Beijing. Most of them at first sight appear compact and dense, but further analytical study revealed that what appears to be dense is in fact not dense enough to make a good case as the cornerstone of sustainability and energy efficiency.
This study considered several factors, including floor area ratio (FAR), ground coverage (GSI) and open space ratio (OSR), as indicators in analyzing urban compactness as a predictor of density. The findings suggest that these measures, which shape the density of the residential sites under study, were much smaller than expected given the sites’ compact adjacencies. Further analysis revealed that several residential developments appear to support the notion of density in their compact layouts but are in fact merely compacted, owing to unregulated planning marred by a lack of urban design standards, policies and guidelines specific to their urban context and condition.
Keywords: Beijing, density, sustainability, urban compactness
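As a rough illustration of how these three indicators relate, the sketch below computes FAR, GSI and OSR for a hypothetical residential block using the standard Spacemate-style definitions; the numbers are illustrative only and are not the paper's survey data.

```python
def density_indicators(site_area_m2, footprint_m2, storeys):
    """Compute common compactness/density indicators for a housing block.

    FAR (floor area ratio) = gross floor area / site area
    GSI (ground coverage)  = building footprint / site area
    OSR (open space ratio) = non-built ground area per unit of floor area
    """
    gross_floor_area = footprint_m2 * storeys
    far = gross_floor_area / site_area_m2
    gsi = footprint_m2 / site_area_m2
    osr = (1.0 - gsi) / far  # open ground shared by every m2 of floor space
    return far, gsi, osr

# Hypothetical block: 10,000 m2 site, 3,000 m2 footprint, 5 storeys.
far, gsi, osr = density_indicators(10_000, 3_000, 5)
```

Note how a site can look compact (high GSI) yet still have a modest FAR, which is exactly the mismatch the study reports.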
Procedia PDF Downloads 425
1445 Designing Electrically Pumped Photonic Crystal Surface Emitting Lasers Based on a Honeycomb Nanowire Pattern
Authors: Balthazar Temu, Zhao Yan, Bogdan-Petrin Ratiu, Sang Soon Oh, Qiang Li
Abstract:
Photonic crystal surface emitting lasers (PCSELs) have recently become an area of active research because of the advantages these lasers have over edge emitting lasers and vertical cavity surface emitting lasers (VCSELs). PCSELs can emit laser beams with high power (from a few milliwatts to watts or even tens of watts) that scales with the emission area while maintaining single-mode operation even at large emission areas. Most PCSELs reported in the literature are air-hole based, with only a few demonstrations of nanowire-based PCSELs. We previously reported an optically pumped, nanowire-based PCSEL operating in the O band by using the honeycomb lattice. Nanowire-based PCSELs have the advantage of being able to grow on a silicon platform without threading dislocations. It is desirable to extend their operating wavelength to the C band to open up more applications, including eye-safe sensing, lidar and long-haul optical communications. In this work we first analyze how the lattice constant, nanowire diameter, nanowire height and side length of the hexagon in the honeycomb pattern can be changed to increase the operating wavelength of the honeycomb-based PCSELs to the C band. Then, as a step toward making our device electrically pumped, we present finite-difference time-domain (FDTD) simulation results with metals on the nanowire. Results for different metals on the nanowire are presented in order to choose the metal that gives the device the best quality factor. The metals under consideration are those which form a good ohmic contact with p-type doped InGaAs, with low contact resistivity and a decent sticking coefficient to the semiconductor. Such metals include tungsten, titanium, palladium and platinum. Using the chosen metal, we demonstrate the impact of the thickness of the metal, for a given nanowire height, on the quality factor of the device.
We also investigate how the height of the nanowire affects the quality factor for a fixed thickness of the metal. Finally, the main steps in making a practical device are discussed.
Keywords: designing nanowire PCSEL, designing PCSEL on silicon substrates, low threshold nanowire laser, simulation of photonic crystal lasers
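Since the figure of merit throughout is the cavity quality factor extracted from FDTD runs, a minimal sketch of one common extraction route may be useful: fitting the exponential decay of the stored energy after the source is switched off. The frequency and Q value below are synthetic placeholders, not simulation output from the paper.

```python
import math

def q_from_ringdown(f0_hz, t1_s, u0, u1):
    """Estimate Q from the stored-energy decay U(t) = U0 * exp(-omega * t / Q).

    f0_hz : resonance frequency; t1_s : elapsed time after source off;
    u0/u1 : stored energy at t = 0 and t = t1.
    """
    omega = 2.0 * math.pi * f0_hz
    return omega * t1_s / math.log(u0 / u1)

# Synthetic ringdown at ~193.4 THz (C band, ~1550 nm) with an assumed Q of 5000:
f0, q_true = 193.4e12, 5000.0
t1 = 2e-12  # probe the energy 2 ps after the source is turned off
u1 = math.exp(-2.0 * math.pi * f0 * t1 / q_true)
q_est = q_from_ringdown(f0, t1, 1.0, u1)
```

In practice FDTD solvers report either this ringdown or a complex resonance frequency, from which Q = Re(f) / (2 |Im(f)|) gives the same quantity.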
Procedia PDF Downloads 21
1444 Self-Organizing Maps for Credit Card Fraud Detection and Visualization
Authors: Peng Chun-Yi, Chen Wei-Hsuan, Ueng Shyh-Kuang
Abstract:
This study focuses on the application of self-organizing map (SOM) technology in analyzing credit card transaction data, aiming to enhance the accuracy and efficiency of fraud detection. SOM, a type of artificial neural network, is particularly suited to pattern recognition and data classification, making it highly effective for the complex and variable nature of credit card transaction data. By analyzing transaction characteristics with SOM, the research identifies abnormal transaction patterns that could indicate potentially fraudulent activities. Moreover, this study has developed a specialized visualization tool to intuitively present the relationships between SOM analysis outcomes and transaction data, aiding financial institution personnel in quickly identifying and responding to potential fraud, thereby reducing financial losses. Additionally, the research explores the integration of SOM technology with composite intelligent system technologies (including finite state machines, fuzzy logic, and decision trees) to further improve fraud detection accuracy. This multimodal approach provides a comprehensive perspective for identifying and understanding various types of fraud within credit card transactions. In summary, by integrating SOM technology with visualization tools and composite intelligent system technologies, this research offers a more effective method of fraud detection for the financial industry, not only enhancing detection accuracy but also deepening the overall understanding of fraudulent activities.
Keywords: self-organizing map technology, fraud detection, information visualization, data analysis, composite intelligent system technologies, decision support technologies
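To make the SOM mechanics concrete, here is a deliberately tiny, pure-Python 1-D SOM sketch; it is not the authors' implementation, and real transaction features would be higher-dimensional. Units compete for each input, and the winner and its grid neighbours move toward it, so the map comes to mirror the data distribution and unusual inputs end up far from any well-trained unit.

```python
import math
import random

def bmu(weights, x):
    """Index of the best-matching unit (closest weight vector) for input x."""
    return min(range(len(weights)),
               key=lambda i: sum((wi - xi) ** 2 for wi, xi in zip(weights[i], x)))

def train_som(data, n_units=6, epochs=150, lr0=0.5, sigma0=2.0, seed=1):
    """Train a 1-D SOM: the winner and its grid neighbours move toward each input."""
    rng = random.Random(seed)
    dim = len(data[0])
    lo = [min(p[d] for p in data) for d in range(dim)]
    hi = [max(p[d] for p in data) for d in range(dim)]
    # Random initial weight vectors inside the data bounding box.
    weights = [[rng.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                  # decaying learning rate
        sigma = max(0.5, sigma0 * (1.0 - t / epochs))  # shrinking neighbourhood
        for x in data:
            b = bmu(weights, x)
            for i, w in enumerate(weights):
                h = math.exp(-((i - b) ** 2) / (2.0 * sigma ** 2))
                for d in range(dim):
                    w[d] += lr * h * (x[d] - w[d])
    return weights

def quantization_error(weights, data):
    """Mean distance from each input to its best-matching unit."""
    return sum(math.dist(weights[bmu(weights, x)], x) for x in data) / len(data)

# Two synthetic clusters standing in for "normal" vs "unusual" transaction profiles.
data = [(0.1 * i, 0.1 * i) for i in range(10)] + \
       [(10.0 + 0.1 * i, 10.0 + 0.1 * i) for i in range(10)]
naive = quantization_error([[5.0, 5.0]] * 6, data)  # all units at one point
weights = train_som(data)
trained = quantization_error(weights, data)
```

After training, the quantization error (the distance from a transaction to its best-matching unit) can serve as a simple anomaly score: frauds tend to score high.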
Procedia PDF Downloads 61
1443 Study on Natural Light Distribution Inside the Room by Using Sudare as an Outside Horizontal Blind in Tropical Country of Indonesia
Authors: Agus Hariyadi, Hiroatsu Fukuda
Abstract:
In a tropical country like Indonesia, and especially in Jakarta, most building energy consumption goes to the cooling system, followed by electric lighting. One passive design strategy that can address this is optimizing the use of natural light from the sun. In this region, natural light is available almost every day of the year. Natural light has several effects on a building: it can reduce the need for electric lighting, but it also increases the external load. Another thing that has to be considered in the use of natural light is the visual comfort of occupants inside the room. Optimizing the effectiveness of natural light requires some modification of the facade design. An external shading device can minimize the external load that enters the room, especially direct solar radiation, which accounts for 80% of the external energy load introduced into the building. It can also control the distribution of natural light inside the room and minimize glare in the perimeter zone of the room. One horizontal blind that can be used for this purpose is the Sudare, a traditional Japanese blind that has long been used in traditional Japanese houses, especially in summer. In its original function, Sudare is used to block direct solar radiation while still admitting natural ventilation. It has physical characteristics that can be utilized to optimize the effectiveness of natural light. In this research, different scales of Sudare are simulated using the EnergyPlus and DAYSIM simulation software. EnergyPlus is a whole-building energy simulation program that models both energy consumption (for heating, cooling, ventilation, lighting, and plug and process loads) and water use in buildings, while DAYSIM is a validated, RADIANCE-based daylighting analysis software that models the annual amount of daylight in and around buildings. The modelling will be done in the Ladybug and Honeybee plugins.
These are two open-source plugins for Grasshopper and Rhinoceros 3D that help explore and evaluate environmental performance and connect directly to the EnergyPlus and DAYSIM engines. Using the same model maintains consistency between the geometry used in EnergyPlus and in DAYSIM. The aim of this research is to find the facade configuration that best reduces the external load on the building, minimizing the energy needed for cooling, while maintaining the natural light distribution inside the room, so as to maximize visual comfort for occupants and minimize electrical energy consumption.
Keywords: façade, natural light, blind, energy
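DAYSIM-style annual simulations are typically reduced to summary daylight metrics. As a hedged illustration (hypothetical hourly sensor data, not this study's results), daylight autonomy can be computed as the fraction of occupied hours in which a sensor point meets a target illuminance:

```python
def daylight_autonomy(illuminance_lux, occupied, target_lux=300.0):
    """Fraction of occupied hours with illuminance at or above target_lux.

    illuminance_lux : hourly illuminance readings at one sensor point
    occupied        : parallel booleans marking occupied hours
    """
    hours = [e for e, occ in zip(illuminance_lux, occupied) if occ]
    if not hours:
        return 0.0
    return sum(e >= target_lux for e in hours) / len(hours)

# Hypothetical day: 6 occupied hours, 4 of them at or above 300 lux.
lux = [0, 50, 320, 450, 600, 310, 280, 120]
occ = [False, False, True, True, True, True, True, True]
da = daylight_autonomy(lux, occ)
```

Comparing this metric across blind configurations, alongside the EnergyPlus cooling load, is the kind of trade-off the study's facade optimization weighs.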
Procedia PDF Downloads 346
1442 The Reflexive Interaction in Group Formal Practices: The Question of Criteria and Instruments for the Character-Skills Evaluation
Authors: Sara Nosari
Abstract:
In the research field of adult education, learning development projects have followed different itineraries: recently, adult transformation has been promoted through practices focused on reflexively oriented interaction. This perspective, which connects life stories and life-based methods, characterizes a transformative space between formal and informal education. Within this framework, in the Nursing Degree Courses of Turin University, a formal reflexive path on the professional identity of care work has been discussed and realized through group practices. This path confronted future care professionals with possible experiences staged by texts used as pre-texts: by setting up real or plausible professional situations, these texts served to initiate reflection on the different 'elements' of professional life in care work (the relationship, the educational character of the relationship, the relationship between different care roles; or even human identity and the aims and ultimate aim of care, …). The transformative aspect of this kind of experience-text is that it is impossible to anticipate the process or the conclusion of the reflection, because they depend on two main conditions: personal sensitivity and the specific situation. The narrated experience is not a device; it includes no tricks for knowing the answer in advance. The text is not aimed at deepening knowledge but at being an active and creative force that leads the group to engage with problematic figures. In fact, the experience-text does not set out to explain but to problematize: it creates a space of suspension in which to question, discuss, research, and decide. It creates a space that is 'open' and 'in connection', where everyone, in comparison with others, has the possibility to build his or her own position.
In this space, everyone has the possibility to expose his or her own arguments and to become aware of the points of view that emerge from others, with the aim of researching and finding a personal position. However, to define this position, it is necessary to learn to exercise character skills (conscientiousness, motivation, creativity, critical thinking, …): while the relevance of these non-cognitive skills is undisputed, how to evaluate them is far less evident. The paper will reflect on the epistemological limits of, and possibilities for, 'measuring' character skills, suggesting some evaluation criteria.
Keywords: transformative learning, educational role, formal/informal education, character-skills
Procedia PDF Downloads 195
1441 The Minimum Patch Size Scale for Seagrass Canopy Restoration
Authors: Aina Barcelona, Carolyn Oldham, Jordi Colomer, Teresa Serra
Abstract:
The loss of seagrass meadows worldwide is being tackled by formulating coastal restoration strategies. Seagrass loss results in a network of vegetated patches which are barely interconnected, and consequently, the ecological services they provide may be highly compromised. Hence, there is a need to optimize coastal management efforts in order to implement successful restoration strategies, not only by modifying the architecture of the canopies but also by gathering information on the hydrodynamic conditions of the seabeds. To obtain information on the hydrodynamics within vegetation patches, this study analyses the scale of the minimum patch lengths at which management strategies can be applied effectively. To this aim, a set of experiments was conducted in a laboratory flume where plant densities, patch lengths, and hydrodynamic conditions were varied to discern the vegetated patch lengths that can provide optimal ecosystem services for canopy development. Two possible patch behaviours, based on turbulent kinetic energy (TKE) production, were determined: one where plants do not interact with the flow, and another where plants interact with waves and produce TKE. Furthermore, this study determines the minimum patch lengths that allow successful restoration management. A canopy will produce TKE depending on its density, the length of the vegetated patch, and the wave velocities. Therefore, under high wave velocities a vegetated patch will produce plant-wave interaction when it is sufficiently long and densely vegetated.
Keywords: seagrass, minimum patch size, turbulent kinetic energy, oscillatory flow
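For reference, TKE is computed from the velocity fluctuations about the mean flow. A minimal sketch with made-up velocity samples (not the flume measurements) shows the bookkeeping:

```python
def tke(u, v, w):
    """Turbulent kinetic energy per unit mass:
    0.5 * (mean u'^2 + mean v'^2 + mean w'^2),
    where primes denote fluctuations about the time-averaged velocity.
    """
    def mean(xs):
        return sum(xs) / len(xs)

    def var(xs):  # mean squared fluctuation about the mean
        m = mean(xs)
        return mean([(x - m) ** 2 for x in xs])

    return 0.5 * (var(u) + var(v) + var(w))

# Synthetic record: each velocity component fluctuates by +/-1 about its mean.
k = tke(u=[1.0, 3.0], v=[0.0, 2.0], w=[-1.0, 1.0])
```

In the flume, comparing this quantity inside and downstream of a patch against the bare-bed value is what distinguishes the two patch behaviours described above.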
Procedia PDF Downloads 198
1440 Causal Inference Engine between Continuous Emission Monitoring System Combined with Air Pollution Forecast Modeling
Authors: Yu-Wen Chen, Szu-Wei Huang, Chung-Hsiang Mu, Kelvin Cheng
Abstract:
This paper developed a data-driven model of the causality between the Continuous Emission Monitoring System (CEMS, operated by the Environmental Protection Administration, Taiwan) in industrial factories and the surrounding air quality. Compared with the heavy computational burden of traditional numerical models for regional weather and air pollution simulation, the lightweight proposed model can produce hourly forecasts from current observations of weather, air pollution, and factory emissions. The observation data include wind speed, wind direction, relative humidity, temperature and other variables. The observations can be collected in real time from the Open APIs of Civil IoT Taiwan, which are sourced from 439 weather stations, 10,193 qualitative air stations, 77 national quantitative stations and 140 CEMS-equipped quantitative industrial factories. This study completed a causal inference engine that produces an air pollution forecast for the next 12 hours related to local industrial factories. The pollution forecasts are produced hourly with a grid resolution of 1 km x 1 km on the IIoTC (Industrial Internet of Things Cloud) and saved in netCDF4 format. The procedures elaborated to generate the forecasts comprise data recalibration, outlier elimination, Kriging interpolation, and particle tracking with random walk techniques for the mechanisms of diffusion and advection. The solution of these equations reveals the causality between factory emissions and the associated air pollution. Further, with the aid of installed real-time flue emission (Total Suspension Emission, TSP) sensors and the forecasted air pollution map, this study also discloses the conversion mechanism between TSP and PM2.5/PM10 for different regions and industrial characteristics, based on long-term data observation and calibration.
Together, these different qualitative and quantitative time series realise a practicable cloud-based causal inference engine for factory management control. Once the forecasted air quality for a region is marked as harmful, the correlated factories are notified and asked to suppress their operations and reduce emissions in advance.
Keywords: continuous emission monitoring system, total suspension particulates, causal inference, air pollution forecast, IoT
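The advection-diffusion mechanism named above is commonly realised as a Lagrangian random walk: each particle drifts with the wind and takes a Gaussian step whose variance matches the diffusivity. A 1-D sketch with arbitrary parameters (not the paper's calibrated wind or diffusion values):

```python
import random

def random_walk_plume(n_particles, n_steps, u=1.0, diff=0.5, dt=1.0, seed=7):
    """Track particle positions under advection (speed u) plus diffusion.

    Each step applies x += u*dt + N(0, sqrt(2*D*dt)), the standard
    random-walk discretisation of the 1-D advection-diffusion equation.
    """
    rng = random.Random(seed)
    step_sigma = (2.0 * diff * dt) ** 0.5
    xs = [0.0] * n_particles
    for _ in range(n_steps):
        xs = [x + u * dt + rng.gauss(0.0, step_sigma) for x in xs]
    return xs

xs = random_walk_plume(n_particles=2000, n_steps=100)
mean_x = sum(xs) / len(xs)
```

After 100 unit steps the cloud's mean should sit near the pure-advection distance (u*t = 100) with spread 2*D*t = 100, which is how such schemes are sanity-checked before binning particles onto the 1 km forecast grid.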
Procedia PDF Downloads 88
1439 The Incidence of Maxillary Canine Ankylosis: A Single-Centre Analysis of 206 Canines Following Surgical Exposure and Orthodontic Alignment
Authors: Sidra Suleman, Maliha Suleman, Jinesh Shah
Abstract:
Maxillary canines play a crucial role in occlusion and aesthetics. Successful management of impacted canines requires early identification and intervention to prevent complications such as resorption of adjacent teeth and cystic changes. Although removal of the deciduous canine can encourage normal eruption of its successor, this is not always successful. Some patients may require surgical exposure and bonding of a gold chain to mobilise and align the canine, which can take up to 3 years. As this procedure carries various risks, patients need to be appropriately consented. Failure of such treatment commonly occurs due to inadequate anchorage or failure of the gold chain attachment, but in some cases it is due to ankylosis. Aim: The aim of this study was to determine the incidence of ankylosis of unerupted maxillary ectopic canines following surgical exposure and orthodontic alignment at the Maxillofacial and Orthodontic Department, Royal Stoke University Hospital (RSUH), United Kingdom. Methodology: Patients treated from January 1, 2017, to December 31, 2019, were retrospectively studied. Electronic records with post-treatment follow-up at 3-6 months and 12-15 months were extracted and analysed. Patients were excluded based on three criteria: non-compliance with orthodontic treatment post-surgery, presence of canine transposition, and external orthodontic treatment. Sample: Overall, 159 suitable patients were selected from the 171 patients identified. Surgical exposure and gold chain bonding were carried out for a total of 206 maxillary canines, with the pattern of impaction being 159 (77.2%) palatal, 46 (22.3%) buccal, and 1 (0.49%) in the line of the arch. The sample consisted of 57 (35.8%) males and 102 (64.2%) females between the ages of 10 and 32 years, with a mean age of 15 years. The procedures were carried out under general anaesthesia for all but three patients, with two cases being repeats. Closed exposure was carried out for 189 (91.7%) canines.
Results: The incidence of ankylosis in this study was 0.97%. In total, two patients had upper left canine ankylosis, which was identified at their 12-15 month orthodontic follow-up. Both patients were male, one having closed exposure at age 15 and the other open exposure at age 19. Conclusions: Although these data show that the risk of ankylosis is low (0.97%), they highlight the difficulty of predicting which patients may be affected; thus, a thorough pre-treatment assessment and careful observation during treatment are necessary. Future studies involving larger cohorts are warranted to further analyse the factors affecting outcomes.
Keywords: ankylosis, ectopic, maxillary canines, orthodontics
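For transparency about how such a rate is derived from 2 events in 206 canines, the point estimate and a Wilson 95% confidence interval can be computed directly; the interval below is a sketch of standard proportion statistics, not something the abstract itself reports.

```python
import math

def wilson_ci(k, n, z=1.96):
    """Point estimate and Wilson-score 95% CI for a proportion k/n."""
    p = k / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return p, centre - half, centre + half

# 2 ankylosed canines out of 206 exposed and aligned (figures from the abstract).
p_hat, lo, hi = wilson_ci(2, 206)
```

The point estimate rounds to the reported 0.97%, while the interval (roughly 0.3% to 3.5%) illustrates why the abstract calls for larger cohorts.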
Procedia PDF Downloads 211
1438 Architectural Robotics in Micro Living Spaces: An Approach to Enhancing Wellbeing
Authors: Timothy Antoniuk
Abstract:
This paper will demonstrate why the most successful and livable cities in the future will require multi-disciplinary designers to develop a deep understanding of peoples’ changing lifestyles, and why new generations of deeply integrated products, services and experiences need to be created. Drawing on research from the UNEP Creative Economy Reports and a variety of other consumption- and economics-based statistics, a compelling argument will be made that it is peoples’ living spaces that offer the easiest and most significant affordances for inducing positive changes to their wellbeing and to a city’s economic and environmental prosperity. This idea of leveraging happiness, wellbeing and prosperity by creating new concepts and typologies of ‘home’ puts people and their needs, wants, desires, aspirations and lifestyles at the beginning of the design process, not at the end, as so often occurs with current-day multi-unit housing construction. As an important part of the creative-reflective and statistical comparisons necessary for this on-going body of research and practice, Professor Antoniuk created the Micro Habitation Lab (mHabLab) in 2016. By focusing on testing the functional and economic feasibility of activating small spaces with different types of architectural robotics, a variety of movable, expandable and interactive objects have been hybridized and integrated into the architectural structure of the Lab. Allowing the team to continually test new ideas and accumulate thousands of points of feedback from everyday consumers, a series of on-going open houses is letting the public-at-large see, physically engage with, and give feedback on the items they find most and least valuable.
This iterative approach to testing has exposed two key findings: firstly, that there is a clear opportunity to improve the macro and micro functionality of small living spaces; and secondly, that allowing people to physically alter smaller elements of their living space lessens feelings of frustration and enhances feelings of pride and a deeper perception of “home”. Equally interesting is a group of new research questions now being exposed, relating to the duality of space, how people can be in two living spaces at one time, and how small living spaces are moving the Extended Home into the public realm.
Keywords: architectural robotics, extended home, interactivity, micro living spaces
Procedia PDF Downloads 175
1437 Quantum Graph Approach for Energy and Information Transfer through Networks of Cables
Authors: Mubarack Ahmed, Gabriele Gradoni, Stephen C. Creagh, Gregor Tanner
Abstract:
High-frequency cables commonly connect modern devices and sensors. Interestingly, the proportion of electric components is rising fast in an attempt to achieve lighter and greener devices. Modelling the propagation of signals through these cable networks in the presence of parameter uncertainty is a daunting task. In this work, we study the response of high-frequency cable networks using both Transmission Line (TL) and Quantum Graph (QG) theories. We have successfully compared the two theories in terms of reflection spectra using measurements on real, lossy cables. We have derived a generalisation of the vertex scattering matrix to include non-uniform networks – networks of cables with different characteristic impedances and propagation constants. The QG model implicitly takes into account the pseudo-chaotic behavior, at the vertices, of the propagating electric signal. We have successfully compared the asymptotic growth of the eigenvalues of the Laplacian with the predictions of Weyl’s law. We investigate the nearest-neighbour level-spacing distribution of the resonances and compare our results with the predictions of Random Matrix Theory (RMT); to this end, we compare our graphs with the generalisation of the Wigner distribution for open systems. The problem of scattering from networks of cables can also provide an analogue model for wireless communication in highly reverberant environments. In this context, we provide a preliminary analysis of the statistics of communication capacity across cable networks, whose eventual aim is to enable detailed laboratory testing of information transfer rates using software-defined radio. We specialise this analysis in particular to the case of MIMO (Multiple-Input Multiple-Output) protocols. We have successfully validated our QG model against both the TL model and laboratory measurements. The growth of the eigenvalues compares well with Weyl’s law, and the level-spacing distribution agrees well with RMT predictions.
The results we achieved in the MIMO application compare favourably with the predictions of parallel on-going research (sponsored by NEMF21).
Keywords: eigenvalues, multiple-input multiple-output, quantum graph, random matrix theory, transmission line
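As a sanity check of the Weyl comparison mentioned above: for a metric graph of total edge length L, Weyl's law predicts the leading-order counting function N(k) ≈ Lk/π. The simplest instance, a single cable modelled as an interval with fixed (Dirichlet) ends, can be checked in a few lines; this is an illustrative toy, not the paper's lossy multi-cable network.

```python
import math

def counting_function(k_max, length):
    """Exact count of Dirichlet eigenwavenumbers k_n = n*pi/L with k_n <= k_max."""
    return math.floor(k_max * length / math.pi)

def weyl_estimate(k_max, total_length):
    """Leading-order Weyl term for a metric graph: N(k) ~ L*k/pi."""
    return total_length * k_max / math.pi

cable_length, k = 2.0, 10.0
exact = counting_function(k, cable_length)
weyl = weyl_estimate(k, cable_length)
```

The exact count and the Weyl estimate differ by less than one state at any k, and the relative error vanishes as k grows, which is the asymptotic agreement the abstract reports for full graphs.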
Procedia PDF Downloads 174
1436 A Multi-Objective Reliable Location-Inventory Capacitated Disruption Facility Problem with Penalty Cost Solved with Efficient Metaheuristic Algorithms
Authors: Elham Taghizadeh, Mostafa Abedzadeh, Mostafa Setak
Abstract:
A logistics network is expected to have its opened facilities operating continuously over a long time horizon without any failure; but in real-world problems, facilities may face disruptions. This paper studies a reliable joint inventory location problem that optimizes the costs of facility location, customer assignment, and inventory management decisions when facilities face failure risks and may stop working. In our model, we assume that when a facility is out of work, its customers may be reassigned to other operational facilities; otherwise, they must endure the high penalty costs associated with losing service. To bring the model closer to real-world problems, it is formulated on the basis of the p-median problem, and the facilities are considered to have limited capacities. We define a new binary variable (Z_is) to indicate that customers are not assigned to any facility. Our problem is a bi-objective model: the first objective minimizes the sum of facility construction costs and expected inventory holding costs, and the second minimizes the maximum expected customer costs under normal and failure scenarios. To solve this model, the NSGA-II and MOSS algorithms have been applied to find the Pareto-archive solutions. Response Surface Methodology (RSM) is also applied to optimize the NSGA-II algorithm parameters. We compare the performance of the two algorithms with three metrics, and the results show that NSGA-II is more suitable for our model.
Keywords: joint inventory-location problem, facility location, NSGAII, MOSS
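Both NSGA-II and MOSS ultimately maintain an archive of non-dominated solutions, and the dominance test at their core is compact. A sketch for a two-objective minimisation with illustrative points (not the paper's actual cost vectors):

```python
def dominates(a, b):
    """a dominates b (minimisation): no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (construction + holding cost, worst-case customer cost) for candidate designs.
candidates = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = pareto_front(candidates)
```

Here (3, 4) is dominated by (2, 3) and (5, 5) by (1, 5), so only the trade-off designs survive into the archive that the metaheuristics evolve.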
Procedia PDF Downloads 527
1435 Hydrogen Sulfide Releasing Ibuprofen Derivative Can Protect the Heart After Ischemia-Reperfusion
Authors: Virag Vass, Ilona Bereczki, Erzsebet Szabo, Nora Debreczeni, Aniko Borbas, Pal Herczegh, Arpad Tosaki
Abstract:
Hydrogen sulfide (H₂S) is a toxic gas, but it is also produced by certain tissues in small quantities. According to earlier studies, ibuprofen and H₂S have a protective effect against heart tissue damage caused by ischemia-reperfusion. Recently, we have been investigating the effect of a new water-soluble H₂S-releasing ibuprofen molecule administered after artificially generated ischemia-reperfusion in isolated rat hearts. The H₂S-releasing property of the new ibuprofen derivative was investigated in vitro, at two concentrations, in medium derived from heart endothelial cell isolation. The ex vivo examinations were carried out on rat hearts. Rats were anesthetized with an intraperitoneal injection of ketamine, xylazine, and heparin. After thoracotomy, hearts were excised and placed into ice-cold perfusion buffer. Perfusion of the hearts was conducted in Langendorff mode via the cannulated aorta. In our experiments, we studied the dose dependence of the H₂S-releasing molecule in Langendorff-perfused hearts by applying gradually increasing concentrations of the compound (0-20 µM). The H₂S-releasing ibuprofen derivative was applied before the ischemia for 10 minutes. The H₂S concentration was measured with an H₂S-detecting electrochemical sensor in the coronary effluent solution. The 10 µM concentration was chosen for further experiments, in which treatment with this solution occurred after the ischemia. The release of H₂S is brought about by hydrolyzing enzymes that are present in the heart endothelial cells. The protective effect of the new H₂S-releasing ibuprofen molecule can be confirmed from the infarct sizes of hearts using the triphenyl-tetrazolium chloride (TTC) staining method. Furthermore, we aimed to define the effect of the H₂S-releasing ibuprofen derivative on autophagic and apoptotic processes in damaged hearts by investigating the molecular markers of these events with western blotting and immunohistochemistry techniques.
Our further studies will include the examination of LC3I/II, p62, Beclin1, caspase-3, and other apoptotic molecules. We hope that confirming the protective effect of the new H₂S-releasing ibuprofen molecule will open a new possibility for the development of more effective cardioprotective agents with fewer side effects. Acknowledgment: This study was supported by the grants of NKFIH-K-124719 and by the European Union and the State of Hungary, co-financed by the European Social Fund, in the framework of GINOP-2.3.2-15-2016-00043.
Keywords: autophagy, hydrogen sulfide, ibuprofen, ischemia, reperfusion
Procedia PDF Downloads 142
1434 An IoT-Enabled Crop Recommendation System Utilizing Message Queuing Telemetry Transport (MQTT) for Efficient Data Transmission to AI/ML Models
Authors: Prashansa Singh, Rohit Bajaj, Manjot Kaur
Abstract:
In the modern agricultural landscape, precision farming has emerged as a pivotal strategy for enhancing crop yield and optimizing resource utilization. This paper introduces an innovative Crop Recommendation System (CRS) that leverages Internet of Things (IoT) technology and the Message Queuing Telemetry Transport (MQTT) protocol to collect critical environmental and soil data via sensors deployed across agricultural fields. The system is designed to address the challenges of real-time data acquisition, efficient data transmission, and dynamic crop recommendation through the application of advanced Artificial Intelligence (AI) and Machine Learning (ML) models. The CRS architecture encompasses a network of sensors that continuously monitor environmental parameters such as temperature, humidity, soil moisture, and nutrient levels. This sensor data is then transmitted to a central MQTT server, ensuring reliable and low-latency communication even in the bandwidth-constrained scenarios typical of rural agricultural settings. Upon reaching the server, the data is processed and analyzed by AI/ML models trained to correlate specific environmental conditions with optimal crop choices and cultivation practices. These models consider historical crop performance data, current agricultural research, and real-time field conditions to generate tailored crop recommendations. This implementation achieves 99% accuracy.
Keywords: IoT, MQTT protocol, machine learning, sensor, publish, subscriber, agriculture, humidity
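To illustrate the transmission path described above, the sketch below packs a sensor reading into a JSON payload on a hierarchical MQTT-style topic and applies a toy rule-based recommender at the receiving end. The topic layout, field names and thresholds are all hypothetical: a real deployment would publish via an MQTT client such as paho-mqtt and replace the rules with the trained ML model.

```python
import json

def make_message(field_id, sensor_id, reading):
    """Serialise one reading for publication on a per-sensor telemetry topic."""
    topic = f"farm/{field_id}/{sensor_id}/telemetry"
    payload = json.dumps(reading, sort_keys=True)  # compact, broker-agnostic text
    return topic, payload

def recommend_crop(reading):
    """Toy stand-in for the AI/ML stage: threshold rules on a few features."""
    if reading["soil_moisture_pct"] > 60 and reading["humidity_pct"] > 80:
        return "rice"
    if reading["temperature_c"] > 25:
        return "maize"
    return "wheat"

reading = {"temperature_c": 28.5, "humidity_pct": 83.0, "soil_moisture_pct": 65.0}
topic, payload = make_message("field01", "sensor07", reading)
crop = recommend_crop(json.loads(payload))  # decode on the server side
```

Keeping one topic per sensor lets the server subscribe with a single wildcard (e.g. `farm/+/+/telemetry`) while the payload stays self-describing for the downstream models.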
Procedia PDF Downloads 70
1433 Test Method Development for Evaluation of Process and Design Effect on Reinforced Tube
Authors: Cathal Merz, Gareth O’Donnell
Abstract:
Coil reinforced thin-walled (CRTW) tubes are used in medicine to treat problems affecting blood vessels within the body through minimally invasive procedures. The CRTW tube considered in this research makes up part of such a device and is inserted into the patient via their femoral or brachial arteries and manually navigated to the site in need of treatment. This procedure replaces the requirement to perform open surgery but is limited by reduction of blood vessel lumen diameter and increase in tortuosity of blood vessels deep in the brain. In order to maximize the capability of these procedures, CRTW tube devices are being manufactured with decreasing wall thicknesses in order to deliver treatment deeper into the body and to allow passage of other devices through its inner diameter. This introduces significant stresses to the device materials which have resulted in an observed increase in the breaking of the proximal segment of the device into two separate pieces after it has failed by buckling. As there is currently no international standard for measuring the mechanical properties of these CRTW tube devices, it is difficult to accurately analyze this problem. The aim of the current work is to address this discrepancy in the biomedical device industry by developing a measurement system that can be used to quantify the effect of process and design changes on CRTW tube performance, aiding in the development of better performing, next generation devices. Using materials testing frames, micro-computed tomography (micro-CT) imaging, experiment planning, analysis of variance (ANOVA), T-tests and regression analysis, test methods have been developed for assessing the impact of process and design changes on the device. 
The major findings of this study have been an insight into the suitability of buckle and three-point bend tests for the measurement of the effect of varying processing factors on the device’s performance, and guidelines for interpreting the output data from the test methods. The findings of this study are of significant interest with respect to verifying and validating key process and design changes associated with the device structure and material condition. Test method integrity evaluation is explored throughout.
Keywords: neurovascular catheter, coil reinforced tube, buckling, three-point bend, tensile
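As an illustration of the T-test component of the analysis toolbox named above, the sketch below computes Welch's t-statistic for two small samples of buckle loads under a baseline and a modified process. The load values are invented for illustration, not measured data from the study.

```python
# Two-sample comparison of the kind used to judge whether a process
# change affects device performance. Sample values are illustrative.
from statistics import mean, variance
from math import sqrt

def welch_t_statistic(sample_a, sample_b):
    """Welch's t-statistic for two independent samples."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    return (mean(sample_a) - mean(sample_b)) / sqrt(va / na + vb / nb)

baseline = [1.02, 0.98, 1.05, 1.01, 0.99]   # buckle loads, condition A
modified = [1.12, 1.09, 1.15, 1.11, 1.10]   # buckle loads, condition B
t = welch_t_statistic(baseline, modified)
print(round(t, 2))  # about -6.5: a large magnitude suggests a real effect
```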
Procedia PDF Downloads 118
1432 Optimizing Residential Housing Renovation Strategies at Territorial Scale: A Data Driven Approach and Insights from the French Context
Authors: Rit M., Girard R., Villot J., Thorel M.
Abstract:
In a scenario of extensive residential housing renovation, stakeholders need models that support decision-making through a deep understanding of the existing building stock and accurate energy demand simulations. To address this need, we have modified an optimization model using open data that enables the study of renovation strategies at both territorial and national scales. This approach provides (1) a definition of a strategy to simplify decision trees from theoretical combinations, (2) input to decision makers on real-world renovation constraints, (3) more reliable identification of energy-saving measures (changes in technology or behaviour), and (4) discrepancies between currently planned and actually achieved strategies. The main contribution of the studies described in this document is the geographic scale: all residential buildings in the areas of interest were modeled and simulated using national data (geometries and attributes). These buildings were then renovated, when necessary, in accordance with the environmental objectives, taking into account the constraints applicable to each territory (number of renovations per year) or at the national level (renovation of thermal deficiencies (Energy Performance Certificates F&G)). This differs from traditional approaches that focus only on a few buildings or archetypes. This model can also be used to analyze the evolution of a building stock as a whole, as it can take into account both the construction of new buildings and their demolition or sale. Using specific case studies of French territories, this paper highlights a significant discrepancy between the strategies currently advocated by decision-makers and those proposed by our optimization model. This discrepancy is particularly evident in critical metrics such as the relationship between the number of renovations per year and achievable climate targets or the financial support currently available to households and the remaining costs. 
In addition, users are free to seek optimizations for their building stock across a range of different metrics (e.g., financial, energy, environmental, or life cycle analysis). These results are a clear call to re-evaluate existing renovation strategies and take a more nuanced and customized approach. As the climate crisis moves inexorably forward, harnessing the potential of advanced technologies and data-driven methodologies is imperative.
Keywords: residential housing renovation, MILP, energy demand simulations, data-driven methodology
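The constrained planning problem described above can be sketched with a greedy stand-in for the MILP: each year, renovate up to the annual cap, prioritising the worst Energy Performance Certificate labels (F and G). The building list, labels, and cap are illustrative assumptions; the real model optimizes over a full national building stock with a MILP solver.

```python
# Greedy stand-in for the MILP renovation planner: worst EPC labels
# first, under a per-year cap. All input data is illustrative.
def plan_renovations(buildings, per_year_cap, years):
    """Return a list of per-year batches of building ids to renovate."""
    # EPC labels sort lexicographically, so reverse order puts G before F.
    order = sorted(buildings, key=lambda b: b["epc"], reverse=True)
    schedule = []
    for year in range(years):
        batch = order[year * per_year_cap:(year + 1) * per_year_cap]
        schedule.append([b["id"] for b in batch])
    return schedule

stock = [
    {"id": "b1", "epc": "D"}, {"id": "b2", "epc": "G"},
    {"id": "b3", "epc": "F"}, {"id": "b4", "epc": "C"},
    {"id": "b5", "epc": "G"},
]
print(plan_renovations(stock, per_year_cap=2, years=2))
```

The greedy schedule renovates the two G-labelled dwellings in year one and the F and D dwellings in year two, mirroring the "thermal deficiencies first" constraint at toy scale.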
Procedia PDF Downloads 69
1431 Nano-Filled Matrix Reinforced by Woven Carbon Fibers Used as a Sensor
Authors: K. Hamdi, Z. Aboura, W. Harizi, K. Khellil
Abstract:
The improvement of the electrical properties of organic matrix composites has been investigated in several studies. One of the current barriers to extending the use of composites to more varied applications is their poor electrical conductivity. In carbon fiber composites, the organic matrix is responsible for the insulating properties of the resulting composite. However, the properties of continuous carbon fiber nano-filled composites are less well investigated. This work characterizes the effect of carbon black nano-fillers on the properties of woven carbon fiber composites. First, SEM observations were performed to localize the nano-particles; they showed that the particles penetrated into the fiber zone (Figure 1). By reaching the fiber zone, the carbon black nano-fillers created network connectivity between fibers, i.e., an easy pathway for the current. This explains the observed improvement in the electrical conductivity of the composites on adding carbon black. The measurement was performed with a four-point electrical circuit and shows that the electrical conductivity of the neat-matrix composite increased from 80 S/cm to 150 S/cm on adding 9 wt% of carbon black, and to 250 S/cm on adding 17 wt% of the same nano-filler. These results suggest that the composite could be used as a strain gauge: the influence of a mechanical excitation (flexion, tensile) on the electrical properties of the composite can be studied by recording the variation of an electrical current passing through the material during mechanical testing. Three different configurations were tested, depending on the rate of carbon black used as nano-filler. These investigations could lead to the development of an auto-instrumented material.
Keywords: carbon fibers composites, nano-fillers, strain-sensors, auto-instrumented
Procedia PDF Downloads 413
1430 On the Semantics and Pragmatics of 'Be Able To': Modality and Actualisation
Authors: Benoît Leclercq, Ilse Depraetere
Abstract:
The goal of this presentation is to shed new light on the semantics and pragmatics of be able to. It presents the results of a corpus analysis based on data from the BNC (British National Corpus), and discusses these results in light of a specific stance on the semantics-pragmatics interface, taking into account recent developments. Be able to is often discussed in relation to can and could, all of which can be used to express ability. Such an onomasiological approach often results in the identification of usage constraints for each expression. In the case of be able to, it is the formal properties of the modal expression (unlike can and could, be able to has non-finite forms) that are in the foreground, and the modal expression is described as the verb that conveys future ability. Be able to is also argued to express actualised ability in the past (I was able to/could open the door). This presentation aims to provide a more accurate pragmatic-semantic profile of be able to, based on extensive data analysis and embedded in a very explicit view on the semantics-pragmatics interface. A random sample of 3000 examples (1000 for each modal verb) extracted from the BNC was analysed to account for the following issues. First, the challenge is to identify the exact semantic range of be able to. The results show that, contrary to general assumption, be able to does not only express ability but shares most of the root meanings usually associated with the possibility modals can and could. The data reveal that what is called opportunity is, in fact, the most frequent meaning of be able to. Second, attention will be given to the notion of actualisation. It is commonly argued that be able to is the preferred form when the residue actualises: (1) The only reason he was able to do that was because of the restriction (BNC, spoken) (2) It is only through my imaginative shuffling of the aces that we are able to stay ahead of the pack.
(BNC, written) Although this notion has been studied in detail within formal semantic approaches, empirical data is crucially lacking and it is unclear whether actualisation constitutes a conventional (and distinguishing) property of be able to. The empirical analysis provides solid evidence that actualisation is indeed a conventional feature of the modal. Furthermore, the dataset reveals that be able to expresses actualised 'opportunities' and not actualised 'abilities'. In the final part of this paper, attention will be given to the theoretical implications of the empirical findings, and in particular to the following paradox: how can the same expression encode both modal meaning (non-factual) and actualisation (factual)? It will be argued that this largely depends on one's conception of the semantics-pragmatics interface, and that this need not be an issue when actualisation (unlike modality) is analysed as a generalised conversational implicature and thus is considered part of the conventional pragmatic layer of be able to.
Keywords: actualisation, modality, pragmatics, semantics
Procedia PDF Downloads 133
1429 Transdisciplinary Pedagogy: An Arts-Integrated Approach to Promote Authentic Science, Technology, Engineering, Arts, and Mathematics Education in Initial Teacher Education
Authors: Anne Marie Morrin
Abstract:
This paper will focus on the design, delivery and assessment of a transdisciplinary STEAM (Science, Technology, Engineering, Arts, and Mathematics) education initiative in a college of education in Ireland. The project explores a transdisciplinary approach to supporting STEAM education where the concepts, methodologies and assessments employed derive from visual art sessions within initial teacher education. The research will demonstrate that the STEAM education approach is effective when visual art concepts and methods are placed at the core of the teaching and learning experience. Within this study, emphasis is placed on authentic collaboration and transdisciplinary pedagogical approaches with the STEAM subjects. The partners included a combination of teaching expertise in STEM and Visual Arts education, artists, in-service and pre-service teachers and children. The inclusion of all these stakeholders moves towards a more authentic approach where transdisciplinary practice is at the core of the teaching and learning. Qualitative data were collected using a combination of questionnaires (focused and open-ended questions) and focus groups. In addition, data were collected through video diaries where students reflected on their visual journals and transdisciplinary practice, which gave rich insight into participants' experiences and opinions on their learning. It was found that an effective program of STEAM education integration was informed by co-teaching (continuous professional development), which involved a commitment to adaptable and flexible approaches to teaching, learning, and assessment, as well as the importance of continuous reflection-in-action by all participants. The delivery of a transdisciplinary model of STEAM education was devised to reconceptualise how individual subject areas can develop essential skills and tackle critical issues (such as self-care and climate change) through data visualisation and technology.
The success of the project can be attributed to the collaboration, which was inclusive, flexible and marked by a willingness among the various stakeholders to be involved in the design and implementation of the project from conception to completion. The case study approach taken is particularistic (focusing on the STEAM-ED project), descriptive (providing in-depth descriptions from varied and multiple perspectives), and heuristic (interpreting the participants’ experiences and what meaning they attributed to their experiences).
Keywords: collaboration, transdisciplinary, STEAM, visual arts education
Procedia PDF Downloads 50
1428 Artificial Intelligence in Bioscience: The Next Frontier
Authors: Parthiban Srinivasan
Abstract:
With recent advances in computational power and access to sufficient data in the biosciences, artificial intelligence methods are increasingly being used in drug discovery research. These methods are essentially a series of advanced statistics-based exercises that review the past to indicate the likely future. Our goal is to develop a model that accurately predicts biological activity and toxicity parameters for novel compounds. We have compiled a robust library of over 150,000 chemical compounds with different pharmacological properties from the literature and public domain databases. The compounds are stored in the simplified molecular-input line-entry system (SMILES), a commonly used text encoding for organic molecules. We utilize an automated process to generate an array of numerical descriptors (features) for each molecule. Redundant and irrelevant descriptors are eliminated iteratively. Our prediction engine is based on a portfolio of machine learning algorithms, and we found the Random Forest algorithm to be the better choice for this analysis. We captured the non-linear relationships in the data and formed a prediction model with reasonable accuracy by averaging across a large number of randomized decision trees. Our next step is to apply a deep neural network (DNN) algorithm to predict the biological activity and toxicity properties; we expect the DNN algorithm to give better results and improve the accuracy of the prediction. This presentation will review these prominent machine learning and deep learning methods and our implementation protocols, and discuss the usefulness of these techniques in biomedical and health informatics.
Keywords: deep learning, drug discovery, health informatics, machine learning, toxicity prediction
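The "averaging across a large number of randomized decision trees" idea can be illustrated in miniature with one-split "stumps" fit on bootstrap resamples and combined by majority vote. The toy descriptor vectors and activity labels below are invented for illustration; a production model would use a library implementation over the full descriptor array.

```python
# Miniature random-forest sketch: bootstrap resampling + majority vote
# over simple threshold classifiers. Toy data, illustrative only.
import random

def fit_stump(data):
    """Exhaustively fit the best one-feature threshold split on `data`."""
    best = None
    for feat in range(len(data[0][0])):
        for thresh in {x[feat] for x, _ in data}:
            for left_vote in (0, 1):
                right_vote = 1 - left_vote
                err = sum((left_vote if x[feat] <= thresh else right_vote) != y
                          for x, y in data)
                if best is None or err < best[0]:
                    best = (err, feat, thresh, left_vote, right_vote)
    _, feat, thresh, lv, rv = best
    return lambda x: lv if x[feat] <= thresh else rv

def random_forest(data, n_trees=50):
    """Fit stumps on bootstrap resamples; predict by majority vote."""
    rng = random.Random(0)  # fixed seed keeps the sketch reproducible
    trees = [fit_stump([rng.choice(data) for _ in data])
             for _ in range(n_trees)]
    def predict(x):
        votes = [t(x) for t in trees]
        return max(set(votes), key=votes.count)  # majority vote
    return predict

# Toy molecular descriptors [size proxy, lipophilicity proxy]; 1 = active.
train = [([1.0, 0.2], 0), ([1.1, 0.3], 0), ([1.2, 0.1], 0),
         ([2.9, 2.0], 1), ([3.0, 2.1], 1), ([3.1, 2.2], 1)]
model = random_forest(train)
print(model([3.0, 2.0]), model([1.0, 0.2]))
```

Each stump alone is a weak, high-variance classifier; averaging many of them over bootstrap samples is what gives the ensemble its stability, which is the property the abstract relies on.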
Procedia PDF Downloads 360
1427 Design of Robust and Intelligent Controller for Active Removal of Space Debris
Authors: Shabadini Sampath, Jinglang Feng
Abstract:
With its huge kinetic energy, space debris poses a major threat to astronauts’ space activities and to spacecraft in orbit if a collision happens. Active removal of space debris is required in order to avoid the frequent collisions that would otherwise occur; without it, the amount of space debris will increase uncontrollably, posing a threat to the safety of the entire space system. However, the safe and reliable removal of large-scale space debris has been a huge challenge to date. While capturing and deorbiting space debris, the space manipulator has to achieve high control precision. Due to uncertainties and unknown disturbances, however, coordinating the control of the space manipulator is difficult. To address this challenge, this paper focuses on developing a robust and intelligent control algorithm that controls joint movement and restricts it to the sliding manifold by reducing uncertainties. A neural network adaptive sliding mode controller (NNASMC) is applied with the objective of finding the control law such that the joint motions of the space manipulator follow the given trajectory. Computed torque control (CTC), an effective motion control strategy, is used in this paper for computing the space manipulator arm torque needed to generate the required motion. Based on the Lyapunov stability theorem, the proposed intelligent controller (NNASMC with CTC) guarantees the robustness and global asymptotic stability of the closed-loop control system. Finally, the controllers used in the paper are modeled and simulated using MATLAB Simulink. The results are presented to prove the effectiveness of the proposed controller approach.
Keywords: GNC, active removal of space debris, AI controllers, MATLAB Simulink
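The computed torque control idea named above can be shown on the smallest possible plant: a single joint. In the sketch below the point-mass link dynamics and the PD gains are illustrative assumptions, not the paper's space-manipulator model; the point is only the structure of the control law, tau = M(q)(qdd_des + Kp e + Kd ed) + g(q).

```python
# Computed torque control (CTC) for a single-joint arm; illustrative
# dynamics (point mass on a massless link under gravity) and gains.
from math import sin, pi

M, L, G = 2.0, 0.5, 9.81   # link mass (kg), link length (m), gravity (m/s^2)
KP, KD = 100.0, 20.0       # PD gains (critically damped for this choice)

def inertia(q):
    return M * L * L                 # constant for a point mass on a rigid link

def gravity_torque(q):
    return M * G * L * sin(q)

def ctc_torque(q, qd, q_des, qd_des, qdd_des):
    """tau = M(q) * (qdd_des + Kp*e + Kd*ed) + g(q)."""
    e, ed = q_des - q, qd_des - qd
    return inertia(q) * (qdd_des + KP * e + KD * ed) + gravity_torque(q)

# Track a fixed joint target with simple Euler integration of the plant.
q, qd, dt = 0.0, 0.0, 0.001
for _ in range(5000):                # 5 simulated seconds
    tau = ctc_torque(q, qd, q_des=pi / 4, qd_des=0.0, qdd_des=0.0)
    qdd = (tau - gravity_torque(q)) / inertia(q)   # plant dynamics
    qd += qdd * dt
    q += qd * dt
print(round(q, 3))                   # settles close to pi/4 ~ 0.785
```

Because CTC cancels the modeled dynamics exactly, the closed loop reduces to a linear second-order error system; the NNASMC layer in the paper exists precisely to handle the uncertainties this idealized cancellation ignores.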
Procedia PDF Downloads 133
1426 Reconsidering the Palaeo-Environmental Reconstruction of the Wet Zone of Sri Lanka: A Zooarchaeological Perspective
Authors: Kelum N. Manamendra-Arachchi, Kalangi Rodrigo
Abstract:
Bones, teeth, and shells have been acknowledged over the last two centuries as evidence of chronology, palaeo-environment, and human activity. Faunal traces are valid evidence of past conditions because they have properties that have not changed over long periods of time. Sri Lanka is an island with a diverse variation of prehistoric occupation among its ecological zones. Defining the palaeoecology of past societies is an archaeological approach developed in the 1960s; it is mainly concerned with the reconstruction, from available geological and biological evidence, of past biota, populations, communities, landscapes, environments, and ecosystems. Considerable research on this subject has already been undertaken in Sri Lanka. The fossil and material record of Sri Lanka’s Wet Zone tropical forests continues from c. 38,000–34,000 ybp. This early and persistent human fossil, technical, and cultural florescence, together with a collection of well-preserved tropical-forest rock shelters with associated 'on-site' palaeoenvironmental records, makes Sri Lanka a central and unusual case study for determining the extent and strength of early human tropical forest encounters. Excavations carried out in prehistoric caves in the low-country Wet Zone have shown that over the last 50,000 years the temperature in the lowland rainforests has not varied by more than 5 °C. Based on Semnopithecus priam (gray langur) remains unearthed from Wet Zone prehistoric caves, it has been argued that there were periods of momentous climate change during the LGM and at the Terminal Pleistocene/Early Holocene boundary, with a recognizable preference for semi-open 'intermediate' rainforest or forest edges. The continuous occupation of the genera Acavus and Oligospira, along with the uninterrupted presence of Canarium sp. ('kekuna' nut), likewise indicates that temperatures in the lowland rain forests have not changed by more than 5 °C over the last 50,000 years.
Site catchment or territorial analysis is no longer defensible on time-distance grounds, and optimal foraging theory also fails here, because prehistoric people were aware of the decreasing cost-benefit ratio, located their sites accordingly, and generally played out a settlement strategy that minimized the ratio of energy expended to energy produced.
Keywords: palaeo-environment, prehistory, palaeo-ecology, zooarchaeology
Procedia PDF Downloads 125
1425 Modeling Breathable Particulate Matter Concentrations over Mexico City Retrieved from Landsat 8 Satellite Imagery
Authors: Rodrigo T. Sepulveda-Hirose, Ana B. Carrera-Aguilar, Magnolia G. Martinez-Rivera, Pablo de J. Angeles-Salto, Carlos Herrera-Ventosa
Abstract:
In order to diminish health risks, it is of major importance to monitor air quality. However, this process involves high costs in physical and human resources. In this context, this research is carried out with the main objective of developing a predictive model for concentrations of inhalable particles (PM10-2.5) using remote sensing. To develop the model, satellite images, mainly from Landsat 8, of Mexico City’s Metropolitan Area were used. Using historical PM10 and PM2.5 measurements of the RAMA (Automatic Environmental Monitoring Network of Mexico City) and the processing of the available satellite images, a preliminary model was generated, in which it was possible to observe critical opportunity areas that will allow the generation of a robust model. Through the preliminary model applied to the scenes of Mexico City, three areas of particular interest were identified due to their presumed high concentration of PM: zones with high plant density, bodies of water, and soil without constructions or vegetation. To date, work continues on this line to improve the preliminary model that has been proposed. In addition, a brief analysis was made of six models, presented in articles developed in different parts of the world, in order to identify the optimal bands for the generation of a suitable model for Mexico City. It was found that infrared bands have helped to model in other cities, but the effectiveness that these bands could provide for the geographic and climatic conditions of Mexico City is still being evaluated.
Keywords: air quality, modeling pollution, particulate matter, remote sensing
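The kind of empirical band-to-concentration model involved can be sketched at its simplest: an ordinary least-squares fit of ground PM10 measurements against a single satellite band value. Both series below are invented illustrative numbers, not RAMA or Landsat 8 data, and a realistic model would use several bands and validation.

```python
# One-variable ordinary least squares, the simplest form of the
# reflectance-to-PM10 regression. All numbers are illustrative.
def ols_fit(xs, ys):
    """Return (slope, intercept) of the least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    return a, my - a * mx

band = [0.10, 0.15, 0.20, 0.25, 0.30]   # surface reflectance (toy values)
pm10 = [20.0, 30.0, 40.0, 50.0, 60.0]   # ug/m3 at a nearby station (toy)
slope, intercept = ols_fit(band, pm10)
print(round(slope), round(intercept))   # 200 0 for this exact line
```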
Procedia PDF Downloads 157
1424 Intensive Multidisciplinary Feeding Intervention for a Toddler with In-Utero Drug Exposure
Authors: Leandra Prempeh, Emily Malugen
Abstract:
Prenatal drug exposure can have a molecular impact on the hypothalamic and reward genes that regulate feeding behavior. This can impair feeding regulation, resulting in feeding difficulties and growth failure. This was potentially seen in “McKayla,” a 19-month-old girl with a history of in-utero drug exposure, patent ductus arteriosus, and gastroesophageal reflux disease who presented for intensive day treatment feeding therapy. She was diagnosed with Avoidant Restrictive Food Intake Disorder, characterized by total food refusal, and met 100% of her caloric needs from a gastrostomy tube. The primary goals during intensive feeding therapy were to increase her oral intake and decrease her reliance on supplementation with formula. Several behavioral antecedent manipulations were implemented to establish consistent responding and make progress towards treatment goals. These included multiple modified bolus placements (using underloaded and Nuk brush), reinforcement contingencies, and variety fading before stability was finally achieved. Treatment then targeted increasing retention of bites, followed by increasing volume and variety. From treatment onset to the last 3 days of treatment, McKayla's rate of rapid acceptance of bite presentations increased significantly from 33.33% to 93.13%, rapid swallowing went from 0.00% to 92.32%, and her percentages of inappropriate mealtime behavior and expels decreased from 58.33% and 100% to 2.31% and 7.68%, respectively. Overall, the treatment team successfully introduced and increased the bite size of 7 pureed foods, generalized the treatment to caregivers with high integrity, and began facilitating tube weaning. She was receiving about 33.42% of her needs by mouth at the time of discharge. Other nutritional concerns addressed during treatment included drinking a nutritionally complete drink out of an open cup and age-appropriate growth.
McKayla continued to have emesis almost daily, as was her baseline before starting treatment; however, the frequency during mealtime decreased. Overall, McKayla responded well to treatment, although her response was slow and required many antecedent manipulations to establish consistent responding. As the literature suggests, [drug]-exposed neonates, like McKayla, may be at increased risk for nutritional and growth challenges that may persist throughout development. This supports the need for long-term follow-up of infant growth.
Keywords: behavioral intervention, feeding problems, in-utero drug exposure, intensive multidisciplinary intervention
Procedia PDF Downloads 68
1423 The Impact of Quality Cost on Revenue Sharing in Supply Chain Management
Authors: Fayza M. Obied-Allah
Abstract:
Meeting customers’ needs and creating quality and value while reducing costs through supply chain management presents challenges and opportunities for companies and researchers. In the light of these challenges, modern ideas must contribute to countering these challenges and exploiting opportunities; perhaps this paper will be one of these contributions. This paper discusses the impact of quality cost on revenue sharing as a most important incentive to configure business networks. There is no doubt that costs directly affect the size of the income generated by a business network, so this paper investigates the impact of quality costs on business network revenue and on the decision to share that revenue among the companies in the supply chain. This paper develops the quality cost approach to align with the modern era: the developed model adds a fifth category to the well-known four (namely prevention costs, appraisal costs, internal failure costs, and external failure costs), developed in this research as a new vision of the relationship between quality costs and industrial innovation. This new category is Recycle Cost. The paper is organized into six sections. Section I gives an overview of quality costs in the supply chain. Section II discusses revenue sharing between the parties in the supply chain. Section III investigates the impact of quality costs on the revenue sharing decision between partners in the supply chain. Section IV presents a survey study and its statistical results. Section V discusses the results and shows future opportunities for research. Finally, Section VI summarizes the theoretical and practical results of this paper.
Keywords: quality cost, recycle cost, revenue sharing, supply chain management
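The five-category cost breakdown described above (the classical four categories plus the paper's Recycle Cost) can be sketched as a simple aggregation. The category names follow the abstract; the figures are illustrative assumptions.

```python
# Sum a quarter's quality costs over the five categories named in the
# abstract. All monetary figures below are illustrative.
def total_quality_cost(costs):
    categories = ("prevention", "appraisal",
                  "internal_failure", "external_failure", "recycle")
    missing = [c for c in categories if c not in costs]
    if missing:
        raise ValueError(f"missing categories: {missing}")
    return sum(costs[c] for c in categories)

quarter = {
    "prevention": 12000,        # training, process planning
    "appraisal": 8000,          # inspection, testing
    "internal_failure": 15000,  # scrap, rework found in-house
    "external_failure": 9000,   # warranty claims, returns
    "recycle": 4000,            # recovering value from rejected units
}
print(total_quality_cost(quarter))  # 48000
```

In a revenue-sharing setting, a total of this kind per partner is the quantity that would feed the allocation decision the paper analyzes.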
Procedia PDF Downloads 449
1422 Ways to Sustaining Self-Care of Thai Community Women to Achieve Future Healthy Aging
Authors: Manee Arpanantikul, Pennapa Unsanit, Dolrat Rujiwatthanakorn, Aporacha Lumdubwong
Abstract:
Continuously performing self-care based on the sufficiency economy philosophy throughout women’s lives is not easy. However, there are different ways that women can use to carry out self-care activities regularly: some women perform self-care individually, while others perform self-care in groups. Little is known about ways of sustaining women’s self-care based on this fundamental principle of Thai culture. The purpose of this study was to investigate ways of sustaining self-care, based on the sufficiency economy philosophy, among Thai middle-aged women living in the community in order to achieve future healthy aging. This study employed a qualitative research design. Twenty women who were willing to participate in this study were recruited. Data were collected through in-depth interviews with tape recording, field notes, and observation. All interviews were transcribed verbatim, and the data were analyzed using content analysis. The findings showed seven themes in ways of sustaining self-care to achieve future healthy aging: 1) having determination, 2) having a model, 3) developing a leader, 4) carrying on performing activities, 5) setting up rules, 6) building a self-care culture, and 7) developing a self-care group/network. The findings of this study suggest that in order to achieve self-care sustainability, women should get to know themselves and have intention and belief, together with the power of community and support. Performing self-care constantly will prevent disease and promote health in women’s lives.
Keywords: qualitative research, sufficiency economy philosophy, Thai middle-aged women, ways to sustaining self-care
Procedia PDF Downloads 376
1421 An End-to-end Piping and Instrumentation Diagram Information Recognition System
Authors: Taekyong Lee, Joon-Young Kim, Jae-Min Cha
Abstract:
A piping and instrumentation diagram (P&ID) is an essential design drawing describing the interconnection of process equipment and the instrumentation installed to control the process. P&IDs are modified and managed throughout the whole life cycle of a process plant. For ease of data transfer, P&IDs are generally handed over from a design company to an engineering company in portable document format (PDF), which is difficult to modify. Therefore, engineering companies have to devote a great deal of time and human resources solely to manually converting P&ID images into a computer aided design (CAD) file format. To reduce the inefficiency of this P&ID conversion, the various symbols and texts in P&ID images should be recognized automatically. However, recognizing information in P&ID images is not an easy task: a P&ID image usually contains hundreds of symbol and text objects, most of which are quite small compared to the size of the whole image and are densely packed together. Traditional recognition methods based on geometrical features are not capable of recognizing every element of a P&ID image. To overcome these difficulties, state-of-the-art deep learning models, RetinaNet and the connectionist text proposal network (CTPN), were used to build a system for recognizing symbols and texts in a P&ID image. Using the RetinaNet and CTPN models, carefully modified and tuned for the P&ID image dataset, the developed system recognizes texts, equipment symbols, piping symbols and instrumentation symbols from an input P&ID image and saves the recognition results in a pre-defined extensible markup language (XML) format. In a test using a commercial P&ID image, the P&ID information recognition system correctly recognized 97% of the symbols and 81.4% of the texts.
Keywords: object recognition system, P&ID, symbol recognition, text recognition
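The final step of the pipeline, serializing recognized objects to XML, can be sketched independently of the deep learning models. The element and attribute names below are illustrative assumptions; the paper uses its own pre-defined XML schema, which is not reproduced here.

```python
# Serialize recognition results (symbol/text detections with labels and
# bounding boxes) to XML. Element/attribute names are illustrative.
import xml.etree.ElementTree as ET

def results_to_xml(detections):
    root = ET.Element("pnid")
    for d in detections:
        el = ET.SubElement(root, d["kind"])           # "symbol" or "text"
        el.set("label", d["label"])
        el.set("bbox", ",".join(str(v) for v in d["bbox"]))
    return ET.tostring(root, encoding="unicode")

detections = [
    {"kind": "symbol", "label": "gate_valve", "bbox": (120, 340, 152, 372)},
    {"kind": "text", "label": "P-101", "bbox": (200, 50, 260, 66)},
]
print(results_to_xml(detections))
```

Emitting a structured file like this, rather than a flat image, is what lets a downstream CAD importer reconstruct the diagram without manual redrawing.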
Procedia PDF Downloads 154
1420 Cybersecurity Assessment of Decentralized Autonomous Organizations in Smart Cities
Authors: Claire Biasco, Thaier Hayajneh
Abstract:
A smart city is the integration of digital technologies in urban environments to enhance the quality of life. Smart cities capture real-time information from devices, sensors, and network data to analyze and improve city functions such as traffic analysis, public safety, and environmental impacts. Current smart cities face controversy due to their reliance on real-time data tracking and surveillance. Internet of Things (IoT) devices and blockchain technology are converging to reshape smart city infrastructure away from its centralized model. Connecting IoT data to blockchain applications would create a peer-to-peer, decentralized model. Furthermore, blockchain technology powers the ability for IoT device data to shift from the ownership and control of centralized entities to individuals or communities with Decentralized Autonomous Organizations (DAOs). In the context of smart cities, DAOs can govern cyber-physical systems to have a greater influence over how urban services are being provided. This paper will explore how the core components of a smart city now apply to DAOs. We will also analyze different definitions of DAOs to determine their most important aspects in relation to smart cities. Both categorizations will provide a solid foundation to conduct a cybersecurity assessment of DAOs in smart cities. It will identify the benefits and risks of adopting DAOs as they currently operate. The paper will then provide several mitigation methods to combat cybersecurity risks of DAO integrations. Finally, we will give several insights into what challenges will be faced by DAO and blockchain spaces in the coming years before achieving a higher level of maturity.
Keywords: blockchain, IoT, smart city, DAO
Procedia PDF Downloads 124
1419 Quince Seed Mucilage (QSD)/Multiwall Carbon Nanotube Hybrid Hydrogels as Novel Controlled Drug Delivery Systems
Authors: Raouf Alizadeh, Kadijeh Hemmati
Abstract:
The aim of this study is to synthesize several series of hydrogels from the combination of a natural polymer (quince seed mucilage, QSD) and a synthetic copolymer containing methoxy poly(ethylene glycol)-polycaprolactone (mPEG-PCL), in the presence of different amounts of multi-walled carbon nanotubes (f-MWNT). Mono-epoxide-functionalized mPEG (mPEG-EP) was synthesized and reacted with sodium azide in the presence of NH4Cl to afford mPEG-N3(-OH). Ring opening polymerization (ROP) of ε-caprolactone (CL), with mPEG-N3(-OH) as initiator and Sn(Oct)2 as catalyst, then led to the preparation of mPEG-PCL-N3(-OH), which was grafted onto propargylated f-MWNT by the click reaction to obtain mPEG-PCL-f-MWNT(-OH). In the presence of mPEG-N3(-Br) and a mixture of NHS/DCC/QSD, hybrid hydrogels were successfully synthesized. The copolymers and hydrogels were characterized using techniques such as scanning electron microscopy (SEM) and thermogravimetric analysis (TGA). The gel content of the hydrogels depended on the weight ratio QSD:mPEG-PCL:f-MWNT. The swelling behavior of the prepared hydrogels was studied under variation of pH, immersion time, and temperature; it showed significant dependence on the gel content, pH, immersion time, and temperature. The highest swelling was observed at room temperature, at 60 min, and at pH 8. The loading and in-vitro release of quercetin as a model drug were investigated at pH 2.2 and 7.4, and the results showed that the release rate at pH 7.4 was faster than that at pH 2.2. The total loading and release depended on the network structure of the hydrogels and were in the range of 65–91%. In addition, the cytotoxicity and release kinetics of the prepared hydrogels were also investigated.
Keywords: antioxidant, drug delivery, Quince Seed Mucilage (QSD), swelling behavior
Procedia PDF Downloads 321
1418 Development of Mobile Application for Internship Program Management Using the Concept of Model View Controller (MVC) Pattern
Authors: Shutchapol Chopvitayakun
Abstract:
Over the last five years in particular, mobile devices, mobile applications, and their user base have grown substantially through the deployment of wireless communication and cellular phone networks. These components are being integrated with one another for pervasive deployment across business and non-business sectors such as education, medicine, travel, finance, and real estate. The objective of this study was to develop a mobile application for seniors, i.e., final-year undergraduate students, who enroll in their school's internship program and practice on site at real organizations and workplaces. During the internship session, all interns are required to exercise, drill, and train on site at specific locations, carrying out specific tasks or assignments from their supervisors; their workplaces include both private companies and government enterprises. The mobile application is developed as a transactional processing system that enables users to keep a daily work or practice log, monitors the interns' actual working locations, and tracks the daily tasks of each trainee. Moreover, it delivers guidance from each intern's advisor, for example in case of emergency. Finally, it summarizes all transactional data to calculate each intern's cumulative hours from the field practice session.
Keywords: internship, mobile application, Android OS, smart phone devices, mobile transactional processing system, guidance and monitoring, tertiary education, senior students, model view controller (MVC)
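The abstract names the MVC pattern but includes no code. As a rough, language-neutral illustration of how the daily-log feature described above could separate into model, view, and controller (all class and field names here are assumptions for illustration, not the authors' implementation, which targets Android):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class InternshipLog:
    """Model: holds the daily practice-log entries and business rules."""
    entries: list = field(default_factory=list)

    def add_entry(self, day: date, hours: float, task: str, location: str) -> None:
        self.entries.append({"day": day, "hours": hours,
                             "task": task, "location": location})

    def total_hours(self) -> float:
        # Cumulative field-practice hours, as summarized by the application
        return sum(e["hours"] for e in self.entries)

class LogView:
    """View: renders model state for the user; no business logic."""
    def render_summary(self, total: float) -> str:
        return f"Cumulative internship hours: {total}"

class LogController:
    """Controller: mediates user actions between view and model."""
    def __init__(self, model: InternshipLog, view: LogView):
        self.model, self.view = model, view

    def log_day(self, day: date, hours: float, task: str, location: str) -> None:
        self.model.add_entry(day, hours, task, location)

    def summary(self) -> str:
        return self.view.render_summary(self.model.total_hours())

ctrl = LogController(InternshipLog(), LogView())
ctrl.log_day(date(2024, 6, 3), 8, "site survey", "head office")
ctrl.log_day(date(2024, 6, 4), 7, "data entry", "head office")
print(ctrl.summary())  # Cumulative internship hours: 15
```

The point of the split is that the Android UI (the view) and the transactional storage (the model) can change independently, with the controller translating user taps into model updates.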
Procedia PDF Downloads 315
1417 Combustion Characteristics of Ionized Fuels for Battery System Safety
Authors: Hyeuk Ju Ko, Eui Ju Lee
Abstract:
Many electronic devices today are powered by rechargeable batteries such as lithium-ion cells, but the batteries occasionally undergo thermal runaway and cause fires, explosions, and other hazards. If a battery fire occurs in an electronic device in a vehicle or aircraft cabin, it is important to extinguish the fire and cool the batteries quickly to minimize safety risks. Many researchers have attempted to minimize these risks, but studies of successful extinguishment remain limited, because most rechargeable batteries operate in an ionic state during the charging and discharging of electricity, and the reactions of this electrolyte differ greatly from normal combustion. Here, we focus on the effect of ions on reaction stability and pollutant emissions during the combustion process. Understanding ionized-fuel combustion is also important for high-efficiency, environmentally friendly combustion technologies, which tend to be operated at extreme conditions and hence suffer unintended flame instabilities such as extinction and oscillation. Electromagnetic energy and non-equilibrium plasma are among the ways to solve these problems, but their application has been limited by the lack of understanding of excited-ion effects in the combustion process. Understanding the role of ions during combustion therefore holds promise for energy safety, including battery safety. In this study, the effects of an ionized fuel on flame stability and pollutant emissions were experimentally investigated in hydrocarbon jet diffusion flames. The burner used in this experiment consisted of a 7.5 mm diameter fuel tube, and the gaseous fuels were ionized with an ionizer (SUNJE, SPN-11). Methane (99.9% purity) and propane (commercial grade) were used as fuels, and open ambient air was used as the oxidizer.
The performance of the ionizer was evaluated first: the ion densities of both propane and methane increased linearly with volume flow rate, with propane slightly higher than methane. The results show that overall flame stability and shape, such as flame length, exhibited no significant difference even at the higher ion concentrations. However, fuel ionization affected pollutant emissions such as NOx and soot. NOx and CO emissions measured in the post-flame region decreased with increasing fuel ionization, especially at high fuel velocity, i.e., high ion density. TGA analysis and the morphology of soot observed by TEM indicate that fuel ionization causes the soot to mature.
Keywords: battery fires, ionization, jet flames, stability, NOx and soot
Procedia PDF Downloads 186