414 Developing Manufacturing Process for the Graphene Sensors
Authors: Abdullah Faqihi, John Hedley
Abstract:
Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing. They can be implemented extensively in various analytical tasks such as drug discovery, food safety, medical diagnostics, process control, security and defence, in addition to environmental monitoring. The development of biosensors aims to create high-performance electrochemical electrodes for diagnostics and biosensing. A biosensor is a device that inspects the biological and chemical reactions generated by a biological sample; it carries out biological detection via a linked transducer and converts the biological response into an electrical signal. Stability, selectivity, and sensitivity are the dynamic and static characteristics that dictate the quality and performance of biosensors. In this research, an experimental study of the laser scribing technique for processing graphene oxide (GO) inside a vacuum chamber is presented. The effect of laser scribing on the reduction of GO was investigated under two conditions: atmosphere and vacuum. A GO solution was coated onto a LightScribe DVD, and the laser scribing technique was applied to reduce the GO layers and generate reduced graphene oxide (rGO). The morphological structures of rGO and GO were examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode model, made under normal atmospheric conditions, whereas the second was a graphene electrode fabricated under vacuum in a vacuum chamber. The purpose was to control the vacuum conditions, such as the air pressure and the temperature, during the fabrication process. The parameters assessed include the layer thickness and the processing environment. The results presented show high accuracy and repeatability, achieving low-cost production.
Keywords: laser scribing, LightScribe DVD, graphene oxide, scanning electron microscopy
Procedia PDF Downloads 120

413 Analysis of Wheel Lock up Effects on Skidding Distance for Heavy Vehicles
Authors: Mahdieh Zamzamzadeh, Ahmad Abdullah Saifizul, Rahizar Ramli
Abstract:
Road accidents involving heavy vehicles have been showing worrying trends and, year after year, have increased concern and awareness about road and transport safety, especially in developing countries like Malaysia. Statistics on road crashes continue to show that there are many factors affecting the capability of a heavy vehicle to stop within a safe distance and ultimately prevent a traffic crash. However, changes in road condition due to weather variations and vehicle dynamic specifications such as loading condition and speed are the main risk factors, because they affect a heavy vehicle’s braking performance; the driver may lose control and be unable to stop the vehicle, and in many cases the wheels lock up and the vehicle skids. Predicting heavy vehicle skidding distance is crucial for accident reconstruction and roadside safety engineers. Despite this, formal tools to study heavy vehicle skidding distance before the vehicle stops completely are very limited, and most researchers have only considered braking distance in their studies. As a possible new tool, this work presents the iterative use of vehicle dynamic simulations to study heavy vehicle-roadway interaction in order to predict the effects of wheel lock up on skidding distance and safety. This research addresses the influence of the vehicle and road conditions on skidding distance after wheel lock up and presents a precise analysis of the skidding phenomenon. The vehicle speed, vehicle loading condition and road friction parameters were all varied in a simulation-based analysis. In order to simulate the wheel lock up situation, a heavy vehicle model was constructed and simulated using multibody vehicle dynamics simulation software, and careful analysis was made of the conditions which caused the skidding distance to increase or decrease, through a method used to predict skidding distance as part of braking distance. The results of the many simulations revealed the relation between the heavy vehicle’s loading condition, various speeds, the road coefficient of friction, and their interaction effect on the skidding distance. A number of results are presented which illustrate how overloading of a heavy vehicle can seriously affect the skidding distance. Moreover, the simulations give the skid mark length, which is a necessary input during accident reconstruction involving emergency braking.
Keywords: accident reconstruction, braking, heavy vehicle, skidding distance, skid mark, wheel lock up
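
For orientation, the simplest locked-wheel relation used in skid-mark-based accident reconstruction links skid mark length, tyre-road friction and the speed at lock up. The sketch below is only a flat-road, constant-friction approximation with hypothetical input values, not the multibody model used in this study.

```python
import math

# Simplified locked-wheel skid relation used in basic accident reconstruction:
#   d = v**2 / (2 * mu * g)   and, inverted,   v = sqrt(2 * mu * g * d)
# The multibody simulation in the study accounts for loading, speed and
# load-transfer effects that this approximation ignores. Inputs are hypothetical.
g = 9.81            # gravitational acceleration, m/s^2
mu = 0.45           # assumed tyre-road friction coefficient (wet asphalt, heavy vehicle)
skid_mark_m = 38.0  # measured skid mark length, m (example value)

v_lockup = math.sqrt(2 * mu * g * skid_mark_m)   # speed at the onset of the skid
print(f"estimated speed at wheel lock up: {v_lockup * 3.6:.1f} km/h")
```
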
Procedia PDF Downloads 498

412 Healthcare Learning From Near Misses in Aviation Safety
Authors: Nick Woodier, Paul Sampson, Iain Moppett
Abstract:
Background: For years, healthcare across the world has recognised that patients are coming to harm from the very processes meant to help them. In response, healthcare tells itself that it needs to ‘be more like aviation.’ Aviation safety is highly regarded by those in healthcare and is seen as an exemplar. Specifically, healthcare is keen to learn from how aviation uses near misses to make their industry safer. Healthcare is rife with near misses; however, there has been little progress addressing them, with most research having focused on reporting. Addressing the factors that contribute to near misses will potentially help reduce the number of significant, harmful patient safety incidents. While the healthcare literature states the need to learn from aviation’s use of near misses, there is nothing that describes how best to do this. The authors, as part of a larger study of near-miss management in healthcare, sought to learn from aviation to develop principles for how healthcare can identify, report, and learn from near misses to improve patient safety. Methods: A Grounded Theory (GT) methodology, augmented by a scoping review, was used. Data collection included interviews, field notes, and the literature. The review protocol is accessible online. The GT aimed to develop theories about how aviation, amongst other safety-critical industries, manages near misses. Results: Twelve aviation interviews contributed to the GT across passenger airlines, air traffic control, and bodies involved in policy, regulation, and investigation. The scoping review identified 83 articles across a range of safety-critical industries, but only seven focused on aviation. The GT identified that aviation interprets the term ‘near miss’ in different ways, commonly using it to refer specifically to near-miss air collisions, also known as Airproxes. Other types of near misses exist, such as health and safety events, but the reporting of these and the safety climate associated with them is not as mature. Safety culture in aviation was regularly discussed, with evidence that culture varies depending on which part of the industry is being considered (e.g., civil vs. business aviation). Near misses are seen as just one part of an extensive safety management system, but processes to support their reporting and analysis are not consistent. Their value alone is also questionable, with the challenge to long-held beliefs originating from the ‘common cause hypothesis.’ Conclusions: There is learning that healthcare can take from how parts of aviation manage and learn from near misses. For example, healthcare would benefit from a formal safety management system that currently does not exist. However, it may not be as simple as ‘healthcare should learn from aviation’ due to variation in safety maturity across the industry. Healthcare needs to clarify how to incorporate near misses into learning and whether allocating resources to them is of value; it was heard that catastrophes have led to greater improvements in safety in aviation.
Keywords: aviation safety, patient safety, near miss, safety management systems
Procedia PDF Downloads 148

411 The Impact of the Plagal Cadence on Nineteenth-Century Music
Authors: Jason Terry
Abstract:
Beginning in the mid-nineteenth century, hymns in the Anglo-American tradition often ended with the congregation singing ‘amen,’ most commonly set to a plagal cadence. While the popularity of this tradition is still well known today, this research presents the origins of the custom. In 1861, Hymns Ancient & Modern deepened the convention by concluding each of its hymns with a published plagal-amen cadence. Subsequently, hymnals from a variety of denominations throughout Europe and the United States heavily adopted this practice. By the middle of the twentieth century, the number of participants singing this cadence had noticeably declined; however, it was not until the 1990s that the plagal-amen cadence all but disappeared from hymnals. Today, it is rare for songs to conclude with the plagal-amen cadence, although instrumentalists have continued to regularly play a plagal cadence underneath the singers’ sustained finalis. After examining a variety of music theory treatises, eighteenth-century newspaper articles, and manuscripts and hymnals from the last five centuries, and after conducting interviews with a number of scholars around the world, this study presents the context of the plagal-amen cadence through its history. The association of ‘amen’ and the plagal cadence was already being discussed during the late eighteenth century, and the plagal-amen cadence only grew in attractiveness from that time forward, most notably in the nineteenth and twentieth centuries. Throughout this research, the music of Thomas Tallis, primarily through his Preces and Responses, is reasonably shown to be the basis for the high status of the plagal-amen cadence in nineteenth- and twentieth-century society. Tallis’s immediate influence was felt among his contemporary English composers as well as posterity, all of whom were well aware of his compositional styles and techniques. More important, however, was the revival of his music in nineteenth-century England, which had a greater impact on the plagal-amen tradition. With his historical title as the father of English cathedral music, Tallis was favored by the supporters of the Oxford Movement. Thus, with society’s view of Tallis, the simple IV–I cadence he chose to pair with ‘amen’ attained a much greater worth in the history of Western music. A musical device such as the once-revered plagal-amen cadence deserves to be studied and understood in a more factual light than has thus far been available to contemporary scholars.
Keywords: amen cadence, plagal-amen cadence, singing hymns with amen, Thomas Tallis
Procedia PDF Downloads 233

410 Construction of Ovarian Cancer-on-Chip Model by 3D Bioprinting and Microfluidic Techniques
Authors: Zakaria Baka, Halima Alem
Abstract:
Cancer is a major worldwide health problem that caused around ten million deaths in 2020. In addition, efforts to develop new anti-cancer drugs still face a high failure rate. This is partly due to the lack of preclinical models that recapitulate in-vivo drug responses. Indeed, the conventional cell culture approach (known as 2D cell culture) is far from reproducing the complex, dynamic and three-dimensional environment of tumors. To set up more in-vivo-like cancer models, 3D bioprinting seems to be a promising technology due to its ability to achieve 3D scaffolds containing different cell types with controlled distribution and precise architecture. Moreover, the introduction of microfluidic technology makes it possible to simulate in-vivo dynamic conditions through so-called “cancer-on-chip” platforms. Whereas several cancer types, such as lung cancer and breast cancer, have been modeled through the cancer-on-chip approach, only a few works describing ovarian cancer models have been reported. The aim of this work is to combine 3D bioprinting and microfluidic techniques to set up a 3D dynamic model of ovarian cancer. In the first phase, an alginate-gelatin hydrogel containing SKOV3 cells was used to achieve tumor-like structures through an extrusion-based bioprinter. The desired form of the tumor-like mass was first designed in 3D CAD software. The hydrogel composition was then optimized to ensure good and reproducible printability. Cell viability in the bioprinted structures was assessed using a Live/Dead assay and a WST1 assay. In the second phase, these bioprinted structures will be included in a microfluidic device that allows simultaneous testing of different drug concentrations. This microfluidic device was first designed through computational fluid dynamics (CFD) simulations to fix its precise dimensions. It was then manufactured through a molding method based on a 3D-printed template. To confirm the results of the CFD simulations, doxorubicin (DOX) solutions were perfused through the device and the DOX concentration in each culture chamber was determined. Once completely characterized, this model will be used to assess the efficacy of anti-cancer nanoparticles developed in the Jean Lamour institute.
Keywords: 3D bioprinting, ovarian cancer, cancer-on-chip models, microfluidic techniques
Procedia PDF Downloads 196

409 The Examination of Parents’ Perceptions and Motivations Regarding Type 1 Diabetes Management Technologies
Authors: Maria Dora Horvath, Norbert Buzas, Zsanett Tesch
Abstract:
Diabetes management poses many unique challenges for children and their parents. The use of a diabetes management device should not be one of these challenges, as the purpose of these devices is to make management more convenient. The objective of our study was to examine how demographic, psychological and diabetes-related factors determine the choices parents make regarding their child’s diabetes management technologies and how they perceive advanced devices. We conducted the study using an online questionnaire with 318 parents (mostly mothers). The questions of the survey covered demographic, diabetes-related and psychological factors (diabetes management problems, diabetes management competence). In addition, we asked the parents’ opinions about advanced diabetes management devices. We expanded our data with semi-structured in-depth interviews. 61% of the participants Self-Monitored Blood Glucose (SMBG), and 39% used a Continuous Glucose Monitoring System (CGM). Considering insulin administration, 58% used Multiple Daily Insulin Injections (MDII) and 42% used Continuous Subcutaneous Insulin Infusion (CSII). Parents who used diverse combinations of diabetes management devices showed significant differences in age (parents’ and child’s), the monthly cost of diabetes, the duration of diabetes, the highest level of education and average monthly household income. CGM users perceived diabetes management problems as significantly more severe than SMBG users, and CSII users felt significantly more competent in diabetes management than MDII users. Avoiding CGM use due to lack of financial resources was determined by diagnosis duration, while avoiding its use because of the child’s rejection was determined by the child’s age and diabetes management competence. Using MDII instead of CSII because of the child’s rejection was determined by the monthly cost of diabetes and the child’s age. We conducted a complex empirical study in which we comprehensively examined perceptions and experiences of advanced and less advanced diabetes management technologies. Our study highlights the factors that fundamentally influence parents’ motivations and choices about diabetes management technologies. These results could contribute to developing diabetes management technologies more suitable for children living with type 1 diabetes and their parents.
Keywords: advanced diabetes management technologies, children living with type 1 diabetes, diabetes management, motivation, parents
Procedia PDF Downloads 135

408 Bridging Healthcare Information Systems and Customer Relationship Management for Effective Pandemic Response
Authors: Sharda Kumari
Abstract:
As the Covid-19 pandemic continues to leave its mark on the global business landscape, companies have had to adapt to new realities and find ways to sustain their operations amid social distancing measures, government restrictions, and heightened public health concerns. This unprecedented situation has placed considerable stress on both employees and employers, underscoring the need for innovative approaches to manage the risks associated with Covid-19 transmission in the workplace. In response to these challenges, the pandemic has accelerated the adoption of digital technologies, with an increasing preference for remote interactions and virtual collaboration. Customer relationship management (CRM) systems have risen to prominence as a vital resource for organizations navigating the post-pandemic world, providing a range of benefits that include acquiring new customers, generating insightful consumer data, enhancing customer relationships, and growing market share. In the context of pandemic management, CRM systems offer three primary advantages: (1) integration features that streamline operations and reduce the need for multiple, costly software systems; (2) worldwide accessibility from any internet-enabled device, facilitating efficient remote workforce management during a pandemic; and (3) the capacity for rapid adaptation to changing business conditions, given that most CRM platforms boast a wide array of remotely deployable business growth solutions, a critical attribute when dealing with a dispersed workforce in a pandemic-impacted environment. These advantages highlight the pivotal role of CRM systems in helping organizations remain resilient and adaptive in the face of ongoing global challenges.
Keywords: healthcare, CRM, customer relationship management, customer experience, digital transformation, pandemic response, patient monitoring, patient management, healthcare automation, electronic health record, patient billing, healthcare information systems, remote workforce, virtual collaboration, resilience, adaptable business models, integration features, CRM in healthcare, telehealth, pandemic management
Procedia PDF Downloads 101

407 Adaptive Process Monitoring for Time-Varying Situations Using Statistical Learning Algorithms
Authors: Seulki Lee, Seoung Bum Kim
Abstract:
Statistical process control (SPC) is a practical and effective method for quality control. The most important and widely used technique in SPC is the control chart. The main goal of a control chart is to detect any assignable changes that affect the quality output. Most conventional control charts, such as Hotelling’s T2 charts, are based on the assumption that the quality characteristics follow a multivariate normal distribution. However, in modern, complicated manufacturing systems, appropriate control chart techniques that can efficiently handle nonnormal processes are required. To overcome the shortcomings of conventional control charts for nonnormal processes, several methods have been proposed that combine statistical learning algorithms and multivariate control charts. Statistical learning-based control charts, such as support vector data description (SVDD)-based charts and k-nearest neighbors-based charts, have proven their improved performance in nonnormal situations compared to that of the T2 chart. Besides the nonnormal property, time-varying operations are also quite common in real manufacturing fields because of various factors such as product and set-point changes, seasonal variations, catalyst degradation, and sensor drifting. However, traditional control charts cannot accommodate future condition changes of the process because they are formulated based on the data recorded in the early stage of the process. In the present paper, we propose an SVDD algorithm-based control chart, which is capable of adaptively monitoring time-varying and nonnormal processes. We reformulated the SVDD algorithm into a time-adaptive SVDD algorithm by adding a weighting factor that reflects time-varying situations. Moreover, we defined the updating region for the efficient model-updating structure of the control chart. The proposed control chart simultaneously allows efficient model updates and timely detection of out-of-control signals. The effectiveness and applicability of the proposed chart were demonstrated through experiments with simulated data and real data from the metal frame process in mobile device manufacturing.
Keywords: multivariate control chart, nonparametric method, support vector data description, time-varying process
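
As a rough illustration of the idea (not the authors’ formulation), the sketch below fits a one-class boundary with exponentially decaying sample weights so that recent observations dominate the description of the in-control region; with an RBF kernel, the one-class SVM used here corresponds to an SVDD sphere. The window size, forgetting factor and nu value are hypothetical.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Illustrative time-weighted one-class boundary in the spirit of a time-adaptive
# SVDD chart (hypothetical parameters; not the paper's implementation).
window = 200     # most recent in-control observations kept for model updates
decay = 0.99     # forgetting factor: older samples receive smaller weights

def fit_chart(X_window):
    n = len(X_window)
    weights = decay ** np.arange(n - 1, -1, -1)      # newest sample -> weight 1
    model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
    model.fit(X_window, sample_weight=weights)
    return model

rng = np.random.default_rng(0)
# Simulated slowly drifting, in-control process data (3 quality characteristics)
drift = np.linspace(0.0, 0.5, window)[:, None]
X_in_control = rng.normal(size=(window, 3)) + drift

chart = fit_chart(X_in_control)
x_new = rng.normal(loc=2.5, size=(1, 3))             # a clearly shifted observation
score = chart.decision_function(x_new)[0]            # negative -> outside boundary
print("out-of-control signal" if score < 0 else "in control")
```
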
Procedia PDF Downloads 299

406 A Study on an Evacuation Test to Measure Delay Time in Using an Evacuation Elevator
Authors: Kyungsuk Cho, Seungun Chae, Jihun Choi
Abstract:
Elevators are being examined as one of the evacuation methods in super-tall buildings. However, data on the use of elevators for evacuation during a fire are extremely scarce. Therefore, a test to measure the delay time in using an evacuation elevator was conducted. In the test, the time taken to get on and get off an elevator was measured, and the case in which people gave up boarding when the capacity of the elevator was exceeded was also taken into consideration. 170 men and women participated in the test, 130 of whom were young people (20 ~ 50 years old) and 40 were senior citizens (over 60 years old). The capacity of the elevator was 25 people and it travelled between the 2nd and 4th floors. A video recording device was used to analyze the test. An elevator in an ordinary building, not a super-tall building, was used in the test to measure the delay time in getting on and getting off. In order to minimize interference from other elements, the elevator platforms on the 2nd and 4th floors were partitioned off. The elevator travelled between the 2nd and 4th floors, where people got on and off. If fewer than 20 people got on an empty elevator, the data were excluded. If the elevator carrying 10 passengers stopped and fewer than 10 new passengers got on, the data were excluded. Boarding of an empty elevator was observed 49 times. The average number of passengers was 23.7, it took 14.98 seconds for the passengers to get on the empty elevator, and the load factor was 1.67 N/s. It took the passengers, whose average number was 23.7, 10.84 seconds to get off the elevator, and the unload factor was 2.33 N/s. When an elevator’s capacity is exceeded, the excess passengers have to get off; the time taken for this and the probability of such cases were measured in the test. The elevator’s capacity was exceeded in 37% of boardings. As the number of people who gave up boarding increased, the load factor of the ride decreased. When 1 person gave up boarding, the load factor was 1.55 N/s; this case was observed 10 times, which was 12.7% of the total. When 2 people gave up boarding, the load factor was 1.15 N/s; this case was observed 7 times, which was 8.9% of the total. When 3 people gave up boarding, the load factor was 1.26 N/s; this case was observed 4 times, which was 5.1% of the total. When 4 people gave up boarding, the load factor was 1.03 N/s; this case was observed 5 times, which was 6.3% of the total. Getting-on and getting-off time data for people who can walk freely were obtained from the test. In addition, quantitative results were obtained on the relation between the number of people giving up boarding and the time taken to get on. This work was supported by the National Research Council of Science & Technology (NST) grant by the Korea government (MSIP) (No. CRC-16-02-KICT).
Keywords: evacuation elevator, super tall buildings, evacuees, delay time
Procedia PDF Downloads 177

405 Possibilities of Postmortem CT to Detection of Gas Accumulations in the Vessels of Dead Newborns with Congenital Sepsis
Authors: Uliana N. Tumanova, Viacheslav M. Lyapin, Vladimir G. Bychenko, Alexandr I. Shchegolev, Gennady T. Sukhikh
Abstract:
It is well known that the gas formed as a result of postmortem decomposition of tissues can be detected as early as 24-48 hours after death. In addition, the conditions of keeping and storage of the corpse (temperature and humidity of the environment) significantly determine the rate of occurrence and development of postmortem changes. The presence of sepsis is accompanied by faster postmortem decomposition and decay of the organs and tissues of the body. The presence of gas in the vessels and cavities can be fully revealed at postmortem CT. Radiologists must report the detection of intraorganic or intravascular gas found at postmortem CT to forensic experts or pathologists before the autopsy; such gas cannot be detected during autopsy, but it can be very important for establishing a diagnosis. The aim was to explore the possibilities of postmortem CT for the evaluation of gas accumulations in the vessels of newborns who died from congenital sepsis. The bodies of 44 newborns (25 male and 19 female, aged from 6 hours to 27 days) were examined 6-12 hours after death. The bodies were stored in a refrigerator at a temperature of +4°C in the supine position. The study group comprised 12 bodies of newborns that died from congenital sepsis; the control group consisted of 32 bodies of newborns that died without signs of sepsis. Postmortem CT examination was performed on the GEMINI TF TOF16 device before the autopsy. The localizations of gas accumulations in the vessels were determined on the CT tomograms. The diagnosis of sepsis was made on the basis of clinical and laboratory data and autopsy results. Gas in the vessels was detected in 33.3% of cases in the sepsis group and in 34.4% in the control group. In the sepsis group, the gas was most often localized in the vessels of the heart and liver (50% each of the observations with detected intravascular gas), followed by the heart cavities, aorta and mesenteric vessels (25% each). In the control group, gas was most often detected in the vessels of the liver (63.6%) and abdominal cavity (54.5%); in 45.5% the gas was localized in the cavities of the heart and in 36.4% in its vessels, while in the cerebral vessels and in the aorta gas was detected in 27.3% and 9.1%, respectively. Postmortem CT has high diagnostic capabilities for detecting free gas in vessels. Postmortem changes in newborns that died from sepsis do not affect intravascular gas production within 6-12 hours. Radiation methods should be used as a supplement to the autopsy, including as a kind of ‘guide’ that indicates to the forensic medical expert certain changes identified during CT studies, for better definition of pathological processes during the autopsy. Postmortem CT can be recommended as a first stage of the autopsy.
Keywords: congenital sepsis, gas, newborn, postmortem CT
Procedia PDF Downloads 146

404 Indirect Genotoxicity of Diesel Engine Emission: An in vivo Study Under Controlled Conditions
Authors: Y. Landkocz, P. Gosset, A. Héliot, C. Corbière, C. Vendeville, V. Keravec, S. Billet, A. Verdin, C. Monteil, D. Préterre, J-P. Morin, F. Sichel, T. Douki, P. J. Martin
Abstract:
Air pollution produced by automobile traffic is one of the main sources of pollutants in the urban atmosphere and is largely due to the exhausts of diesel engine powered vehicles. The International Agency for Research on Cancer, which is part of the World Health Organization, classified diesel engine exhaust as carcinogenic to humans (Group 1) in 2012, based on sufficient evidence that exposure is associated with an increased risk of lung cancer. Amongst the strategies aimed at limiting exhausts in order to take into consideration the health impact of automobile pollution, filtration of the emissions and the use of biofuels are being developed, but their toxicological impact is largely unknown. Diesel exhausts are indeed complex mixtures of toxic substances that are difficult to study from a toxicological point of view, due to the necessary characterization of the pollutants, sampling difficulties, potential synergy between the compounds and the wide variety of biological effects. Here, we studied the potential indirect genotoxicity of diesel engine emissions through on-line exposure of rats in inhalation chambers to a subchronic, high but realistic dose. Following exposure to standard gasoil +/- rapeseed methyl ester, either upstream or downstream of a particle filter, or control treatment, the rats were sacrificed and their lungs collected. The following indirect genotoxic parameters were measured: (i) telomerase activity and telomere length, associated with rTERT and rTERC gene expression by RT-qPCR on frozen lungs, (ii) γH2AX quantification, representing double-strand DNA breaks, by immunohistochemistry on formalin-fixed paraffin-embedded (FFPE) lung samples. These preliminary results will then be associated with the global cellular response analyzed by pan-genomic microarrays, monitoring of oxidative stress and the quantification of primary DNA lesions in order to identify biological markers associated with a potential pro-carcinogenic response to diesel or biodiesel, with or without filters, in a relevant system of in vivo exposure.
Keywords: diesel exhaust exposed rats, γH2AX, indirect genotoxicity, lung carcinogenicity, telomerase activity, telomeres length
Procedia PDF Downloads 390

403 Exploring the In-Between: An Examination of the Contextual Factors That Impact How Young Children Come to Value and Use the Visual Arts in Their Learning and Lives
Authors: S. Probine
Abstract:
The visual arts have been proven to be a central means through which young children can communicate their ideas, reflect on experience, and construct new knowledge. Despite this, perceptions of, and the degree to which, the visual arts are valued within education vary widely across political, educational, community and family contexts. These differing perceptions informed my doctoral research project, which explored the contextual factors that affect how young children come to value and use the visual arts in their lives and learning. The qualitative methodology of narrative inquiry, with the inclusion of arts-based methods, was most appropriate for this inquiry. Using a sociocultural framework, the stories collected were analysed through the sociocultural theories of Lev Vygotsky as well as the work of Urie Bronfenbrenner, together with postmodern theories about identity formation. The use of arts-based methods such as teachers’ reflective art journals and the collection of images by child participants and their parents/caregivers allowed the research participants to have a significant role in the research. Three early childhood settings at which the visual arts were deeply valued as a meaning-making device in children’s learning were purposively selected to be involved in the research. At each setting, the study found a unique and complex web of influences and interconnections, which shaped how children utilised the visual arts to mediate their thinking. Although the teachers’ practices at all three centres were influenced by sociocultural theories, each setting’s interpretations of these theories were unique and resulted in innovative interpretations of the role of the teacher in supporting visual arts learning. These practices had a significant impact on children’s experiences of the visual arts. For many of the children involved in this study, visual art was the primary means through which they learned. The children in this study used visual art to represent their experiences and relationships, to explore working theories, to pursue their interests (including those related to popular culture), to make sense of their own and other cultures, and to enrich their imaginative play. This research demonstrates that teachers have fundamental roles in fostering and disseminating the importance of the visual arts within their educational communities.
Keywords: arts-based methods, early childhood education, teacher's visual arts pedagogies, visual arts
Procedia PDF Downloads 139

402 Copper Phthalocyanine Nanostructures: A Potential Material for Field Emission Display
Authors: Uttam Kumar Ghorai, Madhupriya Samanta, Subhajit Saha, Swati Das, Nilesh Mazumder, Kalyan Kumar Chattopadhyay
Abstract:
Organic semiconductors have gained considerable interest in the last few decades for their significant contributions in various fields such as solar cells, non-volatile memory devices, field effect transistors and light emitting diodes. The most important advantages of using organic materials are mechanical flexibility, light weight and low-temperature deposition techniques. Recently, with the advancement of nanoscience and technology, one-dimensional organic and inorganic nanostructures such as nanowires, nanorods and nanotubes have gained tremendous interest due to their very high aspect ratio and large surface area for electron transport. Among them, self-assembled organic nanostructures like copper and zinc phthalocyanines have shown good transport properties and thermal stability due to their π-conjugated bonds and π-π stacking, respectively. Field emission properties of inorganic and carbon-based nanostructures are mostly reported in the literature, but there are few reports on the cold cathode emission characteristics of organic semiconductor nanostructures. In this work, the authors report the field emission characteristics of chemically and physically synthesized copper phthalocyanine (CuPc) nanostructures such as nanowires, nanotubes and nanotips. The as-prepared samples were characterized by X-ray diffraction (XRD), ultraviolet-visible spectroscopy (UV-Vis), Fourier transform infrared spectroscopy (FTIR), field emission scanning electron microscopy (FESEM) and transmission electron microscopy (TEM). The field emission characteristics were measured in our home-designed field emission setup. The registered turn-on field and local field enhancement factor are found to be less than 5 V/μm and greater than 1000, respectively. The field emission behaviour is also stable for 200 minutes. The experimental results are further verified theoretically using a finite displacement method as implemented in the ANSYS Maxwell simulation package. The obtained results strongly indicate CuPc nanostructures to be a potential candidate as an electron emitter for field emission based display device applications.
Keywords: organic semiconductor, phthalocyanine, nanowires, nanotubes, field emission
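
For context, a field enhancement factor of the kind quoted above is conventionally extracted from the slope of a Fowler-Nordheim plot, ln(J/E²) versus 1/E. The sketch below illustrates that extraction on synthetic data with an assumed CuPc work function, since neither the raw data nor the exact analysis route are given in the abstract.

```python
import numpy as np

# Illustrative Fowler-Nordheim analysis on synthetic data (not the paper's data).
# From ln(J/E^2) = const - B * phi**1.5 / (beta * E), the slope of ln(J/E^2)
# versus 1/E gives beta = -B * phi**1.5 / slope.
B = 6830.0            # Fowler-Nordheim constant in V * um^-1 * eV^-1.5
phi = 4.7             # assumed (hypothetical) work function of CuPc, eV

E = np.linspace(3.0, 8.0, 20)        # applied field, V/um (synthetic sweep)
beta_true = 1200.0                   # enhancement factor used to build the data
J = 1e-6 * (beta_true * E) ** 2 * np.exp(-B * phi ** 1.5 / (beta_true * E))

slope = np.polyfit(1.0 / E, np.log(J / E ** 2), 1)[0]
beta_est = -B * phi ** 1.5 / slope
print(f"field enhancement factor recovered from the F-N plot: {beta_est:.0f}")
```
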
Procedia PDF Downloads 501

401 Traumatic Brain Injury in Cameroon: A Prospective Observational Study in a Level 1 Trauma Centre
Authors: Franklin Chu Buh, Irene Ule Ngole Sumbele, Andrew I. R. Maas, Mathieu Motah, Jogi V. Pattisapu, Eric Youm, Basil Kum Meh, Firas H. Kobeissy, Kevin W. Wang, Peter J. A. Hutchinson, Germain Sotoing Taiwe
Abstract:
Introduction: Studying TBI characteristics and their relation to outcomes can identify initiatives to improve TBI prevention and care. The objective of this study was to define the features and outcomes of TBI patients seen over a 1-year period in a level-I trauma center in Cameroon. Methods: Data on demographics, causes, injury mechanisms, clinical aspects, and discharge status were prospectively collected over a period of 12 months. The Glasgow Outcome Scale-Extended (GOSE) and the Quality of Life Questionnaire after Brain Injury (QoLIBRI) were used to evaluate outcomes 6 months after TBI. Categorical variables were described as frequencies and percentages. Comparisons between two categorical variables were made using Pearson's Chi-square test or Fisher's exact test. Results: A total of 160 TBI patients participated in the study. The age group 15-45 years (78%; 125) was most represented. Males were more affected (90%; 144). A low educational level was recorded in 122 (76%) cases. Road traffic incidents (RTI) were the main cause of TBI (85%), with professional bike riders frequently involved (27%, 43/160). Assaults (7.5%) and falls (2.5%) were the second and third most common causes of TBI in Cameroon, respectively. Only 15 patients were transported to the hospital by ambulance, and 14 of these were from a referring hospital. CT imaging was performed in 78% (125/160) of cases; an intracranial traumatic abnormality was identified in 77/125 (64%) cases. Financial constraints were the main reason for not performing a CT scan in 35 patients. A total of 46 (33%) patients were discharged against medical advice (DAMA) due to financial constraints. Mortality was 14% (22/160) but disproportionately high in patients with severe TBI (46%). DAMA patients had poor outcomes on the QoLIBRI. Only 4 patients received post-injury physiotherapy services. Conclusion: TBI in Cameroon mainly results from RTIs and commonly affects young adult males; low educational or socioeconomic status and commercial bike riding appear to be predisposing factors. The lack of pre-hospital care, financial constraints limiting both CT scanning and medical care, and the lack of acute physiotherapy services likely influenced care and outcomes adversely.
Keywords: characteristics, traumatic brain injury, outcome, disparities in care, prospective study
Procedia PDF Downloads 123

400 Double Functionalization of Magnetic Colloids with Electroactive Molecules and Antibody for Platelet Detection and Separation
Authors: Feixiong Chen, Naoufel Haddour, Marie Frenea-Robin, Yves MéRieux, Yann Chevolot, Virginie Monnier
Abstract:
Neonatal thrombopenia occurs when the mother generates antibodies against her baby’s platelet antigens. It is particularly critical for newborns because it can cause coagulation troubles leading to intracranial hemorrhage. In this case, diagnosis must be made quickly so that platelet transfusion can take place immediately after birth. Before transfusion, platelet antigens must be tested carefully to avoid rejection. The majority of thrombopenia cases (95%) are caused by antibodies directed against Human Platelet Antigen 1a (HPA-1a) or 5b (HPA-5b). The common method for platelet antigen detection is polymerase chain reaction, which allows identification of the gene sequence. However, it is expensive, time-consuming and requires a significant blood volume, which is not suitable for newborns. We propose to develop a point-of-care device based on magnetic colloids double functionalized with 1) antibodies specific to platelet antigens and 2) highly sensitive electroactive molecules to be detected by an electrochemical microsensor. These magnetic colloids will be used first to isolate platelets from other blood components, then to specifically capture platelets bearing HPA-1a and HPA-5b antigens, and finally to attract them close to the sensor working electrode for an improved electrochemical signal. The expected advantages are an assay time lower than 20 min starting from a blood volume smaller than 100 µL. Our functionalization procedure, based on amine dendrimers and NHS-ester modification of the initial carboxyl colloids, will be presented. Functionalization efficiency was evaluated by colorimetric titration of surface chemical groups, zeta potential measurements, infrared spectroscopy, fluorescence scanning and cyclic voltammetry. Our results showed that electroactive molecules and antibodies can be immobilized successfully onto magnetic colloids. Application of a magnetic field onto the working electrode increased the detected electrochemical signal. The magnetic colloids were able to capture specific purified antigens extracted from platelets.
Keywords: magnetic nanoparticles, electroactive molecules, antibody, platelet
Procedia PDF Downloads 270

399 Selecting the Best Risk Exposure to Assess Collision Risks in Container Terminals
Authors: Mohammad Ali Hasanzadeh, Thierry Van Elslander, Eddy Van De Voorde
Abstract:
About 90 percent of world merchandise trade by volume is carried by sea. Maritime transport remains the backbone of international trade and globalization, and all seaborne goods pass through at least two ports as origin and destination. Among seaborne traded cargos, container traffic is a prosperous market, accounting for about 16% in terms of volume. Although containerized cargos account for less tonnage, containers carry the highest-value cargos of all. That is why efficient handling of containers in ports is very important. Accidents are the foremost causes of port inefficiency and a surge in total transport cost. Although different port safety management systems (PSMS) are in place, statistics on port accidents show that numerous accidents occur in ports. Some of them claim people's lives; others damage goods, vessels, port equipment and/or the environment. Several accident investigations illustrate that the most common accidents take place during transport operations, sometimes accounting for 68.6% of all events; therefore, providing a safer workplace depends on reducing collision risk. In order to quantify risks in the port area, different variables can be used as exposure measurements. One of the main motives for defining and using exposure in studies related to infrastructure is to account for the differences in intensity of use, so as to make comparisons meaningful. In various studies related to handling containers in ports and intermodal terminals, different risk exposures and also the likelihood of each event have been selected. Vehicle collision within the port area (10⁻⁷ per kilometer of vehicle distance travelled) and dropping containers from cranes, forklift trucks, or rail mounted gantries (1 × 10⁻⁵ per lift) are some examples. According to the objective of the current research, three categories of accidents were selected for collision risk assessment: fall of a container during ship-to-shore operation, dropping of a container during transfer operation, and collision between vehicles and objects within the terminal area. Various consequences, exposures and probabilities were then identified for each accident. Hence, reducing collision risk relies heavily on picking the right risk exposures and probabilities for the selected accidents in order to prevent collision accidents in container terminals; within the framework of risk calculations, such risk exposures and probabilities can also be useful in assessing the effectiveness of safety programs in ports.
Keywords: container terminal, collision, seaborne trade, risk exposure, risk probability
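
To make the exposure-based reasoning concrete, the sketch below multiplies an event probability per unit of exposure by an annual exposure to obtain an expected yearly event count. The two probabilities are the literature values quoted in the abstract; the annual exposure figures are hypothetical, for illustration only.

```python
# Expected annual events = probability per unit of exposure * annual exposure.
# The two probabilities are the literature values quoted in the abstract; the
# annual exposure figures are hypothetical, for illustration only.
accident_data = {
    "vehicle collision within the port area": (1e-7, 2_500_000, "vehicle-km travelled"),
    "container dropped from crane/forklift/RMG": (1e-5, 400_000, "lifts"),
}

for event, (prob, exposure, unit) in accident_data.items():
    expected = prob * exposure
    print(f"{event}: {expected:.2f} expected events/year "
          f"(probability {prob:g} per unit, exposure {exposure:,} {unit}/year)")
```
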
Procedia PDF Downloads 373

398 An Approach to Determine Proper Daylighting Design Solution Considering Visual Comfort and Lighting Energy Efficiency in High-Rise Residential Building
Authors: Zehra Aybike Kılıç, Alpin Köknel Yener
Abstract:
Daylight is a powerful driver in terms of improving human health, enhancing productivity and creating sustainable solutions by minimizing energy demand. A proper daylighting system not only provides a pleasant and attractive visual and thermal environment, but also reduces lighting energy consumption and heating/cooling energy loads through the optimization of aperture size, glazing type and solar control strategy, which are the major design parameters of a daylighting system. Particularly in high-rise buildings, where large openings that allow maximum daylight and view out are preferred, evaluation of daylight performance by considering the major parameters of the building envelope design becomes crucial in terms of ensuring occupants’ comfort and improving energy efficiency. Moreover, it is increasingly necessary to examine the daylighting design of high-rise residential buildings, considering the share of residential buildings in the construction sector, the duration of occupation and the changing space requirements. This study aims to identify a proper daylighting design solution, considering window area, glazing type and solar control strategy, for a high-rise residential building in terms of visual comfort and lighting energy efficiency. The dynamic simulations are carried out with DIVA for Rhino version 4.1.0.12. The results are evaluated with Daylight Autonomy (DA), to demonstrate daylight availability in the space, and Daylight Glare Probability (DGP), to describe the visual comfort conditions related to glare. Furthermore, the lighting energy consumption of each scenario is analyzed to determine the optimum solution that reduces lighting energy consumption by optimizing daylight performance. The results revealed that reducing lighting energy consumption while providing visual comfort conditions in buildings is only possible with proper daylighting design decisions regarding glazing type, transparency ratio and solar control device.
Keywords: daylighting, glazing type, lighting energy efficiency, residential building, solar control strategy, visual comfort
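
As a reminder of how the Daylight Autonomy metric mentioned above is typically computed at a sensor point, the sketch below counts the share of occupied hours in which daylight alone reaches a target illuminance. The 300 lx threshold, the occupancy schedule and the hourly illuminance series are all assumed, synthetic values rather than simulation output from this study.

```python
import numpy as np

# Daylight Autonomy (DA) at one sensor point: percentage of occupied hours in
# which daylight alone meets a target illuminance. All inputs are synthetic.
rng = np.random.default_rng(1)
hours = np.arange(8760)
illuminance_lx = rng.lognormal(mean=5.5, sigma=0.8, size=8760)   # hourly daylight illuminance
occupied = (hours % 24 >= 8) & (hours % 24 < 18)                 # assumed 08:00-18:00 occupancy
threshold_lx = 300.0                                             # common DA target

DA = 100.0 * np.mean(illuminance_lx[occupied] >= threshold_lx)
print(f"Daylight Autonomy: {DA:.1f}% of occupied hours at or above {threshold_lx:.0f} lx")
```
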
Procedia PDF Downloads 176

397 Preliminary Short-Term Results of a Population of Patients Treated with Mitraclip Therapy: One Center Experience
Authors: Rossana Taravella, Gilberto M. Cellura, Giuseppe Cirrincione, Salvatore Asciutto, Marco Caruso, Massimo Benedetto, Renato Ciofalo, Giuliana Pace, Salvatore Novo
Abstract:
Objectives: This retrospective analysis sought to evaluate the 1-month outcomes and therapy effectiveness in a population of patients treated with MitraClip therapy. We describe in this article the preliminary results for the primary effectiveness endpoint. Background: Percutaneous mitral repair is being developed to treat severe mitral regurgitation (MR), with an increasing number of real-world cases of functional MR (FMR). In the EVEREST (Endovascular Valve Edge-to-Edge Repair Study) II trial, the percutaneous device showed superior safety but less reduction in MR at 1 year. The 4-year outcomes from the EVEREST II trial showed no difference in the prevalence of moderate-severe and severe MR or in mortality at 4 years between surgical mitral repair and the percutaneous approach. Methods: We retrospectively analysed data from a single-center experience in Italy, with patients enrolled from January 2011 to December 2016. The study included 62 patients [mean age 74±11 years, 43 men (69%)] with MR of at least grade 3+. Most of the patients had functional MR and were in New York Heart Association (NYHA) functional class III or IV, with a large portion (78%) showing mild-to-moderate tricuspid regurgitation (TR). One or more clips were implanted in 67 procedures (62 patients). Results and Conclusions: The severity of MR was reduced in all successfully treated patients; 54 (90%) were discharged with MR ≤2+ (primary effectiveness endpoint). Clinical 1-month follow-up data showed an improvement in NYHA functional class, with 42 patients (70%) in NYHA class I-II. 60 of 62 (97%) successfully treated patients were free from death and mitral valve surgery at 1-month follow-up. MitraClip therapy reduces functional MR, with acute MR reduction to <2+ in the great majority of patients and high freedom from death, surgery or recurrent MR in a large portion of patients.
Keywords: MitraClip, mitral regurgitation, heart valves, catheter-based therapy
Procedia PDF Downloads 295

396 Radiation Protection and Licensing for an Experimental Fusion Facility: The Italian and European Approaches
Authors: S. Sandri, G. M. Contessa, C. Poggi
Abstract:
An experimental nuclear fusion device could be seen as a step toward the development of the future nuclear fusion power plant. Compared with other possible solutions to the energy problem, nuclear fusion has advantages that ensure sustainability and security. In particular, considering the radioactivity and the radioactive waste produced, in a nuclear fusion plant the component materials could be selected so as to limit the decay period, making recycling in a new reactor possible about 100 years after the beginning of decommissioning. To achieve this and other pertinent goals, many experimental machines have been developed and operated worldwide in recent decades, underlining that radiation protection and worker exposure are critical aspects of these facilities due to the high-flux, high-energy neutrons produced in the fusion reactions. Direct radiation, material activation, tritium diffusion and other related issues pose a real challenge to the demonstration that these devices are safer than nuclear fission facilities. In Italy, a limited number of fusion facilities have been constructed and operated over the last 30 years, mainly at the ENEA Frascati Center, and the radiation protection approach, addressed by the national licensing requirements, shows that it is not always easy to respect the constraints on workers' exposure to ionizing radiation. In the current analysis, the main radiation protection issues encountered in the Italian fusion facilities are considered and discussed, and the technical and legal requirements are described. The licensing process for these kinds of devices is outlined and compared with that of other European countries. The following aspects are considered throughout the current study: i) description of the installation, plant and systems, ii) suitability of the area, buildings, and structures, iii) radioprotection structures and organization, iv) exposure of personnel, v) accident analysis and relevant radiological consequences, vi) radioactive waste assessment and management. In conclusion, the analysis points out the need for special attention to the radiological exposure of the workers in order to demonstrate at least the same level of safety as that reached at nuclear fission facilities.
Keywords: fusion facilities, high energy neutrons, licensing process, radiation protection
Procedia PDF Downloads 352

395 Using GIS and AHP Model to Explore the Parking Problem in Khomeinishahr
Authors: Davood Vatankhah, Reza Mokhtari Malekabadi, Mohsen Saghaei
Abstract:
The functioning of urban transportation systems depends on the existence of the required infrastructure, the appropriate placement of different components, and the cooperation of these components with each other. Establishing parking spaces in city neighborhoods, in order to prevent long-term and inappropriate parking of cars in the alleys, is one of the most effective measures for reducing the crowding and density of neighborhoods. Every place with a certain land use attracts a number of daily trips that take place throughout the city. A large percentage of the people visiting these places make the trips by their own cars and therefore need a space to park. The extent of this need depends on the function and travel demand of the place. The study aims to investigate the spatial distribution of public parking spaces, determine the factors affecting their location, and combine them in a GIS environment in Khomeinishahr, Isfahan. Ultimately, the study intends to create an appropriate pattern for locating parking spaces, determine the demand for parking in the traffic zones, choose proper places for providing the required public parking spaces, and also propose new spots in order to improve the quality and quantity of the city's public parking provision. Regarding the method, the study is applied in purpose and analytic-descriptive in nature. The population of the study includes the people of the center of Khomeinishahr, which is located northwest of Isfahan, covers a geographic area of about 5000 hectares and has a population of 241,318. In order to determine the sample size, Cochran's formula was used, and according to the population of 26,483 people in the studied area, 231 questionnaires were used. Data analysis was carried out using SPSS software. After estimating the required parking space, the criteria affecting the location of public parking spaces were first weighted using the Analytic Hierarchy Process (AHP) in ArcGIS software; then, appropriate places for establishing parking spaces were determined by the fuzzy Ordered Weighted Average (OWA) method. The results indicated that the locating of parking spaces in Khomeinishahr has not been carried out appropriately and that the per capita parking space is not adequate in relation to the population and demand; in addition to the present parking lots, 1434 parking lots are needed in the study area each day, so there is no logical proportion between parking demand and the number of parking lots in Khomeinishahr.
Keywords: GIS, locating, parking, Khomeinishahr
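
For readers unfamiliar with the AHP weighting step mentioned above, the sketch below derives criterion weights from a pairwise comparison matrix via its principal eigenvector and checks Saaty's consistency ratio. The four criteria and the judgments in the matrix are hypothetical examples, not the ones used in the Khomeinishahr study.

```python
import numpy as np

# AHP criterion weighting from a pairwise comparison matrix (hypothetical
# criteria and judgments, for illustration only).
criteria = ["distance to demand centers", "land price", "access to main roads", "current land use"]
A = np.array([
    [1.0, 3.0, 2.0, 4.0],
    [1/3, 1.0, 1/2, 2.0],
    [1/2, 2.0, 1.0, 3.0],
    [1/4, 1/2, 1/3, 1.0],
])

eigvalues, eigvectors = np.linalg.eig(A)
k = np.argmax(eigvalues.real)
weights = np.abs(eigvectors[:, k].real)
weights /= weights.sum()                      # normalized priority vector

# Saaty consistency check (random index RI = 0.90 for a 4x4 matrix); CR < 0.1 is acceptable.
n = A.shape[0]
CI = (eigvalues.real[k] - n) / (n - 1)
CR = CI / 0.90

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
print(f"consistency ratio: {CR:.3f}")
```
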
Procedia PDF Downloads 308

394 Simulation of Elastic Bodies through Discrete Element Method, Coupled with a Nested Overlapping Grid Fluid Flow Solver
Authors: Paolo Sassi, Jorge Freiria, Gabriel Usera
Abstract:
In this work, a finite volume fluid flow solver is coupled with a discrete element method module for the simulation of the dynamics of free and elastic bodies in interaction with the fluid and with each other. The open source fluid flow solver, caffa3d.MBRi, includes the capability to work with nested overlapping grids in order to easily refine the grid in the region where the bodies are moving. To do so, it is necessary to implement a recognition function able to identify the specific mesh block in which the body is moving. The set of overlapping finer grids can be displaced along with the set of bodies being simulated. The interaction between the bodies and the fluid is computed through a two-way coupling. The velocity field of the fluid is first interpolated to determine the drag force on each object. After solving the objects' displacements, subject to the elastic bonding among them, the force is applied back onto the fluid through a Gaussian smoothing over the cells near the position of each object. The fishnet is represented as lumped masses connected by elastic lines. The internal forces are derived from the elasticity of these lines, and the external forces are due to drag, gravity, buoyancy and the load acting on each element of the system. When solving the system of ordinary differential equations that represents the motion of the elastic and flexible bodies, it was found that the fourth-order Runge-Kutta solver is the best tool in terms of performance, but it requires a finer grid than the fluid solver to make the system converge, which demands greater computing power. The coupled solver is demonstrated by simulating the interaction between the fluid, an elastic fishnet and a set of free bodies being captured by the net as they are dragged by the fluid. The deformation of the net, as well as the wake produced in the fluid stream, are well captured by the method, without requiring the fluid solver mesh to adapt to the evolving geometry. Application of the same strategy to the simulation of elastic structures subject to the action of wind is also possible with the method presented, and one such application is currently under development.
Keywords: computational fluid dynamics, discrete element method, fishnets, nested overlapping grids
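
To illustrate the structural side of this coupling, the sketch below advances a single line of lumped masses joined by elastic segments under gravity and quadratic drag toward an interpolated fluid velocity, using a classical fourth-order Runge-Kutta step. All constants and the constant fluid velocity are hypothetical stand-ins for the quantities the actual solver obtains from caffa3d.MBRi, and the feedback of the drag force onto the fluid grid is omitted.

```python
import numpy as np

# Lumped masses connected by elastic lines, with drag toward an interpolated
# fluid velocity, advanced by a classical RK4 step. Hypothetical constants;
# the feedback of drag onto the fluid (Gaussian smoothing) is not shown.
n = 10            # lumped masses along one net line
m = 0.05          # mass per node [kg]
k = 200.0         # elastic line stiffness [N/m]
L0 = 0.1          # segment rest length [m]
cd = 0.8          # lumped drag coefficient [kg/m]
g = np.array([0.0, -9.81])

def fluid_velocity(x):
    # Placeholder for interpolation from the finite-volume velocity field.
    return np.array([0.5, 0.0])

def accelerations(pos, vel):
    acc = np.tile(g, (n, 1))
    for i in range(n - 1):                      # internal elastic forces
        d = pos[i + 1] - pos[i]
        dist = np.linalg.norm(d)
        f = k * (dist - L0) * d / dist          # pulls node i toward node i+1 when stretched
        acc[i] += f / m
        acc[i + 1] -= f / m
    for i in range(n):                          # quadratic drag toward the fluid velocity
        rel = fluid_velocity(pos[i]) - vel[i]
        acc[i] += cd * np.linalg.norm(rel) * rel / m
    acc[0] = 0.0                                # first node held fixed (attachment point)
    return acc

def rk4_step(pos, vel, dt):
    def deriv(p, v):
        return v, accelerations(p, v)
    k1p, k1v = deriv(pos, vel)
    k2p, k2v = deriv(pos + 0.5 * dt * k1p, vel + 0.5 * dt * k1v)
    k3p, k3v = deriv(pos + 0.5 * dt * k2p, vel + 0.5 * dt * k2v)
    k4p, k4v = deriv(pos + dt * k3p, vel + dt * k3v)
    return (pos + dt / 6 * (k1p + 2 * k2p + 2 * k3p + k4p),
            vel + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

pos = np.column_stack([np.arange(n) * L0, np.zeros(n)])   # initially horizontal line
vel = np.zeros((n, 2))
for _ in range(2000):
    pos, vel = rk4_step(pos, vel, dt=1e-4)
print("free-end position after 0.2 s:", pos[-1])
```
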
Procedia PDF Downloads 416

393 The Covid Pandemic at a Level III Trauma Center: Challenges in the Management of the Spine Trauma.
Authors: Joana PaScoa Pinheiro, David Goncalves Ferreira, Filipe Ramos, Joaquim Soares Do Brito, Samuel Martins, Marco Sarmento
Abstract:
Introduction: The SARS-CoV-2 (COVID-19) pandemic was identified in January 2020 in China, in the city of Wuhan. The increase in the number of cases over the following months was responsible for the restructuring of hospitals and departments in order to accommodate admissions related to COVID-19. Essential services, such as trauma, had to readapt to maintain their functionality and thus guarantee quick and safe access in case of an emergency. Objectives: This study describes the impact of COVID-19 on a Level III Trauma Center and particularly on the clinical management of hospitalized patients with spine injuries. Study Design & Methods: This is a retrospective cohort study whose results were obtained from the medical records of patients with spine injuries who underwent surgical intervention in the years 2019 and 2020 (period from March 1st to December 31st), and a comparison between the two groups was made. The study included patients with traumatic injuries who underwent surgery in the periods previously described; patients hospitalized with a spine injury in a non-traumatic context and/or who were not surgically treated were excluded. Results: A total of 137 patients underwent spine trauma surgery, 71 of them (51.8%) in 2019, with no significant differences in the intergroup comparisons. The most frequent injury mechanism in 2019 was a motor vehicle crash (47.9%), whereas in 2020 it was a fall from a height of 2-4 meters (37.9%). Cervical trauma was reported to be the most frequent spine injury in both years. There was a significant decrease in the need for intensive care in 2020 (51.4% vs 30.3%, p = .015), and the number of complications, including the number of deaths, was also lower in 2020 (1.35% vs 0.98%), with the difference being marginally significant. There were no significant differences regarding the time from presentation to surgery or the total days of hospitalization. Conclusions: The restructuring of the trauma unit at a Level III Trauma Center in the context of the current COVID-19 pandemic was effective, with no significant differences between 2019 and 2020 in the time from presentation to surgery or the number of days of hospitalization. It was also found that the lockdown rules in 2020 were probably responsible for the decrease in the number of road traffic accidents, which explains the significant decrease in the need for intensive care as well as in the number of complications among patients hospitalized for spine trauma.
Keywords: trauma, spine, impact, covid-19
Procedia PDF Downloads 256
392 Geographical Location and the Global Airline Industry: A Delphi Study into the Future of Home Base Requirements
Authors: Darren J. Ellis
Abstract:
This paper investigates the key industry-level consequences and future prospects for the global airline industry of the requirement for airlines to have a home base. This industry context results in geographical location playing a central role in determining how and where international airlines can operate, and the extent to which their international networks can develop. Data from a five-stage mixed-methods Delphi study into the global airline industry's likely future trajectory, conducted in 2013 and 2014, are utilized to better understand the likelihood and consequences of home base requirements changing in future. Expert views and forecasts were collected to gauge core industry trends over a ten-year timeframe. Attempts to change or bypass this industry requirement have not been successful to date outside of the European single air market. Europe remains the only prominent exception to the general rule in this regard. Most of the industry is founded on air space sovereignty, the nationality rule, and the bilateral system of traffic rights. Europe's exceptionalism has seen it evolve into a single air market with characteristics similar to a nation-state, rather than become a force for wider industry change and regional multilateralism. Europe has indeed become a key actor in global aviation, but Europe now seems to be part of the industry's status quo, not a vehicle for substantially wider multilateralism around the world. The findings from this research indicate that the bilateral system is not viewed by most study experts as disappearing or substantially weakening in the foreseeable future. However, regional multilateralism was also viewed as progressively taking hold in the industry in future, demonstrating that for most industry experts the two are not seen as mutually exclusive but rather as able to co-exist with each other. This reality ensures that geographical location will continue to play an important role in the global airline industry in future, and that home base requirements will not disappear any time soon either. Even moves in some aviation jurisdictions to dilute nationality requirements for airlines, and instead replace ownership and control restrictions with principal place of business tests, do not ultimately free airlines from their home base. Likewise, an expansion of what constitutes a home base to include a regional grouping of countries – again, a currently uncommon reality in global aviation – does not fundamentally weaken the continued relevance of geographical location to the global industry's future growth and development realities and prospects. Keywords: airline industry, air space sovereignty, geographical location, home base
Procedia PDF Downloads 136
391 Rheometer Enabled Study of Tissue/biomaterial Frequency-Dependent Properties
Authors: Polina Prokopovich
Abstract:
Despite the well-established dependence of cartilage mechanical properties on the frequency of the applied load, most research in the field is carried out in either load-free or constant-load conditions because of the complexity of the equipment required for the determination of time-dependent properties. These simpler analyses provide a limited representation of cartilage properties, thus greatly reducing the impact of the information gathered and hindering the understanding of the mechanisms involved in tissue replacement, development and pathology. More complex techniques could represent better investigative methods, but their uptake in cartilage research is limited by the highly specialised training required and the cost of the equipment. There is, therefore, a clear need for alternative experimental approaches to cartilage testing that can be deployed in research and clinical settings using more user-friendly and financially accessible devices. Frequency-dependent material properties can be determined through rheometry, which is an easy-to-use technique requiring a relatively inexpensive device; we present how a commercial rheometer can be adapted to determine the viscoelastic properties of articular cartilage. Frequency-sweep tests were run at various applied normal loads on immature, mature and trypsinised (as a model of osteoarthritis) cartilage samples to determine the dynamic shear moduli (G*, G′, G″) of the tissues. Moduli increased with increasing frequency and applied load; mature cartilage generally had the highest moduli and GAG-depleted samples the lowest. Hydraulic permeability (KH) was estimated from the rheological data and decreased with applied load; GAG-depleted cartilage exhibited higher hydraulic permeability than either immature or mature tissues. The rheometer-based methodology developed was validated by the close comparison of the rheometer-obtained cartilage characteristics (G*, G′, G″, KH) with results obtained with more complex testing techniques available in the literature. Rheometry is relatively simple, does not require capital-intensive machinery, and staff training is more accessible; thus, the use of a rheometer would represent a cost-effective approach for determining the frequency-dependent properties of cartilage, yielding more comprehensive and impactful results for both healthcare professionals and R&D. Keywords: tissue, rheometer, biomaterial, cartilage
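For readers unfamiliar with frequency sweeps, the dynamic shear moduli follow directly from the measured stress amplitude, strain amplitude and phase lag at each frequency. The sketch below shows these standard oscillatory-shear relations; the numerical values are illustrative placeholders, not the cartilage data reported here.

```python
import numpy as np

def dynamic_moduli(stress_amp, strain_amp, phase_lag_rad):
    """Standard oscillatory-shear relations for a frequency sweep:
    complex modulus |G*| = tau0/gamma0, storage G' = |G*|*cos(delta),
    loss G'' = |G*|*sin(delta)."""
    g_star = stress_amp / strain_amp
    return g_star, g_star * np.cos(phase_lag_rad), g_star * np.sin(phase_lag_rad)

# Illustrative numbers only (not measured cartilage data):
freqs = np.array([0.1, 1.0, 10.0])             # Hz
tau0 = np.array([2.0e3, 3.5e3, 6.0e3])         # Pa, stress amplitudes
gamma0 = 0.01                                  # applied strain amplitude
delta = np.radians([25.0, 18.0, 12.0])         # phase lag at each frequency

for f, t, d in zip(freqs, tau0, delta):
    G, Gp, Gpp = dynamic_moduli(t, gamma0, d)
    print(f"{f:5.1f} Hz: |G*|={G/1e3:.1f} kPa, G'={Gp/1e3:.1f} kPa, G''={Gpp/1e3:.1f} kPa")
```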
Procedia PDF Downloads 81
390 A Simulation-Based Study of Dust Ingression into Microphone of Indoor Consumer Electronic Devices
Authors: Zhichao Song, Swanand Vaidya
Abstract:
Nowadays, most portable (e.g., smartphones) and wearable (e.g., smartwatches and earphones) consumer hardware is designed to be dustproof following IP5X or IP6X ratings to ensure the product is able to handle potentially dusty outdoor environments. On the other hand, the design guideline is relatively vague for indoor devices (e.g., smart displays and speakers). While it is generally believed that the indoor environment is much less dusty, in certain circumstances dust ingression is still able to cause functional failures, such as microphone frequency response shift and camera black spots, or cosmetic dissatisfaction, mainly dust build-up in visible pockets and gaps which is hard to clean. In this paper, we developed a simulation methodology to analyze dust settlement and ingression into known ports of a device. A closed system is initialized with dust particles whose sizes follow a Weibull distribution based on data collected in a user study, and dust particle movement is approximated as settlement in a stationary fluid, which is governed by Stokes' law. Following this method, we simulated dust ingression into a MEMS microphone through the acoustic port and protective mesh. Various design and environmental parameters are evaluated, including mesh pore size, acoustic port depth-to-diameter ratio, mass density of the dust material and inclination angle of the microphone port. The dependencies of dust resistance on these parameters are all monotonic: a smaller mesh pore size, a larger acoustic depth-to-opening ratio and a more inclined microphone placement (towards the horizontal direction) are preferred for dust resistance, although these preferences may represent trade-offs in audio performance and compromises in industrial design. The simulation results suggest the quantitative ranges of these parameters within which the effects on the improvement of dust resistance are more pronounced. Based on the simulation results, we propose several design guidelines intended to achieve a design balanced across audio performance, dust resistance, and flexibility in industrial design. Keywords: dust settlement, numerical simulation, microphone design, Weibull distribution, Stokes' equation
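A minimal sketch of the two ingredients named above (Weibull-distributed particle sizes and Stokes-law settling) is given below. All parameter values, including the Weibull shape and scale and the particle density, are assumptions for illustration and do not come from the user study or the simulations in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters only; the paper's Weibull fit and material values
# are not reproduced here.
shape, scale = 1.8, 8.0          # Weibull shape k and scale lambda in micrometres (assumed)
n_particles = 10_000
rho_p = 2000.0                   # dust particle density, kg/m^3 (assumed)
rho_f = 1.2                      # air density, kg/m^3
mu = 1.8e-5                      # dynamic viscosity of air, Pa*s
g = 9.81

# Sample particle diameters from a Weibull distribution (converted to metres).
d = scale * rng.weibull(shape, n_particles) * 1e-6

# Stokes' law terminal settling velocity for small spheres in still air.
v_settle = (rho_p - rho_f) * g * d**2 / (18.0 * mu)

print(f"median diameter: {np.median(d) * 1e6:.1f} um")
print(f"median settling velocity: {np.median(v_settle) * 1e3:.2f} mm/s")
```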
Procedia PDF Downloads 107
389 Design Development of Floating Performance Structure for Coastal Areas in the Maltese Islands
Authors: Rebecca E. Dalli Gonzi, Joseph Falzon
Abstract:
Background: Islands in the Mediterranean region offer opportunities for various industries to take advantage of the facilitation and use of versatile floating structures in coastal areas. In the context of dense land use, marine structures can contribute to ensuring both terrestrial and marine resource sustainability. Objective: The aim of this paper is to present and critically discuss an array of issues that characterize the design process of a floating structure for coastal areas, and to present the challenges and opportunities of providing such multifunctional and versatile structures around the Maltese coastline. Research Design: A three-tier research design commenced with a systematic literature review. Semi-structured interviews were then conducted with stakeholders, including a naval architect, a marine engineer and civil designers. This second stage preceded a focus group with stakeholders in the design and construction of lightweight marine structures. The three-tier research design ensured triangulation of issues. All phases of the study were governed by research ethics. Findings: Findings were grouped into three main themes: excellence, impact and implementation. These included design considerations, applications and potential impacts on local industry. The literature on the design and construction of marine structures in the Maltese Islands presented multiple gaps in the application of marine structures for local industries. Weather conditions, seabed depth and wave action placed limitations on the design capabilities of the structure. Conclusion: Floating structures offer great potential, and the conclusions demonstrate the applicability of such designs for Maltese waters. There is still no such provision within Maltese coastal areas for multi-purpose use. The introduction of such facilities presents a range of benefits for visiting tourists and locals, thereby offering a wide range of services to the tourism and marine industries. Construction costs and adverse weather conditions were amongst the main limitations that shaped the design capacities of the floating structures. Keywords: coastal areas, lightweight, marine structure, multi purpose, versatile, floating device
Procedia PDF Downloads 161
388 Slosh Investigations on a Spacecraft Propellant Tank for Control Stability Studies
Authors: Sarath Chandran Nair S, Srinivas Kodati, Vasudevan R, Asraff A. K
Abstract:
Spacecraft generally employ liquid propulsion for their attitude and orbital maneuvers or for raising the spacecraft from a geo-transfer orbit to a geosynchronous orbit. Liquid propulsion systems use either mono-propellants or bi-propellants for generating thrust. These propellants are generally stored in either spherical tanks or cylindrical tanks with spherical end domes. The propellant tanks are provided with a propellant acquisition system/propellant management device, along with vanes and their conical mounting structure, to ensure propellant availability at the outlet for thrust generation even under a low/zero-gravity environment. Slosh is the free-surface oscillation of liquid in partially filled containers under external disturbances. In a spacecraft, these disturbances can be due to control forces and varying acceleration. Knowledge of slosh and of the effect of tank internals on it is essential for assessing its stability through control stability studies. Slosh is mathematically represented by a pendulum-mass model, which requires parameters such as slosh frequency, damping, slosh mass and its location. This paper enumerates various numerical and experimental methods used for evaluating the slosh parameters required for representing slosh. Numerical methods, such as finite element methods based on linear velocity potential theory and computational fluid dynamics based on Reynolds-Averaged Navier-Stokes equations, are used for the detailed evaluation of slosh behavior in one of the spacecraft propellant tanks used in an Indian space mission. Experimental studies carried out on a scaled-down model are also discussed. Slosh parameters evaluated by the different methods matched very well, and their dispersion bands were finalized based on experimental studies. It is observed that the presence of internals such as propellant management devices, including the conical support structure, alters the slosh parameters. These internals also offer an order of magnitude higher damping compared to viscous/smooth-wall damping, which is an advantage for slosh stability. These slosh parameters are provided for establishing slosh margins through control stability studies and for finalizing the spacecraft control system design. Keywords: control stability, propellant tanks, slosh, spacecraft, slosh spacecraft
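As background for the pendulum-mass representation mentioned above, the classical first-mode relations for an upright cylindrical tank give the slosh frequency, slosh mass and equivalent pendulum length directly from the tank radius, fill depth and liquid density. The sketch below uses these textbook relations with an illustrative tank; the geometry and density are assumptions and are not the mission tank studied in the paper.

```python
import numpy as np

XI1 = 1.841  # first root of the derivative of the Bessel function J1

def pendulum_slosh_parameters(R, h, rho, g=9.81):
    """Classical first-mode equivalent-pendulum parameters for an upright
    cylindrical tank of radius R partially filled to liquid depth h."""
    m_liquid = rho * np.pi * R**2 * h
    t = np.tanh(XI1 * h / R)
    omega1 = np.sqrt(g * XI1 * t / R)              # slosh natural frequency, rad/s
    m1 = m_liquid * 2.0 * t / (XI1 * (XI1**2 - 1.0) * (h / R))   # slosh mass
    L1 = g / omega1**2                             # equivalent pendulum length
    return omega1 / (2 * np.pi), m1, L1

# Illustrative tank only (not the mission tank in the paper):
f1, m1, L1 = pendulum_slosh_parameters(R=0.6, h=0.8, rho=1000.0)
print(f"first slosh mode: {f1:.2f} Hz, slosh mass: {m1:.1f} kg, pendulum length: {L1:.3f} m")
```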
Procedia PDF Downloads 244
387 Technique for Online Condition Monitoring of Surge Arresters
Authors: Anil S. Khopkar, Kartik S. Pandya
Abstract:
Overvoltage in power systems is a phenomenon that cannot be avoided; however, it can be controlled to a certain extent. Power system equipment has to be protected against overvoltage to avoid system failure. Metal Oxide Surge Arresters (MOSA) are connected to the system for the protection of the power system against overvoltages. A MOSA behaves as an insulator under normal working conditions, whereas it offers a conductive path under overvoltage conditions. A MOSA consists of zinc oxide elements (ZnO blocks), which have non-linear V-I characteristics. The ZnO blocks are connected in series and fitted in a ceramic or polymer housing. They degrade due to the aging effect under continuous operation. Degradation of the zinc oxide elements increases the leakage current flowing through the surge arrester. This increased leakage current results in an increased temperature of the surge arrester, which further decreases the resistance of the zinc oxide elements. As a result, the leakage current increases again, which further raises the temperature of the MOSA. This creates a thermal runaway condition for the MOSA; once it reaches this condition, it cannot return to normal working conditions. This condition is a primary cause of premature failure of surge arresters, as the MOSA constitutes a core protective device for electrical power systems against transients and contributes significantly to the reliable operation of the power system network. Hence, the condition monitoring of surge arresters should be done at periodic intervals. Online and offline condition monitoring techniques are available for surge arresters. Offline condition monitoring techniques are not very popular as they require removing surge arresters from the system, which requires a system shutdown; hence, online condition monitoring techniques are very popular. This paper presents an evaluation technique for the surge arrester condition based on leakage current analysis. The maximum amplitude of the total leakage current (IT), the maximum amplitude of the fundamental resistive leakage current (IR) and the maximum amplitude of the third harmonic resistive leakage current (I3rd) have been analyzed as indicators for surge arrester condition monitoring. Keywords: metal oxide surge arrester (MOSA), over voltage, total leakage current, resistive leakage current
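As an illustration of how the leakage-current indicators can be read off a measured waveform, the sketch below synthesizes a leakage current with a dominant capacitive fundamental and a small resistive part carrying a third harmonic, then extracts the fundamental and third-harmonic amplitudes with an FFT. The waveform, amplitudes and sampling settings are hypothetical; in practice, isolating the resistive fundamental from the capacitive component additionally requires voltage-phase information or compensation methods, which are beyond this sketch.

```python
import numpy as np

# Hypothetical sampled leakage-current waveform: a dominant capacitive
# component at the power frequency plus a small resistive part with a
# third-harmonic contribution (values are illustrative only).
f0, fs, cycles = 50.0, 10_000.0, 10
t = np.arange(int(fs * cycles / f0)) / fs
i_cap = 1.0e-3 * np.sin(2 * np.pi * f0 * t + np.pi / 2)     # capacitive, leads by 90 deg
i_res = 80e-6 * np.sin(2 * np.pi * f0 * t) + 25e-6 * np.sin(2 * np.pi * 3 * f0 * t)
i_total = i_cap + i_res

# Single-sided FFT amplitudes at the fundamental and third harmonic.
spectrum = np.fft.rfft(i_total) / len(i_total) * 2.0
freqs = np.fft.rfftfreq(len(i_total), d=1 / fs)
idx_1st = np.argmin(np.abs(freqs - f0))
idx_3rd = np.argmin(np.abs(freqs - 3 * f0))

print(f"I_T (total peak):       {np.max(np.abs(i_total)) * 1e3:.3f} mA")
print(f"I at fundamental:       {np.abs(spectrum[idx_1st]) * 1e3:.3f} mA")
print(f"I_3rd (third harmonic): {np.abs(spectrum[idx_3rd]) * 1e6:.1f} uA")
```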
Procedia PDF Downloads 66
386 Exploration and Evaluation of the Effect of Multiple Countermeasures on Road Safety
Authors: Atheer Al-Nuaimi, Harry Evdorides
Abstract:
Every day many people die or are disabled or injured on roads around the world, which necessitates more specific treatments of transportation safety issues. The International Road Assessment Program (iRAP) model is one of the comprehensive road safety models, accounting for many factors that affect road safety in a cost-effective way in low- and middle-income countries. In the iRAP model, road safety is divided into five star ratings, from 1 star (the lowest level) to 5 stars (the highest level). These star ratings are based on a star rating score which is calculated by the iRAP methodology from road attributes, traffic volumes and operating speeds. The outcomes of the iRAP methodology are the treatments that can be used to improve road safety and reduce the numbers of fatalities and serious injuries (FSI). These countermeasures can be applied individually as a single countermeasure or combined as multiple countermeasures for a location. There is general agreement that the effectiveness of a countermeasure is liable to consistent losses when it is utilized as part of a mix with different countermeasures; that is, the crash reduction estimates of individual countermeasures cannot simply be added together. The iRAP model therefore makes use of multiple-countermeasure adjustment factors to predict the reduction in the effectiveness of road safety countermeasures when more than one countermeasure is chosen. A multiple-countermeasure correction factor is computed for every 100-meter segment and for every crash type. However, the limitations of this approach include a probable over-estimation of the predicted crash reduction. This study aims to adjust this correction factor by developing new models to calculate the effect of using multiple countermeasures on the number of fatalities for a location or an entire road. Regression models have been used to establish relationships between crash frequencies and the factors that affect their rates. Multiple linear regression, negative binomial regression, and Poisson regression techniques were used to develop models that can address the effectiveness of using multiple countermeasures. Analyses conducted using the R Project for Statistical Computing showed that a model developed with the negative binomial regression technique could give more reliable predictions of the number of fatalities after the implementation of multiple road safety countermeasures than the iRAP model. The results also showed that the negative binomial regression approach gives more precise results than the multiple linear and Poisson regression techniques because of overdispersion and standard error issues in the latter. Keywords: international road assessment program, negative binomial, road multiple countermeasures, road safety
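Since the study's analyses were run in R, the sketch below is only an analogous Python illustration of fitting a negative binomial regression of fatality counts on segment attributes and the number of countermeasures applied. The data are synthetic, the variable names are invented, and the dispersion parameter is fixed rather than estimated; none of this reproduces the study's actual models.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic 100-metre segments: traffic volume, operating speed and the
# number of countermeasures applied (all invented for illustration).
n = 500
df = pd.DataFrame({
    "log_aadt": np.log(rng.uniform(1_000, 40_000, n)),
    "speed": rng.uniform(40, 110, n),
    "n_cm": rng.integers(0, 4, n),
})
# "True" mean fatality count shrinks with each added countermeasure.
mu = np.exp(-8.0 + 0.7 * df["log_aadt"].to_numpy()
            + 0.015 * df["speed"].to_numpy()
            - 0.25 * df["n_cm"].to_numpy())
df["fatalities"] = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))

# Negative binomial GLM with a fixed dispersion parameter alpha.
X = sm.add_constant(df[["log_aadt", "speed", "n_cm"]])
model = sm.GLM(df["fatalities"], X,
               family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(model.summary())
# exp(coefficient) of n_cm approximates the multiplicative change in
# expected fatalities per additional countermeasure.
print("effect per added countermeasure:", np.exp(model.params["n_cm"]).round(3))
```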
Procedia PDF Downloads 240
385 Effect of Pulmonary Rehabilitation towards Length of Stay and IL-6 Level on Community-Acquired Pneumonia Patients
Authors: Santony Santony, Teguh Rahayu Sartono, Iin Noor Chozin
Abstract:
Introduction: Pneumonia is an acute inflammation of the lung parenchyma caused by bacteria, viruses, fungi, or parasites. In Indonesia, pneumonia is among the ten most common inpatient cases. Length of stay is related to increased morbidity, nosocomial infection, and costs. The aim of this study is to assess the effect of pulmonary rehabilitation on the length of stay and on the level of interleukin-6 (IL-6), an inflammation biomarker, in community-acquired pneumonia (CAP) patients in non-intensive rooms, so that pulmonary rehabilitation as an adjunctive therapy can be routinely applied in order to shorten the length of stay along with a decrease in the IL-6 level. Methods: This study was conducted from May to October 2019 at Saiful Anwar General Hospital, Malang. Forty community-acquired pneumonia patients in non-intensive rooms were divided into two groups: 20 patients in the treatment group and 20 patients in the control group, all selected through inclusion and exclusion criteria. This study used simple consecutive random sampling. In the treatment group, the pulmonary rehabilitation performed was composed of breathing exercises, an effective coughing technique, clapping (percussion), postural drainage, as well as respiratory muscle training using an incentive spirometry device. Pulmonary rehabilitation was conducted twice over five days with a minimum duration of 15 minutes. Blood samples were taken on the first and fifth days of treatment to measure the IL-6 level as an inflammation biomarker. Results: The length of stay was 5.35 days for the treatment group and 7.6 days for the control group, so the treatment group had a shorter length of stay by 2.25 days (P<0.001). The IL-6 level on the first day for the treatment group was 36.27 pg/ml, whereas on the fifth day it was 34.36 pg/ml; this decrease was not statistically significant (P=0.628). In the control group, the IL-6 level was 67.76 pg/ml on the first day and decreased to 54.43 pg/ml by the fifth day; this decrease was also not statistically significant (P=0.502). On the fifth day, the treatment group showed an average IL-6 level of 34.36 pg/ml, which was lower than that of the control group, which did not receive pulmonary rehabilitation (54.43 pg/ml), although the difference was not statistically significant (p=0.221). Conclusion: This study concluded that pulmonary rehabilitation as an adjunctive therapy shortened the length of stay by 2.25 days for community-acquired pneumonia patients in non-intensive rooms. Both groups experienced a decrease in the IL-6 level on the fifth day in comparison with the first day, although the decrease was not statistically significant (P>0.05). The IL-6 level as an inflammation biomarker decreased on the fifth day of treatment, which was in accordance with the clinical improvement of the pneumonia patients. Keywords: community-acquired pneumonia, interleukin-6, length of stay, pulmonary rehabilitation
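The length-of-stay comparison reported above can be checked in principle with a two-sample test on summary statistics. The sketch below assumes hypothetical standard deviations, since only the group means and sizes are reported; it is illustrative and does not reproduce the study's actual analysis.

```python
from scipy.stats import ttest_ind_from_stats

# Reported group means and sizes; the standard deviations are assumed
# placeholders (they are not reported in the abstract).
result = ttest_ind_from_stats(
    mean1=5.35, std1=1.2, nobs1=20,   # treatment group (std assumed)
    mean2=7.60, std2=1.8, nobs2=20,   # control group (std assumed)
    equal_var=False,                  # Welch's t-test
)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```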
Procedia PDF Downloads 102