Search results for: bipartite graphs
47 Thermal Analysis of Adsorption Refrigeration System Using Silicagel–Methanol Pair
Authors: Palash Soni, Vivek Kumar Gaba, Shubhankar Bhowmick, Bidyut Mazumdar
Abstract:
Refrigeration technology is a fast-developing field with very wide application in both domestic and industrial areas, having evolved from simple ice coolers for storing foodstuffs to today's sophisticated cold storages and air-conditioning systems. A variety of techniques are used to bring the temperature below ambient. Adsorption refrigeration is a novel, advanced and promising technique developed over the past few decades. It has gained attention for its ability to exploit abundant natural sources such as solar energy, geothermal energy, or even waste heat recovered from plants or locomotive exhausts to meet its energy needs. This reduces the exploitation of non-renewable resources and hence reduces pollution as well. This work aims to develop a model of a solar adsorption refrigeration system and to simulate it under different operating conditions. In this system, the mechanical compressor is replaced by a thermal compressor. The thermal compressor runs on renewable energy such as solar or geothermal energy, which makes the system useful in areas where electricity is not available. Refrigerants commonly in use, such as chlorofluorocarbons/perfluorocarbons, have harmful effects including ozone depletion and greenhouse warming. Another advantage of adsorption systems is that these refrigerants can be replaced with less harmful natural refrigerants such as water, methanol, or ammonia. Thus the double benefit of reduced energy consumption and reduced pollution can be achieved. A thermodynamic model was developed for the proposed adsorber, and a universal MATLAB code was used to simulate it. Simulations were carried out for different operating conditions of the silica gel-methanol working pair.
Various graphs are plotted relating regeneration temperature, adsorption capacity, coefficient of performance, desorption rate, specific cooling power, adsorption/desorption times, and mass. The results show that an adsorption system can be installed successfully for refrigeration, saving power and reducing carbon emissions, even though its efficiency is lower than that of conventional systems. The model was tested for its compliance in a cold-storage refrigeration application with a cooling load of 12 TR.
Keywords: adsorption, refrigeration, renewable energy, silicagel-methanol
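The coefficient of performance the simulation reports can be sketched as useful cooling divided by driving heat. The snippet below is a generic adsorption-cycle estimate, not the study's thermodynamic model; all numbers are invented placeholders.

```python
# Illustrative COP estimate for an adsorption refrigeration cycle.
# All parameter values below are assumed placeholders, not figures
# from the study.

def cycle_cop(delta_x, latent_heat, sensible_heat, desorption_heat):
    """COP = useful cooling effect / driving heat input.

    delta_x          cycled refrigerant mass per kg adsorbent (kg/kg)
    latent_heat      latent heat of evaporation of methanol (kJ/kg)
    sensible_heat    heat to warm the adsorbent bed per cycle (kJ/kg adsorbent)
    desorption_heat  heat of desorption per kg refrigerant (kJ/kg)
    """
    q_cool = delta_x * latent_heat                      # cooling at evaporator
    q_heat = sensible_heat + delta_x * desorption_heat  # regeneration heat
    return q_cool / q_heat

# Rough, assumed silica gel-methanol figures:
cop = cycle_cop(delta_x=0.15, latent_heat=1100.0,
                sensible_heat=250.0, desorption_heat=1400.0)
```

A COP well below 1 is typical for adsorption systems, which is consistent with the abstract's remark that efficiency is lower than in conventional vapour-compression systems.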
Procedia PDF Downloads 206
46 Analyzing the Causes of Amblyopia among Patients in Tertiary Care Center: Retrospective Study in King Faisal Specialist Hospital and Research Center
Authors: Hebah M. Musalem, Jeylan El-Mansoury, Lin M. Tuleimat, Selwa Alhazza, Abdul-Aziz A. Al Zoba
Abstract:
Background: Amblyopia is a condition that affects the visual system, causing a decrease in visual acuity without a known underlying pathology. It is due to abnormal visual development in infancy or childhood. Most importantly, the vision loss is preventable or reversible with the right intervention in most cases. Strabismus, sensory defects, and anisometropia are all well-known causes of amblyopia; ocular misalignment in strabismus is considered the most common cause worldwide. The risk of developing amblyopia increases in children who are premature, developmentally delayed, or have brain lesions affecting the visual pathway. According to the literature, the prevalence of amblyopia worldwide varies between 2% and 5%. Objective: To determine the different causes of amblyopia in pediatric patients seen in the ophthalmology clinic of a tertiary care center, King Faisal Specialist Hospital and Research Center (KFSH&RC). Methods: This is a hospital-based, random retrospective study based on reviewing patients' files in the Ophthalmology Department of KFSH&RC in Riyadh, Kingdom of Saudi Arabia. Inclusion criteria: amblyopic pediatric patients between 6 months and 18 years of age who attended the clinic from 2015 to 2016. Exclusion criteria: patients above 18 years of age and any patient too uncooperative to obtain an accurate visual acuity or a proper refraction. Detailed ocular and medical histories were recorded. The examination protocol included a full ocular exam, full cycloplegic refraction, visual acuity measurement, and ocular motility and strabismus evaluation. All data were organized in tables and graphs and analyzed by a statistician. Results: Preliminary results will be presented by the corresponding author. Conclusions: This study utilized various examination techniques, which strengthened the results and highlighted a clear correlation between amblyopia and its causes.
This paper recommends critical testing protocols to be followed for amblyopic patients, especially in tertiary care centers.
Keywords: amblyopia, amblyopia causes, amblyopia diagnostic criterion, amblyopia prevalence, Saudi Arabia
Procedia PDF Downloads 159
45 The Use of Social Media in the Recruitment Process as HR Strategy
Authors: Seema Sant
Abstract:
In the 21st century, where a four-generation workforce is at work, it is crucial for organizations to build a talent management strategy, as the tech-savvy Gen Y has entered the workforce. They are more connected to each other than ever through internet-enabled social media networks. Social media has become important in today's world, and the number of users of social media sites has multiplied. From sharing opinions about a brand or product, to researching a company before an interview, to forming an impression of a company's culture, to following a company's updates out of interest or for job vacancies, today's workforce is constantly in touch with social networks. The corporate world has rightly realized the potential of social media for business purposes: companies now use it for marketing, advertising, consumer surveys, and more. HR professionals use it for networking and connecting to the talent pool through talent communities. Social recruiting is the process of sourcing or hiring candidates through social sites such as LinkedIn, Facebook, and Twitter, which provide an array of information about potential employees. This study is an exploratory investigation of the role of social networking sites in recruitment. The primary aim is to analyze the factors that can enhance the recruitment channel used by recruiters, with specific reference to IT organizations in Mumbai, India. In particular, the aim is to identify how and why companies use social media to attract and screen applicants during their recruitment processes, and to examine, through a literature review, the advantages and limitations of recruitment through social media for employers. Further, the paper examines the recruiter's impact and the various opportunities technology has created; to analyze and examine these factors, both primary and secondary data were collected for the study.
The primary data were gathered from five HR managers working in five top IT organizations in Mumbai and from 100 HR consultants, i.e., recruiters. The data were collected through a survey using a closed-ended questionnaire. A comprehensive analysis of the study is depicted through graphs and figures. From the analysis, it was observed that there exists a positive relationship between the level of employees recruited through social media and their organizational commitment. Finally, the findings show that companies, i.e., recruiters, are currently using social media in recruitment, but perhaps not as effectively as they could. The paper gives recommendations and conditions for success that can help employers make the most of social media in recruitment.
Keywords: recruitment, social media, social sites, workforce
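The positive relationship reported above would typically be quantified with a correlation coefficient. The sketch below computes a Pearson correlation between two invented series (a hypothetical share of hires sourced via social media and a hypothetical commitment score); it illustrates the statistic, not the study's actual data.

```python
# Pearson correlation between two survey-derived variables.
# Both data series are hypothetical placeholders.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

social_media_hire_share = [0.1, 0.2, 0.3, 0.4, 0.5]   # hypothetical
commitment_score        = [3.1, 3.4, 3.6, 4.0, 4.2]   # hypothetical
r = pearson(social_media_hire_share, commitment_score)
```

A value of r near +1 would support the kind of positive relationship the abstract describes.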
Procedia PDF Downloads 179
44 Descriptive Epidemiology of Diphtheria Outbreak Data, Taraba State, Nigeria, August-November 2023
Authors: Folajimi Oladimeji Shorunke
Abstract:
Background: As of October 9, 2023, diphtheria has been re-emerging in four African countries: Algeria, Guinea, Niger, and Nigeria. Across these countries, 14,587 cases have been reported with a case fatality rate of 4.1%, with Nigeria alone responsible for over 90% of the cases. In Taraba State, Nigeria, the index case of diphtheria was reported in epidemic week 34, on August 24, 2023, with 75 confirmed cases found in the 3 months after the index case and a case fatality rate of 1.3%. This study describes the distribution, trend, and common symptoms found during the outbreak. Methods: The Taraba State diphtheria outbreak line list on the Surveillance Outbreak Response Management & Analysis System (SORMAS), covering all 16 local government areas (LGAs), was analyzed using descriptive statistics (graphs, charts, and maps) for the period from 24th August to 25th November 2023. Primary data were collected through case investigation forms, and variables such as age, gender, date of disease onset, LGA of residence, and symptoms exhibited were recorded. Naso-pharyngeal and oro-pharyngeal samples were also collected for laboratory confirmation. The most common diphtheria symptoms during the outbreak were also highlighted. Results: A total of 75 diphtheria cases were diagnosed in 10 of the 16 LGAs in Taraba State between 24th August and 25th November 2023. Of the cases, 72% were female, the 0-9 years age range had the highest proportion at 34 (45.3%), and the number of positive diagnoses decreased with age. The northern part of the state had the highest proportion of cases, 68 (90.7%), with Ardo-Kola LGA having the highest at 28 (29%); the remaining cases were shared between the middle belt and the southern part of the state. The epi-curve took the characteristic shape of a propagated outbreak, with peaks at the 37th, 39th, and 45th epidemic weeks.
The most common symptoms found in cases were fever, 71 (94.7%); pharyngitis, 65 (86.7%); tonsillitis, 60 (80%); and laryngitis, 53 (71%). Conclusions: 75 cases of diphtheria were confirmed in Taraba State, Nigeria between 24th August and 25th November 2023. The condition was more frequent among females than males and mostly affected children aged 0-9 years, with the northern part of the state most affected. The most common symptoms exhibited by cases were fever, pharyngitis, tonsillitis, and laryngitis.
Keywords: diphtheria outbreak, taraba nigeria, descriptive epidemiology, trend
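The descriptive percentages in this abstract follow directly from the reported case counts. As a worked arithmetic check (counts taken from the text above; nothing else is assumed):

```python
# Reproducing the reported percentages from the case counts in the
# abstract: 75 total cases, 34 aged 0-9, 68 in the north, 71 with fever.

total_cases = 75
pct_age_0_9 = round(100 * 34 / total_cases, 1)   # proportion aged 0-9
pct_north   = round(100 * 68 / total_cases, 1)   # proportion in the north
pct_fever   = round(100 * 71 / total_cases, 1)   # proportion with fever
```

These reproduce the 45.3%, 90.7%, and 94.7% figures quoted in the abstract.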
Procedia PDF Downloads 69
43 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning
Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga
Abstract:
Architects and urban planners, when designing and renewing cities, face a complex set of problems, including noise and air pollution, which are hot topics (cf. the Clean Air Act of London and the soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industries, and these produce air pollutants as well. Traffic congestion can create both noise and air pollution: NO₂ is mostly created from the oxidation of NO, and both are notoriously produced by combustion at high temperatures (e.g., car engines or thermal power stations), and the same holds for industrial plants. What has to be investigated, and this is the topic of this paper, is whether there really is a correlation between noise pollution and air pollution (taking NO₂ into account) in urban areas. To evaluate whether there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise app will be installed on an Android phone. The smartphone will be positioned inside a waterproof box, so it can stay outdoors, with an external battery to allow it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone and calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used: an Arduino board into which the sensors and all the other components are plugged. After assembly, the sensors will be paired (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period covering both weekdays and weekend days, so that changes over the course of the week can be observed.
The novelty is that the data will be compared to check whether there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw sensor values. To do so, the data will be converted to a scale that goes up to 100% and shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help in choosing the right mitigation solutions for the area under analysis, because it makes it possible to address both the noise and the air pollution problem with a single intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper describes in detail the methodology and the technical solutions adopted for the realization of the sensors, the data collection, and the noise and pollution mapping and analysis.
Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter
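The conversion step described above, rescaling two quantities with different units onto a common 0-100% scale, can be sketched with a simple min-max normalization. The readings below are invented; the paper's actual conversion rule is not specified beyond "a scale that goes up to 100%".

```python
# Min-max rescaling of noise (dB) and NO2 (ug/m3) readings to a
# common 0-100% scale so the two pollutants can be compared on one
# map. All readings are hypothetical.

def to_percent(values):
    lo, hi = min(values), max(values)
    return [100 * (v - lo) / (hi - lo) for v in values]

noise_db = [55.0, 62.0, 70.0, 58.0, 75.0]   # hypothetical noise levels
no2_ugm3 = [18.0, 30.0, 44.0, 25.0, 52.0]   # hypothetical NO2 levels

noise_pct = to_percent(noise_db)
no2_pct = to_percent(no2_ugm3)
```

On the common scale, site-by-site differences between the two normalized series indicate where noise and NO₂ do or do not track each other.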
Procedia PDF Downloads 212
42 Innovations in the Implementation of Preventive Strategies and Measuring Their Effectiveness Towards the Prevention of Harmful Incidents to People with Mental Disabilities who Receive Home and Community Based Services
Authors: Carlos V. Gonzalez
Abstract:
Background: Providers of in-home and community-based services strive to eliminate preventable harm to the people under their care as well as to the employees who support them. Traditional models of safety and protection from harm have assumed that the absence of incidents of harm is a good indicator of safe practices. However, this model creates an illusion of safety that is easily shaken by sudden and inadvertent harmful events. As an alternative, we have developed and implemented an evidence-based resilient model of safety known as C.O.P.E. (Caring, Observing, Predicting and Evaluating). Within this model, safety is defined not by the absence of harmful incidents, but by the presence of continuous monitoring, anticipation, learning, and rapid response to events that may lead to harm. Objective: The objective was to evaluate the effectiveness of the C.O.P.E. model for the reduction of harm to individuals with mental disabilities who receive home and community-based services. Methods: Over the course of 2 years, we counted the number of incidents of harm and near misses. We trained employees on strategies to eliminate incidents before they fully escalated, and to track patient status on a scale from 0 to 10. Additionally, we provided direct support professionals and supervisors with customized smartphone applications to track and notify the team of changes in that status every 30 minutes. Finally, the information collected was saved in a private computer network that analyzes and graphs the outcome of each incident. Results and conclusions: The use of the C.O.P.E. model resulted in a reduction in incidents of harm; a reduction in the use of restraints and other physical interventions; an increase in direct support professionals' ability to detect and respond to health problems; and improved employee alertness, with less sleeping on duty.
It also improved caring and positive interaction between direct support professionals and the persons they support, and yielded a method to globally measure and assess the effectiveness of harm-prevention plans. Future applications of the C.O.P.E. model for the reduction of harm to people who receive home and community-based services are discussed.
Keywords: harm, patients, resilience, safety, mental illness, disability
Procedia PDF Downloads 447
41 Quantum Graph Approach for Energy and Information Transfer through Networks of Cables
Authors: Mubarack Ahmed, Gabriele Gradoni, Stephen C. Creagh, Gregor Tanner
Abstract:
High-frequency cables commonly connect modern devices and sensors. Interestingly, the proportion of electric components is rising fast in an attempt to achieve lighter and greener devices. Modelling the propagation of signals through these cable networks in the presence of parameter uncertainty is a daunting task. In this work, we study the response of high-frequency cable networks using both Transmission Line (TL) and Quantum Graph (QG) theories. We have successfully compared the two theories in terms of reflection spectra using measurements on real, lossy cables. We have derived a generalisation of the vertex scattering matrix to include non-uniform networks: networks of cables with different characteristic impedances and propagation constants. The QG model implicitly takes into account the pseudo-chaotic behaviour, at the vertices, of the propagating electric signal. We have successfully compared the asymptotic growth of the eigenvalues of the Laplacian with the predictions of Weyl's law. We investigate the nearest-neighbour level-spacing distribution of the resonances and compare our results with the predictions of Random Matrix Theory (RMT); to achieve this, we compare our graphs with the generalisation of the Wigner distribution for open systems. The problem of scattering from networks of cables can also provide an analogue model for wireless communication in highly reverberant environments. In this context, we provide a preliminary analysis of the statistics of communication capacity across cable networks, whose eventual aim is to enable detailed laboratory testing of information transfer rates using software-defined radio. We specialise this analysis to the case of MIMO (Multiple-Input Multiple-Output) protocols. We have successfully validated our QG model against both the TL model and laboratory measurements. The growth of the eigenvalues compares well with Weyl's law, and the level-spacing distribution agrees well with the RMT predictions.
The results we achieved in the MIMO application compare favourably with the predictions of parallel ongoing research (sponsored by NEMF21).
Keywords: eigenvalues, multiple-input multiple-output, quantum graph, random matrix theory, transmission line
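The level-spacing analysis mentioned above proceeds by sorting the resonances, normalising the nearest-neighbour gaps to unit mean ("unfolding"), and comparing their distribution with the Wigner (GOE) surmise P(s) = (pi/2) s exp(-pi s^2 / 4). The sketch below shows that machinery on synthetic levels; note the synthetic levels are uncorrelated uniform numbers, so their spacings would follow the Poisson curve rather than the Wigner one. It is an illustration of the method, not the paper's measured cable resonances.

```python
import math
import random

# Unfold a set of "eigenvalues" to unit mean spacing and provide the
# Wigner (GOE) surmise for comparison. Levels here are synthetic.

def nearest_spacings(levels):
    levels = sorted(levels)
    gaps = [b - a for a, b in zip(levels, levels[1:])]
    mean = sum(gaps) / len(gaps)
    return [g / mean for g in gaps]   # unfolded: mean spacing = 1

def wigner_goe(s):
    """Wigner surmise P(s) = (pi/2) s exp(-pi s^2 / 4)."""
    return (math.pi / 2) * s * math.exp(-math.pi * s * s / 4)

random.seed(0)
levels = [random.uniform(0, 100) for _ in range(200)]
spacings = nearest_spacings(levels)
```

In practice one histograms `spacings` and overlays `wigner_goe` to judge agreement with RMT, as the abstract describes for the measured resonances.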
Procedia PDF Downloads 173
40 Evaluation of the Effect of Magnetic Field on Fibroblast Attachment in Contact with PHB/Iron Oxide Nanocomposite
Authors: Shokooh Moghadam, Mohammad Taghi Khorasani, Sajjad Seifi Mofarah, M. Daliri
Abstract:
Over the recent two decades, the use of magnetic materials for target-cell separation and, eventually, cancer treatment has increased remarkably. Numerous factors can alter the efficacy of this method. In this project, the effect of a magnetic field on the adhesion of PDL and L929 cells to iron oxide/PHB nanocomposites with different iron oxide contents (1%, 2.5%, 5%) has been studied. The nanocomposite consists of a polymeric film of poly(hydroxybutyrate) with γ-Fe2O3 particles of average size 25 nm dispersed in it; during preparation, poly(vinyl alcohol) (98% hydrolyzed, molecular weight 78,000) was used as an emulsifier to achieve uniform distribution. To obtain a homogeneous film, the solution of PHB and iron oxide nanoparticles was placed in a dry freezer and in liquid nitrogen, which resulted in a uniform porous scaffold, and a 100 °C press was used to remove the porosity. After the synthesis of the desired nanocomposite film, several tests were performed. First, the particle size and distribution in the film were evaluated by transmission electron microscopy (TEM), and FTIR analysis and DMTA tests were run to assess the chemical bonds and mechanical properties of the nanocomposites, respectively. Comparing the graphs of the case and control samples established that adding nanoparticles increased the crystallization temperature and that a higher γ-Fe2O3 content led to a higher Tg (glass transition temperature). Furthermore, the dispersion range and damping properties of the samples increased. Moreover, the toxicity, morphological changes, and adhesion of fibroblast and cancer cells were evaluated by a variety of tests. All samples were grown at different densities in contact with cells for 24 and 48 hours within a magnetic field of 2×10⁻³ Tesla. After 48 hours, the samples were photographed with optical and SEM microscopes, and no sign of toxicity was traced.
The number of cancer cells in the sample group was noticeably higher than in the control group. However, many gaps and unclear aspects of using magnetic fields in the treatment of cancer and other diseases remain to be explored; nevertheless, prominent steps have been taken in recent years, and we hope this project can be at least a small contribution to this issue.
Keywords: nanocomposite, cell attachment, magnetic field, cytotoxicity
Procedia PDF Downloads 259
39 Introducing Principles of Land Surveying by Assigning a Practical Project
Authors:
Abstract:
A practical project is used in an engineering surveying course to expose sophomore and junior civil engineering students to several important issues related to the use of basic principles of land surveying. The project, the design of a two-lane rural highway connecting two arbitrary points, requires students to draw the profile of the proposed highway along with the existing ground level. The areas of all cross-sections are then computed to enable quantity computations between them. Lastly, a mass-haul diagram is drawn with all important parts and features shown on it for clarity. At the beginning, students faced challenges getting started on the project. They had to spend time and effort thinking about the best way to proceed and how the work would flow. It was even more challenging when they had to visualize cut, fill, and mixed cross-sections in three dimensions before they could draw them and complete the necessary computations. These difficulties were then largely overcome with the help of the instructor and through thorough discussions among team members and between different teams. The method of assessment used in this study was a well-prepared end-of-semester questionnaire distributed to students after the completion of the project and the final exam. The survey contained a wide spectrum of questions, from the students' learning experience under this course development to their satisfaction with the class instructions and the instructor's competency in presenting the material and helping with the project. It also covered the adequacy of the project as a sample of a real-life civil engineering application and whether implementing this idea added any excitement. At the end of the questionnaire, students had the chance to provide constructive comments and suggestions for future improvements of the land surveying course. Outcomes are presented graphically and in tabular format.
Graphs provide a visual explanation of the results, while tables summarize numerical values along with descriptive statistics, such as the mean, standard deviation, and coefficient of variation, for each student and each question. In addition to gaining experience in teamwork, communication, and customer relations, students felt the benefit of such a project. They noticed the beauty of the practical side of civil engineering work and how theories are utilized in real-life engineering applications. Students even recommended that such a project be assigned every time this course is offered, so that future students can have the same learning opportunity they had.
Keywords: land surveying, highway project, assessment, evaluation, descriptive statistics
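The quantity computation at the heart of the project, volumes between cross-sections accumulated into a mass-haul diagram, is commonly done with the average-end-area method. The sketch below uses invented cut/fill areas and a uniform station spacing; it illustrates the standard method, not the students' specific designs.

```python
# Earthwork volumes by the average-end-area method, accumulated into
# mass-haul ordinates. Areas are hypothetical (positive = cut,
# negative = fill); spacing is the distance between stations.

def average_end_area(areas, spacing):
    """Volume between each pair of sections: V = spacing * (A1 + A2) / 2."""
    return [spacing * (a1 + a2) / 2 for a1, a2 in zip(areas, areas[1:])]

def mass_haul(volumes):
    """Cumulative volume: the ordinates of the mass-haul diagram."""
    total, ordinates = 0.0, [0.0]
    for v in volumes:
        total += v
        ordinates.append(total)
    return ordinates

areas_m2 = [12.0, 8.0, -4.0, -10.0, 6.0]      # hypothetical cut/fill areas
vols = average_end_area(areas_m2, spacing=20.0)
curve = mass_haul(vols)
```

Rising portions of `curve` indicate net cut and falling portions net fill, which is exactly what students read off the mass-haul diagram to balance earthwork.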
Procedia PDF Downloads 229
38 Climate Change Effects of Vehicular Carbon Monoxide Emission from Road Transportation in Part of Minna Metropolis, Niger State, Nigeria
Authors: H. M. Liman, Y. M. Suleiman, A. A. David
Abstract:
Poor air quality, often considered one of the greatest environmental threats facing the world today, is caused largely by the emission of carbon monoxide into the atmosphere. Carbon monoxide is the principal air pollutant, and one prominent source of its emission is the transportation sector. Not much was known about the emission levels of carbon monoxide, the primary pollutant from road transportation, in the study area. Therefore, this study assessed the levels of carbon monoxide emission from road transportation in Minna, Niger State. An MSA Altair gas alert detector was used to take carbon monoxide readings, in parts per million (ppm), for the peak and off-peak periods of vehicular movement at road intersections, and the Global Positioning System (GPS) coordinates of the intersections were recorded in the Universal Transverse Mercator (UTM) system. Bar charts were plotted comparing the carbon monoxide emission levels recorded in the field against the scientifically established, internationally accepted safe limit of 8.7 ppm of carbon monoxide in the atmosphere. Further statistical analysis was carried out on the field data using the Statistical Package for the Social Sciences (SPSS) software and Microsoft Excel to show the variance of the emission levels of each parameter in the study area. The results established that the emission levels of atmospheric carbon monoxide from road transportation in the study area exceeded the internationally accepted safe limit of 8.7 ppm. In addition, comparing the average emission levels of CO across the four periods showed that the morning peak had the highest average emission level, 24.5 ppm, followed by the evening peak with 22.84 ppm, while the morning off-peak had 15.33 ppm and the evening off-peak the least, 12.94 ppm.
Based on these results, recommendations for mitigating poor air quality by reducing carbon monoxide emissions from transportation include the following. Introducing urban mass transit would reduce the number of vehicles on the roads, and hence the emissions from the many vehicles that would otherwise have been on the road, while also providing a cheaper means of transportation for the masses. Encouraging the use of vehicles running on alternative energy sources such as solar, electricity, and biofuel would also reduce emission levels, as these alternatives, unlike fossil-fuel diesel and petrol vehicles, emit little or no carbon monoxide.
Keywords: carbon monoxide, climate change emissions, road transportation, vehicular
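The central comparison in this study, period averages against the 8.7 ppm safe limit, can be reproduced directly from the figures reported in the abstract (only the four period means are available; the underlying per-reading data are not):

```python
# Comparing the reported period-average CO readings against the
# internationally accepted safe limit of 8.7 ppm. The four means are
# taken from the abstract; per-reading data are not available.

SAFE_LIMIT_PPM = 8.7
period_means = {
    "morning peak": 24.5,
    "evening peak": 22.84,
    "morning off-peak": 15.33,
    "evening off-peak": 12.94,
}

# ppm above the safe limit, per period
exceedance = {k: round(v - SAFE_LIMIT_PPM, 2) for k, v in period_means.items()}
all_exceed = all(v > SAFE_LIMIT_PPM for v in period_means.values())
```

Every period mean exceeds the limit, which is the study's headline finding; the morning peak exceeds it by roughly a factor of three.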
Procedia PDF Downloads 375
37 Influence of Atmospheric Circulation Patterns on Dust Pollution Transport during the Harmattan Period over West Africa
Authors: Ayodeji Oluleye
Abstract:
This study used thirty years (1983-2012) of Total Ozone Mapping Spectrometer (TOMS) Aerosol Index (AI) and reanalysis data to investigate the influence of atmospheric circulation on dust transport during the Harmattan period over West Africa. The Harmattan dust mobilization and atmospheric circulation patterns were evaluated using a kernel density estimate, which shows the areas where most points are concentrated between the variables. The evolution of the Inter-Tropical Discontinuity (ITD), the sea surface temperature (SST) over the Gulf of Guinea, and the North Atlantic Oscillation (NAO) index during the Harmattan period (November-March) was also analyzed, and graphs of the average ITD positions, SST, and NAO were examined on a daily basis. Pearson correlation analysis was also employed to assess the effect of atmospheric circulation on Harmattan dust transport. The results show that the departure (increase) of TOMS AI values from the long-term mean (1.64) occurred from around the 21st of December, which marks the rich dust days of the winter period. Strong TOMS AI signals were observed from January to March, with the maximum occurring in the latter months (February and March). The inter-annual variability of TOMS AI revealed that the rich dust years were 1984-1985, 1987-1988, 1997-1998, 1999-2000, and 2002-2004; a significantly poor dust year was 2005-2006 across all the periods. The study found strong north-easterly (NE) trade winds over most of the Sahelian region of West Africa during the winter months, with the maximum wind speed reaching 8.61 m/s in January. The strength of the NE winds determines the extent of dust transport to the coast of the Gulf of Guinea during winter. This study has confirmed that the presence of the Harmattan is strongly dependent on the SST over the Atlantic Ocean and the ITD position. The loci of the average SST and ITD positions over West Africa could be described by polynomial functions.
The study concludes that the evolution of the near-surface wind field at 925 hPa and the variations of SST and ITD positions are the major large-scale atmospheric circulation systems driving the emission, distribution, and transport of Harmattan dust aerosols over West Africa. The NAO, however, was shown to have a less significant effect on Harmattan dust transport over the region.
Keywords: atmospheric circulation, dust aerosols, Harmattan, West Africa
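The kernel density estimate used above to locate where most data points concentrate can be sketched in one dimension with a Gaussian kernel. The sample AI values below are invented around the long-term mean of 1.64 quoted in the abstract; the bandwidth is an assumed illustrative choice, not the study's.

```python
import math

# 1-D Gaussian kernel density estimate over sample values, e.g. daily
# TOMS AI readings. The sample values and bandwidth are hypothetical.

def gaussian_kde(samples, bandwidth):
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density

toms_ai = [1.2, 1.5, 1.6, 1.7, 1.9, 2.1]   # hypothetical daily AI values
f = gaussian_kde(toms_ai, bandwidth=0.2)
```

Evaluating `f` on a grid of AI values (or on pairs of variables, in the 2-D case the study uses) shows where observations concentrate, which is how the dust-circulation relationship is visualized.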
Procedia PDF Downloads 310
36 Knowledge Graph Development to Connect Earth Metadata and Standard English Queries
Authors: Gabriel Montague, Max Vilgalys, Catherine H. Crawford, Jorge Ortiz, Dava Newman
Abstract:
There has never been so much publicly accessible atmospheric and environmental data. The possibilities of these data are exciting, but the sheer volume of available datasets represents a new challenge for researchers: the task of identifying and working with a new dataset has become more difficult with the amount and variety of available data. Datasets are often documented in ways that differ substantially from the common English used to describe the same topics. This presents a barrier not only for new scientists, but also for researchers looking for comparisons across multiple datasets and for specialists from other disciplines hoping to collaborate. This paper proposes a method for addressing this obstacle: creating a knowledge graph to bridge the gap between everyday English and the technical language surrounding these datasets. Knowledge graph generation is already a well-established field, although working with Earth data poses some unique challenges. One is the sheer size of the databases: it would be infeasible to replicate or analyze all the data stored by an organization like the National Aeronautics and Space Administration (NASA) or the European Space Agency. Instead, this approach identifies topics from the metadata available for datasets in NASA's Earthdata database, which can then be used to directly request and access the raw data from NASA. By starting with a single metadata standard, this paper establishes an approach that can be generalized to different databases, while leaving the challenge of metadata harmonization for future work. Topics generated from the metadata are then linked to topics from a collection of English queries through a variety of standard and custom natural language processing (NLP) methods. The results of this method are then compared to a baseline of elastic search applied to the metadata.
This comparison shows the benefits of the proposed knowledge graph system over existing methods, particularly in interpreting natural language queries and topics in metadata. For the research community, this work introduces an application of NLP to the ecological and environmental sciences, expanding the possibilities of how machine learning can be applied in this discipline. But perhaps more importantly, it establishes the foundation for a platform that can enable common English to access knowledge that previously required considerable effort and experience. By making these public data genuinely accessible to the general public, this work has the potential to transform environmental understanding, engagement, and action.
Keywords: earth metadata, knowledge graphs, natural language processing, question-answer systems
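The abstract does not specify which NLP methods link query topics to metadata topics, but a minimal sketch of the kind of term-matching baseline involved — matching a plain-English query to dataset metadata by TF-IDF cosine similarity — might look like the following. The tokenized metadata documents and query are invented for illustration, not drawn from Earthdata:

```python
import math
from collections import Counter

def tfidf_vectors(token_docs):
    """TF-IDF vectors (term -> weight dicts) for a list of token lists."""
    n = len(token_docs)
    df = Counter()
    for doc in token_docs:
        df.update(set(doc))
    vecs = []
    for doc in token_docs:
        tf = Counter(doc)
        vecs.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_match(metadata_docs, query_tokens):
    """Index of the metadata document most similar to the query."""
    vecs = tfidf_vectors(metadata_docs + [query_tokens])
    query_vec = vecs[-1]
    scores = [cosine(query_vec, v) for v in vecs[:-1]]
    return max(range(len(scores)), key=scores.__getitem__)

# Invented metadata topic lists and a common-English query
metadata = [
    ["sea", "surface", "temperature", "anomaly"],
    ["aerosol", "optical", "depth"],
    ["soil", "moisture", "retrieval"],
]
query = ["ocean", "temperature"]
```

A knowledge-graph approach goes beyond this lexical baseline by also linking "ocean" to "sea surface", which pure term matching cannot do.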
Procedia PDF Downloads 148
35 Effect of Ease of Doing Business to Economic Growth among Selected Countries in Asia
Authors: Teodorica G. Ani
Abstract:
Economic activity requires an encouraging regulatory environment and effective rules that are transparent and accessible to all. The World Bank has been publishing the annual Doing Business reports since 2004 to investigate the scope and manner of regulations that enhance business activity and those that constrain it. A streamlined business environment supporting the development of competitive small and medium enterprises (SMEs) may expand employment opportunities and improve the living conditions of low-income households. Asia has emerged as one of the most attractive markets in the world. Economies in East Asia and the Pacific were among the most active in making it easier for local firms to do business. The study aimed to describe the ease of doing business and its effect on economic growth among selected economies in Asia for the year 2014. The study covered 29 economies in East Asia, Southeast Asia, South Asia and Central Asia. Ease of doing business is measured by the Doing Business indicators (DBI) of the World Bank. The indicators cover ten aspects of the ease of doing business: starting a business, dealing with construction permits, getting electricity, registering property, getting credit, protecting investors, paying taxes, trading across borders, enforcing contracts and resolving insolvency. In the study, Gross Domestic Product (GDP) was used as the proxy variable for economic growth. A descriptive research design was used. Graphical analysis was used to describe income and doing business among the selected economies. In addition, multiple regression was used to determine the effect of doing business on economic growth. The study presented the income among the selected economies. The graphs showed that China has the highest income while the Maldives has the lowest, an observation supported by the gathered literature. The study also presented the status of the ten indicators of doing business among the selected economies.
The graphs showed varying trends in how easy it is to start a business, deal with construction permits and register property. Starting a business is easiest in Singapore, followed by Hong Kong. The study found that the variation in the ease of doing business is explained by starting a business, dealing with construction permits and registering property. Moreover, the regression results imply that each one-day increase in the average number of days it takes to complete a procedure decreases the value of GDP, other things being equal. The research proposed inputs to policy which may increase the awareness of local government units of different economies regarding the simplification of the different components used in measuring doing business.
Keywords: doing business, economic growth, gross domestic product, Asia
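The multiple-regression step described above can be sketched as follows. The figures are synthetic stand-ins (the study's actual 2014 country data are not reproduced here), with GDP regressed on two illustrative Doing Business indicators so that the recovered negative slopes mirror the abstract's interpretation:

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares with an intercept; returns [intercept, b1, b2, ...]."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Synthetic data: days to start a business, days to obtain a construction permit
days_start = np.array([3.0, 10.0, 25.0, 40.0, 60.0, 15.0])
days_permit = np.array([26.0, 80.0, 150.0, 200.0, 90.0, 130.0])
X = np.column_stack([days_start, days_permit])

# GDP generated with a known negative effect of each delay (noiseless, so
# the fit recovers the coefficients exactly)
gdp = 12.0 - 0.2 * days_start - 0.05 * days_permit

beta = fit_ols(X, gdp)   # ≈ [12.0, -0.2, -0.05]
```

A negative slope on a "days" indicator means each extra day of delay is associated with lower GDP, which is exactly the direction of the effect reported in the abstract.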
Procedia PDF Downloads 379
34 Photovoltaic Modules Fault Diagnosis Using Low-Cost Integrated Sensors
Authors: Marjila Burhanzoi, Kenta Onohara, Tomoaki Ikegami
Abstract:
Faults in photovoltaic (PV) modules should be detected as early and as completely as possible. Conventional fault detection methods such as electrical characterization, visual inspection, infrared (IR) imaging, ultraviolet fluorescence and electroluminescence (EL) imaging are used for this, but they either fail to detect the location or category of the fault, or they require expensive equipment and are not convenient for onsite application. Hence, these methods are not convenient for monitoring small-scale PV systems. Therefore, low-cost and efficient inspection techniques suited to onsite application are indispensable for PV modules. In this study, in order to establish such an inspection technique, the correlation between faults and the magnetic flux density on the surface of crystalline PV modules is investigated. Magnetic flux on the surface of normal and faulted PV modules is measured under short-circuit and illuminated conditions using two different sensor devices. One device is made of small integrated sensors, namely a 9-axis motion tracking sensor with a 3-axis electronic compass embedded, an IR temperature sensor, an optical laser position sensor and a microcontroller. This device measures the X, Y and Z components of the magnetic flux density (Bx, By and Bz) a few mm above the surface of a PV module and outputs the data as line graphs in a LabVIEW program. The second device is made of a laser optical sensor and two magnetic line sensor modules consisting of 16 magnetic sensors. This device scans the magnetic field on the surface of the PV module and outputs the data as a 3D surface plot of the magnetic flux intensity in a LabVIEW program. A PC equipped with LabVIEW software is used for data acquisition and analysis for both devices. To show the effectiveness of this method, measured results are compared to those of a normal reference module and their EL images.
Through the experiments, it was confirmed that the magnetic field in the faulted areas has a distinct profile which can be clearly identified in the measured plots. Measurement results showed a perfect correlation with the EL images, and using the position sensors the exact location of faults was identified. This method was applied to different modules, and various faults were detected with it. The proposed method offers on-site measurement and real-time diagnosis. Since simple sensors are used to make the device, it is low cost and convenient for small-scale or residential PV system owners.
Keywords: fault diagnosis, fault location, integrated sensors, PV modules
Procedia PDF Downloads 224
33 Phytoremediation: Pb, Cr and Cd Accumulation in Fruits and Leaves of Vitis vinifera L. from Air Pollution and Interaction between Their Uptake Based on the Distance from the Main Road
Authors: Fatemeh Mohsennezhad
Abstract:
Air pollution is one of the major environmental problems. Providing healthy food and protecting water sources from pollution have long been concerns of human societies and decision-making centers, so that protecting food from pollution, and detecting and measuring sources of pollution, are important. The nutritive and political significance of the grape in this area, the extensive use of the leaf and fruit of this plant, the development of urban areas around grape gardens and the construction of the Tabriz–Miandoab road, which is the most important link between East and West Azarbaijan, led us to examine the impact of this road and of urban pollutants such as lead, chromium and cadmium on the quality of this valuable crop. First, samples were taken from places adjacent to the road and at medium distances from it, each place being located exactly by Google Earth and GPS. Digestion was done by dry ashing followed by hydrochloric acid treatment, and the ashes were analyzed by atomic absorption spectrometry to determine Pb, Cr and Cd accumulations. In these experiments, the effects of two factors were examined as variables: garden distance from the main road, with levels 1: 50 meters, 2: 120-200 meters, 3: above 800 meters; and plant organ, with levels 1: fruit, 2: leaves. Finally, the results were processed with SPSS software. The highest lead content in fruits, 3.54 ppm, was found at sample No. 54, 800 meters from the road, and the lowest, 1.00 ppm, at sample No. 50, 1000 meters from the road. In leaves, the highest lead content was 19.16 ppm at sample No. 15, 50 meters from the road, and the lowest was 1.41 ppm at sample No. 31, 50 meters from the road. Pb uptake is significantly different between the 50-meter and 200-meter distances, meaning that Pb uptake is highest near the main road. This result does not hold for the other elements: distance has no significant effect on Cr uptake.
The analysis of variance for distance and plant organ for Cd showed that Cd uptake differs significantly between fruit and leaf, but neither distance nor the interaction between distance and plant organ is significant. There is no significant interaction between the uptakes of these elements within fruits alone or within leaves alone. When leaves and fruits are considered together, however, a highly significant correlation between the heavy metal accumulations appears, meaning that the uptake of each of these elements is accompanied by uptake of the others, irrespective of the organ. In the tested area it became clear that, from the perspective of heavy metal accumulation, there is no significant difference across the distances between road and garden. There is a significant difference among the accumulations of the heavy metals themselves; in other words, the ratio by which one metal increases relative to another differed, as shown in the corresponding graphs. The interaction between the elements and the distance between garden and road was not significant.
Keywords: Vitis vinifera L., phytoremediation, heavy metals accumulation, lead, chromium, cadmium
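The "integration between heavy metal accumulations" reported above is essentially a correlation across samples. A minimal sketch of that computation, with invented concentrations rather than the study's measurements, could be:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented Pb and Cd accumulations (ppm) over the same set of samples
pb = [1.0, 2.5, 3.5, 1.4, 19.2]
cd = [0.1, 0.3, 0.4, 0.15, 2.1]
r = pearson(pb, cd)   # strongly positive for these co-varying values
```

A high r across all samples, but not within fruits or leaves separately, is the pattern the abstract describes.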
Procedia PDF Downloads 354
32 Radar Cross Section Modelling of Lossy Dielectrics
Authors: Ciara Pienaar, J. W. Odendaal, J. Joubert, J. C. Smit
Abstract:
The radar cross section (RCS) of dielectric objects plays an important role in many applications, such as low observability technology development, drone detection and monitoring, as well as coastal surveillance. Various materials are used to construct the targets of interest, such as metal, wood, composite materials, radar absorbent materials, and other dielectrics. Since simulated datasets are increasingly being used to supplement infield measurements, as simulation is more cost effective and a larger variety of targets can be simulated, it is important to have a high level of confidence in the predicted results. Confidence can be attained through validation. Various computational electromagnetic (CEM) methods are capable of predicting the RCS of dielectric targets. This study will extend previous studies by validating full-wave and asymptotic RCS simulations of dielectric targets with measured data. The paper will provide measured RCS data of a number of canonical dielectric targets exhibiting different material properties. As stated previously, these measurements are used to validate numerous CEM methods. The dielectric properties are accurately characterized to reduce the uncertainties in the simulations. Finally, an analysis of the sensitivity of oblique and normal incidence scattering predictions to material characteristics is also presented. In this paper, the ability of several CEM methods, including the method of moments (MoM) and physical optics (PO), to calculate the RCS of dielectrics was validated with measured data. A few dielectrics exhibiting different material properties were selected, and several canonical targets, such as flat plates and cylinders, were manufactured. The RCS of these dielectric targets was measured in a compact range at the University of Pretoria, South Africa, over a frequency range of 2 to 18 GHz and a 360° azimuth angle sweep.
This study also investigated the effect of slight variations in the material properties on the calculated RCS results, by varying the material properties within a realistic tolerance range and comparing the calculated RCS results. Interesting measured and simulated results have been obtained. Large discrepancies were observed between the different methods as well as the measured data. It was also observed that the accuracy of the RCS data of the dielectrics can be frequency and angle dependent. The simulated RCS for some of these materials also exhibits high sensitivity to variations in the material properties. Comparison graphs between the measured and simulated RCS datasets will be presented and the validation thereof will be discussed. Finally, the effect that small tolerances in the material properties have on the calculated RCS results will be shown, and the importance of accurate dielectric material properties for validation purposes will be discussed.
Keywords: asymptotic, CEM, dielectric scattering, full-wave, measurements, radar cross section, validation
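As a back-of-the-envelope illustration of the physical-optics regime discussed above (not the validated solver comparisons of the study), the classical PO estimate for a flat rectangular plate at normal incidence is σ = 4π(ab)²/λ². A crude, assumed first-order adjustment for a lossy dielectric is to scale this by the power reflection coefficient |Γ|²:

```python
import math

def plate_rcs_normal(a, b, wavelength, gamma_sq=1.0):
    """Physical-optics RCS (m^2) of an a-by-b flat plate at normal incidence.

    gamma_sq is the power reflection coefficient |Γ|²: 1.0 for a perfect
    conductor; values < 1 are an illustrative adjustment for a lossy
    dielectric, not a substitute for a full-wave solution.
    """
    return 4.0 * math.pi * (a * b) ** 2 / wavelength ** 2 * gamma_sq

# A 0.1 m x 0.1 m plate at 10 GHz (within the study's 2-18 GHz band)
wavelength = 3e8 / 10e9                      # ≈ 0.03 m
sigma = plate_rcs_normal(0.1, 0.1, wavelength)
sigma_dbsm = 10.0 * math.log10(sigma)        # RCS in dBsm
```

The strong λ⁻² dependence of this formula is one reason the accuracy of asymptotic predictions varies with frequency, as the measurements in the paper show.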
Procedia PDF Downloads 240
31 Cessna Citation X Business Aircraft Stability Analysis Using Linear Fractional Representation LFRs Model
Authors: Yamina Boughari, Ruxandra Mihaela Botez, Florian Theel, Georges Ghazi
Abstract:
Clearance of the flight control laws of a civil aircraft is a long and expensive process in the aerospace industry. Thousands of flight combinations in terms of speeds, altitudes, gross weights, centers of gravity and angles of attack have to be investigated and proved to be safe. Nonetheless, with this method a worst-case flight condition can easily be missed, and missing it would lead to a critical situation. It would be impossible to analyze every case, given the infinite number of conditions contained within the flight envelope, which would require more time and therefore more design cost. Therefore, in industry, the technique of meshing the flight envelope is commonly used: for each point of the flight envelope, the associated model is simulated to check whether the specifications are satisfied. In order to perform fast, comprehensive and effective analysis, parameter-varying models were developed by incorporating variations, or uncertainties, into the nominal models; these are known as Linear Fractional Representation (LFR) models, and they are able to describe the aircraft dynamics while taking uncertainties over the flight envelope into account. In this paper, the LFR models are developed using the speeds and altitudes as varying parameters, built from several flight conditions expressed in terms of speeds and altitudes. Such methods have gained great interest among aeronautical companies, which see a promising future in this kind of modeling, particularly for the design and certification of control laws. In this research paper, we focus on the Cessna Citation X open-loop stability analysis. The data are provided by a Research Aircraft Flight Simulator of Level D, which corresponds to the highest level of flight dynamics certification; this simulator was developed by CAE Inc., and its development was based on the requirements of research at the LARCASE laboratory.
These data were used to develop a linear model of the airplane in its longitudinal and lateral motions, and further to create the LFR models for 12 XCG/weight conditions, and thus for the whole flight envelope, using a friendly graphical user interface developed during this study. The LFR models are then analyzed using an interval analysis method based on Lyapunov functions, as well as the 'stability and robustness analysis' toolbox. The results are presented in the form of graphs; they offer good readability and are easily exploitable. The weakness of this method lies in its relatively long calculation time, about four hours for the entire flight envelope.
Keywords: flight control clearance, LFR, stability analysis, robustness analysis
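The conventional envelope-mesh check that the LFR approach improves on can be sketched as follows: grid the (speed, altitude) envelope and test each linearized model for eigenvalues in the left half-plane. The two-state model and its coefficients below are invented for illustration and are not Citation X data:

```python
import numpy as np

def is_stable(A):
    """A linear model is stable if every eigenvalue has a negative real part."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

def sweep_envelope(speeds, altitudes, model):
    """Grid the flight envelope and check open-loop stability at each point."""
    return {(v, h): is_stable(model(v, h)) for v in speeds for h in altitudes}

def toy_model(v, h):
    """Hypothetical 2-state short-period-like model; coefficients are invented."""
    damping = 0.5 + 0.002 * v - 0.00002 * h
    return np.array([[-damping, 1.0], [-4.0, -damping]])

# Speeds and altitudes forming an illustrative mesh, not the real envelope
results = sweep_envelope([100, 200, 300], [0, 5000, 10000], toy_model)
```

The weakness the abstract names is visible here: a worst case between grid points is never tested, which is exactly what the LFR/Lyapunov interval analysis addresses.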
Procedia PDF Downloads 352
30 Assessment of Rainfall Erosivity, Comparison among Methods: Case of Kakheti, Georgia
Authors: Mariam Tsitsagi, Ana Berdzenishvili
Abstract:
Rainfall intensity change is one of the main indicators of climate change. It has a great influence on agriculture, as it is one of the main factors causing soil erosion. Splash and sheet erosion are among the most prevalent types and the most harmful for agriculture: invisible to the eye at the first stage, the process gradually develops into stream-cutting erosion. Our study provides an assessment of the rainfall erosivity potential in the Kakheti region using modern research methods. The region is the major provider of wheat and wine in the country. Kakheti is located in the eastern part of Georgia and is characterized by quite a variety of natural conditions. The climate is dry subtropical. Assessing the exact rainfall erosivity potential requires several years of rainfall data recorded at short intervals. Unfortunately, of the 250 meteorological stations operating during the Soviet period, only 55 are active now, 5 of them in the Kakheti region. Rainfall intensity data for this region exist since 1936, and the rainfall erosivity potential was assessed in some older papers, but since 1990 there are no intensity data, although intensity is a necessary parameter for determining the rainfall erosivity potential. On the other hand, researchers and local communities believe that rainfall intensity has been changing and that the number of hail days has been increasing. It is therefore very important to find a method that allows us to determine the rainfall erosivity potential in the Kakheti region as accurately as possible. The study period was divided into three sections: 1936-1963, 1963-1990 and 1990-2015. Rainfall erosivity potential for the first two periods was determined from the scientific literature and old meteorological station data. It is known that in eastern Georgia, at the boundary between the steppe and forest zones, rainfall erosivity in 1963-1990 was 20-75% higher than in 1936-1963.
For the third period (1990-2015), no rainfall intensity data are available. A variety of studies discuss alternative ways of calculating the rainfall erosivity potential when data are lacking, e.g., based on daily rainfall data, or on average annual rainfall and the elevation of the area, etc. It should be noted that these methods give totally different results under different climatic conditions, and sometimes huge errors. Three of the most common methods were selected for our research. Each of them was tested on the first two sections of the study period. Based on the outcomes, the method most suitable for the regional climatic conditions was selected, and we then determined the rainfall erosivity potential for the third section of the study period using it. Outcome data such as attribute tables and graphs were linked to the database of Kakheti, and appropriate thematic maps were created. The results allowed us to analyze the changes in rainfall erosivity potential from 1936 to the present and to make projections for the future. The method we implemented can also be used for other regions of Georgia.
Keywords: erosivity potential, Georgia, GIS, Kakheti, rainfall
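One widely used fallback of the kind mentioned above estimates daily erosivity from daily rainfall depth alone with a power law, summed over erosive days. The sketch below uses placeholder coefficients and threshold; the abstract does not state which formulation or calibration was chosen for Kakheti:

```python
def daily_erosivity(p_mm, alpha=0.3, beta=1.81):
    """Power-law estimate of daily rainfall erosivity.

    alpha and beta are placeholders: such coefficients must be calibrated
    locally, and the values used for Kakheti are not given in the abstract.
    """
    return alpha * p_mm ** beta if p_mm > 0 else 0.0

def annual_R(daily_rain_mm, threshold_mm=12.7):
    """Sum daily erosivity over erosive days (rainfall above a threshold)."""
    return sum(daily_erosivity(p) for p in daily_rain_mm if p >= threshold_mm)

# One illustrative run of daily totals (mm); only days >= 12.7 mm contribute
year = [0.0, 5.0, 20.0, 0.0, 35.0, 13.0, 8.0]
R = annual_R(year)
```

Because the exponent exceeds 1, a few intense days dominate the annual total, which is why daily-data methods can diverge so strongly from annual-rainfall methods under different climates.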
Procedia PDF Downloads 224
29 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide
Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva
Abstract:
Originated from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textile, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are of interest to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often hampered by the scarcity of agile and accurate methodologies to characterize the material – that is, to determine its composition, shape, size, and the number of layers and crystals. To contribute to this search, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM graphene oxide images. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions by area size and perimeter of the crystals. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, over the test set.
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since it holds under significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-performance measurement for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap in the SEM image acquisition and guarantee a lower error in the measurements without greater effort in data handling. All in all, the method developed is a considerable time saver with high measurement value: it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning
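After segmentation, the per-crystal measurement step reduces to labeling connected regions of the binary mask and measuring each one. A dependency-free sketch of that step (the study's actual pipeline is U-net based; the tiny mask here is invented) might look like:

```python
from collections import deque

def label_components(mask):
    """4-connected component labeling of a binary mask (list of 0/1 rows)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    components = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                seen[y][x] = True
                queue, pixels = deque([(y, x)]), []
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                components.append(pixels)
    return components

def measure(pixels):
    """Area (pixel count) and a crude perimeter (count of exposed pixel edges)."""
    pset = set(pixels)
    perimeter = sum(1 for (y, x) in pixels
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if (y + dy, x + dx) not in pset)
    return len(pixels), perimeter

# A tiny invented mask with two "crystals"
mask = [
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 1],
]
crystals = [measure(c) for c in label_components(mask)]   # [(4, 8), (2, 6)]
```

The area/perimeter pairs from this step are what feed the frequency-distribution graphs described in the abstract; as noted there, touching crystals merge into one component, which is the overlap limitation.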
Procedia PDF Downloads 160
28 Community Perception towards the Major Drivers for Deforestation and Land Degradation of Choke Afro-alpine and Sub-afro alpine Ecosystem, Northwest Ethiopia
Authors: Zelalem Teshager
Abstract:
The Choke Mountains harbor several endangered and endemic wildlife species and provide important ecosystem services. Despite their environmental importance, the Choke Mountains are in a precarious condition. This raised the need to evaluate the community's perception of deforestation and its major drivers and to suggest possible solutions in the Choke Mountains of northwestern Ethiopia. For this purpose, household surveys, key informant interviews, and focus group discussions were used, with a total sample of 102 informants. A purposive sampling technique was applied to select the participants for the in-depth interviews and focus group discussions. Both qualitative and quantitative data analyses were used: descriptive statistics such as means, percentages and frequencies, presented in tables, figures, and graphs, were computed to organize, analyze, and interpret the data. The study found that smallholder agricultural land expansion, fuelwood collection, population growth, encroachment, free grazing, high demand for construction wood, unplanned resettlement, unemployment, border conflict, lack of a strong forest protection system, and drought were the serious causes of forest depletion reported by local communities. Loss of land productivity, soil erosion, soil fertility decline, increasing wind velocity, rising temperature, and frequency of drought were the most widely perceived impacts of deforestation. Most of the farmers have a holistic understanding of forest cover change. Strengthening forest protection, improving soil and water conservation, enrichment planting, awareness creation, payment for ecosystem services, and zero-grazing campaigns were mentioned as possible solutions to the current state of deforestation. Intervention measures such as animal fattening, beekeeping, and fruit production can help reduce the causes of deforestation and improve communities' livelihoods.
In addition, concerted conservation efforts will ensure that the forest ecosystems contribute to increased ecosystem services. The major drivers of deforestation should be addressed through government intervention to change dependency on forest resources, the income sources of the people, and the institutional set-up of the forestry sector. Overall, a further reduction in anthropogenic pressure is urgent and crucial for the recovery of the afro-alpine vegetation and the interrelated endangered wildlife in the Choke Mountains.
Keywords: choke afro-alpine, deforestation, drivers, intervention measures, perceptions
Procedia PDF Downloads 54
27 Insights into Child Malnutrition Dynamics with the Lens of Women’s Empowerment in India
Authors: Bharti Singh, Shri K. Singh
Abstract:
Child malnutrition is a multifaceted issue that transcends geographical boundaries. Malnutrition not only stunts physical growth but also leads to a spectrum of morbidities and child mortality; it is one of the leading causes of death (~50%) among children under age five. Despite economic progress and advancements in healthcare, child malnutrition remains a formidable challenge for India. The objective is to investigate the impact of women's empowerment on child nutrition outcomes in India from 2006 to 2021. A composite index of women's empowerment was constructed using Confirmatory Factor Analysis (CFA), a rigorous technique that validates the measurement model by assessing how well observed variables represent latent constructs. This approach ensures the reliability and validity of the empowerment index. Secondly, kernel density plots were utilised to visualise the distribution of key nutritional indicators, such as stunting, wasting, and overweight. These plots offer insights into the shape and spread of data distributions, aiding in understanding the prevalence and severity of malnutrition. Thirdly, linear polynomial graphs were employed to analyse how nutritional parameters evolved with the child's age. This technique enables the visualisation of trends and patterns over time, allowing for a deeper understanding of nutritional dynamics during different stages of childhood. Lastly, multilevel analysis was conducted to identify vulnerable levels, including State-level, PSU-level, and household-level factors impacting undernutrition. This approach accounts for hierarchical data structures and allows for the examination of factors at multiple levels, providing a comprehensive understanding of the determinants of child malnutrition. Overall, the utilisation of these statistical methodologies enhances the transparency and replicability of the study by providing clear and robust analytical frameworks for data analysis and interpretation.
Our study reveals that NFHS-4 and NFHS-5 exhibit an equal density of severely stunted cases. NFHS-5 indicates a limited decline in wasting among children aged five, while the density of severely wasted children remains consistent across NFHS-3, 4, and 5. In 2019-21, women with higher empowerment had a lower risk of their children being undernourished (Regression coefficient= -0.10***; Confidence Interval [-0.18, -0.04]). Gender dynamics also play a significant role, with male children exhibiting a higher susceptibility to undernourishment. Multilevel analysis suggests household-level vulnerability (intra-class correlation=0.21), highlighting the need to address child undernutrition at the household level.
Keywords: child nutrition, India, NFHS, women’s empowerment
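The household-level clustering reported above is summarized by the intra-class correlation (ICC). A minimal sketch of the classical one-way ANOVA estimator, applied to invented equal-sized household clusters rather than NFHS data (a multilevel model, as used in the study, would estimate the same quantity from variance components), is:

```python
def icc_oneway(groups):
    """One-way ANOVA estimator of the intraclass correlation.

    groups: list of equal-sized clusters (e.g., children per household).
    """
    g, k = len(groups), len(groups[0])
    grand = sum(sum(grp) for grp in groups) / (g * k)
    means = [sum(grp) / k for grp in groups]
    msb = k * sum((m - grand) ** 2 for m in means) / (g - 1)   # between clusters
    msw = sum((x - m) ** 2
              for grp, m in zip(groups, means) for x in grp) / (g * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)

# Invented height-for-age z-scores for two children in each of four households
households = [[-2.1, -1.9], [-0.5, -0.2], [0.8, 1.1], [-1.0, -1.3]]
icc = icc_oneway(households)   # high, because siblings resemble each other
```

An ICC of 0.21, as reported, means about a fifth of the variation in child nutrition sits between households rather than within them.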
Procedia PDF Downloads 33
26 Automated Computer-Vision Analysis Pipeline of Calcium Imaging Neuronal Network Activity Data
Authors: David Oluigbo, Erik Hemberg, Nathan Shwatal, Wenqi Ding, Yin Yuan, Susanna Mierau
Abstract:
Introduction: Calcium imaging is an established technique in neuroscience research for detecting activity in neural networks. Bursts of action potentials in neurons lead to transient increases in intracellular calcium visualized with fluorescent indicators. Manual identification of cell bodies and their contours by experts typically takes 10-20 minutes per calcium imaging recording. Our aim, therefore, was to design an automated pipeline to facilitate and optimize calcium imaging data analysis. Our pipeline aims to accelerate cell body and contour identification and the production of graphical representations reflecting changes in neuronal calcium-based fluorescence. Methods: We created a Python-based pipeline that uses OpenCV (a computer vision Python package) to accurately (1) detect neuron contours, (2) extract the mean fluorescence within each contour, and (3) identify transient changes in the fluorescence due to neuronal activity. The pipeline consisted of 3 Python scripts that could all be easily accessed through a Python Jupyter notebook. In total, we tested this pipeline on ten separate calcium imaging datasets from murine dissociated cortical cultures. We next compared our automated pipeline outputs with the outputs of manually labeled data for neuronal cell locations and the corresponding fluorescence time series generated by an expert neuroscientist. Results: Our results show that our automated pipeline efficiently pinpoints neuronal cell body locations and neuronal contours and provides a graphical representation of neural network metrics accurately reflecting changes in neuronal calcium-based fluorescence. The pipeline detected the shape, area, and location of most neuronal cell body contours by using binary thresholding and grayscale image conversion to allow computer vision to better distinguish between cells and non-cells.
Its results were also comparable to manually analyzed results but with significantly reduced result acquisition times of 2-5 minutes per recording versus 10-20 minutes per recording. Based on these findings, our next step is to precisely measure the specificity and sensitivity of the automated pipeline’s cell body and contour detection to extract more robust neural network metrics and dynamics. Conclusion: Our Python-based pipeline performed automated computer vision-based analysis of calcium image recordings from neuronal cell bodies in neuronal cell cultures. Our new goal is to improve cell body and contour detection to produce more robust, accurate neural network metrics and dynamic graphs.
Keywords: calcium imaging, computer vision, neural activity, neural networks
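Steps (2) and (3) of the pipeline — extracting mean fluorescence within a detected contour and spotting transient increases — can be sketched without OpenCV as follows; the frames and ROI are toy values, not recordings from the cultures:

```python
def roi_trace(frames, roi):
    """Mean fluorescence inside an ROI (set of (row, col) pixels) per frame."""
    return [sum(frame[y][x] for (y, x) in roi) / len(roi) for frame in frames]

def dff(trace, f0=None):
    """ΔF/F relative to a baseline (default: the minimum of the trace)."""
    f0 = min(trace) if f0 is None else f0
    return [(f - f0) / f0 for f in trace]

# Toy 2x2 frames with a calcium transient in the top row on frame 2
frames = [
    [[10.0, 10.0], [10.0, 10.0]],
    [[20.0, 20.0], [10.0, 10.0]],
    [[10.0, 10.0], [10.0, 10.0]],
]
roi = {(0, 0), (0, 1)}            # hypothetical detected cell-body contour
trace = roi_trace(frames, roi)    # [10.0, 20.0, 10.0]
```

In the actual pipeline, the ROI would come from OpenCV contour detection on a thresholded grayscale frame, and the ΔF/F peaks mark neuronal activity events.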
Procedia PDF Downloads 82
25 Applicability of Polyisobutylene-Based Polyurethane Structures in Biomedical Disciplines: Some Calcification and Protein Adsorption Studies
Authors: Nihan Nugay, Nur Cicek Kekec, Kalman Toth, Turgut Nugay, Joseph P. Kennedy
Abstract:
In recent years, polyurethane (PU) structures have been paving the way for elastomer usage in biology, human medicine, and biomedical application areas. Polyurethanes combining high oxidative and hydrolytic stability with excellent mechanical properties are of particular interest for implantable medical devices such as cardiac-assist devices. Currently, unique polyurethanes consisting of polyisobutylene soft segments and conventional hard segments, named PIB-based PUs, are developed with precise NCO/OH stoichiometry (∽1.05) to obtain PIB-based PUs with enhanced properties (i.e., tensile stress increased from ∽11 to ∽26 MPa and elongation from ∽350 to ∽500%). Static and dynamic mechanical properties were optimized by examining stress-strain graphs, self-organization and crystallinity (XRD) traces, rheological (DMA, creep) profiles and thermal (TGA, DSC) responses. An annealing procedure was applied to the PIB-based PUs. Annealed PIB-based PU shows ∽26 MPa tensile strength, ∽500% elongation, and ∽77 Microshore hardness with excellent hydrolytic and oxidative stability. Their surface characters were examined with AFM and contact angle measurements. Annealed PIB-based PU exhibits higher segregation of the individual segments and higher surface hydrophobicity; annealing thus significantly enhances hydrolytic and oxidative stability by shielding the carbamate bonds with inert PIB chains. Given these improved surface and microstructure characters, greater efforts were focused on analyzing protein adsorption and calcification profiles. In biomedical applications, especially cardiological implantations, protein adsorption on polymeric heart valves is undesirable, since protein adsorption from blood serum is followed by platelet adhesion and subsequent thrombus formation. The protein adsorption character of PIB-based PU is examined by applying the Bradford assay in fibrinogen and bovine serum albumin solutions.
Like protein adsorption, calcium deposition on heart valves is very harmful, because vascular calcification has been proposed to involve activation of an osteogenic mechanism in the vascular wall, loss of inhibitory factors, enhanced bone turnover, and irregularities in mineral metabolism. Calcium deposition on the films was characterized by incubating samples in simulated body fluid and examining SEM images and XPS profiles. PIB-based PUs are significantly more resistant to hydrolytic-oxidative degradation, protein adsorption, and calcium deposition than ElastEon™ E2A, a commercially available PDMS-based PU widely used in biomedical applications.
Keywords: biomedical application, calcification, polyisobutylene, polyurethane, protein adsorption
Procedia PDF Downloads 257
24 Health Literacy: Collaboration between Clinician and Patient
Authors: Cathy Basterfield
Abstract:
Issue: To engage patients in their own health care, health professionals need to be aware of each individual's specific communication skills and abilities. One of the most discussed, and most often assumed, of these is health literacy. Background: A review of publicly available health content suggests an assumption that all adult readers can read at a high level of literacy, often at a post-school education level. Health information writers and clinicians need to recognise one critical reason why there may be little or no change in a person's behaviour, or why people do not show up to appointments: perhaps unintentionally, they are miscommunicating with the majority of the adult population. Health information spans many literacy domains. It usually includes technical medical terms or jargon. Many fact sheets and other materials require scientific literacy, with or without specific numerical literacy; they may include graphs, percentages, timing, distance, or weights. Each additional word or concept from these domains reduces readers' ability to meaningfully read, understand, and act on the information. Long or unfamiliar words in a heading alone will reduce a reader's motivation to attempt the text, and, critically, people with low literacy are overwhelmed when pages are covered with dense text. People attending a health environment may also be unwell or anxious about a diagnosis, which makes it harder to read, understand, and act on the information. Access to health information must consider an even wider range of adults, including those with poor school attainment, migrants, and refugees, as well as homeless people, people with mental illness, and people who are ageing.
People with low literacy may also include people with lifelong disabilities, people with acquired disabilities, people who read English as a second (or third) language, people who are Deaf, or people who are vision impaired. Outcome: This paper will discuss Easy English, which is developed for adults. It uses the audience's everyday words, short sentences, short words, and no jargon, and it uses concrete language with concrete, specific images to support the text. It has been developed in Australia since the mid-2000s. This paper will showcase various health-domain projects that use Easy English to improve the understanding and functional use of written information for the large number of adults in our communities who do not have the health literacy to manage a range of day-to-day reading tasks. Examples include consent forms, fact sheets, choice options, instructions, and other functional documents for which Easy English versions have been developed. This paper will ask individuals to reflect on their own work practice and consider what written information must be available in Easy English. It does not matter how cutting-edge a new treatment is; when adults cannot read or understand what it is about, along with its positive and negative outcomes, they are less likely to be engaged in their own health journey.
Keywords: health literacy, inclusion, Easy English, communication
Procedia PDF Downloads 125
23 Recognition of Spelling Problems during the Text in Progress: A Case Study on the Comments Made by Portuguese Students Newly Literate
Authors: E. Calil, L. A. Pereira
Abstract:
The acquisition of orthography is a complex process involving both lexical and grammatical questions. This learning occurs simultaneously with the mastery of multiple textual aspects (e.g., graphic signs, punctuation, etc.). However, most research on orthographic acquisition approaches it from an autonomous point of view, separated from the process of textual production: the object of analysis is the production of words selected by the researcher, or of requested sentences, in an experimental and controlled setting. In addition, the Spelling Problems (SPs) are identified by the researcher on the sheet of paper. Adopting the perspective of Textual Genetics, from an enunciative approach, this study discusses the SPs recognized by dyads of newly literate students while they write a text collaboratively. Six textual production proposals, requested by a 2nd-year teacher at a Portuguese primary school between January and March 2015, were recorded. Our case study discusses the SPs recognized by the dyad B and L (7 years old). We adopted the Ramos System of audiovisual recording as a methodological tool. This system allows real-time capture of the text in progress and of the face-to-face dialogue between the two students and their teacher, as well as of the body movements and facial expressions of the participants during the textual production proposals in the classroom. Under these ecological conditions of multimodal recording of collaborative writing, we could identify the emergence of SPs in two dimensions: i. in the product (finished text): identification of SPs without recursive graphic marks (without erasures) and identification of SPs with erasures, the latter indicating recognition of the SP by the student; ii. in the process (text in progress): identification of comments made by the students about recognized SPs. Given this, we analyzed the comments on SPs identified during the text in progress.
These comments characterize a type of reformulation referred to as Commented Oral Erasure (COE). The COE takes two enunciative forms: the Simple Comment (SC), such as ''X' is written with 'Y''; and the Unfolded Comment (UC), such as ''X' is written with 'Y' because...'. The spelling COE may also occur before or during the writing of the SP (Early Spelling Recognition, ESR) or after the SP has been written (Later Spelling Recognition, LSR). There were 631 words written in the 6 stories produced by the B-L dyad, 145 of them containing some type of SP. During the text in progress, the students orally recognized 174 SPs, 46 of which were identified in advance (ESRs) and 128 of which were identified afterwards (LSRs). If we consider that the 88 erased SPs in the product indicate some form of SP recognition, we can observe that twice as many SPs were recognized orally. The ESR was characterized by SCs, as when students asked their colleague or teacher how to spell a given word. The LSR presented predominantly UCs, verbalizing meta-orthographic arguments, mostly made by L. These results indicate that writing in dyads is an important didactic strategy for promoting metalinguistic reflection, favoring the learning of spelling.
Keywords: collaborative writing, erasure, learning, metalinguistic awareness, spelling, text production
Procedia PDF Downloads 163
22 The Effects of the New Silk Road Initiatives and the Eurasian Union to the East-Central-Europe’s East Opening Policies
Authors: Tamas Dani
Abstract:
The author’s research explores the geo-economic role and importance of some small and medium-sized states, and reviews their adaptation strategies in foreign trade and foreign affairs, against an international background, in the course of the change towards a multipolar world. On this basis, the paper analyses the recent years and the future of the ‘Eastern opening’ foreign economic policies of East-Central Europe and, in parallel, the ‘Western opening’ foreign economic policies of Asia, such as the Chinese One Belt One Road new Silk Road plans (so far largely an infrastructure development plan serving international trade and investment aims). It is an open question today whether these initiatives will reshape global trade. How do the new Silk Road initiatives and the Eurasian Union reflect the effects of globalization? It is worth analysing how the Central and Eastern European countries opened towards Asia, why China is the focus of the opening policies of many countries, and why China could be seen as the ‘winner’ of the world economic crisis after 2008. The research is based on the following methodologies: national and international literature, policy documents, and related planning documents, complemented by the processing of international databases and statistics and by interviews with leaders of East-Central European companies and public administrations, diplomats, and international traders. The results are also illustrated by maps and graphs. As its major finding, the research will determine whether state decision-makers have enough room for manoeuvre to strengthen foreign economic relations. The hypothesis is that the countries of East-Central Europe have a real chance to diversify their foreign trade relations and to look beyond their traditional partners. This essay focuses on the opportunities of the East-Central European countries to diversify their foreign trade relations towards China and Russia in terms of ‘Eastern openings’.
It examines the effects of the new Silk Road initiatives and the Eurasian Union on Hungary’s economy, with a comparative outlook on the East-Central European countries, and explores common regional cooperation opportunities in this area. The essay concentrates on the changing trade relations between East-Central Europe and China as well as Russia, and also tries to analyse the effects of the new Silk Road initiatives and the Eurasian Union. The conclusion shows how cooperation is necessary for the East-Central European countries if they want non-asymmetric trade with Russia, China, or some Chinese regions (Pearl River Delta, Hainan, …). The form of this cooperation for the East-Central European nations can be the Visegrad 4 Cooperation (V4), the Central and Eastern European Countries (CEEC16), or the Three Seas Cooperation (or BABS, the Baltic, Adriatic, Black Seas Initiative).
Keywords: China, East-Central Europe, foreign trade relations, geoeconomics, geopolitics, Russia
Procedia PDF Downloads 182
21 Effect of Silica Nanoparticles on Three-Point Flexural Properties of Isogrid E-Glass Fiber/Epoxy Composite Structures
Authors: Hamed Khosravi, Reza Eslami-Farsani
Abstract:
Increased interest in lightweight and efficient structural components has created the need for materials with improved mechanical properties. Composite materials are therefore being widely used in many applications, owing to their durability, high strength and modulus, and low weight. Among the various composite structures, grid-stiffened structures are extensively considered in aerospace and aircraft applications because of their higher specific strength and stiffness, higher impact resistance, superior load-bearing capacity, ease of repair, and excellent energy absorption capability. Although there are a good number of publications on the design and fabrication of grid structures, to our knowledge little systematic work has been reported on modifying their constituent materials to improve their properties. Therefore, the aim of this research is to study the reinforcing effect of silica nanoparticles on the flexural properties of epoxy/E-glass isogrid panels under a three-point bending test. Samples containing 0, 1, 3, and 5 wt.% silica nanoparticles, with 44 and 48 vol.% glass fibers in the ribs and skin respectively, were fabricated using a manual filament winding method. Ultrasonic and mechanical routes were employed to disperse the nanoparticles within the epoxy resin. To fabricate the ribs, unidirectional fiber rovings were impregnated with the matrix mixture (epoxy + nanoparticles) and then laid layer-by-layer into the grooves of a silicone mold. Four plies of woven fabric, impregnated with the same matrix mixture, were then layered on top of the ribs to produce the skin. To ensure complete curing and maximum strength, the samples were tested after 7 days of holding at room temperature.
According to the load-displacement graphs, the following trend was observed for all samples loaded from the skin side: after an initial linear region and a load peak, the curve dropped abruptly and then showed a typical energy-absorption region. It is worth mentioning that in these structures, considerable energy absorption was observed after the primary failure associated with the load peak. The results showed that the flexural properties of the nanocomposite samples were always higher than those of the nanoparticle-free sample. The maximum enhancement in flexural peak load and energy absorption was found for the incorporation of 3 wt.% of the nanoparticles. Furthermore, the flexural stiffness increased continually with increasing silica loading. In conclusion, this study suggests that the addition of nanoparticles is a promising method to improve the flexural properties of grid-stiffened fibrous composite structures.
Keywords: grid-stiffened composite structures, nanocomposite, three-point flexural test, energy absorption
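For readers unfamiliar with how flexural properties are extracted from a three-point bending load-displacement curve, the standard beam formulas can be sketched as below. All specimen dimensions and loads are hypothetical illustration values chosen for the example, not data from this study.

```python
# Sketch: three-point flexural stress and modulus from standard beam formulas.
# The span, width, thickness, peak load, and slope below are hypothetical,
# not values reported in the paper.

def flexural_stress(F, L, b, d):
    """Outer-fiber stress at midspan: sigma = 3*F*L / (2*b*d^2)."""
    return 3.0 * F * L / (2.0 * b * d ** 2)

def flexural_modulus(slope, L, b, d):
    """Flexural modulus from the initial load-deflection slope m = dF/d(delta):
    E = L^3 * m / (4*b*d^3)."""
    return L ** 3 * slope / (4.0 * b * d ** 3)

# Hypothetical specimen: span 80 mm, width 15 mm, thickness 4 mm
span, width, thick = 80.0, 15.0, 4.0   # mm
peak_load = 900.0                      # N, hypothetical load peak
slope = 450.0                          # N/mm, initial linear slope

sigma_max = flexural_stress(peak_load, span, width, thick)  # MPa (N/mm^2)
E_flex = flexural_modulus(slope, span, width, thick)        # MPa
print(sigma_max, E_flex)
```

The flexural peak load and the initial slope reported in the abstract map directly onto `sigma_max` and `E_flex`; the energy absorbed is the area under the load-displacement curve beyond the peak.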
Procedia PDF Downloads 341
20 Optimization of Ultrasound-Assisted Extraction of Oil from Spent Coffee Grounds Using a Central Composite Rotatable Design
Authors: Malek Miladi, Miguel Vegara, Maria Perez-Infantes, Khaled Mohamed Ramadan, Antonio Ruiz-Canales, Damaris Nunez-Gomez
Abstract:
Coffee is the second most consumed commodity worldwide, yet it also generates colossal waste. Proper management of coffee waste means converting it into products with higher added value, to achieve sustainability of the economic and ecological footprint and to protect the environment. Accordingly, studies on the recovery of coffee waste have become more relevant in recent decades. Spent coffee grounds (SCGs), resulting from brewing coffee, represent the major waste produced by the coffee industry. The facts that SCGs have little economic value, are abundant in nature and industry, do not compete with agriculture, and above all have a high oil content (between 7-15% of total dry matter weight, depending on the coffee variety, Arabica or Robusta) encourage their use as a sustainable feedstock for bio-oil production. Bio-oil extraction is a crucial step towards biodiesel production via the transesterification process. However, the conventional methods used for oil extraction are not recommended due to their high consumption of energy and time and their generation of toxic volatile organic solvents. Thus, finding a sustainable, economical, and efficient extraction technique is crucial to scaling up the process and ensuring more environmentally friendly production. From this perspective, the aim of this work was a statistical study to establish an efficient strategy for oil extraction with n-hexane using indirect sonication. The coffee waste used in this work was a mixture of Arabica and Robusta. The effects of temperature, sonication time, and solvent-to-solid ratio on the oil yield were statistically investigated as independent variables through a Central Composite Rotatable Design (CCRD) 2³. The results were analyzed using STATISTICA 7 (StatSoft) software. The CCRD showed that all the variables tested had a significant effect (P < 0.05) on the process output.
Validation of the model by analysis of variance (ANOVA) showed a good fit for the results obtained at a 95% confidence interval, and the predicted vs. experimental values graph confirmed the satisfactory correlation of the model results. The optimum experimental conditions were identified from the response surface graphs (2-D and 3-D) and the critical statistical values. Based on the CCRD results, 29 °C, 56.6 min, and a solvent-to-solid ratio of 16 were the best experimental conditions defined statistically for coffee waste oil extraction using n-hexane as solvent. Under these conditions, the oil yield was >9% in all cases. The results confirmed the efficiency of the ultrasound bath for oil extraction as a more economical, green, and efficient alternative to the Soxhlet method.
Keywords: coffee waste, optimization, oil yield, statistical planning
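The core of a CCRD analysis like the one above is fitting a second-order response-surface model to the coded design points and locating its stationary point. A minimal sketch follows, using synthetic yields generated from a known quadratic surface rather than the paper's data; the design points, response values, and optimum location are all hypothetical.

```python
import numpy as np

# Sketch: fit y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
# by least squares over a two-factor central composite design, then solve
# grad(y) = 0 for the stationary point. Data are synthetic, not the paper's.

# coded design: factorial points, center replicates, axial points (alpha ~ 1.414)
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0], [0, 0],
              [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414]])

def true_surface(x1, x2):
    # hypothetical yield surface with maximum at (0.5, -0.25)
    return 9.0 - (x1 - 0.5) ** 2 - (x2 + 0.25) ** 2

y = true_surface(X[:, 0], X[:, 1])

# design matrix for the full quadratic model
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# stationary point: solve the 2x2 linear system from the gradient
b1, b2, b11, b22, b12 = coef[1], coef[2], coef[3], coef[4], coef[5]
H = np.array([[2 * b11, b12], [b12, 2 * b22]])
opt = np.linalg.solve(H, np.array([-b1, -b2]))
print(np.round(opt, 3))
```

In the study itself, three factors were used and the fitted surface was examined through 2-D and 3-D response graphs; the stationary-point calculation above is the algebraic counterpart of reading the optimum off those graphs.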
Procedia PDF Downloads 119
19 Spatial Climate Changes in the Province of Macerata, Central Italy, Analyzed by GIS Software
Authors: Matteo Gentilucci, Marco Materazzi, Gilberto Pambianchi
Abstract:
Climate change is an increasingly central issue worldwide because it affects many human activities. In this context, regional studies are of great importance because they sometimes differ from the general trend. This research focuses on a small area of central Italy overlooking the Adriatic Sea, the province of Macerata. The aim is to analyze spatial climate changes in precipitation and temperature over the last 3 climatological standard normals (1961-1990; 1971-2000; 1981-2010) using GIS software. The data collected from 30 weather stations for temperature and 61 rain gauges for precipitation were subjected to quality controls: validation and homogenization. These data were the basis for the spatialization of the variables (temperature and precipitation) through geostatistical techniques. The best geostatistical interpolation technique was selected on the basis of cross-validation results. Among the methods analysed, co-kriging with altitude as the independent variable produced the best cross-validation results for all time periods, with 'root mean square error standardized' close to 1, 'mean standardized error' close to 0, and similar values of 'average standard error' and 'root mean square error'. The maps resulting from the analysis were compared by raster subtraction, producing 3 maps of annual variation and three further maps for each month of the year (1961/1990-1971/2000; 1971/2000-1981/2010; 1961/1990-1981/2010). The results show an increase in average annual temperature of about 0.1 °C between 1961-1990 and 1971-2000 and of 0.6 °C between 1961-1990 and 1981-2010. Annual precipitation, instead, shows an opposite trend, with an average difference of about 35 mm from 1961-1990 to 1971-2000 and of about 60 mm from 1961-1990 to 1981-2010. Furthermore, the differences across the area have been highlighted with area graphs and summarized in several tables as a descriptive analysis.
For temperature between 1961-1990 and 1971-2000, the most areally represented frequency is 0.08 °C (77.04 km² of a total of about 2800 km²), with a kurtosis of 3.95 and a skewness of 2.19. The temperature differences from 1961-1990 to 1981-2010, instead, show a most areally represented frequency of 0.83 °C (36.9 km²), with a kurtosis of -0.45 and a skewness of 0.92. It can therefore be said that the distribution is more peaked for 1961/1990-1971/2000, and smoother but with stronger growth for 1961/1990-1981/2010. In contrast, precipitation shows a very similar distribution shape, although with different intensities, for both variation periods (the first 1961/1990-1971/2000 and the second 1961/1990-1981/2010), with similar values of kurtosis (1st = 1.93; 2nd = 1.34), skewness (1st = 1.81; 2nd = 1.62), and area of the most represented frequency (1st = 60.72 km²; 2nd = 52.80 km²). In conclusion, this methodology of analysis allows the assessment of small-scale climate change for each month of the year and could be further investigated in relation to regional atmospheric dynamics.
Keywords: climate change, GIS, interpolation, co-kriging
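The cross-validation diagnostics named in this abstract (mean standardized error close to 0, root mean square error standardized close to 1) are simple to compute once leave-one-out predictions and kriging standard errors are available. The sketch below uses hypothetical observed values, predictions, and standard errors, not the station data from the study.

```python
import numpy as np

# Sketch: leave-one-out cross-validation diagnostics used to compare spatial
# interpolators. The observed values, predictions, and kriging standard
# errors below are hypothetical illustration numbers.

obs  = np.array([812.0, 905.0, 760.0, 1010.0, 880.0])  # e.g. annual precip, mm
pred = np.array([820.0, 898.0, 772.0, 1002.0, 885.0])  # LOO-CV predictions
se   = np.array([ 10.0,  12.0,   9.0,   11.0,  10.0])  # kriging std. errors

err = pred - obs
mean_error   = err.mean()                          # ideally close to 0
rmse         = np.sqrt((err ** 2).mean())          # root mean square error
mean_std_err = (err / se).mean()                   # 'mean standardized error'
rmse_std     = np.sqrt(((err / se) ** 2).mean())   # 'RMSE standardized'
print(rmse, rmse_std)
```

An RMSE standardized near 1 indicates that the kriging standard errors honestly reflect the actual prediction errors (values well below 1 mean the model overestimates its uncertainty, values above 1 mean it underestimates it), which is why the abstract uses it alongside the raw RMSE to rank the interpolation methods.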
Procedia PDF Downloads 127
18 Evaluation of the Risk Factors on the Incidence of Adjacent Segment Degeneration After Anterior Neck Discectomy and Fusion
Authors: Sayyed Mostafa Ahmadi, Neda Raeesi
Abstract:
Background and Objectives: Cervical spondylosis is a common problem affecting the adult spine and is the most common cause of radiculopathy and myelopathy in older patients. Anterior discectomy and fusion is a well-known technique for degenerative cervical disc disease. However, one of its late undesirable complications is adjacent segment degeneration (ASD), which affects about 91% of patients within ten years. Many factors may contribute to this complication, but some are still debatable; discovering and eliminating these risk factors can improve quality of life. Methods: This is a retrospective cohort study. All patients who underwent anterior discectomy and fusion surgery in the neurosurgery ward of Imam Khomeini Hospital between 2013 and 2016 were evaluated, and their demographic information was collected. All patients were visited and examined for radiculopathy, myelopathy, and muscular force. At the same visit, all patients were asked to undergo frontal and lateral neck X-rays as well as a neck MRI (3 Tesla). The preoperative radiographs were used to measure the diameter of the cervical canal (Pavlov ratio) and to evaluate sagittal alignment (Cobb angle). The preoperative MRIs were reviewed for anterior and posterior longitudinal ligament calcification. Results: In this study, 57 patients were included. The mean age of the patients was 50.63 years, and 49.1% were male. Only 3.5% of patients had anterior and posterior longitudinal ligament calcification. Symptomatic ASD was observed in 26.3% of patients, while the X-rays and MRIs showed evidence of radiological ASD in 80.7%. Among patients who underwent one-level surgery, 20% had symptomatic ASD, but among patients who underwent two-level surgery, the rate was 50%. In other words, the greater the number of levels operated on and fused, the higher the probability of symptomatic ASD (P-value < 0.05).
Among patients who underwent one-level surgery, 78% had radiological ASD; among patients who underwent two-level surgery, this figure was 92% (P-value > 0.05). Demographic variables such as age, sex, height, weight, and BMI did not have a significant effect on the incidence of radiological ASD (P-value > 0.05), but sex and height were two influential factors for symptomatic ASD (P-value < 0.05). Other related variables, such as family history, smoking, and exercise, also had no significant effect (P-value > 0.05). Radiographic variables such as the Pavlov ratio and sagittal alignment likewise did not affect the incidence of radiological or symptomatic ASD (P-value > 0.05). The number of operated levels and the presence of preoperative anterior and posterior longitudinal ligament calcification also had no statistically significant effect on radiological ASD (P-value > 0.05). Regarding the ability of the neck to move in different directions, none of these variables differed significantly between the radiological and symptomatic ASD groups and the unaffected group (P-value > 0.05). Conclusion: According to the findings of this study, this condition is multifactorial. The incidence of radiological ASD is much higher than that of symptomatic ASD (80.7% vs. 26.3%), and sex, height, and the number of fused levels are the only factors influencing the incidence of symptomatic ASD, while no variable influenced radiological ASD.
Keywords: risk factors, anterior neck discectomy and fusion, adjacent segment degeneration, complication
Procedia PDF Downloads 60