Search results for: time domain reflectometry (TDR)
14305 Design, Development and Testing of Polymer-Glass Microfluidic Chips for Electrophoretic Analysis of Biological Samples
Authors: Yana Posmitnaya, Galina Rudnitskaya, Tatyana Lukashenko, Anton Bukatin, Anatoly Evstrapov
Abstract:
An important area of biological and medical research is the study of genetic mutations and polymorphisms that can alter gene function and cause inherited and other diseases. The following methods are used to analyse DNA fragments: capillary electrophoresis and electrophoresis on a microfluidic chip (MFC), mass spectrometry with electrophoresis on an MFC, and hybridization assays on microarrays. Electrophoresis on an MFC allows small sample volumes to be analysed with high speed and throughput. Soft lithography in polydimethylsiloxane (PDMS) was chosen for the rapid fabrication of MFCs. A master form of silicon and the photoresist SU-8 2025 (MicroChem Corp.) was created for the formation of micro-sized structures in PDMS. A universal topology combining a T-injector and a simple cross was selected for the electrophoretic separation of the sample. Glass K8 and PDMS Sylgard® 184 (Dow Corning Corp.) were used for the fabrication of the MFCs. Electroosmotic flow (EOF) plays an important role in the electrophoretic separation of the sample; therefore, estimating the magnitude of the EOF and the ways of regulating it are of interest for the development of new methods for the electrophoretic separation of biomolecules. The following methods of surface modification were chosen to change the EOF: high-frequency (13.56 MHz) plasma treatment in oxygen and argon at low pressure (1 mbar); a 1% aqueous solution of polyvinyl alcohol; and a 3% aqueous solution of Kolliphor® P 188 (Sigma-Aldrich Corp.). The electroosmotic mobility was evaluated by the method of Huang X. et al., using a borate buffer. The influence of the physical and chemical treatment methods on the wetting properties of the PDMS surface was monitored by the sessile drop method. The most effective way of modifying the surface of the MFCs, in terms of obtaining both the smallest contact angle and the smallest EOF, was treatment with the aqueous solution of Kolliphor® P 188. This method of modification was selected for the treatment of the channels of MFCs used for the separation of a mixture of fluorescently labelled oligonucleotides with chain lengths of 10, 20, 30, 40 and 50 nucleotides. Electrophoresis was performed on the MFAS-01 device (IAI RAS, Russia) at a separation voltage of 1500 V. A 6% solution of polydimethylacrylamide with the addition of 7M carbamide was used as the separation medium. The separation time of the components of the mixture was determined from electropherograms: ~275 s for the untreated MFC and ~220 s for MFCs treated with the solution of Kolliphor® P 188. The study of physical-chemical methods of surface modification of MFCs identified the most effective way of reducing the EOF: modification with an aqueous solution of Kolliphor® P 188. In this case, the separation time of the mixture of oligonucleotides decreased by about 20%. Further optimization of the channel modification method will decrease the separation time and increase the throughput of the analysis.
Keywords: electrophoresis, microfluidic chip, modification, nucleic acid, polydimethylsiloxane, soft lithography
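For readers unfamiliar with the current-monitoring method of Huang et al. cited above, the sketch below illustrates the underlying mobility estimate. The relation mu_eo = L^2/(V t), the channel length, and the displacement time are assumptions for illustration only, not measurements from this study.

```python
# Minimal sketch of the current-monitoring estimate of electroosmotic
# mobility (after Huang et al.): when a buffer of slightly different
# concentration displaces the original buffer over the full channel
# length L under an applied voltage V, the current stabilizes after a
# time t, giving mu_eo = L**2 / (V * t).
L = 0.03    # channel length, m (hypothetical)
V = 1500.0  # applied separation voltage, V (value from the abstract)
t = 45.0    # time for the current to stabilize, s (hypothetical)

mu_eo = L**2 / (V * t)  # electroosmotic mobility, m^2 V^-1 s^-1
print(f"mu_eo = {mu_eo:.2e} m^2/(V s)")
```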
Procedia PDF Downloads 413
14304 Pre-Industrial Local Architecture According to Natural Properties
Authors: Selin Küçük
Abstract:
Pre-industrial architecture is the integration of natural and subsequent properties through intelligence and experience. Since different settlements industrialized (or did not) at different times, the term 'pre-industrial' does not refer to a definite period. Natural properties, which are the existing conditions and materials in the natural local environment, are climate, geomorphology and local materials. Subsequent properties, which are the anthropological factors, are the culture of societies, the requirements of people and the construction techniques people use. After industrialization, however, technology took technique's place, cultural effects were manipulated, requirements changed and local natural properties almost disappeared from architecture. Technology is universal and global and spreads easily; conversely, technique is time- and experience-dependent and has a considerable cultural background. This research is about construction techniques according to the natural properties of a region and the classification of these techniques. Understanding local architecture is only possible by searching its background, which is hard to reach. There are always changes, positive and negative, in architectural techniques through time. The archaeological layers of a region sometimes give more accurate information about the transformation of architecture. However, the natural properties of any region are the most helpful elements for perceiving construction techniques. Many international sources from different cultures treat local architecture by mentioning natural properties separately; unfortunately, there is no literature that deals with this subject systematically. This research aims to develop a clear perspective on local architecture by categorizing archetypes according to natural properties. The ultimate goal of this research is to generate a clear classification of local architecture, independent of subsequent (anthropological) properties, applicable across the world like a handbook. Since local architecture is the most sustainable architecture with respect to its economic, ecological and sociological properties, there should be extensive information about its construction techniques to learn from. Constructing the same buildings all over the world is one of the main criticisms of the modern architectural system; while this criticism goes on, identical buildings without identity multiply. In the post-industrial era, technology has widely taken technique's place, cultural effects are manipulated, requirements have changed and natural local properties have almost disappeared from architecture. This study does not ask architects to use local techniques, but it traces the progress of pre-industrial architectural evolution, which is healthier, cheaper and natural. If immigration from rural areas to developing/developed cities were prevented, culture and construction techniques could be preserved. Since big cities have psychological, sensational and sociological impacts on people, rural settlers could be persuaded not to emigrate by providing new buildings designed according to natural properties and by maintaining their settlements. Improving rural conditions would remove the economic and sociological gulf between cities and the countryside. The desired result is that, if there is no deformation (the adaptation of other traditional buildings brought by immigration) or assimilation in a climatic region, there should be very similar solutions in the same climatic regions of the world, even where there is no relationship (trade, communication, etc.) among them.
Keywords: climate zones, geomorphology, local architecture, local materials
Procedia PDF Downloads 429
14303 Video Object Segmentation for Automatic Image Annotation of Ethernet Connectors with Environment Mapping and 3D Projection
Authors: Marrone Silverio Melo Dantas, Pedro Henrique Dreyer, Gabriel Fonseca Reis de Souza, Daniel Bezerra, Ricardo Souza, Silvia Lins, Judith Kelner, Djamel Fawzi Hadj Sadok
Abstract:
The creation of a dataset is time-consuming and often discourages researchers from pursuing their goals. To overcome this problem, we present and discuss two solutions adopted for the automation of this process. Both optimize valuable user time and resources and support video object segmentation with object tracking and 3D projection. In our scenario, we acquire images from a moving robotic arm and, for each approach, generate distinct annotated datasets. We evaluated the precision of the annotations by comparing them with a manually annotated dataset, as well as their efficiency in the context of detection and classification problems. For detection support, we used YOLO and obtained, for the projection dataset, F1-Score, accuracy, and mAP values of 0.846, 0.924, and 0.875, respectively. Concerning the tracking dataset, we achieved an F1-Score of 0.861 and an accuracy of 0.932, whereas mAP reached 0.894. In order to evaluate the quality of the annotated images used for classification problems, we employed deep learning architectures, adopting the metrics accuracy and F1-Score for VGG, DenseNet, MobileNet, Inception, and ResNet. The VGG architecture outperformed the others for both the projection and tracking datasets. For the projection dataset it reached an accuracy and F1-Score of 0.997 and 0.993, respectively; similarly, for the tracking dataset, it achieved an accuracy of 0.991 and an F1-Score of 0.981.
Keywords: RJ45, automatic annotation, object tracking, 3D projection
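As a reference for the detection figures quoted above, a minimal sketch of how precision, recall, F1-Score, and accuracy are computed from raw counts follows; the counts used here are hypothetical, not the study's data.

```python
# Minimal sketch of the detection metrics reported above, computed from
# hypothetical true/false positive and negative counts (not the study's
# data). mAP additionally averages the average precision over classes
# and IoU thresholds, which is omitted here for brevity.
def detection_metrics(tp: int, fp: int, fn: int, tn: int):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

p, r, f1, acc = detection_metrics(tp=846, fp=120, fn=160, tn=900)
print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f} accuracy={acc:.3f}")
```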
Procedia PDF Downloads 167
14302 Preparation of Activated Carbon From Waste Feedstock: Activation Variables Optimization and Influence
Authors: Oluwagbemi Victor Aladeokin
Abstract:
In the last decade, global peanut cultivation has seen increased demand, attributed to the health benefits of peanuts, rising to ~41.4 MMT in 2019/2020. Peanut and other nutshells are considered waste in various parts of the world and are usually used only for their fuel value. However, this agricultural by-product can be converted to a higher-value product such as activated carbon. For many years, due to its highly porous structure, activated carbon has been widely and effectively used as an adsorbent in the purification and separation of gases and liquids. Activated carbons for commercial purposes are primarily made from a range of precursors such as wood, coconut shell, coal, bones, etc. However, due to difficulty in regeneration and high cost, various agricultural residues such as rice husk, corn stalks, apricot stones, almond shells, coffee beans, etc., have been explored to produce activated carbons. In the present study, the potential of peanut shells as a precursor in the production of activated carbon, and the adsorption capacity of the product, are investigated. Usually, precursors used to produce activated carbon have a carbon content above 45%; a typical raw peanut shell has a carbon content of 42 wt.%. To increase the yield, this study employed the chemical activation method using zinc chloride, which is well known for its effectiveness in increasing the porosity of carbonaceous materials. In chemical activation, the activation temperature and impregnation ratio are the parameters commonly reported to be the most significant; however, this study also examined the influence of activation time on the development of activated carbon from peanut shells. Activated carbons are applied for different purposes; as their applications become more specific, understanding the influence of the activation variables, in order to better control the quality of the final product, becomes paramount. A traditional experimental approach varies each parameter one at a time; a more efficient way to reduce the number of experimental runs is to apply design of experiments. One of the objectives of this study is to optimize the activation variables. Thus, this work employed the response surface methodology of design of experiments to study the interactions between the activation parameters and consequently optimize them (temperature, impregnation ratio, and activation time). The optimum activation conditions found were a temperature of 485 °C, an activation time of 15 min, and an impregnation ratio of 1.7. These conditions resulted in an activated carbon with a relatively high surface area of ca. 1700 m²/g, a 47% yield, relatively high density, low ash, and high fixed carbon content. The impregnation ratio and temperature were found to most influence the final characteristics of the activated carbon produced from peanut shells. The results of this study, using the response surface methodology technique, reveal the potential of peanut shells and the most significant parameters influencing their chemical activation to produce activated carbon, which can find use in both liquid- and gas-phase adsorption applications.
Keywords: chemical activation, fixed carbon, impregnation ratio, optimum, surface area
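A minimal sketch of the response-surface step described above follows: fit a second-order model over the three coded factors and locate the optimum inside the design region. The design points and responses below are hypothetical, and the paper's actual experimental design may differ.

```python
# Minimal response-surface sketch for the three activation factors
# (temperature T, impregnation ratio R, activation time t), coded to
# [-1, 1]. The design points and surface-area responses are hypothetical.
import numpy as np
from scipy.optimize import minimize

X = np.array([[-1, -1, -1], [1, -1, -1], [-1, 1, -1], [1, 1, -1],
              [-1, -1, 1], [1, -1, 1], [-1, 1, 1], [1, 1, 1],
              [0, 0, 0], [0, 0, 0]])
y = np.array([900, 1200, 1100, 1500, 950, 1300, 1150, 1550, 1650, 1640])

def quad_terms(x):
    T, R, t = x
    return [1, T, R, t, T * R, T * t, R * t, T * T, R * R, t * t]

A = np.array([quad_terms(x) for x in X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # fit the quadratic model

# Maximize the fitted surface inside the coded design region.
res = minimize(lambda x: -np.dot(quad_terms(x), beta),
               x0=[0.0, 0.0, 0.0], bounds=[(-1, 1)] * 3)
print("optimum (coded T, R, t):", res.x.round(3))
```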
Procedia PDF Downloads 145
14301 Biomass and Biogas Yield of Maize as Affected by Nitrogen Rates with Varying Harvesting under Semi-Arid Condition of Pakistan
Authors: Athar Mahmood, Asad Ali
Abstract:
Management considerations, including harvesting time and nitrogen application, considerably influence biomass yield, quality and biogas production. Therefore, a field study was conducted to determine the effect of various harvesting times and nitrogen rates on the biomass yield, quality and biogas yield of a maize crop. The experiment consisted of various harvesting times, i.e., harvesting at 45, 55 and 65 days after sowing (DAS), and nitrogen rates of 0, 100, 150 and 200 kg ha-1. The data indicated that the maximum plant height, leaf area, dry matter (DM) yield, protein, acid detergent fiber, neutral detergent fiber, crude fiber contents and biogas yield were recorded 65 days after sowing, while the lowest values were recorded 45 days after sowing. In contrast, significantly higher chlorophyll contents were observed at 45 DAS. In the case of nitrogen rates, the maximum plant height, leaf area, DM yield, protein contents, ash contents, acid detergent fiber, neutral detergent fiber, crude fiber contents and chlorophyll contents were determined with nitrogen at the rate of 200 kg ha-1, while the minimum was observed when no N was applied. Therefore, harvesting at 65 DAS and N application at the rate of 200 kg ha-1 can be considered suitable for obtaining higher biomass and biogas production.
Keywords: chemical composition, fiber contents, biogas, nitrogen, harvesting time
Procedia PDF Downloads 160
14300 ‘A Ghost of One’s Own’: Spectral Intrusions and Trauma in the Poetry of Joanna Baillie and Anne Bannerman
Authors: Elli Karampela
Abstract:
In Specters of Marx (1993), Jacques Derrida refers to the ghost as an Other presence that occupies the space of the self and emanates from there, haunting in its shadowy pastness and threatening/striving to break free. In times of change, ghosts both reflect the dissolution of set principles and voice traumas of the past that create a sense of fear and instability. This paper observes the way female ghosts create connections with the living in the poetry of Joanna Baillie and Anne Bannerman, both integral, albeit in different ways under-researched, writers of the English Romantic period working in the aftermath of the French Revolution. Especially at the beginning of the nineteenth century, when ghost narratives were devoured by readers and enjoyed as stories that re-awakened sensation in times of revolution, there was at the same time a fear of intrusion by terror's unruly forces, which threatened to turn readers restless. The ghost was particularly dangerous because it was associated with memory and the intrusion of past trauma into the here and now. As will be seen, both Baillie and Bannerman explore the idea of the female ghost's 'return' (a Freudian term that will be approached), which breaks both time and space boundaries to raise the suppressed female voice, threaten stability, and correct wrongs. As a result, the varied manifestations of female ghosts render Baillie and Bannerman active in the contemporary discourse about human rights and the reclamation of agency.
Keywords: poetry, romanticism, spectrality, trauma, women
Procedia PDF Downloads 213
14299 Clinical and Sleep Features in an Australian Population Diagnosed with Mild Cognitive Impairment
Authors: Sadie Khorramnia, Asha Bonney, Kate Galloway, Andrew Kyoong
Abstract:
Sleep plays a pivotal role in the registration and consolidation of memory. Multiple observational studies have demonstrated that self-reported sleep duration and sleep quality are associated with cognitive performance. The Montreal Cognitive Assessment (MoCA) questionnaire is a screening tool for mild cognitive impairment (MCI) with a 90% diagnostic sensitivity. In the current study, we used the MoCA to identify MCI in patients who underwent sleep studies in our sleep department. We then looked at the clinical risk factors and sleep-related parameters in subjects found to have mild cognitive impairment but without a diagnosis of sleep-disordered breathing. Clinical risk factors, including physician-diagnosed hypertension, diabetes, and depression, and sleep-related parameters measured during the sleep study, including the percentage of time in each sleep stage, total sleep time, awakenings, sleep efficiency, apnoea hypopnoea index, and oxygen saturation, were evaluated. A total of 90 subjects who underwent a sleep study between March 2019 and October 2019 were included. Currently, there is no pharmacotherapy available for MCI; therefore, identifying the risk factors and attempting to reverse or mitigate their effect is pivotal in slowing down the rate of cognitive deterioration. Further characterization of sleep parameters in this group of patients could open up opportunities for potentially beneficial interventions.
Keywords: apnoea hypopnoea index, mild cognitive impairment, sleep architecture, sleep study
Procedia PDF Downloads 144
14298 BeamGA Median: A Hybrid Heuristic Search Approach
Authors: Ghada Badr, Manar Hosny, Nuha Bintayyash, Eman Albilali, Souad Larabi Marie-Sainte
Abstract:
The median problem is widely applied to derive the most reasonable rearrangement phylogenetic tree for many species. More specifically, the problem is concerned with finding a permutation that minimizes the sum of distances between itself and a set of three signed permutations. Genomes with an equal number of genes but a different gene order can be represented as permutations. In this paper, an algorithm, namely BeamGA median, is proposed that combines a heuristic search approach (local beam search) as an initialization step to generate a number of solutions, after which a Genetic Algorithm (GA) is applied to refine the solutions, aiming to achieve a better median with the smallest possible reversal distance from the three original permutations. In this approach, any genome rearrangement distance can be applied; in this paper, we use the reversal distance. To the best of our knowledge, the proposed approach has not been applied before to the median problem. Our approach considers a true biological evolution scenario by applying the concept of common intervals during the GA optimization process. This allows us to imitate true biological behavior and improve the time convergence of the genetic approach. We were able to handle permutations with a large number of genes within an acceptable time and with the same or better accuracy compared to existing algorithms.
Keywords: median problem, phylogenetic tree, permutation, genetic algorithm, beam search, genome rearrangement distance
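A structural sketch of the BeamGA idea follows: a beam search seeds candidate medians, and reversal mutations then refine them. For brevity, breakpoint distance stands in for the paper's reversal distance, crossover and the common-interval constraint are omitted, and all parameters are illustrative.

```python
# Structural sketch of BeamGA: beam search seeds candidates, a GA-style
# loop refines them. Breakpoint distance is a simple stand-in for the
# reversal distance used in the paper.
import random

def breakpoints(p, q):
    """Count adjacencies of q (framed with 0 and n+1) absent from p."""
    n = len(p)
    fp = [0] + list(p) + [n + 1]
    adj = {frozenset(pair) for pair in zip(fp, fp[1:])}
    fq = [0] + list(q) + [n + 1]
    return sum(frozenset(pair) not in adj for pair in zip(fq, fq[1:]))

def score(candidate, genomes):
    return sum(breakpoints(candidate, g) for g in genomes)

def reversal(p):
    i, j = sorted(random.sample(range(len(p)), 2))
    return p[:i] + p[i:j + 1][::-1] + p[j + 1:]

def beam_ga_median(genomes, beam_width=10, generations=200):
    # Beam step: start from the inputs, keep the best mutated candidates.
    beam = [tuple(g) for g in genomes]
    for _ in range(5):
        beam += [tuple(reversal(list(c))) for c in beam]
        beam = sorted(set(beam), key=lambda c: score(c, genomes))[:beam_width]
    # GA step: reversal mutation acts as the variation operator.
    pop = list(beam)
    for _ in range(generations):
        child = tuple(reversal(list(random.choice(pop))))
        pop = sorted(set(pop + [child]),
                     key=lambda c: score(c, genomes))[:beam_width]
    return pop[0]

median = beam_ga_median([(1, 2, 3, 4, 5), (2, 1, 3, 5, 4), (1, 3, 2, 4, 5)])
print("approximate median:", median)
```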
Procedia PDF Downloads 265
14297 Mitigation of Cascading Power Outage Caused by Power Swing Disturbance Using Real-Time DLR Applications
Authors: Dejenie Birile Gemeda, Wilhelm Stork
Abstract:
The power system is one of the most important systems in modern society, and, in the view of several power system operators, the existing power system is approaching its critical operating limits. With the increase in load demand, high-capacity and long transmission networks are widely used to meet the requirement. With the integration of renewable energies such as wind and solar, uncertainty and intermittency bring bigger challenges to the operation of power systems. These dynamic uncertainties lead to power disturbances. Disturbances in a heavily stressed power system cause distance relays to mal-operate or raise false alarms during post-fault power oscillations. This unintended operation of the relays may propagate and trigger cascaded trippings leading to a total power system blackout, due to the relays' inability to make an appropriate tripping decision based on the ensuing power swing. According to the N-1 criterion, electric power systems are generally designed to withstand a single failure without violating any operating limit. As a result, some overloaded components, such as overhead transmission lines, can still work for several hours under overload conditions. However, when a large power swing happens in the power system, the zone 3 settings of the distance relay may trip the transmission line with a short time delay, acting so quickly that the system operator has no time to respond and stop the cascading. Misfiring of relays in the absence of a fault, due to a power swing, may cause a significant loss in economic performance, and thus a loss in revenue for power companies. This research paper proposes a method to distinguish stable from unstable power swings using dynamic line rating (DLR) in response to power swings or disturbances. As opposed to static line rating (SLR), dynamic line rating supports effective mitigation actions against propagating cascading outages in a power grid. Effective utilization of the existing transmission line capacity, using machine-learning DLR predictions, will improve the operating point of distance relay protection, thus reducing unintended power outages due to power swings.
Keywords: blackout, cascading outages, dynamic line rating, power swing, overhead transmission lines
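A minimal sketch of the machine-learning DLR prediction mentioned above: a regressor maps weather conditions to a line rating. The features, the toy rating formula, and all values are assumptions for illustration; a real model would be trained on field measurements and standard ratings (e.g., IEEE 738).

```python
# Minimal sketch of a machine-learning DLR predictor: a regressor maps
# weather conditions to a line ampacity. The synthetic training data is
# illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
ambient = rng.uniform(-10, 40, n)   # ambient temperature, deg C
wind = rng.uniform(0, 12, n)        # wind speed, m/s
solar = rng.uniform(0, 1000, n)     # solar irradiance, W/m^2

# Toy rating: wind cooling raises it, heat and sun lower it (plus noise).
rating = 800 + 25 * wind - 4 * ambient - 0.05 * solar + rng.normal(0, 10, n)

X = np.column_stack([ambient, wind, solar])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, rating)

# Predict the dynamic rating for hypothetical current conditions.
print(model.predict([[25.0, 3.0, 600.0]]))  # amps, toy scale
```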
Procedia PDF Downloads 143
14296 Between Ralph Waldo Emerson and the Dying Infidel
Authors: Michael Keller
Abstract:
Beyond the heterodoxy expressed in his now-famous 1838 address to the Harvard Divinity School, Emerson’s timing was particularly dangerous. Ideologically, New England faced a severe crisis of identity, as traditional categories of class and religion were growing increasingly unstable. Jones Very, influenced by Emerson, crossed the perceived border between acceptable religious zeal and insane enthusiasm. Abner Kneeland, on the other hand, crossed the uncomfortable border between post-Puritan Unitarian rationalism and blasphemous Enlightenment skepticism. More importantly, Kneeland oversaw a more overtly subversive brand of resistance (in the form of freethought periodicals) that not only threatened religious orthodoxy but also threatened to destabilize the class structure of New England. Very and Kneeland provide instructive case studies of how religious ideologies could run afoul of the social contract and the law itself. By looking closely at the social and religious forces that led to Kneeland’s prosecution for blasphemy, Jones Very’s forced committal to McLean Asylum, and Emerson’s escape from these fates, we gain a greater understanding of the shifting cultural landscape of 1830s New England. This paper will examine Emerson’s resistance to the traditional forces of class and ideology in Massachusetts by situating his early work in the context of the ideological battles of his time. More specifically, I will explore how Emerson was able to resist the conservative cultural forces of his time without experiencing the extremity of their wrath.
Keywords: American literature, cultural studies, Emerson, religious studies
Procedia PDF Downloads 141
14295 Urban Greenery in the Greatest Polish Cities: Analysis of Spatial Concentration
Authors: Elżbieta Antczak
Abstract:
Cities offer important opportunities for economic development and for expanding access to basic services, including health care and education, for large numbers of people. Moreover, green areas, as an integral part of sustainable urban development, present a major opportunity for improving urban environments, quality of life and livelihoods. This paper examines, using spatial concentration and spatial taxonomic measures, the regional diversification of greenery in the cities of Poland. The analysis includes location quotients, the Lorenz curve, the Locational Gini Index, the synthetic index of greenery, and spatial statistics tools, used: (1) to verify the occurrence of strong concentration or dispersion of the phenomenon in time and space depending on the variable category, and (2) to study whether the level of greenery depends on spatial autocorrelation. The data cover the greatest Polish cities, the categories of urban greenery (parks, lawns, street greenery, green areas on housing estates, cemeteries, and forests) and the time span 2004-2015. According to the obtained estimations, most cities in Poland are already taking measures to become greener. However, there are still many barriers to well-balanced urban greenery development in the country (e.g., uncontrolled urban sprawl, poor management, and the lack of spatial urban planning systems).
Keywords: greenery, urban areas, regional spatial diversification and concentration, spatial taxonomic measure
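A minimal sketch of the Locational Gini Index computation via the Lorenz curve, comparing cumulative shares of green area against cumulative shares of population across cities; the city figures below are hypothetical, not the Polish data.

```python
# Minimal locational-Gini sketch for urban greenery: build the Lorenz
# curve of green-area shares against population shares across cities,
# then take 1 minus twice the area under it. City data are hypothetical.
import numpy as np

green = np.array([120.0, 80.0, 60.0, 30.0, 10.0])  # green area per city, ha
pop = np.array([1.7, 0.8, 0.7, 0.5, 0.3])          # population, millions

order = np.argsort(green / pop)                    # sort by greenery per capita
g = np.concatenate(([0.0], np.cumsum(green[order]) / green.sum()))
p = np.concatenate(([0.0], np.cumsum(pop[order]) / pop.sum()))

# Gini = 1 - 2 * (area under the Lorenz curve), via the trapezoidal rule.
gini = 1 - np.sum((p[1:] - p[:-1]) * (g[1:] + g[:-1]))
print(f"locational Gini = {gini:.3f}")
```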
Procedia PDF Downloads 286
14294 A Model for Predicting Organic Compounds Concentration Change in Water Associated with Horizontal Hydraulic Fracturing
Authors: Ma Lanting, S. Eguilior, A. Hurtado, Juan F. Llamas Borrajo
Abstract:
Horizontal hydraulic fracturing is a technology for increasing natural gas flow and improving productivity in low-permeability formations. During drilling operations, tons of flowback and produced water, which contain many organic compounds, return to the surface, with a potential risk to the surrounding environment and human health. A mathematical model is urgently needed to represent the transport behavior of these organic compounds in water and their concentration change over time throughout the hydraulic fracturing life cycle. A comprehensive model combining an organic matter transport dynamic model with a two-compartment first-order rate constant (TFRC) model has been established to quantify the organic compound concentrations. The algorithm is composed of two transport parts separated by a time factor. For the fast part, curve fitting was applied using flowback water data from fracturing at a Marcellus shale gas site, and the coefficients of determination (R²) for all analyzed compounds demonstrate the high experimental feasibility of this numerical model. Furthermore, concentration ratio curves over a decade of drilling were estimated by the slow part of the model. The results show that the larger the Koc value of a chemical, the later its maximum concentration in water is reached, and that all maximum concentrations would reach up to 90% of the initial concentration from the shale formation within a sufficiently long period.
Keywords: model, shale gas, concentration, organic compounds
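A minimal sketch of the curve-fitting step for the fast part: a generic two-compartment first-order decay, C(t) = A e^(-k1 t) + B e^(-k2 t), is fitted and R² computed. This functional form is a plausible reading of the TFRC model, not necessarily the paper's exact formulation, and the time series is synthetic, not the Marcellus data.

```python
# Minimal sketch of fitting a two-compartment first-order model to
# flowback-water concentration data. The form and data are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def tfrc(t, a, k1, b, k2):
    return a * np.exp(-k1 * t) + b * np.exp(-k2 * t)

t = np.linspace(0, 30, 16)  # days
c_obs = tfrc(t, 80, 0.9, 20, 0.05) + np.random.default_rng(1).normal(0, 1, t.size)

popt, _ = curve_fit(tfrc, t, c_obs, p0=[50, 1.0, 10, 0.1])
resid = c_obs - tfrc(t, *popt)
r2 = 1 - np.sum(resid**2) / np.sum((c_obs - c_obs.mean())**2)
print("fast/slow rate constants:", popt[1], popt[3], " R^2 =", round(r2, 4))
```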
Procedia PDF Downloads 226
14293 Resilience and Adaptation to Water Scarcity in San Martín de las Palmas, Santiago Tilantongo, Nochixtlán, Oaxaca
Authors: E. Montesinos-Pedro, L. G. Toscano-Flores, N. Domínguez-Ramírez
Abstract:
Water scarcity is a worldwide issue which, coupled with climate change, becomes a pressing problem that affects not only large cities but also rural areas. The Municipality of Santiago Tilantongo belongs to the district of Nochixtlán, Oaxaca, and is made up of 14 communities, one of them San Martín de las Palmas. This community was founded in 1900; at that time the inhabitants were supplied with water from the rivers of the region, which were abundant (they used containers filled in the river for that purpose). However, over the years the level of the rivers began to drop, and in 1994 specific wells were located to store water and at the same time make it drinkable, with the support of the state of Oaxaca and the Procampo program. By the year 2000 the shortage of water at the supply sources was evident, and the community requested support from the Oaxaca state government to solve the problem. The government's response consisted of the implementation of ferro-cement tanks (2005) and water wells (2010), both for rainwater collection; however, it was not enough. Nowadays the community has a population of 60 inhabitants who have resisted and adapted to water scarcity, not only through the programs implemented by the government but also through important structural strategies of their own. The objective of this research is to identify the adaptation strategies used by the community, analyze them, and propose improvements for water conservation and the mitigation of this scarcity.
Keywords: adaptation, climate change, mitigation, resilience
Procedia PDF Downloads 97
14292 Space Debris Mitigation: Solutions from the Dark Skies of the Remote Australian Outback Using a Proposed Network of Mobile Astronomical Observatories
Authors: Muhammad Akbar Hussain, Muhammad Mehdi Hussain, Waqar Haider
Abstract:
There are tens of thousands of undetected and uncatalogued pieces of space debris in Low Earth Orbit (LEO). Not only are they difficult to detect and track; their sheer number puts active satellites, and humans in orbit around Earth, in danger. With the entry of more governments and private companies into harnessing the Earth’s orbit for communication, research and military purposes, there is an ever-increasing need not only for the detection and cataloguing of these pieces of space debris; it is time to take measures to remove them and clean up the space around Earth. Current optical and radar-based Space Situational Awareness initiatives are useful mostly for detecting and cataloguing larger pieces of debris, mainly for avoidance measures. Pieces smaller than 10 cm lie in a relatively dark zone, yet these are deadly and capable of destroying satellites and human missions. A network of mobile observatories, connected to each other in real time and working in unison as a single instrument, may be able to detect small pieces of debris and achieve effective triangulation, helping to create a comprehensive database of their trajectories and parameters to the highest level of precision. This data may enable ground-based laser systems to help deorbit individual pieces of debris. Such a network of observatories can join current efforts in the detection and removal of space debris in Earth’s orbit.
Keywords: space debris, low earth orbit, mobile observatories, triangulation, seamless operability
Procedia PDF Downloads 167
14291 Destination Port Detection for Vessels: An Analytic Tool for Optimizing Port Authorities' Resources
Authors: Lubna Eljabu, Mohammad Etemad, Stan Matwin
Abstract:
Port authorities face many challenges in congested ports in allocating their resources to provide a safe and secure loading/unloading procedure for cargo vessels. Selecting a destination port is the decision of a vessel master, based on many factors such as weather, wave conditions and changes of priorities. Having access to a tool which leverages AIS messages to monitor vessels’ movements and accurately predict their next destination port promotes an effective resource allocation process for port authorities. In this research, we propose a method, namely Reference Route of Trajectory (RRoT), to assist port authorities in predicting inflow and outflow traffic in their local environment by monitoring Automatic Identification System (AIS) messages. Our RRoT method creates a reference route based on historical AIS messages and identifies the destination of a vessel from its recent movement using trajectory similarity measures. We evaluated five different similarity measures: Discrete Fréchet Distance (DFD), Dynamic Time Warping (DTW), Partial Curve Mapping (PCM), area between two curves (Area) and curve length (CL). Our experiments show that our method identifies the destination port with an accuracy of 98.97% and an F-measure of 99.08% using the Dynamic Time Warping (DTW) similarity measure.
Keywords: spatial temporal data mining, trajectory mining, trajectory similarity, resource optimization
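A minimal dynamic-programming sketch of the DTW measure that performed best above, applied to two toy point sequences standing in for a vessel track and a reference route.

```python
# Minimal Dynamic Time Warping sketch: align two point sequences and
# accumulate the cheapest matching cost. The (lon, lat) points are toy
# values standing in for a vessel track and a reference route.
import math

def dtw(a, b):
    """Dynamic Time Warping distance between two point sequences."""
    n, m = len(a), len(b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i][j] = math.dist(a[i - 1], b[j - 1]) + min(
                cost[i - 1][j],      # insertion
                cost[i][j - 1],      # deletion
                cost[i - 1][j - 1])  # match
    return cost[n][m]

track = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.2), (3.0, 0.2)]
reference = [(0.0, 0.0), (1.5, 0.1), (3.0, 0.2)]
print(f"DTW distance to reference route: {dtw(track, reference):.3f}")
```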
Procedia PDF Downloads 121
14290 Road Traffic Accidents Analysis in Mexico City through Crowdsourcing Data and Data Mining Techniques
Authors: Gabriela V. Angeles Perez, Jose Castillejos Lopez, Araceli L. Reyes Cabello, Emilio Bravo Grajales, Adriana Perez Espinosa, Jose L. Quiroz Fabian
Abstract:
Road traffic accidents are among the principal causes of traffic congestion, causing human losses, damage to health and the environment, economic losses and material damage. Traditional studies of road traffic accidents in urban zones represent a very high investment of time and money; additionally, their results are not current. However, nowadays, in many countries, crowdsourced GPS-based traffic and navigation apps have emerged as an important low-cost source of information for studies of road traffic accidents and of the urban congestion they cause. In this article we identified the zones, roads and specific times in Mexico City (CDMX) in which the largest number of road traffic accidents were concentrated during 2016. We built a database compiling information obtained from the social network known as Waze. The methodology employed was Knowledge Discovery in Databases (KDD) for the discovery of patterns in the accident reports, together with data mining techniques applied with the help of Weka. The selected algorithms were Expectation Maximization (EM), to obtain the ideal number of clusters for the data, and k-means as a grouping method. Finally, the results were visualized with the Geographic Information System QGIS.
Keywords: data mining, k-means, road traffic accidents, Waze, Weka
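A minimal sketch of the clustering pipeline described above: an EM mixture model selects the number of clusters (here via BIC, standing in for Weka's cross-validated EM), and k-means then groups the points. The coordinates are synthetic, not the Waze reports.

```python
# Minimal sketch: EM (Gaussian mixture) picks the cluster count via BIC,
# then k-means groups synthetic accident coordinates around Mexico City.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(c, 0.02, (100, 2))
                 for c in [(-99.13, 19.43), (-99.18, 19.36), (-99.07, 19.40)]])

bics = {k: GaussianMixture(n_components=k, random_state=0).fit(pts).bic(pts)
        for k in range(1, 7)}
best_k = min(bics, key=bics.get)    # EM-selected number of clusters

labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(pts)
print("chosen k =", best_k, "cluster sizes:", np.bincount(labels))
```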
Procedia PDF Downloads 418
14289 Evaluating the Nexus between Energy Demand and Economic Growth Using the VECM Approach: Case Study of Nigeria, China, and the United States
Authors: Rita U. Onolemhemhen, Saheed L. Bello, Akin P. Iwayemi
Abstract:
The effectiveness of energy demand policy depends on identifying the key drivers of energy demand in both the short run and the long run. This paper examines the influence of regional differences on the link between energy demand and other explanatory variables for Nigeria, China and the USA using the Vector Error Correction Model (VECM) approach. The study employed annual time series data on energy consumption (ED), real gross domestic product per capita (RGDP), real energy prices (P) and urbanization (N) for a thirty-six-year sample period, sourced from the World Bank’s World Development Indicators (WDI, 2016) and the US Energy Information Administration (EIA). Results from the study show that all the independent variables (income, urbanization, and price) substantially affect long-run energy consumption in Nigeria, the USA and China, whereas income has no significant effect on short-run energy demand in the USA and Nigeria. In addition, the long-run effect of urbanization is relatively stronger in China. Urbanization is a key factor in energy demand; it is therefore recommended that more attention be given to the development of rural communities to reduce the inflow of migrants into urban communities, which drives the increase in energy demand. Energy excesses should be penalized, while energy management should be incentivized.
Keywords: economic growth, energy demand, income, real GDP, urbanization, VECM
Procedia PDF Downloads 312
14288 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes
Authors: Sky Chou, Joseph C. Chen
Abstract:
This paper focuses on using Six Sigma methodologies to reach the desired shrinkage of a manufactured high-density polyethylene (HDPE) part produced by an injection molding machine. It presents a case study where the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk of an injection molding process. To improve this process and keep the product within specifications, the Six Sigma methodology, namely the define, measure, analyze, improve, and control (DMAIC) approach, was implemented in this study. The Six Sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified consist of the cooling time, melt temperature, holding time, and metering stroke. The noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings were optimal. With the new settings, the process capability index improved dramatically. The purpose of this study is to show that the Six Sigma and Taguchi methodologies can be efficiently used to determine the important factors that improve the process capability index of the injection molding process.
Keywords: injection molding, shrinkage, six sigma, Taguchi parameter design
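For reference, a minimal sketch of the capability indices Cp and Cpk named above; the specification limits and shrinkage measurements are hypothetical, not the study's data.

```python
# Minimal sketch of the process capability indices Cp and Cpk for a
# shrinkage characteristic; spec limits and samples are hypothetical.
import numpy as np

shrinkage = np.array([1.48, 1.52, 1.50, 1.49, 1.51, 1.50, 1.53, 1.47])  # %
lsl, usl = 1.40, 1.60  # hypothetical lower/upper specification limits, %

mu, sigma = shrinkage.mean(), shrinkage.std(ddof=1)
cp = (usl - lsl) / (6 * sigma)                   # potential capability
cpk = min(usl - mu, mu - lsl) / (3 * sigma)      # actual (centered) capability
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```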
Procedia PDF Downloads 178
14287 Ultrasonic Techniques to Characterize and Monitor Water-in-Oil Emulsion
Authors: E. A. Alshaafi, A. Prakash
Abstract:
Oil-water emulsions are commonly encountered in various industrial operations and at different stages of crude oil production and processing. Emulsions are often difficult to track and treat and can cause a number of costly problems which need to be avoided. The characteristics of the emulsion phase can vary with crude composition and the types of impurities present in the oil. The objective of this study is the development of ultrasonic techniques to track and characterize the emulsion phase generated during the production and cleaning of crude oil. The position of the emulsion layer is monitored with the help of ultrasonic probes suitably placed in the vessel. The sensitivity of the technique and its potential have been demonstrated through extensive testing with different oil samples. The technique is also being developed to monitor emulsion phase characteristics such as stability, composition, and droplet size distribution. The ultrasonic parameters recorded are the changes in acoustic velocity, signal attenuation and its frequency spectrum. An emulsion was prepared with a light mineral oil sample, and the effects of various factors, including mixing speed, temperature, and surfactant and solid particle concentrations, were investigated. The applied frequency of the ultrasonic waves was varied from 1 to 5 MHz to carry out a sensitivity analysis. The emulsion droplet structure is observed with optical microscopy, and stability is examined by tracking the changes in ultrasonic parameters over time. A model based on ultrasonic attenuation spectroscopy is being developed and tested to track changes in droplet size distribution with time.
Keywords: ultrasonic techniques, emulsion, characterization, droplet size
Procedia PDF Downloads 175
14286 Comprehensive Geriatric Assessments: An Audit into Assessing and Improving Uptake on Geriatric Wards at King’s College Hospital, London
Authors: Michael Adebayo, Saheed Lawal
Abstract:
The Comprehensive Geriatric Assessment (CGA) is the multidimensional tool used to assess elderly, frail patients, either on admission to hospital care or at a community level in primary care. It is a tool designed with the aim of using a holistic approach to managing patients. A Cochrane review of CGA use in 2011 found that the likelihood of being alive and living in one's own home rises by 30% post-discharge. RCTs have also found 10-15% reductions in readmission rates as well as reductions in institutionalization, resource use and costs. Past audit cycles at King’s College Hospital, Denmark Hill, had shown inconsistent evidence of CGA completion in patient discharge summaries (less than 50%). Junior doctors on the Health and Ageing (HAU) wards have struggled to sustain the efforts of past audit cycles due to the quick turnover of staff (four-month placements for trainees). This 7th cycle created a multi-faceted approach to solving this problem amongst staff and creating lasting change. Methods: 1. We adopted multidisciplinary team involvement to support doctors: MDT staff, e.g., nurses, physiotherapists, occupational therapists and dieticians, were actively encouraged to fill in the CGA document. 2. We added a CGA document pro-forma to “Sunrise EPR” (the Trust computer system); these CGAs were to be automatically included in the discharge summary. 3. Prior to assessing uptake, we used a spot audit questionnaire to assess staff awareness/knowledge of what a CGA is. 4. We designed and placed posters highlighting the domains of the CGA and the MDT roles suited to each domain on the geriatric Health and Ageing (HAU) wards in the hospital. 5. We performed an audit of the percentage of discharge summaries which include the CGA and MDT role input. 6. We nominated ward champions on each ward from each multidisciplinary specialty to monitor and encourage colleagues to actively complete CGAs. 7. We initiated further education of ward staff on the CGA's importance through discussion at board rounds and weekly multidisciplinary meetings. Outcomes: 1. The majority of respondents to our spot audit were aware of what a CGA was, but fewer had used the EPR document to complete one. 2. We found that CGAs were not being commenced for nearly 50% of patients discharged from the HAU wards and the Frailty Assessment Unit.
Keywords: comprehensive geriatric assessment, CGA, multidisciplinary team, quality of life, mortality
Procedia PDF Downloads 85
14285 Signal Amplification Using Graphene Oxide in Label Free Biosensor for Pathogen Detection
Authors: Agampodi Promoda Perera, Yong Shin, Mi Kyoung Park
Abstract:
The successful detection of pathogenic bacteria in blood provides important information for early detection and diagnosis, and for the prevention and treatment of infectious diseases. Silicon microring resonators are refractive-index-based optical biosensors that provide highly sensitive, label-free, real-time multiplexed detection of biomolecules. We demonstrate a technique using graphene oxide (GO) to enhance the signal output of a silicon microring optical sensor. The activated carboxylic groups in GO molecules bind directly to single-stranded DNA with an amino-modified 5’ end. This conjugation amplifies the shift in resonant wavelength in a real-time manner. We designed a 21 bp capture probe for the strain Staphylococcus aureus and a longer complementary target sequence of 70 bp; the mismatched target sequence used was a 70 bp sequence of Streptococcus agalactiae. GO is added after the complementary binding of the probe and target; it conjugates to the unbound single-stranded segment of the target and increases the wavelength shift on the silicon microring resonator. Furthermore, our results show that GO could successfully differentiate the mismatched DNA sequences from the complementary DNA sequence. Therefore, the proposed concept could effectively enhance the sensitivity of pathogen detection sensors.
Keywords: label free biosensor, pathogenic bacteria, graphene oxide, diagnosis
Procedia PDF Downloads 468
14284 An Historical Revision of Change and Configuration Management Process
Authors: Expedito Pinto De Paula Junior
Abstract:
Current systems such as artificial satellites, airplanes, automobiles, turbines, power systems and air traffic controls are becoming increasingly complex and/or highly integrated, as defined in SAE-ARP-4754A (Society of Automotive Engineers standard, Certification Considerations for Highly-Integrated or Complex Aircraft Systems). Among other processes, the development of such systems requires careful Change and Configuration Management (CCM) to establish and maintain product integrity. Understanding the maturity of the CCM process from a historical perspective is crucial for its better implementation in the hardware and software lifecycle. The sense of work organization, in all fields of development, is directly related to the order and interrelation of the parts, changes over time, and the record of these changes. Generally, it is observed that engineers, administrators and managers invest more time in technical activities than in the organization of work, and these professionals tend to focus on solving complex problems with a purely technical bias. The CCM process is fundamental to the development, production and operation of new products, especially safety-critical systems. The objective of this paper is to open a discussion about the historical revision of CCM, based on standards from around the world, in order to understand and reflect on its importance across the years, its contribution to technological evolution, the maturity of organizations in the system lifecycle, and the benefits of CCM in avoiding errors and mistakes during the product lifecycle.
Keywords: changes, configuration management, historical, revision
Procedia PDF Downloads 201
14283 3D Steady and Transient Centrifugal Pump Flow within Ansys CFX and OpenFOAM
Authors: Clement Leroy, Guillaume Boitel
Abstract:
This paper presents a comparative benchmarking review of steady and transient three-dimensional (3D) flow computations in a centrifugal pump using commercial (Ansys CFX) and open source (OpenFOAM) computational fluid dynamics (CFD) software. In a centrifugal rotodynamic pump, the fluid enters the impeller along the rotating axis and is accelerated in order to increase the pressure, flowing radially outward into another stage, a vaned diffuser or a volute casing, from where it finally exits into a downstream pipe. Simulations are carried out at the best efficiency point (BEP) and at part load, for single-phase flow, with several turbulence models. The results are compared with the overall performance report from experimental data. The use of CFD technology in industry is still limited by the high computational costs, and even more by the high cost of commercial CFD software and high-performance computing (HPC) licenses. The main objectives of the present study are to define an OpenFOAM methodology for high-quality 3D steady and transient turbomachinery CFD simulation and to conduct a thorough time-accurate performance analysis. In addition, a detailed comparison between the computational methods and features of the latest Ansys release 18 and OpenFOAM is carried out to assess the accuracy and industrial applicability of those solvers. Finally, an automated connected workflow (IoT) for turbine blade applications is presented.
Keywords: benchmarking, CFX, internet of things, openFOAM, time-accurate, turbomachinery
Procedia PDF Downloads 205
14282 The Relationships between Carbon Dioxide (CO2) Emissions, Energy Consumption and GDP for Iran: Time Series Analysis, 1980-2010
Authors: Jinhoa Lee
Abstract:
The relationships between environmental quality, energy use and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of carbon dioxide (CO2) emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive country-level case study using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, and electricity), CO2 emissions and gross domestic product (GDP) for Iran, using time series analysis for the years 1980-2010. To investigate the relationships between the variables, this paper employs the Augmented Dickey-Fuller (ADF) test for stationarity, Johansen’s maximum likelihood method for cointegration, and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables. All variables of this study, except the CO2 emissions, show significant effects on the GDP of the country in the long term. The long-run equilibrium in the VECM suggests that the consumption of petroleum products and the direct combustion of crude oil and natural gas have positive impacts on the GDP, while the consumption of electricity and coal has adverse impacts on the GDP in the long term. In the short run, electricity use enhances the GDP over the period 1980-2010 in Iran. Overall, the results partly support arguments that there are relationships between energy use and economic output, but the associations can differ by the source of energy in the case of Iran over the period 1980-2010. However, there is no significant relationship between the CO2 emissions and the GDP, or between the CO2 emissions and energy use, in either the short term or the long term.
Keywords: CO2 emissions, energy consumption, GDP, Iran, time series analysis
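A minimal sketch of the testing sequence (ADF stationarity test, Johansen cointegration test, VECM estimation) using statsmodels; the three synthetic series below merely stand in for the Iranian data.

```python
# Minimal sketch of ADF -> Johansen -> VECM with statsmodels, on
# synthetic series standing in for GDP, energy use, and CO2 (1980-2010).
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(0)
trend = np.cumsum(rng.normal(0.03, 0.05, 31))   # common stochastic trend
df = pd.DataFrame({
    "gdp": trend + rng.normal(0, 0.02, 31),
    "energy": 0.8 * trend + rng.normal(0, 0.02, 31),
    "co2": 0.6 * trend + rng.normal(0, 0.02, 31),
})

for col in df:                                   # unit-root check per series
    print(col, "ADF p-value:", round(adfuller(df[col])[1], 3))

rank_test = coint_johansen(df, det_order=0, k_ar_diff=1)
print("Johansen trace statistics:", rank_test.lr1.round(2))

res = VECM(df, k_ar_diff=1, coint_rank=1).fit()  # error-correction model
print("adjustment coefficients (alpha):", res.alpha.round(3))
```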
Procedia PDF Downloads 592
14281 The Effects of Different Types of Cement on the Permeability of Deep Mixing Columns
Authors: Mojebullah Wahidy, Murat Olgun
Abstract:
In this study, four different types of cement are used to investigate the permeability of deep mixing columns (DMCs) in clay. The clay used in this research is in the kaolin group, and the types of cement are: CEM I 42.5 R normal Portland cement, CEM II/A-M (P-L) pozzolan-doped cement, CEM III/A 42.5 N blast furnace slag cement, and DMFC-800 fine-grained Portland cement. Firstly, rheological tests were performed on each cement, and a water/cement ratio of 0.9 was selected as the appropriate ratio. This ratio was used to prepare small-scale DMCs for all types of cement at 6%, 9%, 12%, and 15%, determined as percentages of the dry weight of the clay. For all cement types, three samples were prepared at every percentage and cured for 7, 14, and 28 days before permeability testing. As a result of the permeability tests on the small-scale DMCs, 12% was selected for the large-scale DMCs. A total of five large-scale DMCs were prepared using 12% cement and cured for 28 days before permeability testing. The results of the permeability tests show that increasing the cement percentage and curing time of the DMCs decreases the permeability coefficient (k). Despite variable results across cement ratios and curing times, in general the samples treated with DMFC-800 fine-grained cement have the lowest permeability coefficient. Samples treated with the CEM II and CEM I cement types were the second and third least permeable. The highest permeability coefficient belongs to the samples treated with the CEM III cement type.
Keywords: deep mixing column, rheological test, DMFC-800, permeability test
Procedia PDF Downloads 78
14280 Design of Aesthetic Acoustic Metamaterials Window Panel Based on Sierpiński Fractal Triangle for Sound-silencing with Free Airflow
Authors: Sanjeet Kumar Singh, Shanatanu Bhattacharaya
Abstract:
The design of a high-efficiency, low-frequency (<1000 Hz) soundproof window or wall absorber which is transparent to airflow is presented. Due to the massive rise in human population and modernization, environmental noise has risen significantly worldwide. Prolonged noise exposure can cause severe physiological and psychological symptoms such as nausea, headaches, fatigue, and insomnia. There has been continuous growth in building construction and infrastructure such as offices, bus stops, and airports due to the growing urban population. Generally, a ventilated window is used for getting fresh air into the room, but unwanted noise comes along with it. Researchers have used traditional approaches such as noise barrier mats in front of the window, or have designed the entire window from sound-absorbing materials. However, these solutions are not aesthetically pleasing, and at the same time they are heavy and not adequate for shielding low-frequency noise. To address this challenge, we design a transparent hexagonal panel based on the Sierpiński fractal triangle which is aesthetically pleasing and demonstrates a normal-incidence sound absorption coefficient of more than 0.96 around 700 Hz and a transmission loss of around 23 dB, while maintaining air circulation through triangular cutouts. Next, we present a concept for the fabrication of large acoustic panels for large-scale applications, which can help suppress urban noise pollution.
Keywords: acoustic metamaterials, noise, functional materials, ventilated
Procedia PDF Downloads 82
14279 Methodological Deficiencies in Knowledge Representation Conceptual Theories of Artificial Intelligence
Authors: Nasser Salah Eldin Mohammed Salih Shebka
Abstract:
Current problematic issues in AI fields are mainly due to those of knowledge representation conceptual theories, which are in turn reflected across the entire scope of the cognitive sciences. Knowledge representation methods and tools are derived from theoretical concepts regarding the human scientific perception of the conception, nature, and process of knowledge acquisition, knowledge engineering and knowledge generation. And although these theoretical conceptions were themselves derived from the study of the human knowledge representation process and related theories, some essential factors were overlooked or underestimated, thus causing critical methodological deficiencies in the conceptual theories of human knowledge and knowledge representation conceptions. The evaluation criteria of human cumulative knowledge, from the perspectives of the nature and theoretical aspects of knowledge representation conceptions, are affected greatly by the very materialistic nature of the cognitive sciences. This nature caused what we define as methodological deficiencies in the theoretical aspects of knowledge representation concepts in AI. These methodological deficiencies are not confined to applications of knowledge representation theories throughout AI fields, but also extend to the scientific nature of the cognitive sciences. The methodological deficiencies we investigated in our work are: the segregation between cognitive abilities in knowledge-driven models; the insufficiency of the two-value logic used to represent knowledge, particularly at the machine-language level, in relation to the problematic issues of semantics and meaning theories; and the deficient consideration of the parameters of existence and time in the structure of knowledge. The latter requires a more detailed introduction of the manner in which the meanings of existence and time are to be considered in the structure of knowledge. This does not imply that such consideration is easy to apply in the structures of knowledge representation systems; rather, outlining a deficiency caused by the absence of such essential parameters can be considered an attempt to redefine knowledge representation conceptual approaches or, if that proves impossible, to construct a perspective on the possibility of simulating human cognition on machines. Furthermore, a redirection of the aforementioned expressions is required in order to formulate the exact meaning under discussion. This redirection of meaning shifts the role of the existence and time factors to the framework environment of the knowledge structure, and therefore to knowledge representation conceptual theories. The findings of our work indicate the necessity of differentiating between two comparative concepts when addressing the relation between the existence and time parameters and the structure of human knowledge. The topics presented throughout the paper can also be viewed as evaluation criteria to determine AI's capability to achieve its ultimate objectives.
Ultimately, we argue some of the implications of our findings, which suggest that although scientific progress may not have reached its peak, and human scientific evolution may not yet have reached a point where it is possible to discover evolutionary facts about the human brain and detailed descriptions of how it represents knowledge, unless these methodological deficiencies are properly addressed, the future of AI's qualitative progress remains questionable.
Keywords: cognitive sciences, knowledge representation, ontological reasoning, temporal logic
Procedia PDF Downloads 113
14278 Received Signal Strength Indicator Based Localization of Bluetooth Devices Using Trilateration: An Improved Method for the Visually Impaired People
Authors: Muhammad Irfan Aziz, Thomas Owens, Uzair Khaleeq uz Zaman
Abstract:
The instantaneous spatial localization of visually impaired people in dynamically changing environments, with unexpected hazards and obstacles, is the most demanding and challenging issue faced by navigation systems today. Since Bluetooth cannot utilize techniques like Time Difference of Arrival (TDOA) and Time of Arrival (TOA), it uses the received signal strength indicator (RSSI) to measure Received Signal Strength (RSS). Measurements using RSSI can be improved significantly by improving the existing RSSI-related methodologies. Therefore, the current paper proposes an improved trilateration-based method for the localization of Bluetooth devices for visually impaired people. To validate the method, class 2 Bluetooth devices were used, along with the development of supporting software. Experiments were then conducted to obtain surface plots that showed the signal interference and other environmental effects. The results show the surface plots for all Bluetooth modules used, with the strong and weak points depicted per the color codes in red, yellow and blue. It was concluded that the suggested improved method of measuring RSS using trilateration helped not only to measure signal strength effectively but also to highlight how the signal strength can be influenced by atmospheric conditions such as noise, reflections, etc.
Keywords: Bluetooth, indoor/outdoor localization, received signal strength indicator, visually impaired
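A minimal sketch of RSSI-based trilateration as described above: RSSI is converted to a distance with a log-distance path-loss model, and the position is then solved by least squares from three beacons. The beacon positions, the 1 m reference power, and the path-loss exponent are assumed values.

```python
# Minimal RSSI trilateration sketch: log-distance path loss converts
# RSSI to range, least squares solves the position. All values are
# hypothetical illustration inputs.
import numpy as np
from scipy.optimize import least_squares

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path loss: rssi = tx_power - 10*n*log10(d)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # known positions, m
rssi = np.array([-68.0, -75.0, -71.0])                      # measured RSSI, dBm
dists = rssi_to_distance(rssi)

def residuals(p):
    return np.linalg.norm(beacons - p, axis=1) - dists

sol = least_squares(residuals, x0=[5.0, 5.0])
print("estimated position (m):", sol.x.round(2))
```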
Procedia PDF Downloads 134
14277 Jejunostomy and Protective Ileostomy in a Patient with Massive Necrotizing Enterocolitis: A Case Report
Authors: Rafael Ricieri, Rogerio Barros
Abstract:
Objective: This study reports a case of massive necrotizing enterocolitis in a six-month-old patient, requiring an ileostomy and protective jejunostomy as a damage control measure in the first exploratory laparotomy for massive enterocolitis without a previous diagnosis. Methods: This study is a case report of the successful creation and closure of a protective jejunostomy. The low number of publications on this staged and risky surgical measure encouraged the team to study the indication for, and especially the correct time for, closing the patient's protective jejunostomy. The main study instrument was the six-month-old patient's medical record. Results: Based on the case described, it was observed that the time to closure of the protective jejunostomy varies according to the degree of compromise of the patient's health status and is individual to each patient. Early closure, or failure to close, can create problems for the patient, since several complications can result from this closure, such as new intestinal perforations and hydroelectrolyte disturbances. Despite the risk of new perforations, we suggest closing the protective jejunostomy around the 14th day after the procedure, keeping the patient on broad-spectrum antibiotic therapy and absolute fasting, thereby reducing the chances of new intestinal perforations. In association with the closure of the jejunostomy, a gastric tube for decompression is necessary, and care in an intensive care unit and electrolyte replacement are necessary to maintain the stability of the case.
Keywords: jejunostomy, ileostomy, enterocolitis, pediatric surgery, gastric surgery
Procedia PDF Downloads 84
14276 Strategies for Good Governance during Crisis in Higher Education
Authors: Naziema B. Jappie
Abstract:
Over the last 23 years, leaders in government, political parties and universities have spent much time identifying and discussing various gaps in the system that impact systematically on students, especially those from historically Black communities. Equity and access to higher education were two critical aspects that featured in achieving the transformation goals, together with a funding model for those previously disadvantaged. Free education was not a feasible option for the government. Institutional leaders in higher education face many demands on their time and resources. Often, time for crisis management planning, or consideration of being proactive and preventative, is not a standing agenda item. With many issues being priorities in academia, people become complacent and think that a crisis may not affect them, or that they will cross the bridge when they get to it. Historically, South Africa has proven to be a country of militancy, strikes and protests in most industries, some leading to disastrous outcomes. Higher education was no different between October 2015 and late 2016, when the #RhodesMustFall movement, which morphed into the #FeesMustFall protest, challenged the establishment, changed the social fabric of universities and brought the sector to a standstill. Some institutional leaders and administrators were better than others at handling unexpected, high-consequence situations. For the most part, crisis leadership is viewed as a situation rather than a style of leadership, one usually characterized by crisis management. The objective of this paper is to show how institutions managed catastrophes of disastrous proportions, down to the unexpected incidents of 2015/2016. The content draws on the presenter's extensive past experience of crisis management and includes the occurrences of the recent protests, giving an event timeline. Responses from interviews with institutional leaders, administrators and students provide first-hand information on their experiences and the outcomes. Students have tasted the power of organized action, and they demand immediate change; if not, the revolt will continue. This paper will examine the approaches that guided institutional leaders and their crisis teams, and the sector's crisis response. It will further consider whether the solutions effectively changed governance in higher education or merely minimized the need for more protests. The conclusion will give an insight into the future of higher education in South Africa from a leadership perspective.
Keywords: crisis, governance, intervention, leadership, strategies, protests
Procedia PDF Downloads 147