Search results for: open source data
28318 Bio-Electro Chemical Catalysis: Redox Interactions, Storm and Waste Water Treatment
Authors: Michael Radwan Omary
Abstract:
Context: This scientific innovation demonstrates organic-catalysis engineered media for effective desalination of surface water and groundwater. The author has developed a technology called "Storm-Water Ions Filtration Treatment" (SWIFT™): cold reactor modules designed to retrofit typical urban street storm drains or catch basins. SWIFT triggers biochemical redox reactions with the toxic total dissolved solids (TDS) and electrical conductivity (EC) embedded in the water stream. SWIFT™ catalyst media unlock the sub-molecular bond energy, break down toxic chemical bonds, and neutralize toxic molecules, bacteria and pathogens. Research Aim: This research aims to develop and design lower-O&M-cost, zero-brine-discharge, energy-input-free, chemical-free water desalination and disinfection systems. The objective is to provide an effective, resilient and sustainable solution for urban storm-water and groundwater decontamination and disinfection. Methodology: We focused on the development of organic, non-chemical, no-plug, no-pumping, non-polymer and non-allergenic approaches to water and wastewater desalination and disinfection. SWIFT modules operate by directing the water stream to flow freely through the electrically charged media of the cold reactor, generating weak interactions with water-dissolved electrically conductive molecules and resulting in the neutralization of toxic molecules. The system is powered by harvesting the energy embedded in sub-molecular bonds. Findings: Case studies of the SWIFT™ technology at CSU-CI and the CSU-Fresno Water Institute demonstrated consistently high reduction of all 40 detected wastewater pollutants, including pathogens, to levels below the State of California Department of Water Resources "Drinking Water Maximum Contaminant Levels". The technology has proved effective in reducing pollutants such as arsenic, beryllium, mercury, selenium, glyphosate, benzene, and E. coli bacteria. It has also been successfully applied to the decontamination of dissolved chemicals, water pathogens, organic compounds and radiological agents. Theoretical Importance: SWIFT technology development, design, engineering, and manufacturing offer a cutting-edge advance toward a clean-energy bio-catalysis media solution: energy-input-free water and wastewater desalination and disinfection. It is a significant contribution to institutions and municipalities seeking sustainable, lower-cost, zero-brine, zero-CO2-discharge clean-energy water desalination. Data Collection and Analysis Procedures: The researchers collected data on the performance of the SWIFT™ technology in reducing the levels of various pollutants in water. The data were analyzed by comparing the reduction achieved by the SWIFT™ technology to the Drinking Water Maximum Contaminant Levels set by the State of California. The researchers also conducted live oral presentations to showcase the applications of SWIFT™ technology in storm-water capture and decontamination, as well as in providing clean drinking water during emergencies. Conclusion: The SWIFT™ technology has demonstrated its capability to effectively reduce pollutants in water and wastewater to levels below regulatory standards. The technology offers a sustainable solution to groundwater and storm-water treatment.
Further development and implementation of the SWIFT™ technology have the potential to treat storm water for reuse as a new source of drinking water and as an ambient source of clean and healthy local water for groundwater recharge.
Keywords: catalysis, bio electro interactions, water desalination, weak-interactions
Procedia PDF Downloads 67
28317 Cross-border Data Transfers to and from South Africa
Authors: Amy Gooden, Meshandren Naidoo
Abstract:
Genetic research and transfers of big data are not confined to a particular jurisdiction, but there is a lack of clarity regarding the legal requirements for importing and exporting such data. Using direct-to-consumer genetic testing (DTC-GT) as an example, this research assesses the status of data sharing into and out of South Africa (SA). While SA laws cover the sending of genetic data out of SA, prohibiting such transfer unless a legal ground exists, the position where genetic data comes into the country depends on the laws of the country from where it is sent – making the legal position less clear.
Keywords: cross-border, data, genetic testing, law, regulation, research, sharing, South Africa
Procedia PDF Downloads 125
28316 Evaluation to Assess the Impact of Newcastle Infant Partnership Approach
Authors: Samantha Burns, Melissa Brown, Judith Rankin
Abstract:
Background: As a specialised intervention, the Newcastle Infant Partnership (NEWPIP) approach provides a service which supports both parents and their babies from conception to two years, who are experiencing issues which may affect the quality of their relationship and the development of the infant. This evaluation of the NEWPIP approach was undertaken in response to the need for rich, in-depth data to understand the lived experiences of the parents who used the service, in order to improve it. NEWPIP is currently one of 34 specialised parent–infant relationship teams across England. This evaluation contributes to increasing understanding of the impact and effectiveness of this specialised service to inform future practice. Aim: The aim of this evaluation was to explore the perspectives and experiences of parents or caregivers (service users), to assess the impact of the NEWPIP service on the parents themselves and on the relationship with their baby. Methods: The exploratory nature of the aim and the focus on service users’ experience and perspectives provided scope for a qualitative approach for this evaluation. This consisted of 10 semi-structured interviews with parents who had received the service within the last two years. Recruitment involved both purposive and convenience sampling. The interviews took place between February and March 2021, lasted between 30 and 90 minutes, and were guided by open-ended questions from a topic guide. The interviews adopted a narrative approach to enable the parents to share their lived experiences. The researchers transcribed the interviews and analysed the data thematically using a coding method grounded in the data. Results: The analysis and findings from the data gathered illuminated an approach which supports parents to build a better bond with their baby and provides a safe space for parents to heal through their relationships. While the parents shared their experiences, the interviews were also intended to gather feedback, so questions were asked about what could be improved and what recommendations could be offered to Children North East. Guided by the voice of the parents, this evaluation provides recommendations to support the future of the NEWPIP approach. Conclusions: The NEWPIP approach appears to successfully provide early and flexible support for new parents, increasing a parent’s confidence in their ability to not only cope but thrive as a new parent.
Keywords: maternal health, mental health, parent infant relationship, therapy
Procedia PDF Downloads 192
28315 Causal Inference Engine between Continuous Emission Monitoring System Combined with Air Pollution Forecast Modeling
Authors: Yu-Wen Chen, Szu-Wei Huang, Chung-Hsiang Mu, Kelvin Cheng
Abstract:
This paper developed a data-driven model to address the causality between the Continuous Emission Monitoring System (CEMS, operated by the Environmental Protection Administration, Taiwan) in industrial factories and the air quality of the surrounding environment. Compared to the heavy computational burden of traditional numerical models for regional weather and air pollution simulation, the lightweight proposed model can produce hourly forecasts from current observations of weather, air pollution and factory emissions. The observation data include wind speed, wind direction, relative humidity, temperature and others. The observations can be collected in real time from the Open APIs of Civil IoT Taiwan, which are sourced from 439 weather stations, 10,193 qualitative air stations, 77 national quantitative stations and 140 CEMS-equipped quantitative industrial factories. This study completed a causal inference engine that produces an air pollution forecast for the next 12 hours related to local industrial factories. The pollution forecasts are produced hourly with a grid resolution of 1 km × 1 km on the IIoTC (Industrial Internet of Things Cloud) and saved in netCDF4 format. The elaborated procedures to generate forecasts comprise data recalibration, outlier elimination, Kriging interpolation, and particle tracking with random-walk techniques for the mechanisms of diffusion and advection. The solution of these equations reveals the causality between factory emissions and the associated air pollution. Further, with the aid of installed real-time flue emission (Total Suspended Particulates, TSP) sensors and the forecasted air pollution map, this study also disclosed the conversion mechanism between TSP and PM2.5/PM10 for different regional and industrial characteristics, based on long-term data observation and calibration. Together, these different time-series qualitative and quantitative data make a cloud-based causal inference engine practicable for factory management control. Once the forecasted air quality for a region is marked as harmful, the correlated factories are notified and asked to curtail operations and reduce emissions in advance.
Keywords: continuous emission monitoring system, total suspended particulates, causal inference, air pollution forecast, IoT
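For intuition, here is a minimal sketch of the random-walk particle-tracking step named above, assuming a uniform wind field; the wind speed, diffusion coefficient and particle count are hypothetical illustrations, not the study's calibrated values.

```python
import numpy as np

def step(x, y, u, v, D, dt, rng):
    # Advection by wind (u, v) plus diffusion as a Gaussian random walk
    # with per-axis standard deviation sqrt(2 * D * dt).
    sigma = np.sqrt(2.0 * D * dt)
    x = x + u * dt + rng.normal(0.0, sigma, size=x.shape)
    y = y + v * dt + rng.normal(0.0, sigma, size=y.shape)
    return x, y

rng = np.random.default_rng(0)
x = np.zeros(10_000)   # particles released at a factory stack at the origin
y = np.zeros(10_000)
for _ in range(12):    # twelve hourly steps -> the 12-hour forecast horizon
    x, y = step(x, y, u=2.0, v=0.5, D=50.0, dt=3600.0, rng=rng)
# Binning (x, y) onto a 1 km x 1 km grid yields the concentration map.
```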
Procedia PDF Downloads 86
28314 Psychological Factors Affecting Breastfeeding: An Exploratory Study among Breastfeeding Moms
Authors: Marwa Abdussalam
Abstract:
Breastfeeding is a unique emotional bond between a mother and her offspring. Though breastfeeding may be natural, it is not something mothers are born knowing how to do; some still struggle to breastfeed their babies. Various factors can influence the breastfeeding experience, such as the mode of delivery, the mother’s health condition, proper latching, etc. In addition, psychological factors have been known to influence breastfeeding ability, duration, and milk supply. Some mothers struggle to breastfeed their babies because they perceive that they have a low milk supply or that they lack the ability to breastfeed. Most of these perceptions result either from their own past experience or from the ‘comments’ of their caregivers. So it is of the utmost importance to understand the psychological factors affecting breastfeeding, so that necessary steps can be taken to educate breastfeeding mothers. The study explored the role of psychological factors that affect breastfeeding. Data were collected from fifteen breastfeeding mothers using a semi-structured interview schedule. A total of 10 questions were included in the interview schedule. Questions were sequenced in a funnel pattern, beginning with open-ended questions and then moving on to close-ended questions. Data were analyzed using Braun and Clarke’s thematic analysis technique. This technique involves identifying the codes, generating themes, naming them, and finally reviewing them. Results indicated that breastfeeding self-efficacy, perceived insufficient milk supply, and lack of knowledge were the psychological factors affecting breastfeeding. The results of this study can be used to help mothers who are struggling with breastfeeding by developing interventions aimed at improving breastfeeding self-efficacy.
Keywords: breastfeeding, breastfeeding self-efficacy, perceived insufficient milk supply, thematic analysis
Procedia PDF Downloads 108
28313 3G or 4G: A Predilection for Millennial Generation of Indian Society
Authors: Rishi Prajapati
Abstract:
3G is the abbreviation for the third generation of wireless mobile telecommunication technologies. 3G finds application in wireless voice telephony, mobile internet access, fixed wireless internet access, video calls and mobile TV. It also provides mobile broadband access to smartphones and to mobile modems in laptops and computers. The first 3G networks were introduced in 1998, followed by 4G networks in 2008. 4G is the abbreviation for the fourth generation of wireless mobile telecommunication technologies and is regarded as the advanced form of 3G. 4G was first introduced in South Korea in 2007. Many studies have depicted the differences and similarities between the third and the fourth generation of wireless mobile telecommunication technology, whereas this abstract reflects a study focused on analyzing the preference for 3G versus 4G among the elite group of Indian society known as adolescents or the Millennial Generation, aged 18 to 25 years. The Millennial Generation was chosen for this study as they have the easiest access to the latest technology. A sample size of 200 adolescents was selected and a structured survey was carried out, with several closed-ended as well as open-ended questions, to aggregate the results of this study. It was ensured that the effect of environmental factors on the subjects was as minimal as possible. The data analysis was based on primary data collection, making this quantitative research. The rationale behind this research is to give a brief idea of how 3G and 4G are accepted by the Millennial Generation in India. The findings of this research materialize a framework which depicts whether the Millennial Generation would prefer 4G over 3G or vice versa.
Keywords: fourth generation, wireless telecommunication technology, Indian society, millennial generation, market research, third generation
Procedia PDF Downloads 269
28312 The Study of Security Techniques on Information System for Decision Making
Authors: Tejinder Singh
Abstract:
An information system (IS) is the flow of data across different levels and in different directions for decision making and data operations. Data can be compromised in various ways, such as manual or technical errors, data tampering or loss of integrity. The IS security system, such as a firewall, is affected by these types of violations. The flow of data among the various levels of an information system is handled by the networking system, where data travels in the form of packets or frames. To protect these packets from unauthorized access and virus attacks, and to maintain their integrity, network security is an important factor. Various security techniques are used to protect data from piracy. This paper presents the various security techniques and describes different harmful attacks with the help of detailed data analysis. This paper will be beneficial for organizations seeking to make their systems more secure, effective, and useful for future decision making.
Keywords: information systems, data integrity, TCP/IP network, vulnerability, decision, data
Procedia PDF Downloads 307
28311 Data Integration with Geographic Information System Tools for Rural Environmental Monitoring
Authors: Tamas Jancso, Andrea Podor, Eva Nagyne Hajnal, Peter Udvardy, Gabor Nagy, Attila Varga, Meng Qingyan
Abstract:
The paper deals with the conditions and circumstances of integrating remotely sensed data for rural environmental monitoring purposes. The main task is to make decisions during the integration process when the data sources differ in resolution, location, spectral channels, and dimension. In order to have exact knowledge of the integration and data fusion possibilities, it is necessary to know the properties (metadata) that characterize the data. The paper explains the joining of these data sources through their attribute data using a sample project. The resulting product will be used for rural environmental analysis.
Keywords: remote sensing, GIS, metadata, integration, environmental analysis
Procedia PDF Downloads 120
28310 Risk Assessment in Construction of K-Span Buildings in United Arab Emirates (UAE)
Authors: Imtiaz Ali, Imam Mansoor
Abstract:
Investigations were undertaken as part of an academic study to identify and evaluate the significant risks associated with the construction of K-span buildings in the UAE. Primary field data were collected through questionnaires with specific open- and close-ended questions administered to carefully selected construction firms, civil engineers and construction managers regarding the risks associated with K-span building construction. Historical data available for other regions using the same construction technique were compared, identifying non-critical and critical risk parameters by comparative evaluation techniques, to arrive at the important risks and potential means for their control and minimization in K-span buildings, which are increasing in the region. The associated risks were ranked by their Relative Importance Index (RII) values, of which the risk of design changes required by owners carries the highest value (RII = 0.79), whereas delayed payment by the owner to the contractor has one of the lowest (RII = 0.42). The overall findings suggest that most of the quantified risks originate from or are associated with the contractors. It may be concluded that project proponents undertaking K-span projects should, when planning and budgeting for cost and delays, weight these risks highly: if design changes are required, any delay in materials from the supplier would then become a major source of K-span project delay. Since these projects are less costly, owners have limited budgets and therefore hire small contractors, which are not highly competent. The study thus suggests that owners should be aware of these types of risks associated with the construction of K-span buildings in order to keep them cost effective.
Keywords: k-span buildings, k-span construction, risk management, relative importance index (RII)
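The abstract quotes RII values without the formula; as background, the Relative Importance Index is conventionally computed from Likert-scale survey responses as follows (a standard formulation, assumed here rather than taken from the study):

```latex
% Relative Importance Index of one risk factor (standard survey formulation,
% assumed here; not stated in the abstract):
\mathrm{RII} = \frac{\sum_{j=1}^{N} W_j}{A \times N}
% W_j: weight given to the factor by respondent j (e.g., 1-5 Likert scale);
% A: highest possible weight; N: number of respondents; 0 < RII <= 1.
```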
Procedia PDF Downloads 375
28309 Prioritizing the Most Important Information from Contractors’ BIM Handover for Firefighters’ Responsibilities
Authors: Akram Mahdaviparsa, Tamera McCuen, Vahideh Karimimansoob
Abstract:
Fire services are responsible for protecting life, assets, and natural resources from fire and other hazardous incidents. Search and rescue in unfamiliar buildings is a vital part of firefighters’ responsibilities. Providing firefighters with precise building information in an easy-to-understand format is a potential solution for mitigating the negative consequences of fire hazards. Insufficient knowledge of a building’s indoor environment impedes firefighters’ capabilities and leads to lost property. A data-rich building information model (BIM) is a potentially useful source of three-dimensional (3D) visualization and data/information storage for fire emergency response. Therefore, the purpose of this research is to rank the information firefighters require, from the most important to the least important. A survey was carried out with firefighters working in the Norman Fire Department to obtain the importance of each building information item. The results show that “the location of exit doors, windows, corridors, elevators, and stairs”, “material of building elements”, and “building data” are the three most important items of information specified by the firefighters. The results also implied that 2D models of architectural, structural and wayfinding information are more understandable than the 3D model, while a 3D model of the MEP system conveys more information than the 2D model. Furthermore, color in visualization can help firefighters understand the building information more easily and quickly. Sufficient internal consistency of all responses was demonstrated through a Pearson correlation matrix and a Cronbach’s alpha of 0.916. Therefore, the results of this study are reliable and can be generalized to the population.
Keywords: BIM, building fire response, ranking, visualization
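For readers unfamiliar with the reliability statistic cited, Cronbach's alpha is defined as below; this is the standard definition, supplied here for context rather than taken from the study.

```latex
% Cronbach's alpha for k survey items (standard definition):
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_{Y_i}^{2}}{\sigma_{X}^{2}}\right)
% sigma_{Y_i}^2: variance of item i; sigma_X^2: variance of the total score.
% The reported alpha = 0.916 indicates high internal consistency.
```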
Procedia PDF Downloads 133
28308 Single Stage “Fix and Flap” Orthoplastic Approach to Severe Open Tibial Fractures: A Systematic Review of the Outcomes
Authors: Taylor Harris
Abstract:
Gustilo-Anderson grade III tibial fractures are exquisitely difficult injuries to manage, as they require extensive soft tissue repair in addition to fracture fixation. These injuries are best managed collaboratively by orthopedic and plastic surgeons. While an orthoplastics approach has decreased the rates of adverse outcomes in these injuries, there is large variation in exactly how an orthoplastics team approaches complex cases such as these. It is sometimes recommended that definitive bone fixation and soft tissue coverage be completed simultaneously in a single stage, but there is a paucity of large-scale studies to provide evidence for this recommendation. The aim of this study is to report the outcomes of a single-stage "fix-and-flap" approach through a systematic review of the available literature, to better inform an evidence-based orthoplastics approach to managing open tibial fractures. A systematic review of the literature was performed. Medline and Google Scholar were searched, and all studies published since 2000 in English were included. 103 studies were initially evaluated for inclusion, and the reference lists of all included studies were also examined for potentially eligible studies. Gustilo grade III tibial shaft fractures in adults managed with a single-stage orthoplastics approach were identified and evaluated with regard to the outcomes of interest. Exclusion criteria were studies with patients <16 years old, case studies, systematic reviews, and meta-analyses. Primary outcomes of interest were the rates of deep infection and rates of limb salvage. Secondary outcomes of interest included time to bone union, rates of non-union, and rates of re-operation. 15 studies were eligible. 11 of these studies reported rates of deep infection as an outcome, with rates ranging from 0.98% to 20% and a pooled rate of 7.34%. 7 studies reported rates of limb salvage, with a range of 96.25% to 100% and a pooled rate of 97.8%. 6 reported rates of non-union, with a range of 0% to 14% and a pooled rate of 6.6%. 6 reported time to bone union, with a range of 24 to 40.3 weeks and a pooled average of 34.2 weeks, and 4 reported rates of reoperation ranging from 7% to 55%, with a pooled rate of 31.1%. The few studies that compared a single-stage to a multi-stage approach side by side unanimously favored the single-stage approach. Gustilo grade III open tibial fractures managed with an orthoplastics approach performed specifically in a single stage show low rates of adverse outcomes. Large-scale studies of orthoplastic collaboration performed in multiple stages, rather than strictly in a single stage, have not reported outcomes as favorable. We recommend not only that orthopedic and plastic surgeons collaborate in the management of severe open tibial fractures, but that they plan for definitive fixation and coverage in a single stage for improved outcomes.
Keywords: orthoplastic, gustilo grade iii, single-stage, trauma, systematic review
Procedia PDF Downloads 86
28307 Analysis of Genomics Big Data in Cloud Computing Using Fuzzy Logic
Authors: Mohammad Vahed, Ana Sadeghitohidi, Majid Vahed, Hiroki Takahashi
Abstract:
In the genomics field, huge amounts of data are produced by next-generation sequencers (NGS). Data volumes are growing very rapidly: it has been postulated that more than one billion bases will be produced per year by 2020. The growth rate of produced data is much faster than Moore's law in computer technology. This makes it more difficult to deal with genomics data, including storing the data, searching for information, and finding hidden information. An analysis platform for genomics big data is therefore required. Cloud computing, newly developed, enables us to deal with big data more efficiently. Hadoop is one of the distributed computing frameworks and forms the core of Big Data as a Service (BDaaS). Although many services, e.g. Amazon, have adopted this technology, there are few applications in the biology field. Here, we propose a new algorithm to deal more efficiently with genomics big data, e.g. sequencing data. Our algorithm consists of two parts: first, BDaaS is applied to handle the data more efficiently; second, a hybrid method of MapReduce and fuzzy logic is applied for data processing. This step can be parallelized in implementation. Our algorithm has great potential in the computational analysis of genomics big data, e.g. de novo genome assembly and sequence similarity search. We will discuss our algorithm and its feasibility.
Keywords: big data, fuzzy logic, MapReduce, Hadoop, cloud computing
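The abstract does not specify the hybrid MapReduce/fuzzy-logic algorithm in detail; purely as an illustration of the pattern, here is a toy map/reduce over sequencing reads where a fuzzy membership on base quality weights the emitted k-mer counts. The k-mer length and the membership function are hypothetical.

```python
from functools import reduce
from collections import Counter

def fuzzy_quality(q):
    # Triangular fuzzy membership on mean Phred score: 0 below Q10, 1 above Q30.
    return max(0.0, min(1.0, (q - 10) / 20))

def map_read(read, quals, k=5):
    # Map phase: emit k-mers of the read, each weighted by its fuzzy quality.
    w = fuzzy_quality(sum(quals) / len(quals))
    return Counter({read[i:i + k]: w for i in range(len(read) - k + 1)})

def reduce_counts(a, b):
    # Reduce phase: merge partial weighted counts (Counter sums values).
    a.update(b)
    return a

reads = [("ACGTACGTAC", [30] * 10), ("ACGTTTTTAC", [12] * 10)]
kmers = reduce(reduce_counts, (map_read(r, q) for r, q in reads), Counter())
print(kmers.most_common(3))
```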
Procedia PDF Downloads 299
28306 Forthcoming Big Data on Smart Buildings and Cities: An Experimental Study on Correlations among Urban Data
Authors: Yu-Mi Song, Sung-Ah Kim, Dongyoun Shin
Abstract:
Cities are complex systems of diverse and intertangled activities. These activities and their complex interrelationships create diverse urban phenomena, and such urban phenomena have considerable influence on the lives of citizens. This research aimed to develop a method to reveal the causes and effects among diverse urban elements in order to enable a better understanding of urban activities and, from this, to make better urban planning strategies. Specifically, this study was conducted to solve a data-recommendation problem found on a Korean public data homepage. First, a correlation analysis was conducted to find the correlations among random urban data. Then, based on the results of that correlation analysis, a weighted data network for each urban dataset was provided to users. It is expected that the weights of urban data thereby obtained will provide insights into cities and show how diverse urban activities influence each other and induce feedback.
Keywords: big data, machine learning, ontology model, urban data model
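A minimal sketch of the correlation step described above: pairwise correlations among urban indicators become the edge weights of a data network. The file and column names are hypothetical placeholders, not the study's actual datasets.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("urban_indicators.csv")   # rows: districts, columns: indicators
corr = df.corr(method="pearson")           # pairwise Pearson correlations
w = corr.abs().to_numpy()                  # |r| as undirected edge weight
np.fill_diagonal(w, 0.0)                   # no self-loops
net = pd.DataFrame(w, index=corr.index, columns=corr.columns)
# An indicator's row sum is its total connection weight in the network,
# i.e., how strongly it is tied to all other urban data.
print(net.sum(axis=1).sort_values(ascending=False))
```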
Procedia PDF Downloads 418
28305 Potential Ecological Risk Assessment of Selected Heavy Metals in Sediments of Tidal Flat Marsh, the Case Study: Shuangtai Estuary, China
Authors: Chang-Fa Liu, Yi-Ting Wang, Yuan Liu, Hai-Feng Wei, Lei Fang, Jin Li
Abstract:
Heavy metals in sediments can cause adverse ecological effects when they exceed given criteria. The present study investigated sediment environmental quality, pollutant enrichment, ecological risk, and source identification for copper, cadmium, lead, zinc, mercury, and arsenic in sediments collected from the tidal flat marsh of the Shuangtai estuary, China. The arithmetic mean integrated pollution index, geometric mean integrated pollution index, fuzzy integrated pollution index, and principal component score were used to characterize sediment environmental quality; fuzzy similarity and the geo-accumulation index were used to evaluate pollutant enrichment; the correlation matrix, principal component analysis, and cluster analysis were used to identify pollution sources; and the environmental risk index and potential ecological risk index were used to assess ecological risk. The environmental quality of the sediment is classified as a very low degree of contamination or low contamination. Ranked by similarity to the soil element background of the Liaohe plain, the pollutant enrichment analysis orders the regions Sanjiaozhou, Honghaitan, Sandaogou, Xiaohe. The source identification indicates that correlations among the metals are significant, except between copper and cadmium. Cadmium, lead, zinc, mercury, and arsenic cluster together as the first principal component, while copper clusters as the second principal component. The environmental risk assessment level is scaled to no risk in the studied area. The order of potential ecological risk is As > Cd > Hg > Cu > Pb > Zn.
Keywords: ecological risk assessment, heavy metals, sediment, marsh, Shuangtai estuary
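The potential ecological risk index referred to above is normally Hakanson's; its formulas are given here as background (the abstract itself does not spell them out):

```latex
% Hakanson's potential ecological risk index (background; assumed method):
C_f^i = \frac{C^i}{C_n^i}, \qquad
E_r^i = T_r^i \, C_f^i, \qquad
\mathrm{RI} = \sum_i E_r^i
% C^i: measured concentration of metal i in sediment; C_n^i: background value;
% T_r^i: toxic-response factor of metal i; E_r^i: single-metal risk factor;
% RI: overall potential ecological risk of the site.
```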
Procedia PDF Downloads 347
28304 Optimizing Residential Housing Renovation Strategies at Territorial Scale: A Data Driven Approach and Insights from the French Context
Authors: Rit M., Girard R., Villot J., Thorel M.
Abstract:
In a scenario of extensive residential housing renovation, stakeholders need models that support decision-making through a deep understanding of the existing building stock and accurate energy demand simulations. To address this need, we have modified an optimization model using open data that enables the study of renovation strategies at both territorial and national scales. This approach provides (1) a definition of a strategy to simplify decision trees from theoretical combinations, (2) input to decision makers on real-world renovation constraints, (3) more reliable identification of energy-saving measures (changes in technology or behaviour), and (4) discrepancies between currently planned and actually achieved strategies. The main contribution of the studies described in this document is the geographic scale: all residential buildings in the areas of interest were modeled and simulated using national data (geometries and attributes). These buildings were then renovated, when necessary, in accordance with the environmental objectives, taking into account the constraints applicable to each territory (number of renovations per year) or at the national level (renovation of thermal deficiencies (Energy Performance Certificates F&G)). This differs from traditional approaches that focus only on a few buildings or archetypes. This model can also be used to analyze the evolution of a building stock as a whole, as it can take into account both the construction of new buildings and their demolition or sale. Using specific case studies of French territories, this paper highlights a significant discrepancy between the strategies currently advocated by decision-makers and those proposed by our optimization model. This discrepancy is particularly evident in critical metrics such as the relationship between the number of renovations per year and achievable climate targets or the financial support currently available to households and the remaining costs. In addition, users are free to seek optimizations for their building stock across a range of different metrics (e.g., financial, energy, environmental, or life cycle analysis). These results are a clear call to re-evaluate existing renovation strategies and take a more nuanced and customized approach. As the climate crisis moves inexorably forward, harnessing the potential of advanced technologies and data-driven methodologies is imperative.
Keywords: residential housing renovation, MILP, energy demand simulations, data-driven methodology
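The keywords name a MILP; the abstract does not state the formulation, so the following schematic is our illustrative reading only, with hypothetical symbols ($x_{b,m}$, $c_{b,m}$, $s_{b,m}$):

```latex
% Schematic renovation MILP (illustrative reading, not the authors' exact model).
% x_{b,m} = 1 if building b receives renovation package m:
\min_{x} \; \sum_{b}\sum_{m} c_{b,m}\,x_{b,m}
\quad \text{s.t.} \quad
\sum_{m} x_{b,m} \le 1 \;\; \forall b \;\;\text{(at most one package per building)},
\sum_{b}\sum_{m} x_{b,m} \le R_{\max} \;\;\text{(renovations allowed per year)},
\sum_{b}\sum_{m} s_{b,m}\,x_{b,m} \ge S_{\mathrm{target}} \;\;\text{(climate objective)},
\qquad x_{b,m} \in \{0,1\}
% c_{b,m}: renovation cost; s_{b,m}: energy saving; R_max encodes the
% territorial constraint, S_target the environmental objective.
```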
Procedia PDF Downloads 68
28303 Tectogenesis Around Kalaat Es Senan, Northwest of Tunisia: Structural, Geophysical and Gravimetric Study
Authors: Amira Rjiba, Mohamed Ghanmi, Tahar Aifa, Achref Boulares
Abstract:
This study, involving the interpretation of geological outcrop data (structures and lithostratigraphic columns) and subsurface structures (seismic and gravimetric data), helps us to identify and specify (i) the lithology of the sedimentary formations between the Aptian and the recent formations, (ii) how to differentiate the sedimentary formations from the salt-bearing Triassic, and (iii) the major structures shaped by the tectonic effects that affected the region during its geological evolution. Placing our study area in the context of Tunisia, located on the southern margin of the Tethys, the tectonic traces and the structural analysis conducted show that this area was subjected during the Triassic period to active rifting, which triggered extensional tectonic events in the Cretaceous and the Paleogene. Lithostratigraphic correlations between outcrops and seismic data, calibrated against six oil wells drilled in the region, have allowed us to better understand the structural complexity and the role of the different tectonic faults that contributed to the current configuration, marked by the present rifts. Indeed, three fault directions, NW-SE, NNW-SSE to N-S, and NE-SW to E-W, played a major role in the genesis of folds and open collapse ditches of NW-SE direction. These results were complemented by seismic reflection data to clarify the geometry of the southern and western areas of the Kalaa Khasba ditch. The eight seismic lines selected for this study allowed us to characterize the main structures, with isochron, contour and isovelocity maps of the Serdj horizon, which constitutes the main reservoir in the region. Line L2, calibrated by well 6, helped highlight the NW-SE compression that resulted in persistent unconformities widely identifiable in its lithostratigraphic column. The gravity survey confirmed the deep subsurface extension of most of the faults, whose activity appears long-lived. Gravimetry also reinforced the seismic interpretation, confirming, at well L2, that the SW and NE flanks of the ditch are two opposite faults tracing the boundaries of a NNW-SSE-trending graben whose sedimentation is of Mio-Pliocene and Quaternary age.
Keywords: graben, graben collapse, gravity, Kalat Es Senan, seismic, tectogenesis
Procedia PDF Downloads 367
28302 Surgical Hip Dislocation of Femoroacetabular Impingement: Survivorship and Functional Outcomes at 10 Years
Authors: L. Hoade, O. O. Onafowokan, K. Anderson, G. E. Bartlett, E. D. Fern, M. R. Norton, R. G. Middleton
Abstract:
Aims: Femoroacetabular impingement (FAI) was first recognised as a potential driver of hip pain at the turn of the last millennium. While there is an increasing trend towards surgical management of FAI by arthroscopic means, open surgical hip dislocation and debridement (SHD) remains the gold standard of care in terms of reported outcome measures (1). Long-term functional and survivorship outcomes of SHD as a treatment for FAI are yet to be sufficiently reported in the literature. This study sets out to help address this imbalance. Methods: We undertook a retrospective review of our institutional database for all patients who underwent SHD for FAI between January 2003 and December 2008. A total of 223 patients (241 hips) were identified and underwent a ten-year review with a standardised radiograph and a patient-reported outcome measures questionnaire. The primary outcome measure of interest was survivorship, defined as progression to total hip arthroplasty (THA). Negative predictive factors were analysed. Secondary outcome measures of interest were survivorship to further (non-arthroplasty) surgery, functional outcomes as reflected by patient-reported outcome measure (PROM) scores, and whether a learning curve could be identified. Results: The final cohort consisted of 131 females and 110 males, with a mean age of 34 years. There was an overall native hip joint survival rate of 85.4% at ten years. Those who underwent a THA were significantly older at initial surgery and had radiographic evidence of preoperative osteoarthritis and of pre- and post-operative acetabular undercoverage. In those who had not progressed to THA, the average Non-Arthritic Hip Score and Oxford Hip Score at ten-year follow-up were 72.3% and 36/48, respectively, and 84% still deemed their surgery worthwhile. A learning curve was found to exist that was predicated on case selection rather than surgical technique. Conclusion: This is only the second study to evaluate the long-term outcomes (beyond ten years) of SHD for FAI and the first outside the originating centre. Our results suggest that, with correct patient selection, this remains an operation with worthwhile outcomes at ten years. How the results of open surgery compare to those of arthroscopy remains to be answered. While these results precede the advent of collision software modelling tools, the data help set a benchmark for future comparison of other techniques' effectiveness at the ten-year mark.
Keywords: femoroacetabular impingement, hip pain, surgical hip dislocation, hip debridement
Procedia PDF Downloads 84
28301 Data-driven Decision-Making in Digital Entrepreneurship
Authors: Abeba Nigussie Turi, Xiangming Samuel Li
Abstract:
Data-driven business models are more typical for established businesses than for early-stage startups that strive to penetrate a market. This paper provides an extensive discussion of the principles of data analytics for early-stage digital entrepreneurial businesses. Here, we develop a data-driven decision-making (DDDM) framework that applies to startups prone to multifaceted barriers in the form of poor data access and technical and financial constraints, to name a few. The startup DDDM framework proposed in this paper is novel in its form, encompassing startup data analytics enablers and metrics that align with startups' business models, ranging from customer-centric product development to servitization, which is the future of modern digital entrepreneurship.
Keywords: startup data analytics, data-driven decision-making, data acquisition, data generation, digital entrepreneurship
Procedia PDF Downloads 328
28300 Digital Transformation and Environmental Disclosure in Industrial Firms: The Moderating Role of the Top Management Team
Authors: Yongxin Chen, Min Zhang
Abstract:
As industrial enterprises are the primary source of national pollution, environmental information disclosure is a crucial way for them to demonstrate to stakeholders the work they have done in fulfilling their environmental responsibilities and accepting social supervision. In the era of the digital economy, many companies, actively embracing the opportunities that come with digital transformation, have begun to apply digital technology to information collection and disclosure within the enterprise. However, less is known about the relationship between digital transformation and environmental disclosure. This study investigates how enterprise digital transformation affects environmental disclosure in 643 Chinese industrial companies, drawing on information processing theory. Intriguingly, the depth (size) and breadth (diversity) of environmental disclosure increase linearly with the rise in collection, processing, and analytical capabilities during the digital transformation process; however, the volume of data grows exponentially, leading to increasing marginal economic and environmental costs of utilizing, storing, and managing data. In our empirical findings, linearly increasing benefits and rising marginal costs create a unique inverted U-shaped relationship between the degree of digital transformation and environmental disclosure in the Chinese industrial sector. In addition, based on upper echelons theory, we propose that a top management team with high stability and strong managerial capabilities will invest more effort and expense in improving environmental disclosure quality, lowering the carbon footprint caused by digital technology, maintaining data security, and so on. In both of these contexts, the increasing marginal cost curves become steeper, weakening the inverted U-shaped slope between digital transformation and environmental disclosure.
Keywords: digital transformation, environmental disclosure, the top management team, information processing theory, upper echelon theory
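One way to formalize the inverted U described above is linear benefits minus convex data-handling costs; this is our illustration of the argument, not the paper's estimated model, and all symbols are hypothetical:

```latex
% Illustrative formalization of the inverted U (assumed, not the paper's model):
V(d) = \underbrace{\beta d}_{\text{disclosure benefits}}
     - \underbrace{c\,(e^{\gamma d}-1)}_{\text{data handling costs}},
\qquad
\frac{\partial V}{\partial d} = \beta - c\gamma e^{\gamma d} = 0
\;\Rightarrow\;
d^{*} = \frac{1}{\gamma}\ln\!\frac{\beta}{c\gamma}
% d: degree of digital transformation; V peaks at d*, producing the inverted U.
% A steeper cost curve (larger c or gamma) shifts d* left, mirroring the
% moderating role argued for the top management team.
```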
Procedia PDF Downloads 142
28299 Chaos Cryptography in Cloud Architectures with Lower Latency
Authors: Mohammad A. Alia
Abstract:
With the rapid evolution of internet applications, cloud computing has become one of today's hottest research areas due to its ability to reduce the costs associated with computing. The cloud thereby increases the flexibility and scalability of computing services on the internet. Cloud computing is internet-based computing built on shared resources and information that are dynamically delivered to consumers. As cloud computing shares resources via the open network, cloud outsourcing is vulnerable to attack. Therefore, this paper explores the data security of cloud computing by implementing chaotic cryptography. The proposed scenario develops a problem transformation technique that enables customers to secretly transform their information. This work proposes chaotic cryptographic algorithms to enhance the security of cloud computing accessibility. The proposed scenario is a secure, easy and straightforward process; the chaotic encryption and digital signature systems ensure its security. However, the choice of key size becomes crucial to prevent a brute force attack.
Keywords: chaos, cloud computing, security, cryptography
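A minimal didactic sketch in the spirit of the scenario above: a logistic-map keystream XORed with the data. This is not the paper's algorithm, and it is not secure for production use; it only illustrates how chaotic iteration can drive a stream cipher.

```python
import hashlib

def logistic_keystream(key: bytes, n: int, r: float = 3.99) -> bytes:
    # Derive an initial condition x0 in (0, 1) from the shared key.
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    x = (seed % (2**53 - 1) + 1) / 2**53
    out = bytearray()
    for _ in range(n):
        for _ in range(16):           # extra iterations decorrelate bytes
            x = r * x * (1.0 - x)     # logistic map, chaotic for r near 4
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_crypt(data: bytes, key: bytes) -> bytes:   # encryption == decryption
    ks = logistic_keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

ciphertext = xor_crypt(b"cloud record", b"shared secret")
assert xor_crypt(ciphertext, b"shared secret") == b"cloud record"
```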
Procedia PDF Downloads 345
28298 Anti-Anxiety Activity of Ethyl Acetate Extract of Flowers Nerium indicum
Authors: Deepak Suresh Mohale, Anil V. Chandewar
Abstract:
Anxiety is defined as an exaggerated feeling of apprehension, uncertainty and fear. Nerium indicum is a well-known ornamental and medicinal plant belonging to the family Apocynaceae. A wide spectrum of biological activities has been reported for the various constituents isolated from different parts of the plant. This study was conducted to investigate the antianxiety activity of the flower extract. Flowers were collected, dried in shade and coarsely powdered. The powdered material was extracted with ethyl acetate by maceration, and the extract obtained was subsequently dried in an oven at 40-50 °C. This extract was then tested for antianxiety activity at low and high doses using the elevated plus maze and the light and dark model. Rats showed increased open arm entries and time spent in the open arm of the elevated plus maze after treatment with low and high doses of Nerium indicum flower extract, compared to their respective control groups. In the light and dark model, light box entries and time spent in the light box increased with treatment with low and high doses of the extract, compared to the respective control groups. From these results it is concluded that the ethyl acetate extract of the flowers of Nerium indicum possesses antianxiety activity at low and high doses.
Keywords: antianxiety, anxiety, kaner, nerium indicum, social isolation
Procedia PDF Downloads 392
28297 Modeling of Timing in a Cyber Conflict to Inform Critical Infrastructure Defense
Authors: Brian Connett, Bryan O'Halloran
Abstract:
System assets within critical infrastructures were once seemingly safe from exploitation or attack by nefarious cyberspace actors. Now, critical infrastructure is a target, and the resources to exploit its cyber physical systems exist. These resources are characterized in terms of patience, stealth, replication-ability and extraordinary robustness. System owners are obligated to maintain a high level of protection measures. The difficulty lies in knowing when to fortify a critical infrastructure against an impending attack. Models currently exist that demonstrate the value of knowing the attacker's capabilities in the cyber realm and the strength of the target. The shortcoming of these models is that they are not designed to respond to the inherently fast timing of an attack, an impetus that can be derived from open-source reporting, common knowledge of exploits, and the physical architecture of the infrastructure. A useful model will inform system owners how to align infrastructure architecture in a manner that is responsive to the capability, willingness and timing of the attacker. This research group has used an existing theoretical model, which takes the point of view of an attacker who must decide what cyber resource to use and when to use it to exploit a system vulnerability, for estimating parameters and, through analysis, developing a decision tool for would-be target owners. The continuation of the research develops this model further by estimating the variable parameters. Understanding these parameter estimates will uniquely position the decision maker to posture defenses, having revealed the attacker's persistence and stealth. This research explores different approaches to improve on current attacker-defender models that focus on cyber threats.
Keywords: critical infrastructure, cyber physical systems, modeling, exploitation
Procedia PDF Downloads 192
28296 Description of a Structural Health Monitoring and Control System Using Open Building Information Modeling
Authors: Wahhaj Ahmed Farooqi, Bilal Ahmad, Sandra Maritza Zambrano Bernal
Abstract:
From the viewpoint of structural engineering, monitoring structural responses over time is of great importance with respect to recent developments in construction technologies. Recently, developments in advanced computing tools have enabled researchers to better execute structural health monitoring (SHM) and control systems. In the last decade, building information modeling (BIM) has substantially enhanced the workflow of planning and operating engineering structures. Typically, building information can be stored and exchanged via model files based on the Industry Foundation Classes (IFC) standard. In this study, a modeling approach for the semantic modeling of SHM and control systems is integrated into the BIM methodology using the IFC standard. For validation of the modeling approach, a laboratory test structure, a four-story shear frame structure, is modeled using a conventional BIM software tool. An IFC schema extension is applied to describe information related to the monitoring and control of a prototype SHM and control system installed on the laboratory test structure. The SHM and control system is described by a semantic model applying the Unified Modeling Language (UML). Subsequently, the semantic model is mapped into the IFC schema. The test structure is composed of four aluminum slabs, and the plate-to-column connections are fully fixed. In the center of the top story, a semi-active tuned liquid column damper (TLCD) is installed. The TLCD is used to reduce the effects of structural responses in the context of dynamic vibration and displacement. The wireless prototype SHM and control system is composed of wireless sensor nodes. For testing the SHM and control system, acceleration response is automatically recorded by the sensor nodes equipped with accelerometers and analyzed using embedded computing. As a result, SHM and control systems can be described within open BIM, and dynamic responses and damage information can be stored, documented, and exchanged on the formal basis of the IFC standard.
Keywords: structural health monitoring, open building information modeling, industry foundation classes, unified modeling language, semi-active tuned liquid column damper, nondestructive testing
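As a pointer for readers who want to inspect such monitoring entities in an IFC model, here is a minimal sketch using the open-source ifcopenshell toolkit; the file name is a hypothetical placeholder, and the study's own IFC schema extension is not reproduced here.

```python
import ifcopenshell  # open-source IFC parser (assumed installed)

# Open the BIM model and list sensor entities (IFC4 defines IfcSensor);
# the file name stands in for the shear-frame test structure model.
model = ifcopenshell.open("shear_frame_test_structure.ifc")
for sensor in model.by_type("IfcSensor"):
    # GlobalId and Name are standard attributes of IFC rooted entities.
    print(sensor.GlobalId, sensor.Name)
```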
Procedia PDF Downloads 151
28295 Marginal Productivity of Small Scale Yam and Cassava Farmers in Kogi State, Nigeria: Data Envelopment Analysis as a Complement
Authors: M. A. Ojo, O. A. Ojo, A. I. Odine, A. Ogaji
Abstract:
The study examined the marginal productivity of small-scale yam and cassava farmers in Kogi State, Nigeria. Data used for the study were obtained from a primary source using a multi-stage sampling technique, with structured questionnaires administered to 150 randomly selected yam and cassava farmers from three Local Government Areas of the State. Descriptive statistics, data envelopment analysis (DEA) and a Cobb-Douglas production function were used to analyze the data. The DEA result on the overall technical efficiency of the farmers showed that 40% of the sampled yam and cassava farmers in the study area were operating at the frontier and optimum level of production, with a mean technical efficiency of 1.00. This implies that 60% of the yam and cassava farmers in the study area can still improve their level of efficiency through better utilization of available resources, given the current state of technology. The results of the Cobb-Douglas analysis of factors affecting the output of yam and cassava farmers showed that labour, planting materials, fertilizer and capital inputs positively and significantly affected the output of the yam and cassava farmers in the study area. The study further revealed that yam and cassava farms in the study area operated under increasing returns to scale. The marginal productivity analysis further showed that relatively efficient farms were more marginally productive in resource utilization. This study also shows that estimating production functions without separating the farms into efficient and inefficient ones biases the parameter values obtained from such production functions. It is therefore recommended that yam and cassava farmers in the study area form cooperative societies so as to gain access to productive inputs that will enable them to expand. Also, since using a single-equation model for the production function produces biased parameter estimates, as confirmed above, farms should be decomposed into efficient and inefficient ones before production function estimation is done.
Keywords: marginal productivity, DEA, production function, Kogi state
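For reference, the log-linear Cobb-Douglas form typically estimated in such studies is shown below; the abstract names the functional form, while the exact regressors are our reading of the inputs listed above.

```latex
% Log-linear Cobb-Douglas production function (standard form):
\ln Y = \beta_0 + \beta_1 \ln L + \beta_2 \ln P + \beta_3 \ln F + \beta_4 \ln K + \varepsilon
% Y: farm output; L: labour; P: planting materials; F: fertilizer; K: capital.
% Returns to scale = beta_1 + ... + beta_4 (a sum > 1 matches the increasing
% returns reported). Marginal product of input X_i at the means:
MP_i = \beta_i \,\frac{\bar{Y}}{\bar{X}_i}
```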
Procedia PDF Downloads 482
28294 ICT for Smart Appliances: Current Technology and Identification of Future ICT Trend
Authors: Abubakar Uba Ibrahim, Ibrahim Haruna Shanono
Abstract:
Smart metering and demand response are gaining ground in industrial and residential applications, and smart appliances have received attention as a step towards achieving the smart home. The success of smart grid development relies on the successful implementation of Information and Communication Technology (ICT) in the power sector. Smart appliances have been under development, and many new contributions to their realization have been reported in the last few years. The role of ICT here is to capture data in real time, thereby allowing a bi-directional flow of information between the producing and utilization points; this leads the way to smart appliances, where home appliances can communicate among themselves and provide self-control (switching on and off) using the signal (information) obtained from the grid. This paper presents the background on ICT for smart appliances, paying particular attention to current technology and identifying future ICT trends for load monitoring, through which smart appliances can be achieved to facilitate an efficient smart home system that promotes demand response programs. The paper groups and reviews the recent contributions in order to establish the current state of the art and trends of the technology, so that the reader is provided with a comprehensive and insightful review of where ICT for smart appliances stands and is heading. The paper also presents a brief overview of communication types, and then narrows the discussion to load monitoring (Non-Intrusive Appliance Load Monitoring, NALM). Finally, some future trends and challenges in the further development of the ICT framework are discussed to motivate future contributions that address open problems and explore new possibilities.
Keywords: communication technology between appliances, demand response, load monitoring, smart appliances, smart grid
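A minimal sketch of event-based non-intrusive load monitoring as discussed above: appliance on/off events are detected as step changes in the aggregate power signal. The threshold and the toy signal are hypothetical.

```python
import numpy as np

def detect_events(power_w, threshold_w=60.0):
    """Return (index, delta_w) for each step change larger than the threshold."""
    deltas = np.diff(power_w)
    idx = np.where(np.abs(deltas) > threshold_w)[0]
    return [(int(i) + 1, float(deltas[i])) for i in idx]

# Toy aggregate signal: base load, then a kettle (+1800 W) switching on and off.
signal = np.array([120, 118, 121, 1920, 1915, 1918, 122, 119], dtype=float)
for i, d in detect_events(signal):
    state = "ON" if d > 0 else "OFF"
    print(f"sample {i}: {state} event, step of {d:+.0f} W")
# Matching step sizes against a library of appliance signatures identifies
# which appliance switched, enabling the self-control described above.
```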
Procedia PDF Downloads 613
28293 Intrusion Detection in SCADA Systems
Authors: Leandros A. Maglaras, Jianmin Jiang
Abstract:
The protection of national infrastructures from cyberattacks is one of the main issues for national and international security. The funded European Framework-7 (FP7) research project CockpitCI introduces intelligent intrusion detection, analysis and protection techniques for Critical Infrastructures (CI). The paradox is that CIs massively rely on the newest interconnected, and vulnerable, Information and Communication Technology (ICT), whilst the control equipment, i.e., legacy software and hardware, is typically old. Such a combination of factors may lead to very dangerous situations, exposing systems to a wide variety of attacks. To overcome such threats, the CockpitCI project combines machine learning techniques with ICT technologies to produce advanced intrusion detection, analysis and reaction tools that provide intelligence to field equipment. This will allow field equipment to make local decisions in order to self-identify and self-react to abnormal situations introduced by cyberattacks. In this paper, an intrusion detection module capable of detecting malicious network traffic in a Supervisory Control and Data Acquisition (SCADA) system is presented. Malicious data in a SCADA system disrupt its correct functioning and tamper with its normal operation. The One-Class Support Vector Machine (OCSVM) is an intrusion detection mechanism that does not need any labeled data for training, nor any information about the kind of anomaly to be expected during detection. This feature makes it ideal for processing SCADA environment data and automates SCADA performance monitoring. The OCSVM module developed is trained offline on network traces and detects anomalies in the system in real time. The module is part of an intrusion detection system (IDS) developed under the CockpitCI project and communicates with the other parts of the system by the exchange of IDMEF messages that carry information about the source of the incident, the time, and a classification of the alarm.
Keywords: cyber-security, SCADA systems, OCSVM, intrusion detection
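A minimal sketch of the OCSVM workflow described above, using scikit-learn: train on features of normal traffic only, then flag outliers. The four traffic features and their distributions are hypothetical placeholders, not CockpitCI's actual feature set.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Train on normal SCADA traffic features only (no labels, no attack examples).
rng = np.random.default_rng(42)
normal = rng.normal(loc=[50, 0.2, 100, 5], scale=[5, 0.05, 10, 1], size=(1000, 4))

ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.01)  # nu bounds the
ocsvm.fit(normal)                                          # training outlier rate

# Score live samples: +1 = normal, -1 = anomaly (raise an IDMEF alarm).
live = np.array([[52, 0.21, 98, 5.2],     # benign-looking sample
                 [50, 0.95, 400, 30]])    # anomalous traffic burst
print(ocsvm.predict(live))
```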
Procedia PDF Downloads 552
28292 Application of a Lighting Design Method Using Mean Room Surface Exitance
Authors: Antonello Durante, James Duff, Kevin Kelly
Abstract:
The visual needs of people in modern work-based buildings are changing. The self-illuminated screens of computers, televisions, tablets and smartphones have changed the relationship between people and the lit environment. In the past, lighting design practice was primarily based on providing uniform horizontal illuminance on the working plane, but this has failed to ensure good quality lit environments. Today's lighting standards continue to be set using a hundred-year-old approach that, at its core, considers the task illuminance of the utmost importance, with this task typically located on a horizontal plane. An alternative method focused on appearance has been proposed, as opposed to the traditional performance-based approach. Mean Room Surface Exitance (MRSE) and Target-Ambient Illuminance Ratio (TAIR) are two new metrics proposed to assess illumination adequacy in interiors. The hypothesis is that these factors will be superior to the existing, horizontal-illuminance-led metrics. For the past six years, research within the Dublin Institute of Technology has examined this, with a view to determining the suitability of this approach for application to general lighting practice. Since the start of this research, a number of key findings have been produced centering on how occupants react to various levels of MRSE. This paper provides a broad update on how this research has progressed. More specifically, this paper will: i) demonstrate how MRSE can be measured using HDR imaging technology; ii) illustrate how MRSE can be calculated using scripting and an open source lighting computation engine; iii) describe experimental results that demonstrate how occupants have reacted to various levels of MRSE within experimental office environments.
Keywords: illumination hierarchy (IH), mean room surface exitance (MRSE), perceived adequacy of illumination (PAI), target-ambient illumination ratio (TAIR)
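As background, the MRSE formula commonly cited in the literature on this appearance-based method is given below; we state it as an assumption about the quantity being computed, since the abstract itself does not reproduce it.

```latex
% Mean room surface exitance (commonly cited formulation; assumed here):
\mathrm{FRF} = \sum_{s} E_{ds}\, A_{s}\, \rho_{s}, \qquad
\mathrm{MRSE} = \frac{\mathrm{FRF}}{\sum_{s} A_{s}\,(1-\rho_{s})}
% E_ds: direct illuminance on surface s; A_s: its area; rho_s: its reflectance.
% FRF is the first reflected flux; the (1 - rho) term in the denominator
% accounts for the geometric series of interreflections.
```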
Procedia PDF Downloads 187
28291 Routing in IP/LEO Satellite Communication Systems: Past, Present and Future
Authors: Mohammed Hussein, Abualseoud Hanani
Abstract:
In a Low Earth Orbit (LEO) satellite constellation system, routing data from the source all the way to the destination constitutes a daunting challenge, because LEO satellite constellation resources are sparse and the high-speed movement of LEO satellites results in a highly dynamic network topology. This situation limits the applicability of traditional routing approaches that rely on exchanging topology information upon a change or upon the setup of a connection. Consequently, in recent years, many routing algorithms and implementation strategies for satellite constellation networks with Inter-Satellite Links (ISLs) have been proposed. In this article, we summarize and classify some of the most representative solutions according to their objectives, and discuss their advantages and disadvantages. Finally, with a look into the future, we present some of the new challenges and opportunities for LEO satellite constellations in general and routing protocols in particular.
Keywords: LEO satellite constellations, dynamic topology, IP routing, inter-satellite-links
Procedia PDF Downloads 381
28290 Assessing the Applicability of Kevin Lynch’s Framework of ‘the Image of the City’ in the Case of a Walled City of Jaipur
Authors: Jay Patel
Abstract:
This research investigates the ‘image’ of the city and asks whether this ‘image’ holds any significance that can be changed. Kevin Lynch, in the book ‘The Image of the City’, develops a framework that breaks down the city’s image into five physical elements. These elements (paths, edges, nodes, districts, and landmarks), according to Lynch, assess the legibility of urbanscapes; they emerged from his perception-based study in three different cities (Jersey City, Los Angeles, and Boston) in the USA. The aim of this research is to investigate whether Lynch’s framework can be applied within an Indian context or not; if so, what are the possibilities, and can the imageability of Indian cities be depicted through Lynch’s physical elements, or does it demand an extension of the framework by either adding or subtracting a physical attribute? For this research project, the walled city of Jaipur was selected, as it is considered one of the most forward-looking planned cities of its time in India. The other significant reason for choosing Jaipur was that it is a historically planned city with solid historical, touristic and local importance, allowing an opportunity to understand the application of Lynch’s elements to the city’s image. In other words, it provides an opportunity to examine how the disadvantages of a city’s implicit programme (its relics of bygone eras) can be converted into assets by improving the imageability of the city. To obtain data, a structured, semi-open-ended interview method was chosen. The reason for selecting this method was explicitly to gain qualitative data from the users rather than collecting quantitative data from closed-ended questions. This allowed an in-depth understanding and assessment of the applicability of Kevin Lynch’s framework, while identifying what needs to be added. The interviews conducted in Jaipur yielded varied inferences that differed from the expected learning outcomes, highlighting the need for an extension of Lynch’s physical elements to capture the city’s image. While analyzing the data, a few attributes were found that defined the image of Jaipur. These were categorized into two groups: physical aspects (street and arcade entities, natural features, temples and temporary/informal activities) and associational aspects (history, culture and tradition, mediums of help in wayfinding, and intangible aspects).
Keywords: imageability, Kevin Lynch, people’s perception, assessment, associational aspects, physical aspects
Procedia PDF Downloads 198
28289 A Study for Area-level Mosquito Abundance Prediction by Using Supervised Machine Learning Point-level Predictor
Authors: Theoktisti Makridou, Konstantinos Tsaprailis, George Arvanitakis, Charalampos Kontoes
Abstract:
In the literature, data-driven approaches for mosquito abundance prediction rely on supervised machine learning models trained with historical in-situ measurements. The drawback of this approach is that once the model is trained on point-level (specific x, y coordinate) measurements, its predictions again refer to the point level. These point-level predictions reduce the applicability of such solutions, since many early warning and mitigation applications need predictions at an area level, such as a municipality, village, etc. In this study, we apply a data-driven predictive model which relies on public, open satellite Earth Observation and geospatial data and is trained with historical point-level in-situ measurements of mosquito abundance. We then propose a methodology to extend a point-level predictive model to a broader, area-level prediction. Our methodology relies on randomly spatially sampling the area of interest (similar to a Poisson hard-core process), obtaining the EO and geomorphological information for each sample, making the point-wise prediction for each sample, and aggregating the predictions to represent the average mosquito abundance of the area. We quantify the performance of the transformation from point-level to area-level predictions, and we analyze it in order to understand which parameters have a positive or negative impact on it. The goal of this study is to propose a methodology that predicts the mosquito abundance of a given area by relying on point-level predictions, and to provide qualitative insights regarding the expected performance of the area-level prediction. We applied our methodology to historical data (of Culex pipiens) for two areas of interest (the Veneto region of Italy and Central Macedonia in Greece). In both cases, the results were consistent: the mean mosquito abundance of a given area can be estimated with accuracy similar to that of the point-level predictor, sometimes even better. The density of the samples used to represent an area has a positive effect on performance, in contrast to the raw number of sampling points, which, without the size of the area, is not informative about performance at all.
Keywords: mosquito abundance, supervised machine learning, culex pipiens, spatial sampling, west nile virus, earth observation data
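A minimal sketch of the aggregation methodology described above, assuming a rectangular bounding box for the area and a stand-in `predict_point` function (both hypothetical simplifications; the real pipeline looks up EO and geomorphological features per point):

```python
import numpy as np

def sample_points(bounds, n, min_dist, rng):
    # Rejection-sample n points inside (xmin, ymin, xmax, ymax), keeping a
    # minimum pairwise distance, akin to a Poisson hard-core process.
    xmin, ymin, xmax, ymax = bounds
    pts = []
    while len(pts) < n:
        p = rng.uniform([xmin, ymin], [xmax, ymax])
        if all(np.linalg.norm(p - q) >= min_dist for q in pts):
            pts.append(p)
    return np.array(pts)

def area_abundance(bounds, predict_point, n=50, min_dist=0.01, seed=0):
    # Run the point-level predictor at each sample and average the results
    # to estimate the mean mosquito abundance of the whole area.
    rng = np.random.default_rng(seed)
    pts = sample_points(bounds, n, min_dist, rng)
    return float(np.mean([predict_point(x, y) for x, y in pts]))

# Usage with a dummy point predictor (a real one would use EO features):
est = area_abundance((22.9, 40.5, 23.1, 40.7), lambda x, y: 10 + 5 * x)
print(est)
```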
Procedia PDF Downloads 147