Search results for: open information extraction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14754

12294 Automatic Method for Classification of Informative and Noninformative Images in Colonoscopy Video

Authors: Nidhal K. Azawi, John M. Gauch

Abstract:

Colorectal cancer is one of the leading causes of cancer death in the US and the world, which is why millions of colonoscopy examinations are performed annually. Unfortunately, noise, specular highlights, and motion artifacts corrupt many images in a typical colonoscopy exam. The goal of our research is to produce automated techniques to detect and correct or remove these noninformative images from colonoscopy videos, so physicians can focus their attention on informative images. In this research, we first automatically extract features from images. Then we use machine learning and deep neural networks to classify colonoscopy images as either informative or noninformative. Our results show that we achieve image classification accuracy between 92% and 98%. We also show how the removal of noninformative images, together with image alignment, can aid in the creation of image panoramas and other visualizations of colonoscopy images.
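
A minimal sketch of the feature-extraction-plus-classification step described above, assuming pre-extracted per-frame feature vectors and binary informative/noninformative labels; the placeholder data and the choice of a random-forest classifier are illustrative assumptions, not the authors' exact pipeline:

```python
# Illustrative sketch: classify colonoscopy frames as informative vs. noninformative.
# X stands in for already-extracted image features (e.g., blur, highlight ratio, edge density);
# the RandomForest choice and all values are placeholders, not the paper's model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((500, 16))          # placeholder feature vectors, one row per frame
y = rng.integers(0, 2, 500)        # 1 = informative, 0 = noninformative (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```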

Keywords: colonoscopy classification, feature extraction, image alignment, machine learning

Procedia PDF Downloads 249
12293 Production Sharing Contracts Transparency Simulation

Authors: Chariton Christou, David Cornwell

Abstract:

The Production Sharing Contract (PSC) is a contract type that is now widely used. The financial crisis left governments short of the resources needed to participate in field development, so more and more countries are introducing PSCs, under which companies have the capital and capability to develop a field in their own way. The main problem is the transparency of oil and gas companies, especially under PSCs, and how it can be achieved; this has been widely discussed, particularly in the U.K. What we suggest is a dynamic financial simulation supported by a flow meter installed in a pipeline, which records the production of each field every day. The metered production is the basic input of the simulation, which computes profit, costs, and related quantities from the flow-meter data, the terms of the contract, and the costs already paid. With these parameters, the simulation can present the information of a field (taxes, employees, R-factor) in real time. Through this simulation, the company shares some information with the government but not all of it: the government knows the taxes that should be paid and the corresponding sharing percentage, while all other information can remain confidential to the company. Furthermore, the oil company could control the R-factor by adjusting daily production to maximize its sharing percentage and, as a result, its profit. This idea aims to change the way governments 'control' oil companies and bring a transparency evolution to the industry. With the help of such a simulation, every country could work alongside the company and achieve better collaboration.
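
As a rough illustration of the kind of daily computation such a simulation would run, the sketch below updates an R-factor (cumulative contractor revenue over cumulative contractor cost) from metered production and looks up a contractor sharing percentage from a tier table; the tier values, prices, and all other figures are invented for illustration and are not taken from the paper:

```python
# Illustrative daily update of a PSC R-factor from metered production.
# R-factor = cumulative contractor revenue / cumulative contractor cost;
# the tier table and every number below are hypothetical.
def contractor_share(r_factor):
    tiers = [(1.0, 0.60), (1.5, 0.50), (2.0, 0.40)]  # (upper R-factor bound, contractor share)
    for upper, share in tiers:
        if r_factor < upper:
            return share
    return 0.30  # share once the R-factor exceeds the last tier

cum_revenue, cum_cost = 0.0, 0.0
daily_meter = [(12_000, 55.0, 180_000.0)]  # (barrels from flow meter, price $/bbl, daily cost $)
for barrels, price, cost in daily_meter:
    cum_cost += cost
    r = cum_revenue / cum_cost if cum_cost else 0.0
    share = contractor_share(r)
    cum_revenue += barrels * price * share
    print(f"R-factor={r:.2f}, contractor share={share:.0%}")
```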

Keywords: production sharing contracts, transparency, simulation

Procedia PDF Downloads 370
12292 In Vitro Studies on Antimicrobial Activities of Lactic Acid Bacteria Isolated from Fresh Fruits for Biocontrol of Pathogens

Authors: Okolie Pius Ifeanyi, Emerenini Emilymary Chima

Abstract:

Aims: The study investigated the diversity and identities of Lactic Acid Bacteria (LAB) isolated from different fresh fruits using molecular nested PCR analysis, and the efficacy of cell-free supernatants from these LAB for in vitro control of some tomato pathogens. Study Design: A nested PCR approach was used, employing universal 16S rRNA gene primers in the first-round PCR and LAB-specific primers in the second-round PCR, with the aim of generating nested PCR products specific to the LAB diversity present in the samples. The inhibitory potential of cell-free supernatants obtained from the molecularly characterized LAB isolates of fruit origin was investigated against some tomato phytopathogens using the agar-well method, with a view to developing biological control agents for some tomato disease-causing organisms. Methodology: Gram-positive, catalase-negative LAB strains were isolated from fresh fruits on de Man, Rogosa and Sharpe agar (Lab M) using the streaking method. Genomic DNA of the isolates was extracted with a commercial kit (Norgen Biotek, Canada). Standard methods were used for nested Polymerase Chain Reaction (PCR) amplification targeting the 16S rRNA gene with universal and LAB-specific primers, followed by agarose gel electrophoresis, purification, and sequencing of the generated nested PCR products (Macrogen Inc., USA). The partial sequences obtained were identified by BLAST searches against the non-redundant nucleotide database of the National Center for Biotechnology Information (NCBI). The antimicrobial activities of the characterized LAB against some tomato phytopathogenic bacteria (Xanthomonas campestris, Erwinia carotovora, and Pseudomonas syringae) were determined using the agar-well diffusion method. Results: The partial sequences obtained were deposited in the NCBI database. Based on the sequences, isolates were identified as Weissella cibaria (4, 18.18%), Weissella confusa (3, 13.64%), Leuconostoc paramesenteroides (1, 4.55%), Lactobacillus plantarum (8, 36.36%), Lactobacillus paraplantarum (1, 4.55%) and Lactobacillus pentosus (1, 4.55%). The cell-free supernatants of the fruit-derived LAB (Weissella cibaria, Weissella confusa, Leuconostoc paramesenteroides, Lactobacillus plantarum, Lactobacillus paraplantarum and Lactobacillus pentosus) inhibited these bacteria, creating clear zones of inhibition around the wells containing the supernatants of the above-mentioned strains. Conclusion: This study shows that LAB can be rapidly characterized to species level by nested PCR analysis of the isolates' genomic DNA using universal 16S rRNA primers and LAB-specific primers. Tomato disease-causing organisms can most likely be biologically controlled using extracts from LAB, which would reduce the potential hazard from the use of chemical herbicides on plants.

Keywords: nested PCR, molecular characterization, 16S rRNA gene, lactic acid bacteria

Procedia PDF Downloads 412
12291 Using Virtual Reality to Convey the Information of Food Supply Chain

Authors: Xinrong Li, Jiawei Dai

Abstract:

Food production, food safety, and the food supply chain pose a great challenge to human health and the environment, and different kinds of food carry different environmental costs; a healthy diet can therefore alleviate this problem to a certain extent. In this project, an online questionnaire was conducted to understand the purchase behaviour of consumers and their attitudes towards basic food information. The data show that the public's current consumption habits and ideas are not aligned with the long-term needs of sustainable development. To address the environmental problems caused by unbalanced public diets and the social problem of unequal food distribution, the purpose of this paper is to explore how the emerging medium of VR can be used to visualize food supply chain information and draw users' attention to the environmental cost of food. In this project, the food supply chains of imported and local cheese were compared side by side in a virtual reality environment, covering origin, transportation, sales, and other stages, which effectively helps users understand the difference between the two processes and their environmental costs. The experimental data also show that participants were more willing to choose food with a low environmental cost after experiencing the whole process.

Keywords: virtual reality, information design, food supply chain, environmental cost

Procedia PDF Downloads 95
12290 OpenFOAM Based Simulation of High Reynolds Number Separated Flows Using Bridging Method of Turbulence

Authors: Sagar Saroha, Sawan S. Sinha, Sunil Lakshmipathy

Abstract:

The Reynolds-averaged Navier-Stokes (RANS) model is the popular computational tool for prediction of turbulent flows. Being computationally less expensive than direct numerical simulation (DNS), RANS has received wide acceptance in industry and the research community. However, for high Reynolds number flows, the traditional RANS approach based on the Boussinesq hypothesis is unable to capture all the essential flow characteristics, and thus its performance is restricted in high Reynolds number flows of practical interest. RANS performance turns out to be inadequate in regimes like flow over curved surfaces, flows with rapid changes in the mean strain rate, duct flows involving secondary streamlines, and three-dimensional separated flows. In the recent decade, the partially averaged Navier-Stokes (PANS) methodology has gained acceptability among seamless bridging methods of turbulence, placed between DNS and RANS. The PANS methodology, being a scale-resolving bridging method, is inherently more suitable than RANS for simulating turbulent flows. The superior ability of the PANS method has been demonstrated for cases such as swirling flows, high-speed mixing environments, and high Reynolds number turbulent flows. In our work, we intend to evaluate PANS for separated turbulent flows past bluff bodies, which are of broad relevance to aerodynamic research and industrial applications. PANS equations, being derived from a base RANS model, continue to inherit the inadequacies of the parent RANS model based on the linear eddy-viscosity model (LEVM) closure. To enhance PANS' capabilities for simulating separated flows, the shortcomings of the LEVM closure need to be addressed. The inabilities of LEVMs have inspired the development of non-linear eddy viscosity models (NLEVM). To explore the potential improvement in PANS performance, in our study we evaluate the PANS behavior in conjunction with an NLEVM. Our work can be categorized into three significant steps: (i) extraction of a PANS version of the NLEVM from the RANS model, (ii) testing the model in a homogeneous turbulence environment, and (iii) application and evaluation of the model in the canonical case of a separated non-homogeneous flow field (flow past prismatic bodies and bodies of revolution at high Reynolds number). The PANS version of the NLEVM shall be derived and implemented in OpenFOAM, an open-source solver. The evaluation in homogeneous flows will comprise a study of the influence of the PANS filter-width control parameter on the turbulent stresses, analysis over typical velocity fields, and an asymptotic analysis of the Reynolds stress tensor. The non-homogeneous flow case will include the study of mean integrated quantities and various instantaneous flow field features, including wake structures. The performance of PANS + NLEVM shall be compared against the LEVM-based PANS and the LEVM-based RANS. This assessment will contribute to a significant improvement of the predictive ability of computational fluid dynamics (CFD) tools in massively separated turbulent flows past bluff bodies.
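
For background, the bridging character of PANS is usually expressed through the unresolved-to-total ratios of kinetic energy and dissipation. A commonly cited form of these filter parameters and the resulting modified dissipation-equation coefficient (stated here as general background on the method, not as a result of this paper) is:

```latex
f_k = \frac{k_u}{k}, \qquad
f_\varepsilon = \frac{\varepsilon_u}{\varepsilon}, \qquad
\nu_u = C_\mu \frac{k_u^{2}}{\varepsilon_u}, \qquad
C_{\varepsilon 2}^{*} = C_{\varepsilon 1} + \frac{f_k}{f_\varepsilon}\left(C_{\varepsilon 2} - C_{\varepsilon 1}\right),
```

where f_k = 1 recovers the parent RANS model and f_k tending to zero approaches DNS; f_k is the filter-width control parameter whose influence on the turbulent stresses is to be studied in the homogeneous-flow evaluation.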

Keywords: bridging methods of turbulence, high Re-CFD, non-linear PANS, separated turbulent flows

Procedia PDF Downloads 144
12289 Omni: A Data Science Platform to Evaluate the Performance of a LoRaWAN Network

Authors: Emanuele A. Solagna, Ricardo S. Tozetto, Roberto dos S. Rabello

Abstract:

Nowadays, physical processes are becoming digitized through the evolution of communication, sensing, and storage technologies, which promotes the development of smart cities. This evolution has generated multiple challenges related to the generation of big data and the active participation of electronic devices in society. Devices can send information that is captured and processed over large areas, but there is no guarantee that all of the obtained data will be effectively stored and correctly persisted, because, depending on the technology used, there are parameters with a huge influence on the complete delivery of information. This article characterizes the project, currently under development, of a platform that uses data science to evaluate the performance and effectiveness of an industrial network implementing LoRaWAN technology, considering the configuration of its main parameters and relating these parameters to information loss.
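
A toy sketch of the core metric such a platform would compute: grouping transmitted and received frames by configuration parameters and reporting the information-loss rate per configuration. The column names (spreading factor, bandwidth, frame counts) and the data are assumptions for illustration, not the Omni platform's actual schema:

```python
# Hypothetical sketch: estimate information loss per LoRaWAN parameter configuration.
# Column names and values are placeholders, not the platform's real data model.
import pandas as pd

df = pd.DataFrame({
    "spreading_factor": [7, 7, 10, 10, 12],
    "bandwidth_khz":    [125, 125, 125, 250, 125],
    "frames_sent":      [1000, 1000, 800, 600, 500],
    "frames_received":  [985, 978, 720, 540, 380],
})
summary = df.groupby(["spreading_factor", "bandwidth_khz"]).sum()
summary["loss_rate"] = 1 - summary["frames_received"] / summary["frames_sent"]
print(summary["loss_rate"])  # information-loss rate per configuration
```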

Keywords: Internet of Things, LoRa, LoRaWAN, smart cities

Procedia PDF Downloads 142
12288 Low-Impact Development Strategies Assessment for Urban Design

Authors: Y. S. Lin, H. L. Lin

Abstract:

Climate change and the land-use change caused by urban expansion increase the frequency of urban flooding. To mitigate the increase in runoff volume, low-impact development (LID) is a green approach that reduces the area of impervious surface and manages stormwater at the source with decentralized micro-scale control measures. However, the current benefit assessment and practical application of LID in Taiwan still tend to focus on development plans at the community and building-site scales. In urban design, site-based moisture-holding capacity has been the common index for evaluating the effectiveness of LID, which ignores the diversity and complexity of urban built environments, such as different densities, positive and negative spaces, and building volumes. Such inflexible regulations are not only difficult for most developed areas to implement, but are also unsuitable for the full range of built environment types, bringing little benefit to some of them. To strengthen the link between LID and urban design and reduce runoff in coping with urban flooding, this research considers the characteristics of different types of built environments when developing LID strategies. Built environments are classified by cluster analysis based on density measures such as Ground Space Index (GSI), Floor Space Index (FSI), number of floors (L), and Open Space Ratio (OSR), and their impervious surface rates and runoff volumes are analyzed. Flood situations are simulated with a quasi-two-dimensional floodplain flow model, and the flood-mitigation effectiveness of different types of built environments under different low-impact development strategies is evaluated. The resulting information can be implemented more precisely in urban design and helps to enact LID regulations that are better suited to each type of built environment.
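
A compact sketch of the classification step, clustering built-environment samples on the four density measures named above (GSI, FSI, L, OSR); the k-means choice, the number of clusters, and the sample values are illustrative assumptions rather than the study's actual data:

```python
# Illustrative clustering of built-environment types on density measures.
# Sample values and k=3 are placeholders, not the study's data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# columns: GSI (ground coverage), FSI (floor space index), L (floors), OSR (open space ratio)
X = np.array([
    [0.55, 2.2,  4, 0.20],
    [0.30, 3.0, 10, 0.23],
    [0.70, 1.4,  2, 0.21],
    [0.25, 0.8,  3, 0.94],
    [0.60, 4.8,  8, 0.08],
])
X_std = StandardScaler().fit_transform(X)           # put the four measures on a common scale
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_std)
print(labels)  # cluster id per built-environment sample
```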

Keywords: low-impact development, urban design, flooding, density measures

Procedia PDF Downloads 328
12287 Development and Implementation of a Curvature-Dependent Force Correction Algorithm for the Planning of Force-Controlled Robotic Grinding

Authors: Aiman Alshare, Sahar Qaadan

Abstract:

A curvature-dependent force correction algorithm for planning a force-controlled grinding process with off-line programming flexibility is designed for an ABB industrial robot, in order to avoid manual interfacing during the process. The machining path utilizes a spline curve fitted to the CAD data of the workpiece. The fitted spline has second-order continuity to assure path smoothness. The implemented algorithm computes uniform forces normal to the grinding surface of the workpiece by constructing a curvature path in spatial coordinates using the spline method.
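
A minimal sketch of the geometric core of such an algorithm: fit a C²-continuous cubic spline to points taken from the CAD path, evaluate the curvature along it, and scale a nominal normal force by a curvature-dependent correction. The correction law, path points, and constants below are placeholders and are not the authors' formula:

```python
# Sketch: C2 cubic-spline fit of a planar machining path plus a curvature-dependent
# force correction. The correction law F = F0 / (1 + c*kappa) is a placeholder.
import numpy as np
from scipy.interpolate import CubicSpline

t = np.linspace(0, 1, 8)                                        # parameter along CAD path points
pts = np.column_stack([np.cos(np.pi * t), np.sin(np.pi * t)])   # placeholder workpiece contour
sx, sy = CubicSpline(t, pts[:, 0]), CubicSpline(t, pts[:, 1])

tt = np.linspace(0, 1, 200)
dx, dy = sx(tt, 1), sy(tt, 1)
ddx, ddy = sx(tt, 2), sy(tt, 2)
kappa = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5    # planar curvature

F0, c = 20.0, 0.5                                               # nominal normal force [N], tuning constant
force = F0 / (1.0 + c * kappa)                                  # reduce commanded force on tight curvature
print(force.min(), force.max())
```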

Keywords: ABB industrial robot, grinding process, offline programming, CAD data extraction, force correction algorithm

Procedia PDF Downloads 359
12286 Unified Coordinate System Approach for Swarm Search Algorithms in Global Information Deficit Environments

Authors: Rohit Dey, Sailendra Karra

Abstract:

This paper aims at solving the problem of multi-target searching in a Global Positioning System (GPS) denied environment using swarm robots with limited sensing and communication abilities. Typically, existing swarm-based search algorithms rely on the presence of a global coordinate system (i.e., GPS) shared by the entire swarm, which, in turn, limits their application in real-world scenarios. This can be attributed to the fact that robots in a swarm need to share information among themselves regarding their locations and the signals from targets to decide their future course of action, but this information is only meaningful when they all share the same coordinate frame. The paper addresses this issue by eliminating the search algorithm's dependency on a predetermined global coordinate frame through the unification of the relative coordinate frames of individual robots when they are within communication range, thereby making the system more robust in real scenarios. Our algorithm assumes that all robots in the swarm are equipped with range and bearing sensors and have limited sensing range and communication abilities. Initially, every robot maintains its own relative coordinate frame and follows a Lévy-walk random exploration until it comes into range of other robots. When two or more robots are within communication range, they share sensor information and their locations with respect to their own coordinate frames, based on which we unify their coordinate frames. They can then share information about the areas already explored, information about the surroundings, and target signals from their locations to make decisions about their future movement based on the search algorithm. During the exploration, there can be several small groups of robots, each with its own coordinate system, but eventually all robots are expected to converge to one global coordinate frame in which they can communicate information about the exploration area following swarm search techniques. Using the proposed method, swarm-based search algorithms can work in a real-world scenario without GPS or any initial information about the size and shape of the environment. Initial simulation results show that our modified Particle Swarm Optimization (PSO), running without global information, still achieves results comparable to basic PSO working with GPS. In the full paper, we plan to compare different strategies for unifying the coordinate system and to implement them in other bio-inspired algorithms for GPS-denied environments.
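
A small 2-D sketch of the frame-unification step: when robot B comes within range of robot A, mutual range-and-bearing measurements are enough to express B's pose, and everything B has mapped, in A's frame. The two-robot setup, variable names, and values are illustrative:

```python
# Sketch: unify robot B's local frame into robot A's frame from mutual range/bearing
# measurements in 2-D. All values are illustrative.
import numpy as np

r = 3.0                     # range from A to B, measured by A
beta_A = np.radians(40.0)   # bearing of B as seen from A, expressed in A's frame
beta_B = np.radians(160.0)  # bearing of A as seen from B, expressed in B's frame

# The direction B->A is beta_A + pi in A's frame and beta_B in B's frame,
# so B's frame is rotated by theta relative to A's frame.
theta = beta_A + np.pi - beta_B
p_B = r * np.array([np.cos(beta_A), np.sin(beta_A)])   # B's position in A's frame
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

q_in_B = np.array([1.0, 0.5])   # e.g., a target-signal location B has stored locally
q_in_A = p_B + R @ q_in_B       # the same point expressed in A's (now shared) frame
print(q_in_A)
```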

Keywords: bio-inspired search algorithms, decentralized control, GPS denied environment, swarm robotics, target searching, unifying coordinate systems

Procedia PDF Downloads 133
12285 The Effect of Peripheral Fatigue and Visual Feedback on Postural Control and Strength in Obese People

Authors: Elham Azimzadeh, Saeedeh Sepehri, Hamidollah Hassanlouei

Abstract:

Obesity is associated with postural instability, which might influence the quality of daily life and could be considered a potential factor for falls in obese people. Fat mass, especially in the abdominal area, may increase body sway, and loss of visual feedback may induce larger postural sway in obese people. Moreover, muscle fatigue may impair the work capacity of skeletal muscle and alter joint proprioception. The purpose of this study was therefore to investigate the effect of physical fatigue and visual feedback on body sway and lower-extremity strength in obese people. Twelve obese (4 female, 8 male; BMI > 30 kg/m2) and 12 normal-weight (4 female, 8 male; BMI 20-25 kg/m2) subjects aged 37-47 years participated in this study. The postural stability test on the Biodex balance system was used to characterize postural control along the anterior-posterior (AP) and mediolateral (ML) directions in eyes-open and eyes-closed conditions, and the maximal voluntary contraction (MVC) of the knee extensors and flexors was measured before and after a high-intensity exhausting exercise protocol on an ergometer bike to confirm the presence of fatigue. Results indicated that the obese group demonstrated significantly greater body sway in all indices (ML, AP, overall) compared with the normal-weight group (eyes open). When visual feedback was eliminated, fatigue impaired balance in the overall and AP indices in both groups; ML sway was higher only in the obese group after fatigue in the eyes-closed condition. Maximal voluntary contraction of the knee extensors was impaired in the fatigued normal-weight group, whereas there was no significant impairment in knee flexor MVC in either group. According to these findings, peripheral fatigue was associated with altered postural control in upright standing when the eyes were closed, and mechanoreceptors of the feet may be less able to estimate the position of the body's center of mass (COM) over the base of support in the absence of visual feedback. This suggests that the overall capability of the postural control system during upright standing, especially in the ML direction, could be lower due to fatigue in obese individuals and could be a predictor of future falls.

Keywords: maximal voluntary contraction, obesity, peripheral fatigue, postural control, visual feedback

Procedia PDF Downloads 89
12284 Improving Law Enforcement Strategies Through Geographic Information Systems: A Spatio-Temporal Analysis of Antisocial Activities in Móstoles (2022)

Authors: Daniel Suarez Alonso

Abstract:

This study focuses on the alternatives that the implementation of Geographic Information Systems offers to police institutions. Providing operational police commanders with effective and efficient tools, and with the analytical capacity to reduce criminal opportunities, must be a priority. Given the intimate connection of crimes and infractions to their environment, law enforcement institutions must respond proactively to the changing circumstances of anti-norm behaviors. To this end, the spatial distribution of antisocial activities in the city of Móstoles was analyzed, seeking to identify the spatio-temporal patterns in which they occur so that their commission can be anticipated through the planning of dynamic preventive strategies. The application of GIS offers alternative analytical approaches to the different problems that underlie the development of life in society, focusing resources on the places with the highest concentration of incidents.
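
A toy illustration of the kind of spatio-temporal aggregation such an analysis relies on: binning incident records into grid cells and hours of the day to surface where and when incidents concentrate. The columns, coordinates, cell size, and records are invented for illustration only:

```python
# Toy spatio-temporal aggregation of incident records into grid cells and hours.
# Coordinates, cell size, and records are invented placeholders.
import pandas as pd

incidents = pd.DataFrame({
    "x":    [100.0, 102.0, 430.0, 431.0, 433.0],
    "y":    [200.0, 205.0, 610.0, 612.0, 608.0],
    "hour": [23, 23, 18, 19, 18],
})
cell = 50.0  # grid-cell size in map units
incidents["cell_x"] = (incidents["x"] // cell).astype(int)
incidents["cell_y"] = (incidents["y"] // cell).astype(int)
hotspots = (incidents.groupby(["cell_x", "cell_y", "hour"])
                     .size()
                     .sort_values(ascending=False))
print(hotspots.head())  # densest cell/hour combinations first
```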

Keywords: data analysis, police organizations, police prevention, geographic information systems

Procedia PDF Downloads 41
12283 Evaluating Performance of an Anomaly Detection Module with Artificial Neural Network Implementation

Authors: Edward Guillén, Jhordany Rodriguez, Rafael Páez

Abstract:

Anomaly detection techniques have focused on two main components: data extraction and selection, and the analysis performed over the obtained data. The goal of this paper is to analyze the influence that each of these components has on system performance by evaluating detection over network scenarios with different setups. The independent variables are the number of system inputs, the way the inputs are encoded, and the complexity of the analysis techniques. For the analysis, several artificial neural network approaches are implemented with different numbers of layers. The obtained results show the influence that each of these variables has on system performance.
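
A minimal sketch of the experimental knob described above: the same classifier evaluated with different numbers of hidden layers over encoded network-traffic features. The placeholder data, layer sizes, and the use of scikit-learn's MLP are illustrative assumptions, not the paper's exact setup:

```python
# Sketch: compare ANN depth for an anomaly-detection module.
# Placeholder data stands in for encoded network-traffic features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((600, 20))      # encoded inputs (placeholder)
y = rng.integers(0, 2, 600)    # 1 = anomalous, 0 = normal (placeholder labels)

for layers in [(32,), (32, 16), (64, 32, 16)]:
    clf = MLPClassifier(hidden_layer_sizes=layers, max_iter=500, random_state=0)
    score = cross_val_score(clf, X, y, cv=3).mean()
    print(layers, f"mean accuracy = {score:.3f}")
```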

Keywords: network intrusion detection, machine learning, artificial neural network, anomaly detection module

Procedia PDF Downloads 336
12282 A Strategic Performance Control System for Municipal Organization

Authors: Emin Gundogar, Aysegul Yilmaz

Abstract:

Strategic performance control is a significant procedure in management, and there are various methods to improve it. This study introduces an information system developed to score performance for municipal management. The application of the system is illustrated with examples of municipal processes.

Keywords: management information system, municipal management, performance control

Procedia PDF Downloads 468
12281 Effectiveness of Centromedullary Fixation by Metaizeau Technique in Challenging Pediatric Fractures

Authors: Mohammad Arshad Ikram

Abstract:

We report three cases of challenging fractures in children treated by intramedullary fixation using the Metaizeau method, achieving anatomical reduction with excellent clinical results. Jean-Paul Metaizeau described centromedullary fixation of the radial neck using K-wires in 1980. Radial neck fractures are uncommon in children, and treatment of severely displaced fractures is always challenging. Closed reduction techniques are more popular than open reduction due to the lower risk of complications, and the Metaizeau technique of closed reduction with centromedullary pinning is a commonly preferred method of treatment. We present two cases with severely displaced radial neck fractures treated by this method that achieved sound union; an anatomical position of the radial head and full function were observed two months after surgery. Proximal humerus fractures are another uncommon injury in children, accounting for less than 5% of all pediatric fractures. Most of these injuries occur through the growth plate because of its relative weakness: Salter-Harris type I is commonly seen in the younger age group, whereas types II and III occur in older children and adolescents. In contrast to adults, traumatic glenohumeral dislocation is an infrequently observed condition among children, and the combination of a proximal humerus fracture and glenohumeral dislocation is extremely rare, occurring in less than 2% of the pediatric population. The management of this injury is always challenging; treatment options range from closed reduction with or without internal fixation to open reduction with internal fixation. Children who had closed reduction with centromedullary fixation by the Metaizeau method showed excellent results, with the return of full shoulder movement in a short time and without complications. We present the case of a child with anterior dislocation of the shoulder associated with a completely displaced proximal humerus metaphyseal fracture. The fracture was managed by closed reduction and then fixation with two centromedullary K-wires using the Metaizeau method, achieving anatomical reduction of the fracture and the dislocation. This method of treatment enabled us to achieve excellent radiological and clinical results in a short time.

Keywords: glenohumeral, Metaizeau method, pediatric fractures, radial neck

Procedia PDF Downloads 100
12280 Recognition of Cursive Arabic Handwritten Text Using Embedded Training Based on Hidden Markov Models (HMMs)

Authors: Rabi Mouhcine, Amrouch Mustapha, Mahani Zouhir, Mammass Driss

Abstract:

In this paper, we present a system for offline recognition of cursive Arabic handwritten text based on Hidden Markov Models (HMMs). The system is analytical, works without explicit segmentation, and uses embedded training to build and enhance the character models. Feature extraction, preceded by baseline estimation, produces statistical and geometric features that integrate both the peculiarities of the text and the pixel-distribution characteristics of the word image. These features are modelled using hidden Markov models and trained by embedded training. Experiments on images from the benchmark IFN/ENIT database show that the proposed system improves recognition.
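
A compact sketch of the modelling idea: one Gaussian HMM per character class trained on sequences of frame-level feature vectors, with recognition by picking the model with the highest likelihood. The hmmlearn library, the feature dimensionality, and the data are stand-ins; the paper's embedded training over whole words is more involved than this per-class illustration:

```python
# Sketch: one Gaussian HMM per character class over frame-wise feature vectors.
# hmmlearn is a stand-in; data and dimensions are placeholders, and real embedded
# training re-estimates character models from whole-word transcriptions.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
models = {}
for char in ["alef", "ba"]:                                          # placeholder character classes
    seqs = [rng.random((rng.integers(8, 15), 12)) for _ in range(20)]  # 12-D frame features per sequence
    X = np.concatenate(seqs)
    lengths = [len(s) for s in seqs]
    m = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
    models[char] = m.fit(X, lengths)

test = rng.random((10, 12))                                          # an unseen feature sequence
print(max(models, key=lambda c: models[c].score(test)))              # most likely character class
```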

Keywords: recognition, handwriting, Arabic text, HMMs, embedded training

Procedia PDF Downloads 347
12279 Cigarette Smoke Detection Based on YOLOV3

Authors: Wei Li, Tuo Yang

Abstract:

In order to satisfy the real-time and accuracy requirements of cigarette smoke detection in complex scenes, a cigarette smoke detection technique based on the combination of deep learning and color features is proposed. First, based on the color features of cigarette smoke, the suspicious smoke areas in the image are extracted. Second, considering detection efficiency and the problem of network overfitting, a network model for cigarette smoke detection is designed according to the YOLOV3 algorithm to reduce the false detection rate. The experimental results show that the method is feasible and effective, with a cigarette smoke detection accuracy of up to 99.13%, which satisfies the requirements of real-time cigarette smoke detection in complex scenes.
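
A minimal sketch of the first stage only: extracting suspicious smoke regions by their greyish, low-saturation appearance in HSV space before passing candidate regions to the detector. The threshold values, area cut-off, and input path are illustrative assumptions, not the paper's tuned parameters:

```python
# Sketch: color-based extraction of candidate smoke regions (stage 1 of the pipeline).
# HSV thresholds, the area cut-off, and the input path are illustrative placeholders.
import cv2
import numpy as np

frame = cv2.imread("frame.jpg")                 # placeholder input image path
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
# smoke tends to be low-saturation and mid-to-high brightness
lower = np.array([0, 0, 120])
upper = np.array([180, 60, 255])
mask = cv2.inRange(hsv, lower, upper)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
print(len(candidates), "candidate regions to pass to the YOLOV3 detector")
```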

Keywords: deep learning, computer vision, cigarette smoke detection, YOLOV3, color feature extraction

Procedia PDF Downloads 80
12278 The Role of Social Media in Growing Small and Medium Enterprises: An Empirical Study in Jordan

Authors: Hanady Al-Zagheer

Abstract:

The purpose of this paper is to examine the role of social media (Facebook) in growing small and medium enterprises in Jordan. Today's developments in information technologies are dazzling: using them yields competitive advantages, decreases costs, saves time, and enables obtaining and sharing information, and different types of usage have emerged within these technologies. Small and medium enterprises have grown rapidly in recent years and continue to grow. Jordanian women have played a large role in the growth of entrepreneurship and have made an impact on household economics; virtual storefronts have allowed these women to balance roles assigned by tradition and culture while becoming successful providers. For a small business with a limited public relations and advertising budget, Facebook can be a cost-effective way to promote services because opening an account is free. However, this can work against the business if the page is not maintained: a Facebook page without frequent updates can destroy brand value and image. According to a 2009 Computerworld article by Lisa Hoover, having a Facebook page that looks abandoned is worse than having no page at all, so a business might need to hire someone or pay an employee to keep its Facebook page updated.

Keywords: social media, small and medium enterprises, Jordan

Procedia PDF Downloads 320
12277 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing

Authors: Rowan P. Martnishn

Abstract:

During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables: historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours' worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to clean and format all the data. First, a basic spelling and grammar check was applied, along with a Python script for normalized formatting, which used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, in which words frequently mistranscribed during transcription were recorded and replaced throughout all other documents. Then, to remove banter and side comments, the transcripts were split into paragraphs (separated by change of speaker), and all paragraphs with fewer than 300 characters were removed. Second, a keyword extractor, a form of natural language processing in which significant words in a document are selected, was run on each paragraph of every interview, and every proper noun was put into a data structure corresponding to its respective interview. A Bidirectional and Auto-Regressive Transformer (B.A.R.T.) summary model was then applied to each paragraph that included any of the proper nouns selected from the interview. At this stage, the material to review had been reduced from about 60 hours' worth of data to 20. The data was further processed through light manual observation: summaries that fit the criteria of the proposed deliverable were selected, along with their locations within the document, narrowing the data down to about 5 hours' worth of processing. The qualitative researchers were then able to find 8 more connections in addition to the previous 4, exceeding the minimum quota of 3 required to satisfy the grant. Major findings of the study and the subsequent curation of this methodology raised a conceptual point crucial to working with qualitative data of this magnitude. In the use of artificial intelligence there is a general trade-off between a model's breadth of knowledge and its specificity: if the model is too general, the user risks leaving out important data; if the tool is too specific, it has not seen enough data to be useful. This methodology proposes a solution to that trade-off. The data is never altered beyond grammatical and spelling checks; instead, the important information is marked, creating an indicator of where the significant data is without compromising its purity. The data is also chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data is harmed, and qualitative experts can review the raw data instead of highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form.
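
A condensed sketch of the pipeline described above: split a transcript into paragraphs, drop those under 300 characters, flag paragraphs containing proper nouns, and summarize those with a BART model. spaCy, the facebook/bart-large-cnn checkpoint, the double-newline paragraph split, and the file path are stand-ins for the authors' exact tooling:

```python
# Condensed sketch of the transcript-reduction pipeline. spaCy and the
# facebook/bart-large-cnn checkpoint are stand-ins, not necessarily the authors' tools.
import spacy
from transformers import pipeline

nlp = spacy.load("en_core_web_sm")
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

transcript = open("interview_01.txt", encoding="utf-8").read()          # placeholder path
paragraphs = [p.strip() for p in transcript.split("\n\n") if len(p.strip()) >= 300]

for p in paragraphs:
    doc = nlp(p)
    proper_nouns = {t.text for t in doc if t.pos_ == "PROPN"}
    if proper_nouns:                                                     # keep paragraphs that mention key names
        summary = summarizer(p, max_length=60, min_length=15, do_sample=False)
        print(sorted(proper_nouns), "->", summary[0]["summary_text"])
```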

Keywords: B.A.R.T. model, keyword extractor, natural language processing, qualitative coding

Procedia PDF Downloads 24
12276 Integration of the Battery Passport into the eFTI Platform to Improve Digital Data Exchange in the Context of Battery Transport

Authors: Max Plotnikov, Arkadius Schier

Abstract:

To counteract climate change, the European Commission adopted the European Green Deal (EGD) in 2019. Some of its main objectives are climate neutrality by 2050, decarbonization, sustainable mobility, and the shift from a linear economy to a circular economy in the European Union. The mobility transition envisages, among other things, the switch from classic internal combustion vehicles to electromobility. These goals are therefore accompanied by increased demand for lithium-ion batteries (LIBs) and the associated logistics. This inevitably gives rise to challenges that need to be addressed: depending on whether a LIB is transported by road, rail, air, or sea, there are different regulatory frameworks in the European Union that the relevant players in the value chain must adhere to. LIBs are classified as Dangerous Goods Class 9, and against this backdrop, various restrictions must be observed by the different actors when transporting them. Currently, the exchange of information between the actors in the value chain is almost entirely paper-based; particularly in the transport of dangerous goods, this often leads to transport delays or incorrect data, and the exchange of information with the authorities is especially essential in this context. A solution for the digital exchange of information is currently being developed: electronic freight transport information (eFTI) enables fast and secure exchange of information between the players in the freight transport process, and this concept is to be used within the supply chain from 2025. Another initiative that is expected to improve the monitoring of LIBs in this context is the battery passport. In July 2023, the latest battery regulation was published in the Official Journal of the European Union. The battery passport gives different actors static as well as dynamic information about a battery depending on their access rights, including master data such as battery weight or battery category and information on the state of health or the number of negative events the battery has experienced. The integration of the battery passport with the eFTI platform will be investigated for synergy effects that benefit the actors involved in battery transport.

Keywords: battery logistics, battery passport, data sharing, eFTI, sustainability

Procedia PDF Downloads 73
12275 Using Daily Light Integral Concept to Construct the Ecological Plant Design Strategy of Urban Landscape

Authors: Chuang-Hung Lin, Cheng-Yuan Hsu, Jia-Yan Lin

Abstract:

Adopting a greenery approach on architectural bases is an indispensable strategy for improving ecological habitats, decreasing the heat-island effect, purifying air quality, and relieving surface runoff as well as noise pollution, all in an attempt to achieve a sustainable environment. What plant design can achieve in terms of visual quality and carbon dioxide fixation depends on whether greenery is used appropriately according to the nature of the architectural base. To achieve this goal, architects and landscape architects need to be provided with sufficient local references. Current greenery studies focus mainly on the urban heat-island effect at large scales, and most architects still rely on people with years of expertise for the selection and disposition of planting at the microclimate scale. Therefore, environmental design, which integrates science and aesthetics, requires fundamental research on landscape environment technology as distinct from building environment technology. By doing so, we can create mutual benefits between green buildings and the environment. This issue is extremely important for the greening design of the bases of green buildings in cities and of various open spaces. The purpose of this study is to establish plant selection and allocation strategies under different levels of building shading. Initially, taking the shading of sunlight on the greening bases as the starting point, the effects of the shade produced by different building types on greening strategies were analyzed. Then, by measuring PAR (photosynthetically active radiation), the relative DLI (daily light integral) was calculated and a DLI map was established in order to evaluate the effects of building shading on the established environmental greening, thereby serving as a reference for plant selection and allocation. The results are to be applied in the evaluation of the environmental greening of green buildings and in establishing a 'right plant, right place' design strategy of multi-level ecological greening for application in urban design and landscape design development, as well as greening criteria that feed back to eco-city green buildings.
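
For readers unfamiliar with the metric, the conversion from measured PAR to DLI is a straightforward integration of the photosynthetic photon flux density (PPFD) over the photoperiod; the second form below is the usual constant-PPFD simplification:

```latex
\mathrm{DLI}\;[\mathrm{mol\,m^{-2}\,d^{-1}}]
  \;=\; \frac{1}{10^{6}} \int_{\text{photoperiod}} \mathrm{PPFD}(t)\,[\mu\mathrm{mol\,m^{-2}\,s^{-1}}]\,dt
  \;\approx\; \frac{\overline{\mathrm{PPFD}} \times t_{\text{daylight}}\,[\mathrm{s}]}{10^{6}}
```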

Keywords: daily light integral, plant design, urban open space

Procedia PDF Downloads 504
12274 Dependence of Autoignition Delay Period on Equivalence Ratio for i-Octane, Primary Reference Fuel

Authors: Sunil Verma

Abstract:

In today's world, non-renewable energy sources are depleting quickly, so there is a need to produce efficient and unconventional engines that minimize the use of fuel. In addition, many fatal accidents happen every year during the extraction, distillation, transportation, and storage of fuel, and the usual reason for explosions of gaseous fuels is unwanted autoignition. The autoignition characteristics of a fuel must therefore be studied in order to build efficient engines and to avoid accidents. This work studies the autoignition delay characteristics of iso-octane using a rapid compression machine and explains the dependence of the ignition delay on the equivalence ratio for lean to rich mixtures. The equivalence ratio is varied from 0.3 to 1.2.
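
For context, the equivalence ratio varied in the study is defined in the usual way:

```latex
\phi \;=\; \frac{(m_{\mathrm{fuel}}/m_{\mathrm{air}})_{\mathrm{actual}}}{(m_{\mathrm{fuel}}/m_{\mathrm{air}})_{\mathrm{stoichiometric}}},
\qquad \phi < 1 \text{ (lean)}, \quad \phi = 1 \text{ (stoichiometric)}, \quad \phi > 1 \text{ (rich)}.
```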

Keywords: autoignition, iso-octane, combustion, rapid compression machine, equivalence ratio, ignition delay

Procedia PDF Downloads 441
12273 Introducing, Testing, and Evaluating a Unified JavaScript Framework for Professional Online Studies

Authors: Caspar Goeke, Holger Finger, Dorena Diekamp, Peter König

Abstract:

Online-based research has recently gained increasing attention from various fields of research in the cognitive sciences. Technological advances in the form of online crowdsourcing (Amazon Mechanical Turk), open data repositories (Open Science Framework), and online analysis (IPython notebook) offer rich possibilities to improve, validate, and speed up research. However, until today there is no cross-platform integration of these subsystems, and the implementation of online studies still suffers from complex requirements (server infrastructure, database programming, security considerations, etc.). Here we propose and test a new JavaScript framework that enables researchers to conduct any kind of behavioral research in the browser without the need to program a single line of code. In particular, our framework offers the possibility to manipulate and combine the experimental stimuli via a graphical editor, directly in the browser. Moreover, we included an action-event system that can be used to handle user interactions, interactively change stimulus properties, or store participants' responses. Besides traditional recordings such as reaction time and mouse and keyboard presses, the tool offers webcam-based eye- and face-tracking. On top of these features, our framework also takes care of participant recruitment via crowdsourcing platforms such as Amazon Mechanical Turk. Furthermore, the built-in Google Translate functionality ensures automatic translation of the experimental content, so that thousands of participants from different cultures and nationalities can be recruited literally within hours. Finally, the recorded data can be visualized and cleaned online and then exported into the desired formats (csv, xls, sav, mat) for statistical analysis; alternatively, the data can be analyzed online within our framework using the integrated IPython notebook. The framework was designed such that studies can be used interchangeably between researchers. This not only supports the idea of open data repositories but also makes it possible to share and reuse experimental designs and analyses, improving the validity of the paradigms; in particular, sharing and integrating experimental designs and analyses will lead to increased consistency of experimental paradigms. To demonstrate the functionality of the framework, we present the results of a pilot study in the field of spatial navigation that was conducted using the framework. Specifically, we recruited over 2000 subjects with various cultural backgrounds and analyzed performance differences as a function of culture, gender, and age. Overall, our results demonstrate a strong influence of cultural factors on spatial cognition. Such an influence has not been reported before and would not have been possible to show without the massive amount of data collected via our framework; in fact, these findings shed new light on cultural differences in spatial navigation. As a consequence, we conclude that our new framework offers a wide range of advantages for online research and constitutes a methodological innovation by which new insights can be revealed on the basis of massive data collection.

Keywords: cultural differences, crowdsourcing, JavaScript framework, methodological innovation, online data collection, online study, spatial cognition

Procedia PDF Downloads 250
12272 A Cost-Effective Approach to Develop Mid-Size Enterprise Software Adopting the Waterfall Model

Authors: Mohammad Nehal Hasnine, Md Kamrul Hasan Chayon, Md Mobasswer Rahman

Abstract:

Organizational tendencies towards computer-based information processing have been observed noticeably in third-world countries, where many enterprises are taking major initiatives towards a computerized working environment because of the massive benefits of computer-based information processing. However, designing and developing information resource management software for small and mid-size enterprises under tight budgets and strict deadlines is always challenging for software engineers. Therefore, we introduce an approach to designing mid-size enterprise software in a cost-effective way using the Waterfall model, one of the Software Development Life Cycle (SDLC) models. To fulfill the research objectives, we developed a mid-size enterprise application named "BSK Management System" that assists enterprise software clients with information resource management and performs complex organizational tasks. The Waterfall model phases have been applied to ensure that all functions, user requirements, strategic goals, and objectives are met. In addition, a Rich Picture, Structured English, and a Data Dictionary have been implemented and investigated properly in an engineering manner. Furthermore, an assessment survey with 20 participants was conducted to investigate the usability and performance of the proposed software. The survey results indicated that our system features simple interfaces, easy operation and maintenance, quick processing, and reliable and accurate transactions.

Keywords: end-user application development, enterprise software design, information resource management, usability

Procedia PDF Downloads 434
12271 Building Information Modelling Implementation in the Lifecycle of Sustainable Buildings

Authors: Scarlet Alejandra Romano, Joni Kareco

Abstract:

The three pillars of sustainability (social, economic, and environmental) are relevant concepts for the Architecture, Engineering, and Construction (AEC) industry because of the increase in international agreements and guidelines related to this topic in recent years. Considering these three pillars, the AEC industry faces important challenges, for instance, decreasing carbon emissions (environmental challenge), designing sustainable spaces for people (social challenge), and improving the technology of the field to reduce costs and environmental problems (economic and environmental challenges). One alternative to overcome these challenges is Building Information Modelling (BIM) because, according to several authors, this technology improves the performance of sustainable buildings across all of their lifecycle phases. The main objective of this paper is to explore and analyse the current advantages and disadvantages of BIM implementation in the lifecycle of sustainable buildings, considering the three pillars of sustainability as analysis parameters. The methodology established to achieve this objective is exploratory-descriptive, using the literature review technique. The partial results illustrate that, despite BIM's disadvantages and the lack of information about its social sustainability advantages, this software represents a significant opportunity to improve the three sustainability pillars of sustainable buildings.

Keywords: building information modelling, building lifecycle analysis, sustainability, sustainable buildings

Procedia PDF Downloads 181
12270 CMOS Solid-State Nanopore DNA System-Level Sequencing Techniques Enhancement

Authors: Syed Islam, Yiyun Huang, Sebastian Magierowski, Ebrahim Ghafar-Zadeh

Abstract:

This paper presents system-level enhancements of CMOS solid-state nanopore techniques for speeding up next-generation molecular recording and for high-throughput channels. The discussion also considers the optimum number of base-pair (bp) measurements through the channel as playing an important role in enhancing potential read accuracy. Effective power consumption estimation offered a suitable range of multi-channel configurations. A statistical nanopore bp extraction model could contribute to higher read accuracy with longer read lengths (read length > 200). Nanopore ionic current switching with a Time Multiplexing (TM) based multichannel readout system contributed hardware savings.

Keywords: DNA, nanopore, amplifier, ADC, multichannel

Procedia PDF Downloads 450
12269 A Comparative Study between Different Techniques of Off-Page and On-Page Search Engine Optimization

Authors: Ahmed Ishtiaq, Maeeda Khalid, Umair Sajjad

Abstract:

In the fast-moving world, information is the key to success, and work becomes easier when information is easily available. The Internet is the biggest collection and source of information nowadays, and as the data on the Internet increases every single day, it becomes difficult to find the required data. Everyone wants their website at the top of the search results. This can be achieved by applying SEO techniques inside or outside an application, corresponding to the two types of SEO: on-page (onsite) and off-page (offsite). SEO is an abbreviation of Search Engine Optimization, a set of techniques and methods to increase the number of users of a website on the World Wide Web or to raise the website's rank in search engine indexing. In this paper, we compare different techniques of on-page and off-page SEO, suggest many things that should be changed inside and outside the web page, and identify some of the most powerful elements and techniques that search engines consider in both types of SEO in order to gain a high ranking on search engines.

Keywords: auto-suggestion, search engine optimization, SEO, query, web mining, web crawler

Procedia PDF Downloads 145
12268 Automatic Detection of Suicidal Behaviors Using an RGB-D Camera: Azure Kinect

Authors: Maha Jazouli

Abstract:

Suicide is one of the most important causes of death in the prison environment, both in Canada and internationally. Rates of suicide attempts and self-harm have been rising in recent years, with hanging being the most frequent method. The objective of this article is to propose a method to automatically detect suicidal behaviors in real time. We present a gesture recognition system that consists of three modules: model-based movement tracking, feature extraction, and gesture recognition using machine learning algorithms (MLA). Our proposed system gives satisfactory results. This smart video surveillance system can assist the staff responsible for the safety and health of inmates by alerting them when suicidal behavior is detected, which helps reduce mortality rates and save lives.
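
A minimal sketch of the third module: an SVM trained on feature vectors derived from tracked skeleton data, in line with the SVM keyword below. The feature layout, labels, and data are placeholders, not the deployed system:

```python
# Sketch of the gesture-recognition module: SVM over skeleton-derived features.
# Feature dimensionality, labels, and data are placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(2)
X = rng.random((400, 30))      # e.g., joint angles/velocities per time window (placeholder)
y = rng.integers(0, 2, 400)    # 1 = suicidal-behavior gesture, 0 = other (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```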

Keywords: suicide detection, Azure Kinect, RGB-D camera, SVM, machine learning, gesture recognition

Procedia PDF Downloads 181
12267 User Selections on Social Network Applications

Authors: C. C. Liang

Abstract:

MSN used to be the most popular application for communicating within social networks, but Facebook chat is now the most popular. Facebook and MSN have similar characteristics, including usefulness, ease of use, and a similar function, namely the exchange of information with friends, and Facebook outperforms MSN in these areas. However, the adoption of Facebook and the abandonment of MSN have occurred for other reasons: functions can be improved, but users' willingness to use an application does not depend on functionality alone. Flow has been established as crucial to users' adoption of cyber applications and has been shown to affect users' adoption of software applications. If users experience flow when using a software application, they will enjoy using it frequently and may even switch their preferred application from an old one to the new one. However, no investigation has examined choice behavior related to switching from MSN to Facebook based on a consideration of flow experiences and functions. This investigation discusses the flow experiences and functions of social-networking applications. Flow experience is found to affect perceived ease of use and perceived usefulness; perceived ease of use influences information exchange with friends and perceived usefulness; information exchange influences perceived usefulness, but information exchange has no effect on flow experience.

Keywords: consumer behavior, social media, technology acceptance model, flow experience

Procedia PDF Downloads 352
12266 Impact of Water Storage Structures on Groundwater Recharge in Jeloula Basin, Central Tunisia

Authors: I. Farid, K. Zouari

Abstract:

An attempt has been made to examine the effect of water storage structures on groundwater recharge in a semi-arid agroclimatic setting in the Jeloula Basin (Central Tunisia). In this area, surface water in rivers is seasonal, and groundwater is therefore the perennial source of water supply for domestic and agricultural purposes. Three pumped storage water power plants (PSWPP) have been built to increase the overall water availability in the basin and support the agricultural livelihoods of rural smallholders. The scale and geographical dispersion of these multiple lakes restrict the understanding of these coupled human-water systems and the identification of adequate strategies to support riparian farmers. In the present study, hydrochemical and isotopic tools were combined to gain insight into the processes controlling mineralization and recharge conditions in the investigated aquifer system. The study showed a slight increase in the groundwater level, especially after the artificial recharge operations, and a decline when water volumes fall during drought periods. Chemical data indicate that the main sources of salinity in the waters are related to water-rock interactions. Stable isotope data from the groundwater samples indicated recharge by modern rainfall. The surface water samples collected from the PSWPP are affected by significant evaporation and reveal large seasonal variations, which could be controlled by the water-volume changes in the open surface reservoirs and by the meteorological conditions during evaporation, condensation, and precipitation. The geochemical information is consistent with the isotopic results and illustrates that the chemical and isotopic signatures of reservoir waters differ clearly from those of groundwaters. These data confirm that the contribution of the artificial recharge operations from the PSWPP is very limited.

Keywords: Jeloula basin, recharge, hydrochemistry, isotopes

Procedia PDF Downloads 146
12265 Unsupervised Assistive and Adaptive Intelligent Agent in a Smart Environment

Authors: Sebastião Pais, João Casal, Ricardo Ponciano, Sérgio Lorenço

Abstract:

The adaptation paradigm is a basic defining feature of pervasive computing systems. Adaptation systems must work efficiently in a smart environment while providing information relevant to the user-system interaction. The key objective is to deduce the information needed as the user's information needs change; relying on fixed operational models would therefore be inappropriate. This paper presents a study on developing an Intelligent Personal Assistant to assist the user in interacting with their Smart Environment. We propose an Unsupervised and Language-Independent Adaptation through an Intelligent Speech Interface and a set of Knowledge Acquisition methods, namely Semantic Similarity and Unsupervised Learning.
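
As a rough illustration of the symmetric word-similarity idea listed in the keywords, the sketch below scores word pairs by cosine similarity over co-occurrence counts gathered from a tiny corpus; the corpus, window size, and word pair are placeholders for illustration only and do not reproduce the paper's association measures:

```python
# Toy symmetric word similarity: cosine over co-occurrence vectors from a tiny corpus.
# Corpus, window size, and the queried word pair are placeholders.
from collections import defaultdict
import math

corpus = ["turn on the kitchen light", "switch off the kitchen lamp",
          "turn on the living room light"]
window = 2
cooc = defaultdict(lambda: defaultdict(int))
for sent in corpus:
    words = sent.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if i != j:
                cooc[w][words[j]] += 1

def cosine(a, b):
    num = sum(cooc[a][k] * cooc[b][k] for k in cooc[a])
    den = math.sqrt(sum(v * v for v in cooc[a].values())) * \
          math.sqrt(sum(v * v for v in cooc[b].values()))
    return num / den if den else 0.0

print(cosine("light", "lamp"))   # symmetric: cosine(a, b) == cosine(b, a)
```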

Keywords: intelligent personal assistants, intelligent speech interface, unsupervised learning, language-independent, knowledge acquisition, association measures, symmetric word similarities, attributional word similarities

Procedia PDF Downloads 552