Search results for: and time domain reflectometry technique

23240 Omni-Modeler: Dynamic Learning for Pedestrian Redetection

Authors: Michael Karnes, Alper Yilmaz

Abstract:

This paper presents the application of the Omni-Modeler to pedestrian redetection. The pedestrian redetection task creates several challenges when applying deep neural networks (DNNs): the variety of pedestrian appearance with camera position, the variety of environmental conditions, and the specificity required to recognize one pedestrian from another. DNNs require significant training sets and are not easily adapted to changes in class appearance or changes in the set of classes held in their knowledge domain. Pedestrian redetection requires an algorithm that can actively manage its knowledge domain as individuals move in and out of the scene, as well as learn individual appearances from a few frames of a video. The Omni-Modeler is a dynamically learning few-shot visual recognition algorithm developed for tasks with limited training data availability. The Omni-Modeler adapts the knowledge domain of pre-trained deep neural networks to novel concepts with a calculated localized language encoder. The Omni-Modeler knowledge domain is generated by creating a dynamic dictionary of concept definitions, which is directly updatable as new information becomes available. Query images are identified through nearest-neighbor comparison to the learned object definitions. The study presented in this paper evaluates its performance in re-identifying individuals as they move through a scene in both single-camera and multi-camera tracking applications. The results demonstrate that the Omni-Modeler shows potential for cross-camera pedestrian redetection and is highly effective for single-camera redetection, with 93% accuracy across 30 individuals using 64 example images per individual.
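
As a point of reference, the dynamic dictionary and nearest-neighbor matching described above can be pictured with a minimal sketch such as the following; the `DynamicDictionary` class and the assumption that an external encoder supplies embedding vectors are illustrative, not the authors' Omni-Modeler implementation.

```python
import numpy as np

# Illustrative sketch: a dynamic dictionary of per-person embeddings,
# queried by nearest-neighbor comparison (not the authors' Omni-Modeler code).
class DynamicDictionary:
    def __init__(self):
        self.defs = {}  # person_id -> list of embedding vectors

    def add_examples(self, person_id, embeddings):
        # Update (or create) a concept definition as new frames arrive.
        self.defs.setdefault(person_id, []).extend(embeddings)

    def remove(self, person_id):
        # Drop an identity that has left the scene.
        self.defs.pop(person_id, None)

    def query(self, embedding):
        # Return the identity whose stored examples lie closest to the query.
        best_id, best_dist = None, np.inf
        for pid, examples in self.defs.items():
            d = min(np.linalg.norm(embedding - e) for e in examples)
            if d < best_dist:
                best_id, best_dist = pid, d
        return best_id, best_dist
```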

Keywords: dynamic learning, few-shot learning, pedestrian redetection, visual recognition

Procedia PDF Downloads 54
23239 Analysis of Factors Influencing the Response Time of an Aspirating Gaseous Agent Concentration Detection Method

Authors: Yu Guan, Song Lu, Wei Yuan, Heping Zhang

Abstract:

Gas fire extinguishing systems are widely used because of their cleanliness and efficiency. Since the spray is affected by many factors, such as convection and obstacles in the jetting region, detecting the concentration distribution in the jetting area is indispensable for evaluating system effectiveness; this is commonly achieved by the aspirating concentration detection technique. During concentration measurement, the response time of the detector is a very important parameter, especially for fire-extinguishing systems with rapid gas dispersion. A long response time will not only lead to underestimating the concentration but also prolong the apparent change of concentration with time. It is therefore necessary to analyze the factors influencing the response time. In this paper, an aspirating concentration detection method based on a small critical nozzle and a laminar flowmeter is introduced. Because the response time is mainly governed by the gas transport process from the sampling site to the sensor, the effects of sampling pipe size, gas flow rate, and gas concentration on the response time were analyzed. Bromotrifluoromethane (CBrF₃) was used as the test gas. The effect of the sampling tube was investigated with lengths of 1, 2, 3, 4, and 5 m (5 mm pipe diameter) and with pipe diameters of 3, 4, 5, 6, and 8 mm (3 m length). The effect of gas flow rate was analyzed by changing the throat diameter of the critical nozzle among 0.5, 0.682, 0.75, 0.8, 0.84, and 0.88 mm. The effect of gas concentration on response time was studied over the concentration range of 0-25%. The results showed that the response time increased with both the length and the diameter of the sampling pipe; the effect of length was linear, while the effect of diameter was exponential. It was also found that the response time decreased markedly as the throat diameter of the critical nozzle increased; in other words, gas flow rate has a great influence on response time. As for the effect of gas concentration, the response time increased with increasing CBrF₃ concentration, with a decreasing slope of the curve.

Keywords: aspirating concentration detection, fire extinguishing, gaseous agent, response time

Procedia PDF Downloads 249
23238 Adsorbed Probe Molecules on Surface for Analyzing the Properties of Cu/SnO2 Supported Catalysts

Authors: Neha Thakur, Pravin S. More

Abstract:

The interaction of CO, H₂, and LPG with Cu-dosed SnO₂ catalysts was studied by means of Fourier transform infrared spectroscopy (FTIR). With increasing Cu loading, pronounced and progressive red shifts of the C–O stretching frequency associated with molecular CO adsorbed on the Cu/SnO₂ component were observed. This decrease in ν(CO) correlates with the enhancement of CO dissociation at higher temperatures on Cu-promoted SnO₂ catalysts under conditions where clean Cu is almost ineffective. In conclusion, the capability of our technique is discussed, and an approach for enhancing its sensitivity is proposed.

Keywords: FTIR, spectroscopic, dissociation, ν(CO)

Procedia PDF Downloads 281
23237 Fuzzy Semantic Annotation of Web Resources

Authors: Sahar Maâlej Dammak, Anis Jedidi, Rafik Bouaziz

Abstract:

With the great mass of pages managed throughout the world, and especially with the advent of the Web, their manual annotation is impossible. We focus, in this paper, on the semi-automatic annotation of web pages. We propose an approach and a framework for semantic annotation of web pages, entitled "Querying Web". Our solution enhances the initial annotation produced by the "Semantic Radar" plug-in on web resources with annotations drawn from an enriched domain ontology. The concepts returned by Semantic Radar may be connected to several terms of the ontology, and these connections may be uncertain. We therefore represent annotations as possibility distributions and use the hierarchy defined in the ontology to compute degrees of possibility. Our aim is to automate the fuzzy semantic annotation of web resources.
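
For illustration only, the following sketch shows one way hierarchy distances could be turned into a possibility distribution over candidate concepts; the decay rule, function names, and example concepts are assumptions, not the authors' computation.

```python
# Illustrative sketch only: deriving a possibility degree for each candidate
# ontology concept from its distance to a detected term in the concept
# hierarchy. The decay rule and names are assumptions, not the authors' method.
def possibility_distribution(candidates, hierarchy_distance, decay=0.5):
    """candidates: list of concept names; hierarchy_distance: dict concept -> int."""
    # Closer concepts in the hierarchy get higher possibility; the best one is 1.0.
    raw = {c: decay ** hierarchy_distance[c] for c in candidates}
    top = max(raw.values())
    return {c: v / top for c, v in raw.items()}  # normalized so max possibility = 1

# Example: a Semantic Radar term matching three ontology concepts
print(possibility_distribution(["Person", "Agent", "Organization"],
                               {"Person": 0, "Agent": 1, "Organization": 2}))
```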

Keywords: fuzzy semantic annotation, semantic web, domain ontologies, querying web

Procedia PDF Downloads 349
23236 X-Ray Diffraction Technique as a Means for Degradation Assessment of Welded Joints

Authors: Jaroslav Fiala, Jaroslav Kaiser, Pavel Zlabek, Vaclav Mentl

Abstract:

The X-ray diffraction technique has been recognized as a useful tool for assessing the degree of material degradation after long-time service. In many industrial applications, materials are subjected to degradation of mechanical properties as a result of real service conditions. The assessment of the remnant lifetime of components and structures is commonly based on correlated procedures including numerous destructive, non-destructive, and mathematical techniques that should guarantee a reasonably precise assessment of the current damage extent of the materials in question and of the remnant lifetime. This paper summarizes the results of an experimental programme concentrated on the degradation of mechanical properties of welded components. Steel and Al-alloy test specimens of base metal, specimens containing welds, and simple weldments were fatigue loaded at room temperature to obtain Woehler S-N curves. The X-ray diffraction technique was applied to assess the degree of material degradation resulting from cyclic loading.

Keywords: fatigue loading, material degradation, steels, AL-alloys, X-ray diffraction

Procedia PDF Downloads 416
23235 Solar Wind Turbulence and the Role of Circularly Polarized Dispersive Alfvén Wave

Authors: Swati Sharma, R. P. Sharma

Abstract:

We study the nonlinear evolution of the parallel-propagating finite-frequency Alfvén wave (also called the dispersive Alfvén wave or Hall-MHD wave) in the solar wind regime when a perpendicularly propagating magnetosonic wave is present in the background. The finite-frequency Alfvén wave behaves differently from the usual non-dispersive Alfvén wave. To study the nonlinear processes (such as filamentation) taking place in solar regions such as the solar wind, the dynamical equations of both waves are derived. Numerical simulation, using a finite difference method in the time domain and a pseudo-spectral method in the spatial domain, is then performed to analyze the transient evolution of these waves. The power spectrum of the dispersive Alfvén wave is also investigated; it shows the distribution of the magnetic field intensity of the dispersive Alfvén wave over wave number. For the DAW, the spectrum shows a steepening for scales larger than the proton inertial length. This means that the wave energy is transferred to the solar wind particles as the wave reaches higher wave numbers. This steepening of the power spectrum can be explained by the finite frequency of the Alfvén wave. The obtained results are consistent with observations made by the CLUSTER spacecraft.
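
The kind of power-spectrum diagnostic described above can be sketched as follows; the FFT-based estimate and the synthetic test field are illustrative and do not reproduce the authors' simulation.

```python
import numpy as np

# Minimal sketch: magnetic-field power spectrum over wave number via FFT,
# the kind of diagnostic described above (illustrative, not the authors' code).
def power_spectrum(b_field, dx):
    """b_field: 1D array of magnetic field samples; dx: grid spacing."""
    n = b_field.size
    bk = np.fft.rfft(b_field)                 # spatial Fourier transform
    k = 2 * np.pi * np.fft.rfftfreq(n, d=dx)  # wave numbers
    power = np.abs(bk) ** 2 / n               # |B(k)|^2 distribution
    return k, power

# Example: a noisy sinusoidal field on a periodic grid
x = np.linspace(0, 2 * np.pi, 1024, endpoint=False)
b = np.sin(8 * x) + 0.1 * np.random.randn(x.size)
k, p = power_spectrum(b, dx=x[1] - x[0])
```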

Keywords: solar wind, turbulence, dispersive alfven wave

Procedia PDF Downloads 582
23234 A Study of Lean Principles Implementation in the Libyan Healthcare and Industry Sectors

Authors: Nasser M. Amaitik, Ngwan F. Elsagzli

Abstract:

Lean techniques are very important in both the service and industrial fields. Lean is defined as an effective tool to eliminate wastes, where waste is anything that does not add value to the end product. Some wastes can be avoided, while others are unavoidable for many reasons. The present study aims to apply the principles of lean in two different sectors: healthcare and industry. Two case studies were selected for the experimental work: the first was Al-Jalaa Hospital, while the second was the Technical Company of Aluminum Sections in Benghazi, Libya. In both case studies, the Value Stream Map (VSM) of the current state was constructed. The proposed plans were implemented by merging or eliminating procedures or processes. The results obtained from both case studies showed improvements in capacity, idle time, and utilized time.

Keywords: healthcare service delivery, idle time, lean principles, utilized time, value stream mapping, wastes

Procedia PDF Downloads 267
23233 Applying Dictogloss Technique to Improve Auditory Learners’ Writing Skills in Second Language Learning

Authors: Aji Budi Rinekso

Abstract:

There are some common problems that students often face in writing. These problems relate to the macro and micro skills of writing, such as incorrect spelling, inappropriate diction, grammatical errors, random ideas, and irrelevant supporting sentences. A teaching technique that can solve these problems is therefore needed. Dictogloss is a teaching technique that involves listening practice, making it suitable for students with an auditory learning style. The dictogloss technique comprises four basic steps: (1) warm-up, (2) dictation, (3) reconstruction, and (4) analysis and correction. In the warm-up, students find out about the topic and do some preparatory vocabulary work. In the dictation, students listen to a text read at normal speed by the teacher; the text is read twice, with students only listening during the first reading and listening and taking notes during the second. In the reconstruction, students discuss the information from the text read by the teacher and start to write a text. Finally, in analysis and correction, students check their writing and revise it. Dictogloss offers some advantages for improving writing skills: through it, students can address their problems in both macro and micro skills. Easier idea generation and better writing mechanics are among the benefits of dictogloss.

Keywords: auditory learners, writing skills, dictogloss technique, second language learning

Procedia PDF Downloads 125
23232 Design of an Ultra High Frequency Rectifier for Wireless Power Systems by Using Finite-Difference Time-Domain

Authors: Felipe M. de Freitas, Ícaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende

Abstract:

There is dispersed energy in radio frequencies (RF) that can be reused to power electronic circuits such as sensors, actuators, and identification devices, among other systems, without wired connections or a battery supply. In this context, there are different types of energy harvesting systems, including rectennas, coil systems, graphene, and new materials. A second step of an energy harvesting system is the rectification of the collected signal, which may be carried out, for example, by a combination of one or more Schottky diodes connected in series or in shunt. In a rectenna-based system, for instance, the diode used must be able to receive low-power signals at ultra-high frequencies; therefore, low values of series resistance, junction capacitance, and potential barrier voltage are required. Because of this low-power condition, voltage multiplier configurations such as voltage doublers or modified bridge converters are used. A low-pass filter (LPF) at the input, a DC output filter, and a resistive load are also commonly used in rectifier designs. Electronic circuit designs are commonly analyzed through simulation in a SPICE (Simulation Program with Integrated Circuit Emphasis) environment. Despite the remarkable potential of SPICE-based simulators for complex circuit modeling and for the analysis of quasi-static electromagnetic field interaction, i.e., at low frequency, these simulators cannot properly model microwave hybrid circuits in which there are both lumped and distributed elements. This work therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulation at ultra-high frequencies, with application to rectifiers coupled to antennas, as in energy harvesting systems, i.e., in rectennas. For this purpose, the numerical Finite-Difference Time-Domain (FDTD) method is applied, and SPICE computational tools are used for comparison. Initially, the Ampère-Maxwell equation is applied to the equations of current density and electric field within the FDTD method, together with its circuital relation to the voltage drop across the modeled component, following the Lumped-Element FDTD (LE-FDTD) formulations proposed in the literature for passive components and for the diode. Next, a rectifier is built with the essential requirements for operating rectenna energy harvesting systems, and the FDTD results are compared with experimental measurements.
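
The leapfrog update at the heart of the FDTD scheme can be sketched in one dimension as follows; the grid size, normalization, and soft source are illustrative assumptions, not the rectifier model used in this work.

```python
import numpy as np

# Minimal 1D FDTD leapfrog sketch (free space, normalized units) to illustrate
# the update scheme the LE-FDTD formulation builds on; the grid, source, and
# normalization here are illustrative assumptions, not the authors' setup.
nz, nt = 200, 500
ez = np.zeros(nz)       # electric field samples
hy = np.zeros(nz - 1)   # magnetic field samples (staggered half-cell)
c = 0.5                 # Courant number (c0*dt/dz), chosen below the stability limit

for n in range(nt):
    hy += c * (ez[1:] - ez[:-1])                    # H update from the curl of E
    ez[1:-1] += c * (hy[1:] - hy[:-1])              # E update from the curl of H
    # A lumped element (e.g., resistor or diode) would add a current term to the
    # E update of its cell, which is the essence of the LE-FDTD modification.
    ez[nz // 2] += np.exp(-((n - 60) / 20.0) ** 2)  # soft Gaussian source
```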

Keywords: energy harvesting system, LE-FDTD, rectenna, rectifier, wireless power systems

Procedia PDF Downloads 110
23231 Transport and Mixing Phenomena Developed by Vortex Formation in Flow around Airfoil Using Lagrangian Coherent Structures

Authors: Riaz Ahmad, Jiazhong Zhang, Asma Farooqi

Abstract:

In this study, mass transport between separation bubbles and the flow around a two-dimensional airfoil is numerically investigated using Lagrangian Coherent Structures (LCSs). The Finite-Time Lyapunov Exponent (FTLE) technique is used to identify invariant manifolds and LCSs. Moreover, the Characteristic-Based Split (CBS) scheme combined with a dual time-stepping technique is applied to simulate the transient flow at low Reynolds number. We then investigate the evolution of vortex structures during the transport process with the aid of LCSs. To explore vortex formation at the surface of the airfoil, the dynamics of the separatrix, formed by the combination of stable and unstable manifolds, is also taken into account. The Lagrangian analysis gives a detailed understanding of the vortex dynamics and separation bubbles, which play a significant role in the performance of the unsteady flow generated by the airfoil. The transport process and flow separation phenomena are studied extensively to analyze the flow pattern from a Lagrangian point of view.
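
A minimal sketch of the FTLE computation from a flow map is given below, assuming particle positions have already been advected on a regular grid; the finite-difference gradient and grid layout are illustrative, not the authors' implementation. Applying it to forward-time and backward-time flow maps highlights repelling and attracting LCSs, respectively.

```python
import numpy as np

# Illustrative FTLE sketch: given the flow-map positions of a grid of tracer
# particles after integration time T, estimate the largest stretching rate.
def ftle_field(x_final, y_final, dx, dy, T):
    """x_final, y_final: 2D arrays of advected particle positions on an (i, j) grid."""
    dxdx = np.gradient(x_final, dx, axis=1)
    dxdy = np.gradient(x_final, dy, axis=0)
    dydx = np.gradient(y_final, dx, axis=1)
    dydy = np.gradient(y_final, dy, axis=0)
    ftle = np.zeros_like(x_final)
    for i in range(x_final.shape[0]):
        for j in range(x_final.shape[1]):
            F = np.array([[dxdx[i, j], dxdy[i, j]],
                          [dydx[i, j], dydy[i, j]]])   # flow-map gradient
            C = F.T @ F                                 # Cauchy-Green tensor
            lam_max = np.linalg.eigvalsh(C)[-1]         # largest eigenvalue
            ftle[i, j] = np.log(np.sqrt(lam_max)) / abs(T)
    return ftle
```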

Keywords: transport phenomena, CBS Method, vortex formation, Lagrangian Coherent Structures

Procedia PDF Downloads 120
23230 Low Power Glitch Free Dual Output Coarse Digitally Controlled Delay Lines

Authors: K. Shaji Mon, P. R. John Sreenidhi

Abstract:

In deep-submicrometer CMOS processes, the time-domain resolution of a digital signal is becoming higher than the voltage resolution of analog signals. This observation is nowadays pushing toward a new circuit design paradigm in which traditional analog signal processing is expected to be progressively substituted by the processing of time in the digital domain. Within this novel paradigm, digitally controlled delay lines (DCDLs) should play the role that digital-to-analog converters play in traditional, analog-intensive circuits. Digital delay-locked loops are highly prevalent in integrated systems. This paper addresses the glitches present in delay circuits, along with area, power dissipation, and signal integrity. The digitally controlled delay lines under study have been designed in a 90 nm CMOS technology with six metal layers (copper), strained SiGe, and low-k dielectric. Simulation and synthesis results show that the novel circuits exhibit no glitches for the dual-output coarse DCDL, with less power dissipation and smaller area compared to the glitch-free NAND-based DCDL.

Keywords: glitch free, NAND-based DCDL, CMOS, deep-submicrometer

Procedia PDF Downloads 234
23229 Mean Field Model Interaction for Computer and Communication Systems: Modeling and Analysis of Wireless Sensor Networks

Authors: Irina A. Gudkova, Yousra Demigha

Abstract:

Scientific research is moving more and more towards the study of complex systems in several areas, including economics, biology, physics, and computer science. In this paper, we work on complex systems in communication networks, namely Wireless Sensor Networks (WSNs), which are considered stochastic systems composed of interacting entities. The current advancement of sensing in computing and communication systems is a fertile ground for research along several tracks. A detailed presentation is given of WSNs, their use, their modeling, the different problems that can occur in their application, and some solutions. The main goal of this work is to reintroduce the mean field method, since it is a powerful technique for solving this type of model, especially systems that evolve according to a Continuous-Time Markov Chain (CTMC). We focus on the modeling of a CTMC and obtain a large system of interacting continuous-time Markov chains with population entities. The main idea is to work on one entity and replace the others with an average or effective interaction. In this context, to make the solution easier, we consider the wireless sensor network as a multi-body problem and reduce it to a one-body problem. The method is applied to a WSN modeled as a Markovian queue, and the results of the technique are shown.
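
As a toy illustration of the mean-field reduction (not the paper's WSN model), the sketch below replaces a population CTMC of identical two-state sensors with a single ODE for the fraction of transmitting nodes; the rates and the Euler integration step are assumed for the example.

```python
import numpy as np

# Illustrative mean-field sketch: a large population of identical sensors, each
# idle or transmitting, with rates depending on the fraction x of transmitting
# nodes. The CTMC on N nodes is replaced by one ODE for that fraction; the
# rates and the forward-Euler step are assumptions for the example.
def mean_field_trajectory(x0=0.1, wake_rate=1.0, finish_rate=2.0, dt=0.01, steps=2000):
    x, traj = x0, []
    for _ in range(steps):
        up = wake_rate * (1 - x) * (1 - x)   # idle nodes start, damped by congestion
        down = finish_rate * x               # transmitting nodes return to idle
        x += dt * (up - down)                # forward-Euler step of the mean-field ODE
        traj.append(x)
    return np.array(traj)

fractions = mean_field_trajectory()
print("fixed point ~", fractions[-1])
```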

Keywords: Continuous-Time Markov Chain, Hidden Markov Chain, mean field method, Wireless sensor networks

Procedia PDF Downloads 141
23228 Decision Support System for Solving Multi-Objective Routing Problem

Authors: Ismail El Gayar, Ossama Ismail, Yousri El Gamal

Abstract:

This paper presents a technique to solve one of the transportation problems we face in real life: the bus scheduling problem. Many countries use buses for schools, companies, and travel offices, for example, to transfer multiple passengers from many places to a specific place and vice versa. This transfer process costs time and money, so we built a decision support system that can solve this problem. In this paper, a genetic algorithm combined with a shortest-path technique is used to generate a solution competitive with other well-known techniques. A comparison between our solution and other solutions to this problem is also presented.
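
A minimal sketch of the genetic-algorithm-plus-shortest-path idea is shown below: candidate routes are permutations of stops, and fitness is the summed shortest-path distance between consecutive stops. The distance matrix, operators, and parameters are illustrative assumptions, not the system described in the paper.

```python
import random

# Illustrative GA sketch: routes are permutations of stops, and fitness is the
# summed shortest-path distance between consecutive stops (dist is assumed to
# be a precomputed all-pairs shortest-path matrix).
def route_length(route, dist):
    return sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))

def genetic_bus_schedule(dist, pop_size=50, generations=200, mutation_rate=0.2):
    stops = list(range(len(dist)))
    pop = [random.sample(stops, len(stops)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: route_length(r, dist))      # shorter routes first
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randint(1, len(stops) - 1)         # ordered crossover
            child = a[:cut] + [s for s in b if s not in a[:cut]]
            if random.random() < mutation_rate:             # swap mutation
                i, j = random.sample(range(len(stops)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda r: route_length(r, dist))
```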

Keywords: bus scheduling problem, decision support system, genetic algorithm, shortest path

Procedia PDF Downloads 383
23227 Evaluation of Total Antioxidant Activity (TAC) of Copper Oxide Decorated Reduced Graphene Oxide (CuO-rGO) at Different Stirring time

Authors: Aicha Bensouici, Assia Mili, Naouel Rdjem, Nacera Baali

Abstract:

Copper oxide decorated reduced graphene oxide was obtained successfully using a two-step synthesis route. First, graphene oxide (GO) was obtained using a modified Hummers method that excludes sodium nitrate from the starting materials. After a washing-centrifugation routine, the pristine GO was decorated with copper oxide using a reflux technique at 120 °C for 2 h, with equal amounts of GO and copper acetate. Three CuO-rGO nanocomposite samples were obtained at stirring times of 30 min, 24 h, and 7 days. The TAC results show a dose-dependent behavior of CuO-rGO and confirm that stirring time has no influence on the antioxidant properties; 30 min is therefore considered the optimal stirring condition.

Keywords: copper oxide, reduced graphene oxide, TAC, GO

Procedia PDF Downloads 90
23226 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication

Authors: Farhan A. Alenizi

Abstract:

Digital watermarking has evolved over the past years as an important means for data authentication and ownership protection. Image and video watermarking are well known in the field of multimedia processing; however, watermarking techniques for 3D objects have emerged as an important means for the same purposes, as 3D mesh models are in increasing use in scientific, industrial, and medical applications. Like image watermarking techniques, 3D watermarking can take place in either the spatial or the transform domain. Unlike images and video, where the frames have regular structures in both the spatial and temporal domains, 3D objects are represented in different ways as meshes that are basically irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations which may be hard to tackle. This makes the watermarking process more challenging. While transform-domain watermarking is preferable for images and videos, it is still difficult to implement for 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, and hence the difficulty of performing the spectral decomposition, even though significant work has been done in the field. Spatial-domain watermarking has attracted significant attention in the past years; it can act either on the topology or on the geometry of the model. Exploiting the statistical characteristics of 3D mesh models from both geometrical and topological aspects has been useful in hiding data; however, doing so with minimal surface distortion to the mesh has attracted significant research in the field. A 3D mesh blind watermarking technique is proposed in this research. The watermarking method depends on modifying the vertices' positions with respect to the center of the object. An optimal method will be developed to reduce the errors, minimizing the distortions that the 3D object may experience due to the watermarking process and reducing the computational complexity due to iterations and other factors. The technique relies on displacing the vertices' locations by modifying the variances of the vertices' norms. Statistical analyses were performed to establish the proper distributions that best fit each mesh and hence to establish the bin sizes. Several optimizing approaches were introduced in the realms of mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative quality, and that it is robust against both geometry and connectivity attacks. Moreover, the probability of true-positive detection versus the probability of false-positive detection was evaluated; to validate the accuracy of the test cases, receiver operating characteristic (ROC) curves were drawn, and they have shown robustness from this aspect as well. 3D watermarking is still a new field, but a promising one.
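
To picture the general idea of norm-based spatial watermarking (and only the general idea, not the authors' optimized algorithm), the sketch below nudges vertex distances from the object center bin by bin to encode bits; the bin partition, strength, and mean-shift rule are assumptions.

```python
import numpy as np

# Illustrative sketch of the general idea only (not the authors' algorithm):
# embed watermark bits by nudging vertex distances from the object's center,
# bin by bin. Bin count, strength, and the mean-shift rule are assumptions.
def embed_watermark(vertices, bits, strength=0.02):
    center = vertices.mean(axis=0)
    offsets = vertices - center
    norms = np.linalg.norm(offsets, axis=1)
    order = np.argsort(norms)
    bins = np.array_split(order, len(bits))          # one bin of vertices per bit
    scaled = norms.copy()
    for bit, idx in zip(bits, bins):
        # shift the bin's norms slightly up for a 1 bit, down for a 0 bit
        scaled[idx] *= 1 + strength if bit else 1 - strength
    return center + offsets * (scaled / norms)[:, None]
```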

Keywords: watermarking, mesh objects, local roughness, Laplacian Smoothing

Procedia PDF Downloads 143
23225 Simulation of Mid Infrared Supercontinuum Generation in Silicon Germanium Photonic Waveguides for Gas Spectroscopy

Authors: Proficiency Munsaka, Peter Baricholo, Erich Rohwer

Abstract:

Pulse evolution along 5 cm long silicon germanium (SiGe) photonic waveguides with a 6.0 × 4.2 μm² cross-section was simulated and compared with experiments. Simulations were carried out by solving a generalized nonlinear Schrödinger equation (GNLSE) for the optical pulse evolution along the length of the SiGe photonic waveguide using the split-step Fourier method (SSFM). The SSFM solution gives the pulse envelope in both the time and spectral domains at each distance step along the propagation direction. The SiGe photonic waveguides were pumped in the anomalous group velocity dispersion (GVD) regime using a 4.7 μm, 210 fs femtosecond laser to produce a significant supercontinuum (SC). The simulated propagation of the ultrafast pulse along the SiGe photonic waveguides produced an SC covering the atmospheric window (2.5-8.5 μm), which contains the molecular fingerprints of important gases. Thus, mid-infrared supercontinuum generation in a SiGe photonic waveguide system can be commercialized for gas spectroscopy, detecting gases including CO₂, CH₄, H₂O, SO₂, SO₃, NO₂, H₂S, CO, and NO at trace levels using the absorption spectroscopy technique. The simulated profile evolutions are spectrally and temporally similar to those obtained by other researchers. The obtained evolution profiles are characterized by pulse compression, soliton fission, dispersive wave generation, stimulated Raman scattering, and four-wave mixing.
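
The split-step Fourier method alternates linear (dispersive) steps in the frequency domain with nonlinear steps in the time domain. The sketch below shows the scheme for a simplified NLSE with only second-order dispersion and Kerr nonlinearity; the coefficients and grid are illustrative assumptions, not the SiGe waveguide parameters used above.

```python
import numpy as np

# Minimal split-step Fourier sketch for a simplified NLSE (second-order
# dispersion + Kerr nonlinearity only); coefficients, grid, and the omission
# of higher-order terms are illustrative assumptions.
def ssfm(envelope, dt, dz, steps, beta2=-1.0, gamma=1.0):
    n = envelope.size
    omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)
    half_linear = np.exp(0.5j * beta2 * omega**2 * dz / 2)   # half dispersion step
    a = envelope.astype(complex)
    for _ in range(steps):
        a = np.fft.ifft(half_linear * np.fft.fft(a))   # linear half step (frequency domain)
        a *= np.exp(1j * gamma * np.abs(a)**2 * dz)    # full nonlinear step (time domain)
        a = np.fft.ifft(half_linear * np.fft.fft(a))   # linear half step
    return a

# Example: propagate a sech pulse
t = np.linspace(-20, 20, 1024)
out = ssfm(1.0 / np.cosh(t), dt=t[1] - t[0], dz=0.01, steps=500)
```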

Keywords: silicon germanium photonic waveguide, supercontinuum generation, spectroscopy, mid infrared

Procedia PDF Downloads 107
23224 Genetically Encoded Tool with Time-Resolved Fluorescence Readout for the Calcium Concentration Measurement

Authors: Tatiana R. Simonyan, Elena A. Protasova, Anastasia V. Mamontova, Eugene G. Maksimov, Konstantin A. Lukyanov, Alexey M. Bogdanov

Abstract:

Here, we describe two variants of calcium indicators based on the GCaMP sensitive core and the BrUSLEE fluorescent protein (GCaMP-BrUSLEE and GCaMP-BrUSLEE-145). In contrast to the conventional GCaMP6-family indicators, these fluorophores are characterized by a well-marked responsiveness of their fluorescence decay kinetics to external calcium concentration, both in vitro and in cellulo. Specifically, we show that purified GCaMP-BrUSLEE and GCaMP-BrUSLEE-145 exhibit three-component fluorescence decay kinetics, with the amplitude-normalized lifetime component (t₃*A₃) of GCaMP-BrUSLEE-145 changing four-fold (500-2000 a.u.) in response to a Ca²⁺ concentration shift in the range of 0-350 nM. Time-resolved fluorescence microscopy of live cells shows a two-fold change in the GCaMP-BrUSLEE-145 mean lifetime upon histamine-stimulated calcium release. This Ca²⁺ dependence supports considering GCaMP-BrUSLEE-145 as a prospective Ca²⁺ indicator with signal read-out in the time domain.

Keywords: calcium imaging, fluorescence lifetime imaging microscopy, fluorescent proteins, genetically encoded indicators

Procedia PDF Downloads 129
23223 Development of Loop Mediated Isothermal Amplification (Lamp) Assay for the Diagnosis of Ovine Theileriosis

Authors: Muhammad Fiaz Qamar, Uzma Mehreen, Muhammad Arfan Zaman, Kazim Ali

Abstract:

Ovine theileriosis is a worldwide concern, especially in tropical and subtropical areas with abundant ticks, yet it has received little attention in many developed and developing regions because of the low value of sheep and the low-to-moderate level of infection in small ruminant herds. Across Asia, prevalence surveys have been conducted to provide comparable estimates of flock-level and animal-level prevalence of theileriosis in animals. Timely diagnosis and control of theileriosis is a challenge for veterinarians and farmers because of the nature of the organism and the inadequacy of local control plans. Most of the present work is aimed at developing a technique that is farmer-friendly, inexpensive, and easy to perform in the field; timely diagnosis of this disease will decrease the irrational use of drugs. A further aim was to determine the prevalence of theileriosis in district Jhang using the conventional method, PCR, qPCR, and LAMP. We quantified the molecular epidemiology of T. lestoquardi in sheep from Jhang district, Punjab, Pakistan. In this study, we concluded that the overall prevalence of theileriosis in sheep was 9.1% (32/350) using the Giemsa staining technique, 13% (48/350) using PCR, 16% (56/350) using qPCR, and 17.1% (60/350) using LAMP. The specificity and sensitivity were also calculated by comparing PCR and LAMP; more positive results were obtained when the diagnosis was made with LAMP. There was little difference between the positive results of PCR and qPCR, and the fewest positive animals were detected with the Giemsa staining/conventional method. Regarding the specificity and sensitivity of LAMP compared to PCR, the cross-tabulation showed a sensitivity of LAMP of 94.4% and a specificity of 78%. Advances in science must be based on ideas that can close the gaps and remove hurdles in scientific research; LAMP is one such technique and has added great value. It is a powerful biological diagnostic tool that has helped considerably in the proper diagnosis and treatment of certain diseases. Other diagnostic methods, such as culture and serological techniques, expose workers to considerable risk, whereas molecular diagnostic techniques like LAMP avoid such exposure to pathogens. A prompt and tentative diagnosis can be made using LAMP. Compared to LAMP, PCR has several disadvantages: it is relatively expensive, time-consuming, and complicated, while LAMP is relatively cheap, easy to perform, less time-consuming, and more accurate. The LAMP technique has removed hurdles in molecular diagnostics, making it accessible to poor and developing countries.
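
The sensitivity and specificity quoted above come from a 2x2 cross-tabulation of LAMP results against the reference test; a minimal helper of the kind below shows the computation, with placeholder counts rather than the study's actual data.

```python
# Simple helper for the sensitivity/specificity computation from a 2x2
# cross-tabulation (reference test vs. LAMP); the counts below are
# illustrative placeholders, not the study's actual cross-tabulation.
def diagnostic_performance(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)   # true positives among reference-positive animals
    specificity = tn / (tn + fp)   # true negatives among reference-negative animals
    return sensitivity, specificity

sens, spec = diagnostic_performance(tp=90, fn=10, fp=20, tn=80)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```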

Keywords: distribution, thelaria, LAMP, primer sequences, PCR

Procedia PDF Downloads 85
23222 Impact of Infrastructural Development on Socio-Economic Growth: An Empirical Investigation in India

Authors: Jonardan Koner

Abstract:

The study attempts to determine the impact of infrastructural investment on state-level economic growth in India. It further tries to determine the magnitude of the impact of infrastructural investment on an economic indicator, per-capita income (PCI), in Indian states. The study uses panel regression to measure the impact of infrastructural investment on per-capita income in Indian states; panel regression incorporates both the cross-sectional and time-series aspects of the dataset. In order to analyze differences across states in the impact of the explanatory variables on the explained variable, the study uses a fixed-effect panel regression model. The conclusions of the study are that infrastructural investment has a desirable impact on economic development and that the impact differs across Indian states. We analyze annual time-series data from 1991 to 2010. The study reveals that infrastructural investment significantly explains the variation in the economic indicator.
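
A fixed-effect (least-squares dummy variable) panel regression of the kind described above can be sketched as follows; the tiny synthetic panel and column names are illustrative stand-ins, not the study's 1991-2010 state dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of a fixed-effect (least-squares dummy variable) panel regression;
# the tiny synthetic panel and column names are illustrative, not the study's data.
panel = pd.DataFrame({
    "state": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "year":  [1991, 1992, 1993] * 3,
    "infra": [1.0, 1.5, 2.1, 0.8, 1.1, 1.6, 2.0, 2.4, 3.0],        # infrastructural investment
    "pci":   [10.0, 11.2, 12.5, 8.0, 8.6, 9.9, 14.0, 15.1, 16.8],  # per-capita income
})

# C(state) adds one dummy per state, absorbing state-specific fixed effects
model = smf.ols("pci ~ infra + C(state)", data=panel).fit()
print(model.params["infra"], model.rsquared)
```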

Keywords: infrastructural investment, multiple regression, panel regression techniques, economic development, fixed effect dummy variable model

Procedia PDF Downloads 352
23221 Assessing Project Performance through Work Sampling and Earned Value Analysis

Authors: Shobha Ramalingam

Abstract:

The majority of infrastructure projects are affected by time overrun, resulting in project delays and subsequently cost overruns. Time overrun may vary from a few months to as much as five or more years, placing project viability at risk. One of the probable reasons noted in the literature for this outcome is poor productivity; researchers contend that productivity in construction has only marginally increased over the years. While studies in the literature have extensively focused on time and cost parameters in projects, there are limited studies that integrate time and cost with productivity to assess project performance. To this end, a study was conducted to understand project delay factors concerning cost, time, and productivity. A case-study approach was adopted to collect rich data from a nuclear power plant project site over two months through observation, interviews, and document review. The data were analyzed using three different approaches for a comprehensive understanding. First, a root-cause analysis was performed on the data using Ishikawa's fish-bone diagram technique to identify the various factors behind time delays. Based on it, a questionnaire was designed and circulated to concerned executives, including project engineers and contractors, to determine the frequency of occurrence of the delays, which was then compiled and presented to management for possible mitigation. Second, a productivity analysis was performed on select activities, including rebar bending and concreting, through a time-motion study to analyze productivity performance. Third, data on the cost of construction for three years allowed the cost performance to be analyzed using the earned value management technique. Together, the three techniques allowed the key factors that deter project performance and cause productivity loss in the construction of the nuclear power plant project to be identified systematically and comprehensively. The findings showed that improper planning and coordination between multiple trades, concurrent operations, improper workforce and material management, and fatigue due to overtime were some of the key factors that led to delays and poor productivity. The findings are expected to act as a stepping stone for further research and have implications for practitioners.
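
The earned value calculation used in the third step reduces to a few standard ratios; the sketch below shows them with placeholder figures rather than the project's actual cost data.

```python
# Illustrative earned-value calculation of the kind used in the study; the
# figures below are placeholders, not the nuclear power plant project's data.
def earned_value_metrics(planned_value, earned_value, actual_cost):
    sv = earned_value - planned_value       # schedule variance
    cv = earned_value - actual_cost         # cost variance
    spi = earned_value / planned_value      # schedule performance index
    cpi = earned_value / actual_cost        # cost performance index
    return {"SV": sv, "CV": cv, "SPI": spi, "CPI": cpi}

print(earned_value_metrics(planned_value=120.0, earned_value=100.0, actual_cost=110.0))
```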

Keywords: earned value analysis, time performance, project costs, project delays, construction productivity

Procedia PDF Downloads 78
23220 Evaluation of the Role of Circulating Long Non-Coding RNA H19 as a Promising Biomarker in Plasma of Patients with Gastric Cancer

Authors: Doaa Hashad, Amany Elbanna, Abeer Ibrahim, Gihan Khedr

Abstract:

Background: H19 is a long non-coding RNA (lncRNA) related to the progression of many diseases, including cancers. This work was carried out to study the level of the long non-coding RNA H19 in the plasma of patients with gastric cancer (GC) and to assess its significance in their clinical management. Methods: A total of sixty-two participants were enrolled in the present study. The first group included thirty-two GC patients, while the second group consisted of thirty age- and sex-matched healthy volunteers serving as a control group. Plasma samples were used to assess H19 gene expression using a real-time quantitative PCR technique. Results: H19 expression was up-regulated in GC patients, with a positive correlation to TNM cancer stage. Conclusions: Up-regulation of H19 is closely associated with gastric cancer and correlates well with tumor staging. Convenient, efficient quantification of H19 in plasma using real-time PCR supports its role as a potential noninvasive prognostic biomarker in gastric cancer that predicts patient outcome and, most importantly, as a novel target in gastric cancer treatment, with better performance achieved by using both CEA and H19 simultaneously.

Keywords: biomarker, gastric, cancer, LncRNA

Procedia PDF Downloads 296
23219 Reed: An Approach Towards Quickly Bootstrapping Multilingual Acoustic Models

Authors: Bipasha Sen, Aditya Agarwal

Abstract:

A multilingual automatic speech recognition (ASR) system is a single entity capable of transcribing multiple languages that share a common phone space. The performance of such a system is highly dependent on the compatibility of the languages. State-of-the-art speech recognition systems are built using sequential architectures based on recurrent neural networks (RNNs), which limit computational parallelization during training. This poses a significant challenge in terms of the time taken to bootstrap and validate the compatibility of multiple languages for building a robust multilingual system. Complex architectural choices based on self-attention networks are often made to improve parallelization and thereby reduce training time. In this work, we propose Reed, a simple system based on 1D convolutions that uses very short context to improve training time. To improve the performance of our system, we use raw time-domain speech signals directly as input; this enables the convolutional layers to learn feature representations rather than relying on handcrafted features such as MFCCs. We report improvements in training and inference times by factors of at least 4x and 7.4x, respectively, with comparable WERs against standard RNN-based baseline systems on SpeechOcean's multilingual low-resource dataset.
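
A minimal sketch of a 1D-convolutional acoustic model over raw waveform, in the spirit of the short-context, parallel-friendly design described above, is shown below; the layer sizes, strides, and phone count are assumptions, not Reed's actual architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch of a 1D-convolutional acoustic model over raw waveform;
# layer sizes, strides, and the phone count are assumptions, not Reed's design.
class Conv1dAcousticModel(nn.Module):
    def __init__(self, num_phones=100):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=11, stride=5), nn.ReLU(),    # raw samples in
            nn.Conv1d(64, 128, kernel_size=11, stride=5), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=11, stride=5), nn.ReLU(),
        )
        self.classifier = nn.Conv1d(256, num_phones, kernel_size=1)   # per-frame phone logits

    def forward(self, waveform):                  # waveform: (batch, 1, samples)
        return self.classifier(self.encoder(waveform))

logits = Conv1dAcousticModel()(torch.randn(2, 1, 16000))   # one second at 16 kHz
print(logits.shape)                               # (batch, phones, frames)
```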

Keywords: convolutional neural networks, language compatibility, low resource languages, multilingual automatic speech recognition

Procedia PDF Downloads 98
23218 Critical Path Segments Method for Scheduling Technique

Authors: Sherif M. Hafez, Remon F. Aziz, May S. A. Elalim

Abstract:

Project managers today rely on scheduling tools based on the Critical Path Method (CPM) to determine overall project duration and activity float times, which leads to greater efficiency in the planning and control of projects. CPM has proved useful for scheduling construction projects, but researchers have highlighted a number of serious drawbacks that limit its use as a decision support tool, including its inability to clearly record and represent detailed information. This paper discusses the drawbacks of CPM as a scheduling technique and presents a modified critical path method model called Critical Path Segments (CPS). The CPS scheduling mechanism addresses the problems of CPM in three ways: it decomposes each activity duration into separated but connected time segments; it converts all relationships among activities into finish-to-start relationships; and it performs analysis and calculations with a forward path only. Sample cases are included to illustrate the shortcomings of CPM, the full CPS analysis and calculations are explained in detail, and it is shown how schedules can be handled better with the CPS technique.
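
The forward-pass calculation that CPS retains from CPM can be sketched as follows for a small network of finish-to-start activities; the example activities and durations are made up, not taken from the paper's sample cases.

```python
# Illustrative forward-pass calculation of early start / early finish times for
# a small activity network with finish-to-start relationships (the example
# activities and durations are made up, not taken from the paper's cases).
def forward_pass(durations, predecessors):
    early_start, early_finish = {}, {}
    for activity in durations:                       # assumes topological order
        preds = predecessors.get(activity, [])
        early_start[activity] = max((early_finish[p] for p in preds), default=0)
        early_finish[activity] = early_start[activity] + durations[activity]
    return early_start, early_finish

durations = {"A": 3, "B": 2, "C": 4, "D": 1}
predecessors = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
es, ef = forward_pass(durations, predecessors)
print("project duration:", max(ef.values()))         # 3 + 4 + 1 = 8
```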

Keywords: construction management, scheduling, critical path method, critical path segments, forward pass, float, project control

Procedia PDF Downloads 338
23217 Wideband Performance Analysis of C-FDTD Based Algorithms in the Discretization Impoverishment of a Curved Surface

Authors: Lucas L. L. Fortes, Sandro T. M. Gonçalves

Abstract:

In this work, the wideband performance under mesh discretization impoverishment of the Conformal Finite-Difference Time-Domain (C-FDTD) approaches developed by Raj Mittra, Supriyo Dey, and Wenhua Yu for the Finite-Difference Time-Domain (FDTD) method is analyzed. These approaches are a simple and efficient way to optimize the scattering simulation of curved surfaces for dielectric and Perfect Electric Conducting (PEC) structures in the FDTD method, since curved surfaces otherwise require dense meshes to reduce the error introduced by surface staircasing. Referred to in this work as D-FDTD-Diel and D-FDTD-PEC, these approaches are well known in the literature, but the improvement brought by their application has not been broadly quantified with respect to wide frequency bands and poorly discretized meshes. Both approaches improve simulation accuracy without requiring dense meshes, making it possible to exploit poorly discretized meshes that reduce simulation time and computational expense while retaining a desired accuracy. However, their application presents limitations regarding mesh impoverishment and the desired frequency range. Therefore, the goal of this work is to explore the approaches with respect to both wideband and mesh-impoverishment performance, to give a wider insight into these aspects in FDTD applications. The D-FDTD-Diel approach consists in modifying the electric field update in the cells intersected by the dielectric surface, taking into account the amount of dielectric material within the mesh cell edges. By taking the intersections into account, D-FDTD-Diel provides an accuracy improvement at the cost of computational preprocessing, which is a fair trade-off, since the update modification is quite simple. Likewise, the D-FDTD-PEC approach consists in modifying the magnetic field update, taking into account the PEC curved surface intersections within the mesh cells and, considering a PEC structure in vacuum, the air portion that fills the intersected cells when updating the magnetic field values. As with D-FDTD-Diel, D-FDTD-PEC provides better accuracy at the cost of computational preprocessing, although with the drawback of having to meet stability criterion requirements. The algorithms are formulated and applied to PEC and dielectric spherical scattering surfaces with meshes at different levels of discretization, with polytetrafluoroethylene (PTFE) as the dielectric, a very common material in coaxial cables and connectors for radiofrequency (RF) and wideband applications. The accuracy of the algorithms is quantified, showing the wideband performance drop of the approaches as the mesh is impoverished. The benefits in computational efficiency, simulation time, and accuracy are also shown and discussed according to the desired frequency range, showing that poorly discretized mesh FDTD simulations can be exploited more efficiently while retaining the desired accuracy. The results obtained provide a broader insight into the limitations of applying the C-FDTD approaches in poorly discretized and wide-frequency-band simulations of dielectric and PEC curved surfaces, which are not clearly defined or detailed in the literature and are therefore a novelty. These approaches are also expected to be applied to the modeling of curved RF components for wideband and high-speed communication devices in future work.
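
The core of the D-FDTD-Diel modification, weighting the material parameters of an intersected cell edge by its dielectric fill fraction rather than staircasing the boundary, can be pictured with the tiny sketch below; the linear weighting and the example values are illustrative assumptions.

```python
# Tiny sketch of the D-FDTD-Diel idea: in a cell edge cut by the dielectric
# surface, weight the permittivity by the dielectric fill fraction instead of
# staircasing the boundary. The linear rule and values are illustrative assumptions.
def effective_permittivity(fill_fraction, eps_dielectric, eps_background=1.0):
    # fill_fraction: portion of the cell edge inside the dielectric (0..1)
    return fill_fraction * eps_dielectric + (1.0 - fill_fraction) * eps_background

eps_ptfe = 2.1                                   # typical relative permittivity of PTFE
print(effective_permittivity(0.3, eps_ptfe))     # partially filled cell edge
```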

Keywords: accuracy, computational efficiency, finite difference time-domain, mesh impoverishment

Procedia PDF Downloads 109
23216 Indoor Robot Positioning with Precise Correlation Computations over Walsh-Coded Lightwave Signal Sequences

Authors: Jen-Fa Huang, Yu-Wei Chiu, Jhe-Ren Cheng

Abstract:

The visible light communication (VLC) technique has become a useful method based on LED light blinking. Several issues in indoor mobile robot positioning with LED blinking are examined in this paper. At the transmitter, we control the transceiver's blinking message; orthogonal Walsh codes are adopted so that signal sequences can be detected through their auto-correlation function (ACF). At the robot receiver, the time frame is set to 1 ns for the signal passing from the transceiver to the mobile robot, and after many periods the peak value of the ACF is detected at the mobile robot. The transceiver then transmits the signal again immediately. By capturing the peak value three times, we can obtain the time difference of arrival (TDOA) between two peak-value intervals and finally analyze the accuracy of the robot position.
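
The Walsh-code correlation idea can be sketched as follows: rows of a Hadamard matrix form orthogonal Walsh sequences, and correlating the received sequence against every known code produces a peak at the transmitting code. The code length and noise level are assumptions for the example, not the paper's parameters.

```python
import numpy as np
from scipy.linalg import hadamard

# Illustrative sketch of the Walsh-code correlation idea: rows of a Hadamard
# matrix are orthogonal Walsh sequences, so correlating a received sequence
# against each code identifies the transmitter.
codes = hadamard(8)                                      # 8 orthogonal codes of length 8
tx_index = 3
received = codes[tx_index] + 0.2 * np.random.randn(8)    # noisy received sequence

correlations = codes @ received                          # correlate against every known code
detected = int(np.argmax(np.abs(correlations)))
print(detected == tx_index)                              # peak correlation picks out the code
```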

Keywords: Visible Light Communication, Auto-Correlation Function (ACF), peak value of ACF, Time difference of Arrival (TDOA)

Procedia PDF Downloads 298
23215 Combined Effect of Heat Stimulation and Delayed Addition of Superplasticizer with Slag on Fresh and Hardened Property of Mortar

Authors: Faraidoon Rahmanzai, Mizuki Takigawa, Yu Bomura, Shigeyuki Date

Abstract:

To obtain high quality and the required workability of mortar, different types of superplasticizers are used. Superplasticizers are chemical admixtures added to the mix to improve the fluidity of mortar. Many factors influence how well a superplasticizer disperses the cement particles in the mortar: the nature and amount of cement replaced by slag, the mixing procedure, the delayed addition time, and the heat stimulation technique of the superplasticizer all cause varied effects on the fluidity of the cementitious material. In this experiment, the superplasticizers were heated for 1 hour at 60 °C in a thermostatic chamber. Furthermore, the effect of the delayed addition time of the heat-stimulated superplasticizers (SP) was also analyzed. This method was applied to two types of polycarboxylic acid ether-based SP (a precast-type superplasticizer (SP2) and a ready-mix-type superplasticizer (SP1)) in combination with partial replacement of normal Portland cement with blast furnace slag (BFS) at a 30% w/c ratio. The fluidity, air content, fresh density, and compressive strength at 7 and 28 days were studied. The results indicate that the delayed addition time and the heat stimulation technique improved the flow and air content, decreased the density, and slightly decreased the compressive strength of the mortar. Moreover, the slag improved the flow of the mortar as its amount increased, while the effect of the external temperature of the SP on the flow decreased. In comparison, the flow of the mortar improved at a 5-minute delay for both kinds of SP, but SP1 improved the flow under all conditions. Most importantly, the transition points for both types of SP appear to be the same, at about 5±1 min; the optimum addition time of SP to mortar should therefore be within this period.

Keywords: combined effect, delay addition, heat stimulation, flow of mortar

Procedia PDF Downloads 177
23214 Random Access in IoT Using Naïve Bayes Classification

Authors: Alhusein Almahjoub, Dongyu Qiu

Abstract:

This paper deals with the random access procedure in next-generation networks and presents a solution to reduce the total service time (TST), one of the most important performance metrics in current and future Internet of Things (IoT) based networks. The proposed solution focuses on the calculation of the optimal transmission probability, which maximizes the success probability and reduces the TST. It uses the number of idle preambles in every time slot and, based on it, estimates the number of backlogged IoT devices using Naïve Bayes estimation, a type of supervised learning in the machine learning domain. The estimation of backlogged devices is necessary because the optimal transmission probability depends on it and the eNodeB has no direct information about it. The simulations are carried out in MATLAB and verify that the proposed solution gives excellent performance.
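
The estimation step can be sketched as below: a Gaussian Naïve Bayes model is trained on simulated (idle-preamble count, backlog) pairs and then used to pick a transmission probability from the estimate; the simulated training data and the p = preambles/backlog rule are assumptions for the example, not the paper's exact formulation.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Illustrative sketch: learn a mapping from the observed number of idle
# preambles to the backlog size with Gaussian Naive Bayes, then set the
# transmission probability from the estimate. Training data and the
# p = preambles/backlog rule are assumptions for the example.
rng = np.random.default_rng(0)
num_preambles = 54

def simulate_idle(backlog):
    choices = rng.integers(0, num_preambles, size=backlog)  # devices pick preambles
    return num_preambles - len(np.unique(choices))          # idle preambles observed

backlogs = rng.integers(1, 200, size=5000)
idle_counts = np.array([simulate_idle(b) for b in backlogs]).reshape(-1, 1)

model = GaussianNB().fit(idle_counts, backlogs)             # backlog as the class label
estimated = int(model.predict([[simulate_idle(120)]])[0])
p_tx = min(1.0, num_preambles / estimated)                  # transmission probability
print(estimated, round(p_tx, 3))
```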

Keywords: random access, LTE/LTE-A, 5G, machine learning, Naïve Bayes estimation

Procedia PDF Downloads 128
23213 Incorporating Multiple Supervised Learning Algorithms for Effective Intrusion Detection

Authors: Umar Albalawi, Sang C. Suh, Jinoh Kim

Abstract:

As the Internet continues to expand its usage with an enormous number of applications, cyber-threats have increased significantly. Thus, accurate detection of malicious traffic in a timely manner is a critical concern for security in today's Internet. One approach to intrusion detection is to use Machine Learning (ML) techniques. Several methods based on ML algorithms have been introduced over the past years, but they are largely limited in terms of detection accuracy and/or the time and space complexity required to run them. In this work, we present a novel method for intrusion detection that incorporates a set of supervised learning algorithms. The proposed technique provides high accuracy and outperforms existing techniques that utilize a single learning method. In addition, our technique relies on partial flow information (rather than full information) for detection; it is thus lightweight and suitable for online operation, with the property of early identification. Using the publicly available mid-Atlantic CCDC intrusion dataset, we show that our proposed technique yields a detection rate of over 99% with a very low false alarm rate (0.4%).
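
One generic way to combine several supervised learners for flow classification is a voting ensemble, sketched below on synthetic features; the specific base learners and data are illustrative stand-ins, not the method evaluated in the paper.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Generic sketch of combining multiple supervised learners for traffic
# classification; synthetic features and these base learners are stand-ins.
X, y = make_classification(n_samples=2000, n_features=12, random_state=0)  # flow features
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=8)),
        ("forest", RandomForestClassifier(n_estimators=100)),
        ("logreg", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",          # average predicted probabilities across learners
)
ensemble.fit(X_train, y_train)
print("detection accuracy:", ensemble.score(X_test, y_test))
```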

Keywords: intrusion detection, supervised learning, traffic classification, computer networks

Procedia PDF Downloads 330
23212 Kirchhoff Type Equation Involving the p-Laplacian on the Sierpinski Gasket Using Nehari Manifold Technique

Authors: Abhilash Sahu, Amit Priyadarshi

Abstract:

In this paper, we discuss the existence of weak solutions of a Kirchhoff-type boundary value problem on the Sierpinski gasket, where S denotes the Sierpinski gasket in R² and S₀ is its intrinsic boundary, M: R → R is a positive function, and h: S × R → R is a suitable function appearing in our main equation. ∆p denotes the p-Laplacian, with p > 1. We first define a weak solution for our problem and then show the existence of at least two solutions under suitable conditions. There is no well-known concept of a generalized derivative of a function on a fractal domain; only recently has the notion of differential operators such as the Laplacian and the p-Laplacian on fractal domains been defined. We recall this result first and then address the above problem. In the literature, Laplacian and p-Laplacian equations are studied extensively on regular domains (open connected domains), in contrast to fractal domains. On fractal domains, Laplacian equations have been studied more than p-Laplacian equations, probably because in that case the corresponding function space is reflexive and many minimax theorems that work for regular domains are applicable, which is not the case for the p-Laplacian. This motivates us to study equations involving the p-Laplacian on the Sierpinski gasket. Problems on fractal domains lead to nonlinear models such as reaction-diffusion equations on fractals, problems on elastic fractal media, and fluid flow through fractal regions. We study the above p-Laplacian equations on the Sierpinski gasket using the fibering map technique on the Nehari manifold. Many authors have studied Laplacian and p-Laplacian equations on regular domains using this Nehari manifold technique. In general, the Euler functional associated with such a problem is Frechet or Gateaux differentiable, so a critical point becomes a solution to the problem; moreover, the function space considered is reflexive, and hence a weakly convergent subsequence can be extracted from any bounded sequence. In our case, neither is the Euler functional differentiable nor is the function space known to be reflexive. Overcoming these issues, we are still able to prove the existence of at least two solutions of the given equation.
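
The abstract's main equation is not reproduced in this listing; purely for orientation, a Kirchhoff-type p-Laplacian Dirichlet problem with the ingredients named above (M, h, ∆p, S, S₀) typically takes a form like the following, which is an illustrative generic statement and not necessarily the authors' exact equation.

```latex
\[
\begin{cases}
  M\big(\mathcal{E}_{p}(u)\big)\,\bigl(-\Delta_{p} u(x)\bigr) \;=\; \lambda\, h\bigl(x, u(x)\bigr), & x \in S \setminus S_{0},\\[4pt]
  u(x) \;=\; 0, & x \in S_{0},
\end{cases}
\]
```

Here \(\mathcal{E}_{p}(u)\) stands for the p-energy of u on the gasket and \(\lambda > 0\) is a parameter.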

Keywords: Euler functional, p-Laplacian, p-energy, Sierpinski gasket, weak solution

Procedia PDF Downloads 216
23211 Buzan Mind Mapping: An Efficient Technique for Note-Taking

Authors: T. K. Tee, M. N. A. Azman, S. Mohamed, M. Muhammad, M. M. Mohamad, J. Md Yunos, M. H. Yee, W. Othman

Abstract:

Buzan mind mapping is an efficient system of note-taking that makes revision a fun thing to do for students. Tony Buzan has been teaching children all over the world for the past thirty years and has proved that mind maps are the magic formula in the classroom for everyone. The purpose of this paper is to discuss the importance of Buzan mind mapping as a note-taking technique for secondary school students. This paper also examines the mind mapping technique and the advantages and disadvantages of hand-drawn mind maps. Samples of students' mind maps were presented and discussed.

Keywords: Buzan mind mapping, note-taking technique, hand-drawn, mind maps

Procedia PDF Downloads 518