Search results for: Signal Processing
3598 Advanced Magnetic Field Mapping Utilizing Vertically Integrated Deployment Platforms
Authors: John E. Foley, Martin Miele, Raul Fonda, Jon Jacobson
Abstract:
This paper presents the development and implementation of new and innovative data collection and analysis methodologies based on the deployment of total field magnetometer arrays. Our research has focused on the development of a vertically-integrated suite of platforms all utilizing common data acquisition, data processing and analysis tools. These survey platforms include low-altitude helicopters and ground-based vehicles, including robots, for terrestrial mapping applications. For marine settings the sensor arrays are deployed from either a hydrodynamic bottom-following wing towed from a surface vessel or from a towed floating platform for shallow-water settings. Additionally, sensor arrays are deployed from tethered remotely operated vehicles (ROVs) for underwater settings where high maneuverability is required. While the primary application of these systems is the detection and mapping of unexploded ordnance (UXO), these systems are also used for various infrastructure mapping and geologic investigations. For each application, success is driven by the integration of magnetometer arrays, accurate geo-positioning, system noise mitigation, and stable deployment of the system in appropriate proximity to expected targets or features. Each of the systems collects geo-registered data compatible with a web-enabled data management system providing immediate access to data and meta-data for remote processing, analysis and delivery of results. This approach allows highly sophisticated magnetic processing methods, including classification based on dipole modeling and remanent magnetization, to be efficiently applied to many projects. This paper also briefly describes the initial development of magnetometer-based detection systems deployed from low-altitude helicopter platforms and the subsequent successful transition of this technology to the marine environment. Additionally, we present examples from a range of terrestrial and marine settings as well as ongoing research efforts related to sensor miniaturization for unmanned aerial vehicle (UAV) magnetic field mapping applications.
Keywords: dipole modeling, magnetometer mapping systems, sub-surface infrastructure mapping, unexploded ordnance detection
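The dipole-modeling step that underpins this classification workflow can be illustrated with a short forward model. The sketch below (Python, with purely illustrative ambient-field and moment values, not the authors' processing code) computes the total-field anomaly a scalar magnetometer would record along a survey line over a buried dipole:

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7  # vacuum permeability (T·m/A)

def dipole_field(r_obs, r_dip, m):
    """Magnetic flux density (T) at r_obs from a point dipole with moment m (A·m²) at r_dip."""
    r = np.asarray(r_obs, float) - np.asarray(r_dip, float)
    d = np.linalg.norm(r)
    return MU0 / (4 * np.pi) * (3 * r * np.dot(m, r) / d**5 - m / d**3)

# A total-field magnetometer effectively measures the projection of the
# anomaly onto the ambient (Earth) field direction.
B_earth = np.array([0.0, 20e-6, -45e-6])   # illustrative ambient field (T)
b_hat = B_earth / np.linalg.norm(B_earth)
m = np.array([0.0, 0.0, 5.0])              # illustrative moment of a buried object (A·m²)

xs = np.linspace(-5, 5, 101)               # survey line 2 m above a target buried at 1 m
anomaly_nT = [1e9 * np.dot(dipole_field([x, 0.0, 2.0], [0.0, 0.0, -1.0], m), b_hat)
              for x in xs]
print(f"peak anomaly: {max(anomaly_nT):.1f} nT")
```

Fitting this forward model to gridded survey data recovers the moment and position used for target classification.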
Procedia PDF Downloads 464
3597 A Novel Concept of Optical Immunosensor Based on High-Affinity Recombinant Protein Binders for Tailored Target-Specific Detection
Authors: Alena Semeradtova, Marcel Stofik, Lucie Mareckova, Petr Maly, Ondrej Stanek, Jan Maly
Abstract:
Recently, novel strategies based on so-called molecular evolution were shown to be effective for the production of various peptide ligand libraries with affinities to molecular targets of interest comparable to or even better than monoclonal antibodies. The major advantage of these peptide scaffolds is mainly their prevailing low molecular weight and simple structure. This study describes a new immunosensor based on high-affinity binding molecules, using a simple optical system for detection of human serum albumin (HSA) as a model molecule. We present a comparison of two variants of recombinant binders based on the albumin binding domain of protein G (ABD), performed on a micropatterned glass chip. Binding domains may be tailored to any specific target of interest by molecular evolution. Micropatterned glass chips were prepared using UV-photolithography on chromium-sputtered glasses. The glass surface was modified by (3-aminopropyl)triethoxysilane and biotin-PEG-acid using EDC/NHS chemistry. Two variants of high-affinity binding molecules were used to detect the target molecule. The first variant is based on the ABD domain fused with a TolA chain. This molecule is biotinylated in vivo, and each molecule contains one molecule of biotin and one ABD domain. The second variant is an ABD domain based on a streptavidin molecule and contains four binding sites for biotin and four ABD domains. These high-affinity molecules were immobilized to the chip surface via biotin-streptavidin chemistry. To eliminate nonspecific binding, 1% bovine serum albumin (BSA) or 6% fetal bovine serum (FBS) was used in every step. For both variants, the range of measured concentrations of fluorescently labelled HSA was 0-30 µg/ml. As a control, we performed a simultaneous assay without high-affinity binding molecules. The fluorescent signal was measured using an inverse fluorescent microscope Olympus IX 70 with a CoolLED pE-4000 as a light source, related filters, and a Retiga 2000R camera as a detector. The fluorescent signal from non-modified areas was subtracted from the signal of the fluorescent areas. Results were presented in graphs showing the dependence of the measured grayscale value on the log-scale of HSA concentration. For the TolA variant, the limit of detection (LOD) of the optical immunosensor proposed in this study is calculated to be 0.20 µg/ml for HSA detection in 1% BSA and 0.24 µg/ml in 6% FBS. In the case of the streptavidin-based molecule, it was 0.04 µg/ml and 0.07 µg/ml, respectively. The dynamic range of the immunosensor could be estimated only for the TolA variant and was calculated to be 0.49-3.75 µg/ml and 0.73-1.88 µg/ml, respectively. In the case of the streptavidin-based variant, we did not reach surface saturation even at a concentration of 480 µg/ml, so the upper value of the dynamic range was not estimated. The lower value was calculated to be 0.14 µg/ml and 0.17 µg/ml, respectively. Based on the obtained results, it is clear that both variants are useful for creating the bio-recognizing layer of immunosensors. For this particular system, it is obvious that the variant based on the streptavidin molecule is more useful for biosensing on glass planar surfaces. Immunosensors based on this variant would exhibit a better limit of detection and a wide dynamic range.
Keywords: high affinity binding molecules, human serum albumin, optical immunosensor, protein G, UV-photolithography
Procedia PDF Downloads 368
3596 Laser-Ultrasonic Method for the Measurement of Residual Stresses in Metals
Authors: Alexander A. Karabutov, Natalia B. Podymova, Elena B. Cherepetskaya
Abstract:
The theoretical analysis is carried out to obtain the relation between the ultrasonic wave velocity and the value of residual stresses. The laser-ultrasonic method is developed to evaluate the residual stresses and subsurface defects in metals. The method is based on the laser thermooptical excitation of longitudinal ultrasonic waves and their detection by a broadband piezoelectric detector. A laser pulse with a time duration of 8 ns at full width at half maximum and an energy of 300 µJ is absorbed in a thin layer of the special generator that is inclined relative to the object under study. The non-uniform heating of the generator causes the formation of a broadband powerful pulse of longitudinal ultrasonic waves. It is shown that the temporal profile of this pulse is the convolution of the temporal envelope of the laser pulse and the profile of the in-depth distribution of the heat sources. The ultrasonic waves reach the surface of the object through the prism that serves as an acoustic duct. At the 'laser-ultrasonic transducer-object' interface, most of the longitudinal wave energy is converted into shear, subsurface longitudinal and Rayleigh waves. They spread within the subsurface layer of the studied object and are detected by the piezoelectric detector. The electrical signal that corresponds to the detected acoustic signal is acquired by an analog-to-digital converter and is then mathematically processed and visualized with a personal computer. The distance between the generator and the piezodetector, as well as the propagation times of acoustic waves in the acoustic ducts, are the characteristic parameters of the laser-ultrasonic transducer and are determined using calibration samples. The relative precision of the measurement of the velocity of longitudinal ultrasonic waves is 0.05%, which corresponds to approximately ±3 m/s for steels of conventional quality. This precision allows one to determine the mechanical stress in steel samples with a minimal detection threshold of approximately 22.7 MPa. The results are presented for the measured dependencies of the velocity of longitudinal ultrasonic waves in the samples on the values of the applied compression stress in the range of 20-100 MPa.
Keywords: laser-ultrasonic method, longitudinal ultrasonic waves, metals, residual stresses
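The stress evaluation rests on the acoustoelastic effect: to first order, the longitudinal velocity varies linearly with applied stress. The helper below (Python) inverts that relation; the unstressed velocity and the acoustoelastic coefficient are illustrative placeholders, not the calibrated values from this work:

```python
def stress_from_velocity(v_measured, v_unstressed, k_acoustoelastic):
    """Uniaxial stress (MPa) from the linear acoustoelastic relation
    v = v0 * (1 + k * sigma)  =>  sigma = (v / v0 - 1) / k."""
    return (v_measured / v_unstressed - 1.0) / k_acoustoelastic

v0 = 5900.0   # m/s, unstressed longitudinal velocity in steel (illustrative)
k = -1.0e-5   # 1/MPa, acoustoelastic coefficient (illustrative sign and magnitude)
# A 3 m/s velocity drop (the stated measurement precision) maps to ~50 MPa here,
# which shows why the coefficient must be calibrated per material.
print(f"{stress_from_velocity(5897.0, v0, k):.1f} MPa")
```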
Procedia PDF Downloads 325
3595 Accurate Positioning Method of Indoor Plastering Robot Based on Line Laser
Authors: Guanqiao Wang, Hongyang Yu
Abstract:
There is a lot of repetitive work in the traditional construction industry. Replacing these repetitive manual tasks with robots can significantly improve production efficiency. Therefore, robots appear more and more frequently in the construction industry. Navigation and positioning are very important tasks for construction robots, and the requirements for positioning accuracy are very high. Traditional indoor robots mainly use radiofrequency or vision methods for positioning. Compared with ordinary robots, the indoor plastering robot needs to be positioned closer to the wall for wall plastering, so the requirements for construction positioning accuracy are higher. The traditional navigation positioning methods have a large error, which can leave the robot without an exact position, so that the wall cannot be plastered or the plastering error is large. A new positioning method is proposed, which is assisted by line lasers and uses image-processing-based positioning to refine the traditional positioning result. In actual work, filtering, edge detection, Hough transform and other operations are performed on the images captured by the camera. Each time the position of the laser line is found, it is compared with the standard value, and the robot is moved or rotated to complete the positioning work. The experimental results show that the actual positioning error is reduced to less than 0.5 mm by this accurate positioning method.
Keywords: indoor plastering robot, navigation, precise positioning, line laser, image processing
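The filter/edge-detection/Hough chain described above can be sketched in a few lines of OpenCV. The function below locates a near-vertical laser line in a camera frame and returns its pixel offset from the expected position; the thresholds, the verticality test, and the function itself are illustrative assumptions of this sketch, not the authors' implementation:

```python
import cv2
import numpy as np

def laser_line_offset(frame_bgr, expected_x):
    """Locate the dominant near-vertical laser line and return its pixel
    offset from the expected column (filter -> edges -> Hough)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)          # suppress sensor noise
    edges = cv2.Canny(blur, 50, 150)                  # illustrative thresholds
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)
    if lines is None:
        return None
    # Keep near-vertical segments and take the longest one as the laser line.
    vertical = [l[0] for l in lines if abs(l[0][0] - l[0][2]) < 5]
    if not vertical:
        return None
    x1, y1, x2, y2 = max(vertical, key=lambda l: abs(l[3] - l[1]))
    return (x1 + x2) / 2.0 - expected_x               # sign gives move direction
```

Comparing the returned offset against the standard value drives the move/rotate correction loop.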
Procedia PDF Downloads 148
3594 Vitrification and Devitrification of Chromium Containing Tannery Ash
Authors: Savvas Varitis, Panagiotis Kavouras, George Kaimakamis, Eleni Pavlidou, George Vourlias, Konstantinos Chrysafis, Philomela Komninou, Theodoros Karakostas
Abstract:
The tannery industry produces large quantities of chromium-containing waste which also has a high organic content. Processing of this waste is important since the organic content is above the disposal limits and the contained trivalent chromium could potentially be oxidized to hexavalent chromium in the environment. This work aims to fabricate new vitreous and glass-ceramic materials which could incorporate the tannery waste in stabilized form, either for safe disposal or for the production of useful materials. Tannery waste was incinerated at 500 °C in anoxic conditions so that most of the organic content would be removed and the chromium would remain trivalent. The glass-forming agents SiO2, Na2O and CaO were mixed with the resulting ash in different proportions with decreasing ash content. Considering the low solubility of Cr in silicate melts, the mixtures were melted at 1400 °C and/or 1500 °C for 2 h and then cast on a refractory steel plate. The resulting vitreous products were characterized by X-Ray Diffraction (XRD), Differential Thermal Analysis (DTA), Scanning and Transmission Electron Microscopy (SEM and TEM). XRD reveals the existence of Cr2O3 (eskolaite) crystallites embedded in a glassy amorphous matrix. Such crystallites are not formed below a certain proportion of the waste in the ash-vitrified material. Reduction of the ash proportion increases the chromium content in the silicate matrix. From these glassy products, glass-ceramics were produced via different regimes of thermal treatment.
Keywords: chromium containing tannery ash, glass ceramic materials, thermal processing, vitrification
Procedia PDF Downloads 367
3593 A Nanoelectromechanical Tunable Oscillator Based on a High-Q Optical Cavity
Authors: Jianguo Huang, Hong Cai, Bin Dong, Jifang Tao, Aiqun Liu, Dim-Lee Kwong, Yuandong Gu
Abstract:
We developed a miniaturized tunable optomechanical oscillator based on nanoelectromechanical systems (NEMS) technology; its frequency can be electrostatically tuned by as much as 10%. By taking advantage of both the optical and the electrical spring effect, the oscillator achieves a high tuning sensitivity without resorting to mechanical tension. In particular, the proposed high-Q optical cavity design greatly enhances the system sensitivity, making it extremely sensitive to small motional signals.
Keywords: nanoelectromechanical systems (NEMS), nanotechnology, optical force, oscillator
Procedia PDF Downloads 497
3592 The Effect of High-Pressure Processing on the Inactivation of Saccharomyces cerevisiae in Different Concentrations of Manuka Honey and Its Relation with °Brix
Authors: Noor Akhmazillah Fauzi, Mohammed Mehdi Farid, Filipa V. Silva
Abstract:
The aim of this paper is to investigate whether different concentrations of Manuka honey (as a model food) have a major influence on the inactivation of Saccharomyces cerevisiae (as the test microorganism) after subjecting it to HPP. Honey samples with different sugar concentrations (20, 30, 40, 50, 60 and 70 °Brix) were prepared aseptically using sterilized distilled water. No dilution of honey was made for the 80 °Brix sample. For the 0 °Brix sample (control), sterilized distilled water was used. Thermal treatment at 55 °C for 10 min (conventionally applied in honey pasteurisation in industry) was carried out for comparison purposes. S. cerevisiae cell numbers in honey samples were established before and after each HPP and thermal treatment. The number of surviving cells was determined after a proper dilution of the untreated and treated samples by the viable plate count method. S. cerevisiae cells in different honey concentrations (0 to 80 °Brix) subjected to 600 MPa (at ambient temperature) showed an increasing resistance to inactivation with °Brix. A significant correlation (p < 0.05) between cell reduction and °Brix was found. Cell reduction in high-pressure-treated samples varied linearly with °Brix (R² > 0.9), confirming that the baroprotective effect of the food is due to sugar content. This study has practical implications in establishing efficient process designs for the commercial manufacturing of high-sugar food products and on the potential use of HPP for such products.
Keywords: high pressure processing, honey, Saccharomyces cerevisiae, °Brix
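The reported linear relation between cell reduction and °Brix can be reproduced with an ordinary least-squares fit. The sketch below uses SciPy with made-up log-reduction values chosen only to illustrate the analysis; the actual data are in the paper:

```python
import numpy as np
from scipy import stats

brix = np.array([0, 20, 30, 40, 50, 60, 70, 80])  # sample concentrations (°Brix)
# Illustrative log10 cell reductions after 600 MPa; higher sugar, less inactivation.
log_reduction = np.array([5.8, 5.1, 4.6, 4.0, 3.3, 2.6, 1.9, 1.2])

fit = stats.linregress(brix, log_reduction)
print(f"slope = {fit.slope:.3f} log-cycles per °Brix, R² = {fit.rvalue**2:.3f}")
# A significant negative slope (p < 0.05) supports the baroprotective effect of sugar.
print(f"p-value = {fit.pvalue:.2e}")
```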
Procedia PDF Downloads 353
3591 0.13-μm CMOS Vector Modulator for Wireless Backhaul System
Authors: J. S. Kim, N. P. Hong
Abstract:
In this paper, a CMOS vector modulator designed for a wireless backhaul system based on 802.11ac is presented. A polyphase filter and sign-select switches yield two orthogonal signal paths. Two variable gain amplifiers with a strongly reduced phase shift of only ±5° are used to weight these paths. The modulator has a phase control range of 360° and a gain range of -10 dB to 10 dB. The current drawn from a 1.2 V supply amounts to 20.4 mA. Using a 0.13 µm technology, the chip die area amounts to 1.47 × 0.75 mm².
Keywords: CMOS, phase shifter, backhaul, 802.11ac
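The gain and phase of such a modulator follow directly from the signed weights applied to the two orthogonal paths. A minimal ideal-behavior sketch (Python), ignoring the ±5° VGA phase error and all other circuit impairments:

```python
import numpy as np

def vector_modulator(gain_i, gain_q):
    """Ideal vector modulator: two VGAs weight orthogonal (I/Q) paths.
    Returns overall gain (dB) and phase (degrees)."""
    out = gain_i + 1j * gain_q          # I path plus 90°-shifted Q path
    return 20 * np.log10(abs(out)), np.degrees(np.angle(out))

# Sweeping the signed weights covers the full 360° phase range.
for gi, gq in [(1.0, 0.0), (0.7, 0.7), (0.0, 1.0), (-0.7, 0.7), (-1.0, 0.0)]:
    g_db, phase = vector_modulator(gi, gq)
    print(f"I={gi:+.1f} Q={gq:+.1f} -> gain {g_db:+.1f} dB, phase {phase:+.1f} deg")
```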
Procedia PDF Downloads 386
3590 Coarse-Grained Computational Fluid Dynamics-Discrete Element Method Modelling of the Multiphase Flow in Hydrocyclones
Authors: Li Ji, Kaiwei Chu, Shibo Kuang, Aibing Yu
Abstract:
Hydrocyclones are widely used to classify particles by size in industries such as mineral processing and chemical processing. The particles to be handled usually have a broad range of size distributions and sometimes density distributions, which have to be properly considered, causing challenges in the modelling of hydrocyclones. The combined approach of Computational Fluid Dynamics (CFD) and Discrete Element Method (DEM) offers convenience in modelling particle size/density distributions. However, its direct application to hydrocyclones is computationally prohibitive because there are billions of particles involved. In this work, a CFD-DEM model with the concept of the coarse-grained (CG) model is developed to model the solid-fluid flow in a hydrocyclone. The DEM is used to model the motion of discrete particles by applying Newton's laws of motion. Here, a particle assembly containing a certain number of particles with the same properties is treated as one CG particle. The CFD is used to model the liquid flow by numerically solving the local-averaged Navier-Stokes equations facilitated with the Volume of Fluid (VOF) model to capture the air core. The results are analyzed in terms of fluid and solid flow structures, and particle-fluid, particle-particle and particle-wall interaction forces. Furthermore, the calculated separation performance is compared with measurements. The results obtained from the present study indicate that this approach can offer an alternative way to examine the flow and performance of hydrocyclones.
Keywords: computational fluid dynamics, discrete element method, hydrocyclone, multiphase flow
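The coarse-graining idea can be made concrete with a single-particle time step: a CG particle standing in for k real particles still obeys Newton's second law, with fluid coupling evaluated from the local CFD cell. The sketch below (Python) uses a simple Stokes drag closure and illustrative material values; production CFD-DEM codes use far richer drag correlations and contact models:

```python
import numpy as np

def cg_particle_step(x, v, u_fluid, dt, d_real=1e-4, rho_p=2650.0, k=1000.0, mu=1e-3):
    """One explicit time step for a coarse-grained particle representing k real
    particles of diameter d_real (Stokes drag; gravity included)."""
    d_cg = d_real * k ** (1.0 / 3.0)                  # CG diameter preserves total mass
    m_cg = rho_p * np.pi / 6.0 * d_cg**3              # CG particle mass
    drag = 3.0 * np.pi * mu * d_cg * (u_fluid - v)    # Stokes drag on the CG particle
    g = np.array([0.0, 0.0, -9.81])
    a = drag / m_cg + g                               # Newton's second law
    v_new = v + a * dt
    return x + v_new * dt, v_new

x, v = np.zeros(3), np.zeros(3)
for _ in range(1000):
    x, v = cg_particle_step(x, v, u_fluid=np.array([0.1, 0.0, 0.0]), dt=1e-4)
print(x, v)
```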
Procedia PDF Downloads 407
3589 Exploring Language Attrition Through Processing: The Case of Mising Language in Assam
Authors: Chumki Payun, Bidisha Som
Abstract:
The Mising language, spoken by the Mising community in Assam, belongs to the Tibeto-Burman family of languages. This is one of the smaller languages of the region and is facing endangerment due to the dominance of larger languages, like Assamese. The language is spoken in close in-group scenarios and is gradually losing ground to the dominant languages, partly also due to the education setup, where schools use only dominant languages. While there are a number of factors behind the current status of the language, and those can be studied using sociolinguistic tools, the current work aims to contribute to the understanding of language attrition through language processing, in order to establish whether the effect of second language dominance goes beyond mere 'usage' patterns and has an impact on cognitive strategies. When bilingualism spreads widely in a society and results in a language shift, speakers often perform better in their second language (L2) than in their first language (L1) across a variety of task settings, in both comprehension and production tasks. This phenomenon was investigated in Mising-Assamese bilinguals, using a picture naming task, in the two districts of Jorhat and Tinsukia in Assam, where the relative dominance of L2 is slightly different. This explorative study aimed to investigate whether the L2 dominance is visible in their performance and also whether the pattern differs between the two places, thus pointing to the degree of language loss in this case. The findings would have implications for native language education, as education in one's mother tongue can help reverse the effect of language attrition, helping preserve the traditional knowledge system. The hypothesis was that due to the dominance of the L2, subjects' performance in the task would be better in Assamese than in Mising. The experiment: Mising-Assamese bilingual participants (age range 21-31; N = 20 from each district) had to perform a picture naming task in which participants were shown pictures of familiar objects and asked to name them in four scenarios: (a) only in Mising; (b) only in Assamese; (c) a cued mixed block: an auditory cue determines the language in which to name the object; and (d) a non-cued mixed block: participants are not given any specific language cues but are instructed to name the pictures in whichever language they feel most comfortable. The experiment was designed and executed using E-Prime 3.0; responses were collected with a Chronos response box and recorded with an audio recorder. Preliminary analysis reveals the presence of dominance of L2 over L1. The paper will present a comparison of the response latency, error analysis, and switch cost in L1 and L2 and explain the same from the perspective of language attrition.
Keywords: bilingualism, language attrition, language processing, Mising language
Procedia PDF Downloads 23
3588 Application of Compressed Sensing Method for Compression of Quantum Data
Authors: M. Kowalski, M. Życzkowski, M. Karol
Abstract:
Current quantum key distribution (QKD) systems offer low bit rates, up to single megabits per second. Compared to conventional optical fiber links with multi-gigabit bit rates, the parameters of recent QKD systems are significantly lower. In this article we present the concept of applying the compressed sensing method to the compression of quantum information. The compression methodology, the signal reconstruction method, and initial results on improving the throughput of a quantum information link are presented.
Keywords: quantum key distribution systems, fiber optic system, compressed sensing
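Compressed sensing recovers a sparse signal x from far fewer linear measurements y = Ax than Nyquist sampling would require, typically by L1-regularized least squares. Below is a minimal reconstruction sketch using iterative shrinkage-thresholding (ISTA), a generic illustration of the method rather than the compression scheme developed in the paper:

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L      # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
x_hat = ista(A, A @ x_true)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

A 4:1 compression (m = 64 measurements for n = 256 samples) recovers the sparse signal almost exactly, which is the throughput gain the paper targets.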
Procedia PDF Downloads 693
3587 Enhanced Near-Infrared Upconversion Emission Based Lateral Flow Immunoassay for Background-Free Detection of Avian Influenza Viruses
Authors: Jaeyoung Kim, Heeju Lee, Huijin Jung, Heesoo Pyo, Seungki Kim, Joonseok Lee
Abstract:
Avian influenza viruses (AIV) are type A influenza viruses of the Orthomyxoviridae family and the cause of highly contagious respiratory diseases. AIV are categorized on the basis of the types of surface glycoproteins, hemagglutinin and neuraminidase. Certain H5 and H7 subtypes of AIV have evolved into highly pathogenic avian influenza (HPAI) viruses, which have caused considerable economic loss to the poultry industry and led to severe public health crises. Several commercial kits have been developed for on-site detection of AIV. However, the sensitivity of these methods is too low to detect low virus concentrations in clinical samples and opaque stool samples. Here, we introduce a background-free near-infrared (NIR)-to-NIR upconversion nanoparticle-based lateral flow immunoassay (NNLFA) platform that yields a sensor detecting AIV within 20 minutes. Ca²⁺ ions in the shell were used as a heterogeneous dopant to enhance the NIR-to-NIR upconversion photoluminescence (PL) emission without inducing significant changes in the morphology and size of the UCNPs. In a mixture of opaque stool samples and gold nanoparticles (GNPs), which are components of commercial AIV LFAs, the background signal of the stool samples masks the absorption peak of the GNPs. However, UCNPs dispersed in the stool samples still show strong emission centered at 800 nm when excited at 980 nm, which enables the NNLFA platform to detect a 10-times lower viral load than a commercial GNP-based AIV LFA. The detection limit of the NNLFA for low pathogenic avian influenza (LPAI) H5N2 and HPAI H5N6 viruses was 10² EID₅₀/mL and 10³.⁵ EID₅₀/mL, respectively. Moreover, when opaque brown-colored samples were used as the target analytes, a strong NIR emission signal from the test line of the NNLFA confirmed the presence of AIV, whereas the commercial AIV LFA detected AIV with difficulty. Therefore, we propose that this rapid and background-free NNLFA platform has the potential to detect AIV in the field, which could effectively prevent the spread of these viruses at an early stage.
Keywords: avian influenza viruses, lateral flow immunoassay, on-site detection, upconversion nanoparticles
Procedia PDF Downloads 163
3586 Successful Rehabilitation of Recalcitrant Knee Pain Due to Anterior Cruciate Ligament Injury Masked by Extensive Skin Graft: A Case Report
Authors: Geum Yeon Sim, Tyler Pigott, Julio Vasquez
Abstract:
A 38-year-old obese female with no significant past medical history presented with left knee pain. Six months earlier, she had sustained a left knee dislocation in a motor vehicle accident that was managed with a skin graft over the left lower extremity without any reconstructive surgery. She developed persistent pain and stiffness in her left knee that worsened with walking and stair climbing. Examination revealed a healed extensive skin graft over the left lower extremity, including the left knee. Palpation showed moderate tenderness along the superior border of the patella, exquisite tenderness over the medial collateral ligament (MCL), and mild tenderness over the tibial tuberosity. Sensation, reflexes, and strength in her lower extremities were normal. Active and passive range of motion of her left knee was limited during flexion. Instability was noted on valgus stress testing of the left knee. Left knee magnetic resonance imaging showed a high-grade (grade 2-3) injury of the proximal superficial fibers of the MCL and diffuse thickening and signal abnormality of the cruciate ligaments, as well as edema-like subchondral marrow signal change in the anterolateral aspect of the lateral femoral condyle weight-bearing surface. There was also notable extensive scarring and edema of the skin, subcutaneous soft tissues, and musculature surrounding the knee. The patient was managed with left knee immobilization for five months, which was complicated by limited knee flexion. Physical therapy consisting of quadriceps, hamstring, and gastrocnemius stretching and strengthening, range of motion exercises, scar/soft tissue mobilization, and gait training was given, with marked improvement in pain and range of motion. The patient experienced a further reduction in pain as well as an improvement in function with home exercises consisting of continued strengthening and stretching.
Keywords: ligamentous injury, trauma, rehabilitation, knee pain
Procedia PDF Downloads 108
3585 A Gradient Orientation Based Efficient Linear Interpolation Method
Authors: S. Khan, A. Khan, Abdul R. Soomrani, Raja F. Zafar, A. Waqas, G. Akbar
Abstract:
This paper proposes a low-complexity image interpolation method. Image interpolation is used to convert a low-dimension video/image to a high-dimension video/image. The objective of a good interpolation method is to upscale an image in such a way that it provides better edge preservation at very low complexity, so that real-time processing of video frames can be made possible. However, low-complexity methods tend to provide real-time interpolation at the cost of blurring, jagging and other artifacts due to errors in slope calculation. Non-linear methods, on the other hand, provide better edge preservation, but at the cost of high complexity, and hence they can be considered very far from real-time interpolation. The proposed method is a linear method that uses gradient orientation for slope calculation, unlike conventional linear methods that use the contrast of nearby pixels. Prewitt edge detection is applied to separate uniform regions and edges. Simple line averaging is applied to unknown uniform regions, whereas unknown edge pixels are interpolated after calculation of slopes using the gradient orientations of neighboring known edge pixels. As a post-processing step, a bilateral filter is applied to the interpolated edge regions in order to enhance the interpolated edges.
Keywords: edge detection, gradient orientation, image upscaling, linear interpolation, slope tracing
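The region split and the orientation-steered averaging can be sketched compactly. The fragment below (Python/SciPy) is a much-simplified illustration of the idea: Prewitt gradients give an edge map and an orientation, uniform midpoints get plain line averaging, and edge midpoints are averaged along the traced slope. The threshold, the slope clipping, and the midpoint formulation are assumptions of this sketch, not the paper's exact scheme:

```python
import numpy as np
from scipy import ndimage

def gradient_orientation(img):
    """Prewitt gradient magnitude and direction (radians) of a grayscale image."""
    img = img.astype(float)
    gx = ndimage.prewitt(img, axis=1)   # horizontal derivative
    gy = ndimage.prewitt(img, axis=0)   # vertical derivative
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def interpolate_midpoint(img, r, c, mag, ori, edge_thresh=40.0):
    """Value midway between rows r and r+1 at column c: line averaging in
    uniform regions, slope-traced averaging on edges."""
    if mag[r, c] < edge_thresh:
        return 0.5 * (img[r, c] + img[r + 1, c])          # uniform region
    # The edge runs perpendicular to the gradient; -tan(theta) gives its
    # horizontal step per row, clipped to keep the trace local.
    step = int(np.clip(np.round(-np.tan(ori[r, c])), -3, 3))
    c_lo = np.clip(c - step, 0, img.shape[1] - 1)
    c_hi = np.clip(c + step, 0, img.shape[1] - 1)
    return 0.5 * (img[r, c_lo] + img[r + 1, c_hi])        # average along the slope
```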
Procedia PDF Downloads 260
3584 Performance Improvement of Long-Reach Optical Access Systems Using Hybrid Optical Amplifiers
Authors: Shreyas Srinivas Rangan, Jurgis Porins
Abstract:
Internet traffic has increased exponentially due to the high demand for data rates by users, and the constantly growing metro and access networks are focused on improving the maximum transmit distance of long-reach optical networks. One of the common methods to improve the maximum transmit distance of long-reach optical networks at the component level is to use broadband optical amplifiers. The Erbium Doped Fiber Amplifier (EDFA) provides high amplification with a low noise figure, but due to its characteristics, its operation is limited to the C-band and L-band. In contrast, the Raman amplifier exhibits a wide amplification spectrum, and negative effective noise figure values can be achieved. To obtain such results, high-power pumping sources are required. Operating Raman amplifiers with such high-power optical sources may cause fire hazards and may damage the optical system. In this paper, we implement a hybrid optical amplifier configuration. EDFA and Raman amplifiers are used in this hybrid setup to combine the advantages of both and improve the reach of the system. Using this setup, we analyze the maximum transmit distance of the network by obtaining a correlation diagram between the length of the single-mode fiber (SMF) and the Bit Error Rate (BER). This hybrid amplifier configuration is implemented in a Wavelength Division Multiplexing (WDM) system at a BER of 10⁻⁹ using the NRZ modulation format, and the gain uniformity, signal-to-noise ratio (SNR), pumping source efficiency, and optical signal gain efficiency of the amplifier are studied experimentally in a mathematical modelling environment. Numerical simulations were implemented in RSoft OptSim simulation software based on the nonlinear Schrödinger equation, using the split-step method, the Fourier transform, and the Monte Carlo method for estimating the BER.
Keywords: Raman amplifier, erbium doped fibre amplifier, bit error rate, hybrid optical amplifiers
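The 10⁻⁹ BER threshold used to read the maximum reach off the SMF-length/BER diagram maps onto a Q factor of about 6 through the standard Gaussian-noise relation for NRZ detection. A small conversion helper (Python):

```python
from math import erfc, sqrt

def ber_from_q(q):
    """BER for an NRZ signal under Gaussian noise: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * erfc(q / sqrt(2))

for q in (5.0, 6.0, 7.0):
    print(f"Q = {q:.1f} -> BER = {ber_from_q(q):.2e}")
# Q ≈ 6 gives BER ≈ 1e-9: each extra span of fiber erodes Q until the
# simulated link crosses this threshold, which defines the maximum reach.
```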
Procedia PDF Downloads 70
3583 Trust: The Enabler of Knowledge-Sharing Culture in an Informal Setting
Authors: Emmanuel Ukpe, S. M. F. D. Syed Mustapha
Abstract:
Trust in an organization has been perceived as one of the key factors behind knowledge sharing, mainly in an unstructured work environment. In an informal working environment, instilling trust among individuals is a challenge, and even more so in a virtual environment. This study has contributed to developing a framework for building trust in an unstructured organization for performing knowledge sharing in a virtual environment. An artifact called KAPE (Knowledge Acquisition, Processing, and Exchange) was developed for knowledge sharing in the informal organization, and the framework was incorporated into it. It applies to cassava farmers to facilitate knowledge sharing using a web-based platform. A survey was conducted; data were collected from 382 farmers from 21 farm communities. The multiple regression technique, Cronbach's alpha reliability test, Tukey's honestly significant difference (HSD) analysis, one-way analysis of variance (ANOVA), and all trust acceptance measures (TAM) were used to test the hypothesis and to determine noteworthy relationships. The results show a significant difference in knowledge sharing between farmers who are high in the trust acceptance factors found in the model (M = 3.66, SD = .93) and those who are low in the trust acceptance factors (M = 2.08, SD = .28), (t(48) = 5.69, p = .00). Furthermore, when applying cognitive expectancy theory, farmers with cognitive consonance show a higher level of trust and satisfaction with knowledge and information from KAPE, as compared with those with a low level of cognitive dissonance. These results imply that the adopted trust model KAPE positively improved knowledge-sharing activities in an informal environment amongst rural farmers.
Keywords: trust, knowledge, sharing, knowledge acquisition, processing and exchange, KAPE
Procedia PDF Downloads 120
3582 Modern Agriculture and Industrialization Nexus in the Nigerian Context
Authors: Ese Urhie, Olabisi Popoola, Obindah Gershon, Olabanji Ewetan
Abstract:
Modern agriculture involves the use of improved tools and equipment (instead of crude and ineffective tools) like tractors, hand-operated planters, hand-operated fertilizer drills and combine harvesters, which increase agricultural productivity. Farmers in Nigeria still have huge potential to enhance their productivity. The study argues that the increase in agricultural output due to increased productivity, orchestrated by modern agriculture, will promote forward linkages and opportunities in the processing sub-sector: both the manufacturing of machines and the processing of raw materials. Depending on existing incentives, foreign investment could be attracted to augment local investment in the sector. The availability of raw materials in large quantities, at competitive prices, will attract investment in other industries. In addition, potential for backward linkages will also be created. In a nutshell, adopting the unbalanced growth theory in favour of the agricultural sector could engender industrialization in a country with untapped potential. The paper highlights the numerous potentials of modern agriculture that are yet to be tapped in Nigeria and also provides a theoretical analysis of how the realization of such potentials could promote industrialization in the country. The study adopts Lewis' structural-change model and Hirschman's theory of unbalanced growth in the design of the analytical framework. The framework will be useful in empirical studies that will guide policy formulation.
Keywords: modern agriculture, industrialization, structural change model, unbalanced growth
Procedia PDF Downloads 303
3581 Hot Deformation Behavior and Recrystallization of Inconel 718 Superalloy under Double Cone Compression
Authors: Wang Jianguo, Ding Xiao, Liu Dong, Wang Haiping, Yang Yanhui, Hu Yang
Abstract:
The hot deformation behavior of Inconel 718 alloy was studied by uniaxial compression tests at deformation temperatures of 940-1040 °C and strain rates of 0.001-10 s⁻¹. The double cone compression (DCC) tests developed strains ranging from 30% to 79%, including all intermediate values of strain, at different temperatures (960-1040 °C). The DCC tests were simulated by finite element software, which showed the strain and strain rate distributions. The results show that the peak stress level of the alloy decreased with increasing deformation temperature and decreasing strain rate, which could be characterized by a Zener-Hollomon parameter in the hyperbolic-sine equation. A characterization method for the hot processing window, comprising the recrystallization volume fraction and average grain size, was proposed for double cone compression tests of uniform coarse-grain, mixed-grain and uniform fine-grain double conical specimens on a hydraulic press and a screw press. The results show that uniform microstructures can be obtained by low temperature with high deformation followed by high temperature with small deformation on the hydraulic press, and by low temperature, medium deformation, multi-pass processing on the screw press. The two methods were applied to an industrial forging process, and forgings with a uniform microstructure were obtained successfully.
Keywords: Inconel 718 superalloy, hot processing windows, double cone compression, uniform microstructure
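The Zener-Hollomon characterization combines strain rate and temperature into Z = ε̇·exp(Q/RT) and ties it to the peak stress through the hyperbolic-sine law Z = A[sinh(ασ)]ⁿ. A small sketch (Python) inverting that law for the flow stress; the activation energy and material constants below are illustrative order-of-magnitude values for IN718, not the fitted constants from this study:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol·K)

def zener_hollomon(strain_rate, T_kelvin, Q=430e3):
    """Z = strain_rate * exp(Q / (R*T)); Q is an illustrative activation energy."""
    return strain_rate * np.exp(Q / (R * T_kelvin))

def flow_stress(Z, A=1e15, alpha=0.005, n=4.5):
    """Invert Z = A * sinh(alpha*sigma)^n for the peak stress (MPa); constants illustrative."""
    return np.arcsinh((Z / A) ** (1.0 / n)) / alpha

for T_c in (940, 990, 1040):
    Z = zener_hollomon(0.1, T_c + 273.15)
    print(f"T = {T_c} C, strain rate 0.1/s -> Z = {Z:.2e}, sigma_p ~ {flow_stress(Z):.0f} MPa")
```

The monotonic drop of the predicted peak stress with temperature mirrors the experimental trend reported above.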
Procedia PDF Downloads 219
3580 Double Functionalization of Magnetic Colloids with Electroactive Molecules and Antibody for Platelet Detection and Separation
Authors: Feixiong Chen, Naoufel Haddour, Marie Frenea-Robin, Yves MéRieux, Yann Chevolot, Virginie Monnier
Abstract:
Neonatal thrombopenia occurs when the mother generates antibodies against her baby's platelet antigens. It is particularly critical for newborns because it can cause coagulation troubles leading to intracranial hemorrhage. In this case, diagnosis must be done quickly so that platelet transfusion can be performed immediately after birth. Before transfusion, platelet antigens must be tested carefully to avoid rejection. The majority of thrombopenias (95%) are caused by antibodies directed against Human Platelet Antigen 1a (HPA-1a) or 5b (HPA-5b). The common method for platelet antigen detection is the polymerase chain reaction, which allows identification of the gene sequence. However, it is expensive, time-consuming and requires a significant blood volume, which is not suitable for newborns. We propose to develop a point-of-care device based on magnetic colloids doubly functionalized with 1) antibodies specific to platelet antigens and 2) highly sensitive electroactive molecules, in order to be detected by an electrochemical microsensor. These magnetic colloids will be used first to isolate platelets from other blood components, then to specifically capture platelets bearing the HPA-1a and HPA-5b antigens, and finally to attract them close to the sensor working electrode for an improved electrochemical signal. The expected advantages are an assay time lower than 20 min starting from a blood volume smaller than 100 µL. Our functionalization procedure, based on amine dendrimers and NHS-ester modification of the initial carboxyl colloids, will be presented. Functionalization efficiency was evaluated by colorimetric titration of surface chemical groups, zeta potential measurements, infrared spectroscopy, fluorescence scanning and cyclic voltammetry. Our results showed that electroactive molecules and antibodies can be immobilized successfully onto magnetic colloids. Application of a magnetic field to the working electrode increased the detected electrochemical signal. The magnetic colloids were able to capture specific purified antigens extracted from platelets.
Keywords: magnetic nanoparticles, electroactive molecules, antibody, platelet
Procedia PDF Downloads 270
3579 Synthesis and Characterization of Carboxymethyl Cellulose-Chitosan Based Composite Hydrogels for Biomedical and Non-Biomedical Applications
Abstract:
Hydrogels have attracted much academic and industrial attention due to their unique properties and potential biomedical and non-biomedical applications. Extending their applications has been limited by the synthesis of hydrogels using toxic materials and complex, irreproducible processing techniques. In order to promote environmental sustainability, hydrogel efficiency, and wider application, this study focused on the synthesis of composite hydrogel matrices based on carboxymethyl cellulose (CMC) and chitosan (CSN) natural polymers, using an edible non-toxic crosslinker, citric acid (CA), and a simple low-energy processing method. Composite hydrogels were developed by chemical crosslinking. The results demonstrated that CMC:2CSN:CA exhibited good performance properties and a super-absorbency of 21× its original weight. This makes it promising for biomedical applications such as chronic wound healing and regeneration, next-generation skin substitutes, in situ bone regeneration and cell delivery. On the other hand, CMC:CSN:CA exhibited a durable, well-structured internal network with minimal swelling degrees and water absorbency, excellent gel fraction, and infrared reflectance. These properties make it a suitable composite hydrogel matrix for a warming effect and the controlled, efficient release of loaded materials. The CMC:2CSN:CA and CMC:CSN:CA composite hydrogels developed also exhibited excellent chemical, morphological, and thermal properties.
Keywords: citric acid, fumaric acid, tartaric acid, zinc nitrate hexahydrate
Procedia PDF Downloads 151
3578 The Effects of Blanching, Boiling and Steaming on Ascorbic Acid Content, Total Phenolic Content, and Colour in Cauliflowers (Brassica oleracea var. Botrytis)
Authors: Huei Lin Lee, Wee Sim Choo
Abstract:
The effects of blanching, boiling and steaming on the ascorbic acid content, total phenolic content and colour in cauliflower (Brassica oleracea var. Botrytis) were investigated. It was found that blanching was the best thermal process to apply to cauliflower compared to boiling and steaming. The blanching and steaming processes retained most of the ascorbic acid content (AAC) of cauliflower compared to boiling. As for the total phenolic content (TPC), blanching retained a higher TPC in cauliflower than boiling and steaming. There were no significant differences between the TPC of boiled and steamed cauliflowers. As for the colour measurement, there were no significant differences in the colour of the cauliflower at different lead times (from processing to the point of consumption) at 30-minute intervals up to 3 hours, but there were slight variations in L*, a*, and b* values among the thermally processed cauliflowers (blanched, boiled and steamed). The cauliflowers in this study were found to give a desirable white colour (L* value in the range of 77-83) in all three thermal processes (blanching, boiling and steaming). There was no significant effect of lead time (30-minute intervals up to 3 hours) on raw and all three thermally processed (blanched, boiled and steamed) cauliflowers.
Keywords: ascorbic acid, cauliflower, colour, phenolics
Procedia PDF Downloads 314
3577 The Effect of Development of Two-Phase Flow Regimes on the Stability of Gas Lift Systems
Authors: Khalid. M. O. Elmabrok, M. L. Burby, G. G. Nasr
Abstract:
Flow instability during gas lift operation is caused by three major phenomena: density wave oscillation, casing heading pressure, and flow perturbation within the two-phase flow region. This paper focuses on the causes and effects of flow instability during gas lift operation and suggests ways to control it in order to maximise productivity. A laboratory-scale two-phase flow system to study the effects of flow perturbation was designed and built. The apparatus comprises a 2 m long, 66 mm ID transparent PVC pipe with an air injection point situated 0.1 m above the base of the pipe. This is the point where stabilised bubbles were clearly visible after injection. Air is injected into the water-filled transparent pipe at different flow rates and pressures. The behavior of the different sizes of bubbles generated within the two-phase region was captured using a digital camera, and the images were analysed using an advanced image processing package. It was observed that the average maximum bubble size increased with the length of the vertical pipe column, from 29.72 to 47 mm. The increase in air injection pressure from 0.5 to 3 bar increased the bubble sizes from 29.72 mm to 44.17 mm, which then decreased when the pressure reached 4 bar. It was observed that at a higher bubble velocity of 6.7 m/s, larger-diameter bubbles coalesce and burst due to high agitation and collision with each other. This collapse of the bubbles causes a pressure drop and reverse flow within the two-phase flow and is the main cause of the flow instability phenomena.
Keywords: gas lift instability, bubbles forming, bubbles collapsing, image processing
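The image-based bubble sizing can be sketched with standard OpenCV operations: threshold each frame, find bubble contours, and convert each contour's projected area to an equivalent-circle diameter. The calibration factor and area cutoff below are illustrative placeholders, not the values used with the actual rig (OpenCV 4.x return convention assumed):

```python
import cv2
import numpy as np

def bubble_diameters(frame_gray, mm_per_pixel=0.1, min_area_px=30):
    """Equivalent bubble diameters (mm) from one frame: Otsu threshold,
    external contours, then area -> equivalent-circle diameter."""
    _, binary = cv2.threshold(frame_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    diam = []
    for c in contours:
        area = cv2.contourArea(c)
        if area >= min_area_px:                 # reject noise specks
            # d = 2*sqrt(A/pi): projected area to equivalent-circle diameter
            diam.append(2.0 * np.sqrt(area / np.pi) * mm_per_pixel)
    return diam
```

Tracking the maximum of these diameters per frame over pressure steps gives the 29.72-47 mm trends reported above.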
Procedia PDF Downloads 420
3576 Progress in Combining Image Captioning and Visual Question Answering Tasks
Authors: Prathiksha Kamath, Pratibha Jamkhandi, Prateek Ghanti, Priyanshu Gupta, M. Lakshmi Neelima
Abstract:
Combining image captioning and Visual Question Answering (VQA) tasks has emerged as a new and exciting research area. The image captioning task involves generating a textual description that summarizes the content of the image. VQA aims to answer a natural language question about the image. Both tasks involve computer vision and natural language processing (NLP) and require a deep understanding of the content of the image, the semantic relationships within the image, and the ability to generate a response in natural language. There has been remarkable growth in both tasks with the rapid advancement of deep learning. In this paper, we present a comprehensive review of recent progress in combining image captioning and visual question answering (VQA) tasks. We first discuss image captioning and VQA individually and then the various ways in which the two tasks can be integrated. We also analyze the challenges associated with these tasks and ways to overcome them. We finally discuss the various datasets and evaluation metrics used in these tasks. This paper concludes with the need for generating captions based on context, captions that are able to answer the most likely asked questions about the image, so as to aid the VQA task. Overall, this review highlights the significant progress made in combining image captioning and VQA, as well as the ongoing challenges and opportunities for further research in this exciting and rapidly evolving field, which has the potential to improve the performance of real-world applications such as autonomous vehicles, robotics, and image search.
Keywords: image captioning, visual question answering, deep learning, natural language processing
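One concrete way to run both tasks on the same image is via Hugging Face pipelines; the checkpoints below are illustrative public models, not systems evaluated in this review, and the image path is hypothetical:

```python
# Minimal sketch: caption an image and answer a question about it.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
vqa = pipeline("visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")

image = "street_scene.jpg"  # hypothetical local image file
caption = captioner(image)[0]["generated_text"]
answer = vqa(image=image, question="How many cars are in the picture?")[0]["answer"]
print(caption, "|", answer)
```

A caption conditioned on the questions users are likely to ask (e.g., one that already mentions the counted cars) would serve both tasks at once, which is the integration this paper argues for.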
Procedia PDF Downloads 73
3575 Referencing Anna: Findings From Eye-tracking During Dutch Pronoun Resolution
Authors: Robin Devillers, Chantal van Dijk
Abstract:
Children face ambiguities in everyday language use. Ambiguity in pronoun resolution can be particularly challenging for them, whereas adults can rapidly identify the antecedent of a mentioned pronoun. Two main factors underlie this process, namely the accessibility of the referent and the syntactic cues of the pronoun. Within 200 ms, adults have integrated the accessibility and the syntactic constraints, while relieving cognitive effort by considering contextual cues. As children are still developing their cognitive capacity, they are not yet able to simultaneously assess and integrate accessibility, contextual cues and syntactic information. As such, they fail to identify the correct referent and possibly fixate more on the competitor in comparison to adults. In this study, Dutch while-clauses were used to investigate the interpretation of pronouns by children. The aim is to a) examine the extent to which 7-10 year old children are able to utilise discourse and syntactic information during online and offline sentence processing and b) analyse the contribution of individual factors, including age, working memory, condition and vocabulary. Adult and child participants are presented with filler items and while-clauses, the latter following a particular structure: 'Anna and Sophie are sitting in the library. While Anna is reading a book, she is taking a sip of water.' This sentence illustrates the ambiguous situation, as it is unclear whether 'she' refers to Anna or Sophie. In the unambiguous situation, either Anna or Sophie is substituted by a boy, such as 'Peter'. The pronoun in the second sentence then unambiguously refers to one of the characters due to the syntactic constraints of the pronoun. Children's and adults' responses were measured by means of a visual world paradigm. This paradigm consisted of two characters, of which one was the referent (the target) and the other was the competitor. A sentence was presented and followed by a question, which required the participant to choose which character was the referent. This paradigm thus yields an online (fixations) and an offline (accuracy) score. The findings will be analysed using Generalised Additive Mixed Models, which allow for a thorough estimation of the individual variables. The findings will contribute to the scientific literature in several ways: firstly, while-clauses have not been studied much and their processing has not yet been characterized. Moreover, online pronoun resolution has not been investigated much in either children or adults, and therefore, this study will contribute to the literature on both adult and child pronoun resolution. Lastly, pronoun resolution has not yet been studied in Dutch, and as such, this study adds to the literature on Dutch language processing.
Keywords: pronouns, online language processing, Dutch, eye-tracking, first language acquisition, language development
Procedia PDF Downloads 99
3574 Explaining the Steps of Designing and Calculating the Content Validity Ratio Index of the Screening Checklist of Preschool Students (5 to 7 Years Old) Exposed to Learning Difficulties
Authors: Sajed Yaghoubnezhad, Sedygheh Rezai
Abstract:
Background and Aim: Since students with learning disabilities in Iran are currently identified only after entering school, based on the discrepancy between IQ and academic achievement, the purpose of this study is to design and calculate the content validity of a screening checklist for preschool children (5 to 7 years old) exposed to learning difficulties. Methods: This research is a fundamental study and, in terms of data collection method, quantitative research with a descriptive approach. In order to design this checklist, after reviewing the research background and theoretical foundations, cognitive abilities (visual processing, auditory processing, phonological awareness, executive functions, visuospatial working memory and fine motor skills) were considered the basic variables of school learning. The basic items and worksheets of the screening checklist for preschool students aged 5 to 7 years with learning difficulties were compiled based on the mentioned abilities and were provided to specialists in order to calculate the content validity ratio (CVR) index. Results: Based on the results, the CVR index of the background information checklist is 0.9, and the CVR index of the performance checklist for preschool children (5 to 7 years) is 0.78. Overall, the CVR index of the checklist is 0.84. These results provide good evidence for the validity of the screening checklist for preschool children (5 to 7 years old) exposed to learning difficulties.
Keywords: checklist, screening, preschoolers, learning difficulties
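The index behind these figures is Lawshe's content validity ratio, CVR = (nₑ − N/2)/(N/2), where nₑ is the number of panelists rating an item "essential" and N is the panel size. A minimal sketch (Python; the panel sizes below are illustrative, not the study's):

```python
def content_validity_ratio(n_essential, n_panelists):
    """Lawshe's CVR = (n_e - N/2) / (N/2); ranges from -1 to +1."""
    half = n_panelists / 2.0
    return (n_essential - half) / half

# Illustrative panel of 10 experts:
print(content_validity_ratio(9, 10))   # 0.8 -> item retained
print(content_validity_ratio(6, 10))   # 0.2 -> item likely dropped
# Checklist-level indices like the reported 0.9, 0.78, and 0.84 aggregate
# such item-level ratios across the retained items.
```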
Procedia PDF Downloads 102
3573 The Implementation of an E-Government System in Developing Countries: A Case of Taita Taveta County, Kenya
Authors: Tabitha Mberi, Tirus Wanyoike, Joseph Sevilla
Abstract:
The use of Information and Communication Technology (ICT) in government is gradually becoming a major requirement for transforming the delivery of services to stakeholders by improving quality of service and efficiency. In Kenya, the devolution of government from local authorities to county governments has resulted in many counties adopting online revenue collection systems which can be easily accessed by their stakeholders. Strathmore Research and Consortium Centre (SRCC) implemented a revenue collection system in Taita Taveta, a county in coastal Kenya. It consisted of two integrated systems: an online system dubbed 'CountyPro' for processing county services such as business permit applications, general billing, property rates payments and any other revenue streams from the county, and a Point of Sale (PoS) system used by the county revenue collectors to charge for market fees and vehicle parking fees. This study assesses the successes and challenges in the adoption of the integrated system. Qualitative and quantitative data collection methods were used, with the researcher using focus groups, interviews, and questionnaires to collect data from various users of the system. An analysis was carried out and revealed that 87% of the county revenue officers situated in county offices describe the system as efficient and say it has made their work easier in terms of processing transactions for customers.
Keywords: e-government, counties, information technology, online system, point of sale
Procedia PDF Downloads 247
3572 Machine Learning Strategies for Data Extraction from Unstructured Documents in Financial Services
Authors: Delphine Vendryes, Dushyanth Sekhar, Baojia Tong, Matthew Theisen, Chester Curme
Abstract:
Much of the data that inform the decisions of governments, corporations and individuals are harvested from unstructured documents. Data extraction is defined here as a process that turns non-machine-readable information into a machine-readable format that can be stored, for instance, in a database. In financial services, introducing more automation in data extraction pipelines is a major challenge. Information sought by financial data consumers is often buried within vast bodies of unstructured documents, which have historically required thorough manual extraction. Automated solutions provide faster access to non-machine-readable datasets, in a context where untimely information quickly becomes irrelevant. Data quality standards cannot be compromised, so automation requires high data integrity. This multifaceted task is broken down into smaller steps: ingestion, table parsing (detection and structure recognition), text analysis (entity detection and disambiguation), schema-based record extraction, user feedback incorporation. Selected intermediary steps are phrased as machine learning problems. Solutions leveraging cutting-edge approaches from the fields of computer vision (e.g. table detection) and natural language processing (e.g. entity detection and disambiguation) are proposed.
Keywords: computer vision, entity recognition, finance, information retrieval, machine learning, natural language processing
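One of the intermediary steps, entity detection, can be illustrated with an off-the-shelf NLP library. The sketch below uses spaCy's small English model as a stand-in; the paper's pipeline and models are not public, so this is a generic example, not their implementation:

```python
# Entity detection step of an extraction pipeline (generic illustration).
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
text = ("Acme Holdings reported revenue of $12.4 million for Q3 2021, "
        "up 8% from the prior year, according to its London filing.")

doc = nlp(text)
for ent in doc.ents:
    # Labels such as ORG, MONEY, DATE, GPE feed schema-based record extraction.
    print(f"{ent.text!r:30} -> {ent.label_}")
```

Disambiguation (linking "Acme Holdings" to a unique identifier in a security master) is the harder follow-on step the paper highlights.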
Procedia PDF Downloads 111
3571 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark
Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos
Abstract:
This paper presents an investigation of the performance impacts of varying five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: the algorithm's performance depends on multiple factors, and knowing beforehand the effects of each factor becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: to explain the functional relationship between factors and performance and to develop linear predictor models for time and cost. Methods: the solid statistical principles of Design of Experiments (DoE), particularly the randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measurement of each factor's impact. Results: our findings include prediction models and show some non-intuitive results, such as the small influence of cores, the neutrality of memory and disks with respect to total execution time, and the non-significant impact of input data scale on costs, although it notably impacts execution time.
Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark
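A two-level screening design of this kind is typically analyzed by regressing the response on coded (−1/+1) factor levels, so each coefficient directly ranks a factor's impact. A sketch with statsmodels on synthetic data shaped to mimic the reported pattern (all numbers invented for illustration, not the paper's 48-cluster measurements):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
# Coded two-level design: -1/+1 levels for five factors, 16 illustrative runs.
df = pd.DataFrame(rng.choice([-1, 1], size=(16, 5)),
                  columns=["datasize", "nodes", "cores", "memory", "disks"])
# Synthetic response: data size and node count dominate, cores are weak,
# memory and disks are near-neutral (mirroring the reported findings).
df["time"] = (100 + 40 * df["datasize"] - 25 * df["nodes"] - 3 * df["cores"]
              + rng.normal(0, 2, 16))

model = smf.ols("time ~ datasize + nodes + cores + memory + disks", data=df).fit()
print(model.params.round(2))  # coefficient magnitude ranks each factor's impact
```

The fitted equation doubles as the linear time predictor; multiplying by the per-hour cluster price yields the cost model.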
Procedia PDF Downloads 120
3570 Utilizing Temporal and Frequency Features in Fault Detection of Electric Motor Bearings with Advanced Methods
Authors: Mohammad Arabi
Abstract:
The development of advanced technologies in the field of signal processing and vibration analysis has enabled more accurate analysis and fault detection in electrical systems. This research investigates the application of temporal and frequency features in detecting faults in electric motor bearings, aiming to enhance fault detection accuracy and prevent unexpected failures. The use of methods such as deep learning algorithms and neural networks in this process can yield better results. The main objective of this research is to evaluate the efficiency and accuracy of methods based on temporal and frequency features in identifying faults in electric motor bearings to prevent sudden breakdowns and operational issues. Additionally, the feasibility of using techniques such as machine learning and optimization algorithms to improve the fault detection process is also considered. This research employed an experimental method and random sampling. Vibration signals were collected from electric motors under normal and faulty conditions. After standardizing the data, temporal and frequency features were extracted. These features were then analyzed using statistical methods such as analysis of variance (ANOVA) and t-tests, as well as machine learning algorithms like artificial neural networks and support vector machines (SVM). The results showed that using temporal and frequency features significantly improves the accuracy of fault detection in electric motor bearings. ANOVA indicated significant differences between normal and faulty signals. Additionally, t-tests confirmed statistically significant differences between the features extracted from normal and faulty signals. Machine learning algorithms such as neural networks and SVM also significantly increased detection accuracy, demonstrating high effectiveness in timely and accurate fault detection. This study demonstrates that using temporal and frequency features combined with machine learning algorithms can serve as an effective tool for detecting faults in electric motor bearings. This approach not only enhances fault detection accuracy but also simplifies and streamlines the detection process. However, challenges such as data standardization and the cost of implementing advanced monitoring systems must also be considered. Utilizing temporal and frequency features in fault detection of electric motor bearings, along with advanced machine learning methods, offers an effective solution for preventing failures and ensuring the operational health of electric motors. Given the promising results of this research, it is recommended that this technology be more widely adopted in industrial maintenance processes.
Keywords: electric motor, fault detection, frequency features, temporal features
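The feature-extraction-plus-SVM pipeline can be sketched end to end on synthetic signals. Everything below is illustrative: the sampling rate, band edges, impulse model of a bearing defect, and classifier settings are assumptions of this sketch, not the study's experimental setup:

```python
import numpy as np
from scipy import stats, signal
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

FS = 12_000  # Hz, sampling rate (illustrative)

def features(x):
    """Temporal (RMS, kurtosis, crest factor) and frequency (band power) features."""
    rms = np.sqrt(np.mean(x**2))
    f, pxx = signal.welch(x, fs=FS, nperseg=1024)
    bands = [pxx[(f >= lo) & (f < hi)].sum()
             for lo, hi in [(0, 1000), (1000, 3000), (3000, 6000)]]
    return [rms, stats.kurtosis(x), np.max(np.abs(x)) / rms, *bands]

rng = np.random.default_rng(0)
t = np.arange(FS) / FS  # one second of signal per sample
healthy = [np.sin(2 * np.pi * 29.95 * t) + 0.1 * rng.standard_normal(FS)
           for _ in range(30)]
# A bearing fault adds periodic impulses at a characteristic defect frequency.
impulses = (np.arange(FS) % (FS // 107) == 0).astype(float) * 3.0
faulty = [np.sin(2 * np.pi * 29.95 * t) + impulses + 0.1 * rng.standard_normal(FS)
          for _ in range(30)]

X = np.array([features(x) for x in healthy + faulty])
y = np.array([0] * 30 + [1] * 30)
clf = make_pipeline(StandardScaler(), SVC())
print("cv accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

The same feature matrix feeds the ANOVA and t-test comparisons between normal and faulty conditions.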
Procedia PDF Downloads 47
3569 Theta-Phase Gamma-Amplitude Coupling as a Neurophysiological Marker in Neuroleptic-Naive Schizophrenia
Authors: Jun Won Kim
Abstract:
Objective: Theta-phase gamma-amplitude coupling (TGC) was used as a novel evidence-based tool to reflect the dysfunctional cortico-thalamic interaction in patients with schizophrenia. However, to the best of our knowledge, no studies have reported the diagnostic utility of TGC in the resting-state electroencephalogram (EEG) of neuroleptic-naive patients with schizophrenia compared to healthy controls. Thus, the purpose of this EEG study was to understand the underlying mechanisms in patients with schizophrenia by comparing resting-state TGC between the two groups and to evaluate the diagnostic utility of TGC. Method: The subjects included 90 patients with schizophrenia and 90 healthy controls. All patients were diagnosed with schizophrenia according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) by two independent psychiatrists using semi-structured clinical interviews. Because patients were either drug-naive (first episode) or had not been taking psychoactive drugs for one month before the study, we could exclude the influence of medications. Six frequency bands were defined for spectral analyses: delta (1-4 Hz), theta (4-8 Hz), slow alpha (8-10 Hz), fast alpha (10-13.5 Hz), beta (13.5-30 Hz), and gamma (30-80 Hz). The spectral power of the EEG data was calculated with the fast Fourier transform using the 'spectrogram.m' function of the signal processing toolbox in Matlab. An analysis of covariance (ANCOVA) was performed to compare the TGC results between the groups, adjusted using a Bonferroni correction (P < 0.05/19 = 0.0026). Receiver operating characteristic (ROC) analysis was conducted to examine the discriminating ability of the TGC data for schizophrenia diagnosis. Results: The patients with schizophrenia showed a significant increase in resting-state TGC at all electrodes. The delta, theta, slow alpha, fast alpha, and beta powers showed low accuracies of 62.2%, 58.4%, 56.9%, 60.9%, and 59.0%, respectively, in discriminating the patients with schizophrenia from the healthy controls. The ROC analysis performed on the TGC data generated the most accurate result among the EEG measures, displaying an overall classification accuracy of 92.5%. Conclusion: As TGC includes phase, which carries information about neuronal interactions in the EEG recording, TGC is expected to be useful for understanding the mechanisms of the dysfunctional cortico-thalamic interaction in patients with schizophrenia. The resting-state TGC value was increased in the patients with schizophrenia compared to that in the healthy controls and had a higher discriminating ability than the other parameters. These findings may be related to compensatory hyper-arousal patterns of the dysfunctional default-mode network (DMN) in schizophrenia. Further research exploring the association between TGC and medical or psychiatric conditions that may confound EEG signals will help clarify the potential utility of TGC.
Keywords: quantitative electroencephalography (QEEG), theta-phase gamma-amplitude coupling (TGC), schizophrenia, diagnostic utility
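A common way to compute theta-phase gamma-amplitude coupling is the mean-vector-length modulation index of Canolty and colleagues, built from Hilbert-transform phase and amplitude estimates. The sketch below (Python/SciPy) illustrates that estimator on a synthetic signal; it is one standard TGC measure and not necessarily the exact estimator used in this study:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def tgc_modulation_index(eeg, fs):
    """Mean-vector-length coupling between theta (4-8 Hz) phase and
    gamma (30-80 Hz) amplitude (Canolty-style estimator)."""
    theta_phase = np.angle(hilbert(bandpass(eeg, 4, 8, fs)))
    gamma_amp = np.abs(hilbert(bandpass(eeg, 30, 80, fs)))
    return np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase)))

fs = 500
t = np.arange(10 * fs) / fs
theta = np.sin(2 * np.pi * 6 * t)
# Gamma bursts locked to the theta peak produce a non-zero modulation index.
rng = np.random.default_rng(0)
coupled = theta + (1 + theta) * 0.3 * np.sin(2 * np.pi * 40 * t) \
          + 0.1 * rng.standard_normal(t.size)
print(tgc_modulation_index(coupled, fs))
```

Higher values indicate stronger phase-amplitude locking, which is the quantity compared between patients and controls above.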
Procedia PDF Downloads 143