Search results for: signal reconstruction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2205

1365 Numerical Simulations of Acoustic Imaging in Hydrodynamic Tunnel with Model Adaptation and Boundary Layer Noise Reduction

Authors: Sylvain Amailland, Jean-Hugh Thomas, Charles Pézerat, Romuald Boucheron, Jean-Claude Pascal

Abstract:

The noise requirements for naval and research vessels have seen an increasing demand for quieter ships in order to fulfil current regulations and to reduce the effects on marine life. Hence, new methods dedicated to the characterization of propeller noise, which is the main source of noise in the far field, are needed. The study of cavitating propellers in a closed test section is useful for analyzing hydrodynamic performance but poses significant difficulties for hydroacoustic study, especially due to reverberation and boundary layer noise in the tunnel. The aim of this paper is to present a numerical methodology for the identification of hydroacoustic sources on marine propellers using hydrophone arrays in a large hydrodynamic tunnel. The main difficulties are linked to the reverberation of the tunnel and the boundary layer noise, which strongly reduce the signal-to-noise ratio. In this paper, it is proposed to estimate the reflection coefficients using an inverse method and some reference transfer functions measured in the tunnel. This approach reduces the uncertainties of the propagation model used in the inverse problem. In order to reduce the boundary layer noise, a cleaning algorithm taking advantage of the low-rank and sparse structure of the cross-spectrum matrices of the acoustic and boundary layer noise is presented. This approach allows the acoustic signal to be recovered even when it lies well below the boundary layer noise. The improvement brought by this method is visible on acoustic maps resulting from beamforming and DAMAS algorithms.
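The low-rank-plus-sparse cleaning idea described above can be sketched with a toy robust-PCA-style routine. The snippet below is only an illustration under our own assumptions (real-valued matrices, a fixed target rank, hard thresholding); the function name, rank and threshold choices are ours, not the authors' algorithm.

```python
import numpy as np

def lowrank_sparse_split(M, rank=1, thresh=None, n_iter=50):
    """Split M into a low-rank part L (acoustic cross-spectra) and a
    sparse part S (boundary layer noise) by alternating projections:
    L is the best rank-`rank` approximation of M - S, and S keeps only
    the entries of M - L whose magnitude exceeds `thresh`."""
    if thresh is None:
        thresh = 3.0 * np.median(np.abs(M))
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, sv, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * sv[:rank]) @ Vt[:rank]   # truncated SVD
        R = M - L
        S = np.where(np.abs(R) > thresh, R, 0.0)    # hard threshold
    return L, S
```

On synthetic data built as a rank-one matrix plus a few large spikes, the routine separates the two parts; a beamforming or DAMAS step would then operate on the cleaned low-rank part L.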

Keywords: acoustic imaging, boundary layer noise denoising, inverse problems, model adaptation

Procedia PDF Downloads 320
1364 Hydroxyapatite-Chitosan Composites for Tissue Engineering Applications

Authors: Georgeta Voicu, Cristina Daniela Ghitulica, Andreia Cucuruz, Cristina Busuioc

Abstract:

In the field of tissue engineering, the compositional and microstructural features of the employed materials play an important role, with implications on the mechanical and biological behaviour of the medical devices. In this context, the development of apatite - natural biopolymer composites represents a choice of many scientific groups. Thus, hydroxyapatite powders were synthesized by a wet method, namely co-precipitation, starting from high-purity reagents (CaO, MgO, and H3PO4). Moreover, the substitution of calcium with magnesium has been approached, in the 5 - 10 wt.% range. Afterward, the phosphate powders were integrated into two types of composites with chitosan, differing from a morphological point of view. First, 3D porous scaffolds were obtained by a freeze-drying procedure. Second, uniform, compact films were achieved by film casting. The influence of chitosan molecular weight (low, medium and high), as well as of the apatite powder to polymer ratio (1:1 and 1:2), on the morphological properties was analysed in detail. In conclusion, the reported biocomposites, prepared by a straightforward route, are suitable for bone substitution or repairing applications.

Keywords: bone reconstruction, chitosan, composite scaffolds, hydroxyapatite

Procedia PDF Downloads 312
1363 Persistence of DNA on Clothes Contaminated by Semen Stains after Washing

Authors: Ashraf Shebl, Bassam Garah, Radah Youssef

Abstract:

Sexual assault is usually a hidden crime where the only witnesses are the victim and the assailant. For a variety of reasons, even the victim may be unable to provide a detailed account of the assault or the identity of the perpetrator. Often the case history deteriorates into one person’s word against another. With such limited initial information, the physical and biological evidence collected from the victim, from the crime scene, and from the suspect will play a pivotal role in the objective and scientific reconstruction of the events in question. The aim of this work is to examine whether DNA profiles can be recovered from repeatedly washed clothes contaminated by semen stains. About 1 ml of fresh semen (<1 h old) taken from a donor was deposited on four types of clothes (cotton, silk, polyester, and jeans). The stains were then left to dry at room temperature, and the clothes were washed either in a washing machine at 30°C-60°C or by hand. Some items of clothing were washed once, some twice and others three times. DNA could be extracted from some of these samples even after multiple washes. This study demonstrates that complete DNA profiles can be obtained from washed semen stains on different types of clothes, even after repeated washing. These results indicate that the clothes of victims must be examined even if they were washed many times.

Keywords: sexual assault, DNA, persistence, clothes

Procedia PDF Downloads 182
1362 Differential Expression of GABA and Its Signaling Components in Ulcerative Colitis and Irritable Bowel Syndrome Pathogenesis

Authors: Surbhi Aggarwal, Jaishree Paul

Abstract:

Background: A role of GABA has been implicated in autoimmune diseases like multiple sclerosis, type 1 diabetes and rheumatoid arthritis, where it modulates the immune response, but its role in gut inflammation has not been defined. Ulcerative colitis (UC) and diarrhoea-predominant irritable bowel syndrome (IBS-D) both involve inflammation of the gastrointestinal tract. UC is a chronic, relapsing and idiopathic inflammation of the gut. IBS is a common functional gastrointestinal disorder characterised by abdominal pain, discomfort and alternating bowel habits. Mild inflammation is known to occur in IBS-D. Aim: The aim of this study was to investigate the role of GABA in UC as well as in IBS-D. Materials and methods: Blood and biopsy samples from UC, IBS-D and controls were collected. ELISA was used for measuring the level of GABA in the serum of UC, IBS-D and controls. RT-PCR analysis was done to determine the GABAergic signal system in colon biopsies of UC, IBS-D and controls. RT-PCR was done to check the expression of proinflammatory cytokines. CurveExpert 1.4 and GraphPad Prism 6 software were used for data analysis. Statistical analysis was done by the unpaired, two-tailed Student's t-test. All sets of data are represented as mean ± SEM. A probability level of p < 0.05 was considered statistically significant. Results and conclusion: A significantly decreased level of GABA and an altered GABAergic signal system were detected in UC and IBS-D as compared to controls. Significantly increased expression of proinflammatory cytokines was also determined in UC and IBS-D as compared to controls. Hence, we conclude that an insufficient level of GABA in UC and IBS-D leads to overproduction of proinflammatory cytokines, which further contributes to inflammation. GABA may be a promising therapeutic target for the treatment of gut inflammation or other inflammatory diseases.
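For illustration only, the group comparison described in the methods (unpaired Student's t-test, p < 0.05) can be sketched as below; this is a generic pooled-variance t statistic in Python, not the authors' GraphPad Prism analysis, and the variable names are hypothetical.

```python
import numpy as np

def student_t(x, y):
    """Unpaired two-sample Student's t statistic with pooled variance:
    t = (mean(x) - mean(y)) / sqrt(s_p^2 * (1/nx + 1/ny))."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * np.var(x, ddof=1) +
           (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(sp2 * (1.0 / nx + 1.0 / ny))
```

The statistic is then compared against the two-tailed critical value of the t distribution with nx + ny - 2 degrees of freedom (roughly 2.0 for moderate sample sizes at p < 0.05).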

Keywords: diarrheal predominant irritable bowel syndrome, γ-aminobutyric acid (GABA), inflammation, ulcerative colitis

Procedia PDF Downloads 215
1361 Speech and Swallowing Function after Tonsillo-Lingual Sulcus Resection with PMMC Flap Reconstruction: A Case Study

Authors: K. Rhea Devaiah, B. S. Premalatha

Abstract:

Background: The tonsillo-lingual sulcus is the area between the tonsils and the base of the tongue. Surgical resection of lesions in the head and neck results in changes in speech and swallowing functions. The severity of the speech and swallowing problem depends upon the site and extent of the lesion, the type and extent of surgery, and the flexibility of the remaining structures. Need of the study: This paper focuses on the importance of speech and swallowing rehabilitation in an individual with a lesion in the tonsillo-lingual sulcus and on post-operative functions. Aim: To evaluate speech and swallowing functions after intensive speech and swallowing rehabilitation. The objectives are to evaluate speech intelligibility and swallowing functions after intensive therapy and to assess the quality of life. Method: The present study reports on a 47-year-old male with a diagnosis of basaloid squamous cell carcinoma of the left tonsillo-lingual sulcus (pT2N2M0) who underwent wide local excision with left radical neck dissection and PMMC flap reconstruction. Post-surgery, the patient presented with complaints of reduced speech intelligibility and difficulty in opening the mouth and swallowing. Detailed evaluation of speech and swallowing functions was carried out, including OPME, an articulation test, speech intelligibility, the different phases of swallowing, and trismus evaluation. Self-reported questionnaires such as the SHI-E (Speech Handicap Index - Indian English), DHI (Dysphagia Handicap Index) and SESEQ-K (Self Evaluation of Swallowing Efficiency in Kannada) were also administered to learn what the patient felt about his problem. Based on the evaluation, the patient was diagnosed with pharyngeal-phase dysphagia associated with trismus and reduced speech intelligibility. Intensive speech and swallowing therapy was advised twice weekly for a duration of 1 hour.
Results: In total, the patient attended 10 intensive speech and swallowing therapy sessions. Evaluation indicated misarticulation of speech sounds, particularly lingua-palatal sounds. Mouth opening was restricted to one finger width, with difficulty chewing, masticating, and swallowing the bolus. Intervention strategies included oromotor exercises, indirect swallowing therapy, usage of a trismus device to facilitate mouth opening, and a change in food consistency to aid swallowing. Practice sessions with articulation drills were held to improve the production of speech sounds and speech intelligibility. Significant changes in articulatory production, speech intelligibility and swallowing abilities were observed. The self-rated quality of life measures, DHI, SHI-E and SESEQ-K, revealed no speech handicap and near-normal swallowing ability, indicating improved QOL after the intensive speech and swallowing therapy. Conclusion: Speech and swallowing therapy after carcinoma in the tonsillo-lingual sulcus is crucial, as the tongue plays an important role in both speech and swallowing. The role of speech-language and swallowing therapists in oral cancer should be highlighted in treating these patients and improving their overall quality of life. With intensive speech-language and swallowing therapy after surgery for oral cancer, there can be a significant change in speech outcomes and swallowing functions, depending on the site and extent of the lesion, which will thereby improve the individual’s QOL.

Keywords: oral cancer, speech and swallowing therapy, speech intelligibility, trismus, quality of life

Procedia PDF Downloads 98
1360 Inter-Annual Variations of Sea Surface Temperature in the Arabian Sea

Authors: K. S. Sreejith, C. Shaji

Abstract:

Though both the Arabian Sea and its counterpart, the Bay of Bengal, are forced primarily by the semi-annually reversing monsoons, the spatio-temporal variations of surface waters are much stronger in the Arabian Sea than in the Bay of Bengal. This study focuses on the inter-annual variability of Sea Surface Temperature (SST) in the Arabian Sea by analysing the ERSST dataset, which covers 152 years of SST (January 1854 to December 2002) based on the ICOADS in situ observations. To capture the dominant SST oscillations and to understand the inter-annual SST variations in various local regions of the Arabian Sea, wavelet analysis was performed on this long time-series SST dataset. This tool is advantageous over other signal analysis tools such as Fourier analysis because it unfolds a time series (signal) in both the frequency and time domains. This technique makes it easier to determine the dominant modes of variability and to explain how those modes vary in time. The analysis revealed that pentadal SST oscillations predominate in most of the analysed local regions of the Arabian Sea. From the time information of the wavelet analysis, it was interpreted that cold and warm events of large amplitude occurred during the periods 1870-1890, 1890-1910, 1930-1950, 1980-1990 and 1990-2005. SST oscillations with peaks having periods of ~2-4 years were found to be significant in the central and eastern regions of the Arabian Sea. This indicates that the inter-annual SST variation in the Indian Ocean is affected by El Niño-Southern Oscillation (ENSO) and Indian Ocean Dipole (IOD) events.
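A minimal version of such a wavelet analysis can be written directly with FFTs. The sketch below implements a Morlet continuous wavelet transform and the standard scale-to-Fourier-period conversion; it is our illustration (function name, normalization and parameters are assumptions), not the processing chain used in the study.

```python
import numpy as np

def morlet_power(x, dt, periods, w0=6.0):
    """Wavelet power of signal x (e.g. monthly SST anomalies) at the
    requested Fourier periods, using a Morlet mother wavelet with
    nondimensional frequency w0, computed in the Fourier domain."""
    n = len(x)
    xf = np.fft.fft(x - np.mean(x))
    freqs = np.fft.fftfreq(n, d=dt)           # cycles per unit time
    # Fourier period -> wavelet scale (Morlet relation)
    scales = (w0 + np.sqrt(2 + w0**2)) / (4 * np.pi) * np.asarray(periods)
    power = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        # Morlet daughter wavelet in the Fourier domain at scale s
        psi_hat = np.pi**-0.25 * np.exp(-0.5 * (2 * np.pi * s * freqs - w0)**2)
        psi_hat = psi_hat * (freqs > 0)       # analytic (positive freqs only)
        W = np.fft.ifft(xf * np.sqrt(2 * np.pi * s / dt) * psi_hat)
        power[i] = np.abs(W)**2
    return power
```

For a pure 5-year oscillation sampled monthly, the mean wavelet power peaks at the 5-year period; this is how dominant inter-annual modes such as the ~2-4-year ENSO band show up in the scalogram.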

Keywords: Arabian Sea, ICOADS, inter-annual variation, pentadal oscillation, SST, wavelet analysis

Procedia PDF Downloads 269
1359 Digital Holographic Interferometric Microscopy for the Testing of Micro-Optics

Authors: Varun Kumar, Chandra Shakher

Abstract:

Micro-optical components such as microlenses and microlens arrays have numerous engineering and industrial applications: collimation of laser diodes, imaging devices for sensor systems (CCD/CMOS, document copier machines, etc.), beam homogenization for high-power lasers, critical components in Shack-Hartmann sensors, fiber optic coupling, and optical switching in communication technology. Micro-optical components have also become an alternative for applications where miniaturization and reduction of alignment and packaging costs are necessary. Compliance with high quality standards in the manufacturing of micro-optical components is a precondition for being competitive in worldwide markets. Therefore, high demands are put on quality assurance. For the quality assurance of these lenses, an economical measurement technique is needed. For cost and time reasons, the technique should be fast, simple (for production reasons), and robust, with high resolution. The technique should provide non-contact, non-invasive and full-field information about the shape of the micro-optical component under test. Interferometric techniques are non-contact and non-invasive and provide full-field information about the shape of optical components. Conventional interferometric techniques such as holographic interferometry or Mach-Zehnder interferometry are available for the characterization of microlenses. However, these techniques require more experimental effort and are also time-consuming. Digital holography (DH) overcomes the above-described problems. Digital holographic microscopy (DHM) allows one to extract both the amplitude and phase information of a wavefront transmitted through a transparent object (microlens or microlens array) from a single recorded digital hologram by using numerical methods. Moreover, one can reconstruct the complex object wavefront at different depths thanks to numerical reconstruction.
Digital holography provides axial resolution in the nanometer range, while lateral resolution is limited by diffraction and the size of the sensor. In this paper, a Mach-Zehnder based digital holographic interferometric microscope (DHIM) system is used for the testing of transparent microlenses. The advantage of using the DHIM is that distortions due to aberrations in the optical system are avoided by the interferometric comparison of the reconstructed phase with and without the object (microlens array). In the experiment, a first digital hologram is recorded in the absence of the sample (microlens array) as a reference hologram, and a second hologram is recorded in the presence of the microlens array. The presence of the transparent microlens array induces a phase change in the transmitted laser light. The complex amplitude of the object wavefront in the presence and absence of the microlens array is reconstructed using the Fresnel reconstruction method. From the reconstructed complex amplitude, one can evaluate the phase of the object wave in the presence and absence of the microlens array. The phase difference between the two states of the object wave provides information about the optical path length change due to the shape of the microlens. Knowing the refractive indices of the microlens array material and of air, the surface profile of the microlens array is evaluated. The sag and radius of curvature of the microlenses are evaluated and reported. The sag of the microlenses agrees with the manufacturer's specification within experimental limits.
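The last step, turning the measured phase difference into a surface profile, reduces to a short calculation. The sketch below assumes already-unwrapped phase maps and a spherical-cap lens shape; the function names and the illustrative wavelength and index values are ours, not the authors' code.

```python
import numpy as np

def microlens_sag(phase_with, phase_without, wavelength, n_lens, n_medium=1.0):
    """Thickness (sag) profile from the unwrapped object-wave phase with
    and without the lens: delta_phi = 2*pi*(n_lens - n_medium)*t / lambda,
    solved for the thickness t."""
    delta_phi = phase_with - phase_without
    return delta_phi * wavelength / (2.0 * np.pi * (n_lens - n_medium))

def radius_of_curvature(semi_diameter, sag):
    """Spherical-cap relation R = (r^2 + h^2) / (2*h) for semi-diameter r
    and sag h."""
    return (semi_diameter**2 + sag**2) / (2.0 * sag)
```

For example, a 2 um thick lens center (n = 1.5, tested at 633 nm in air) produces a phase step of about 2*pi * 0.5 * 2e-6 / 633e-9 radians, and inverting that phase recovers the sag.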

Keywords: micro-optics, microlens array, phase map, digital holographic interferometric microscopy

Procedia PDF Downloads 490
1358 Increasing the Apparent Time Resolution of Tc-99m Diethylenetriamine Pentaacetic Acid Galactosyl Human Serum Albumin Dynamic SPECT by Use of an 180-Degree Interpolation Method

Authors: Yasuyuki Takahashi, Maya Yamashita, Kyoko Saito

Abstract:

In general, dynamic SPECT data acquisition needs a few minutes for one rotation. Thus, the time-activity curve (TAC) derived from dynamic SPECT is relatively coarse. In order to effectively shorten the interval between data points, we adopted a 180-degree interpolation method. This method is already used for the reconstruction of X-ray CT data. In this study, we applied this 180-degree interpolation method to SPECT and investigated its effectiveness. To briefly describe the 180-degree interpolation method: the 180-degree data in the second half of one rotation are combined with the 180-degree data in the first half of the next rotation to generate a 360-degree data set appropriate for the time halfway between the first and second rotations. In both a phantom and a patient study, the data points from the interpolated images fell in good agreement with the data points tracking the accumulation of 99mTc activity over time for the appropriate regions of interest. We conclude that data derived from interpolated images improve the apparent time resolution of dynamic SPECT.
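The interpolation scheme described above amounts to simple array bookkeeping. The sketch below assumes the projection data are stored as (rotation, view, bin) with the first half of the views covering 0-180 degrees; the shapes and names are our own illustration, not the authors' implementation.

```python
import numpy as np

def interpolate_180(rotations):
    """Given dynamic SPECT data as an array of full rotations, shape
    (n_rotations, n_views, n_bins), where the first n_views//2 views cover
    0-180 degrees and the rest 180-360 degrees, insert an extra 360-degree
    data set between consecutive rotations by combining the second half of
    rotation k with the first half of rotation k+1."""
    n_rot, n_views, _ = rotations.shape
    half = n_views // 2
    out = []
    for k in range(n_rot - 1):
        out.append(rotations[k])
        mid = np.concatenate([rotations[k + 1, :half],   # 0-180 of next
                              rotations[k, half:]])      # 180-360 of current
        out.append(mid)
    out.append(rotations[-1])
    return np.stack(out)
```

Each inserted data set sits temporally halfway between two real rotations, roughly doubling the number of points on the time-activity curve.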

Keywords: dynamic SPECT, time resolution, 180-degree interpolation method, 99mTc-GSA

Procedia PDF Downloads 487
1357 Sexual Orientation, Household Labour Division and the Motherhood Wage Penalty

Authors: Julia Hoefer Martí

Abstract:

While research has consistently found a significant motherhood wage penalty for heterosexual women, where homosexual women are concerned, evidence has appeared to suggest no effect, or possibly even a wage bonus. This paper presents a model of the household with a public good that requires both a monetary expense and a labour investment, and where the household budget is shared between partners. Lower-wage partners do relatively more of the household labour while higher-wage partners specialise in market labour, and the arrival of a child exacerbates this split, resulting in the lower-wage partner taking on even more of the household labour in relative terms. Employers take this gender-sexuality dyad as a signal of employees’ commitment to the labour market after having a child, and use the information when setting wages after employees become parents. Given that women empirically earn lower wages than men, in a heterosexual couple the female partner will often do more of the household labour. However, as not every female partner has a lower wage, this results in an over-adjustment of wages that manifests as an unexplained motherhood wage penalty. On the other hand, in homosexual couples wage distributions are ex ante identical, and gender is no longer a useful signal to employers as to whether a partner is likely to specialise in household labour or market labour. This model is then tested using longitudinal data from the EU Statistics on Income and Living Conditions (EU-SILC) to investigate the hypothesis that women experience different wage effects of motherhood depending on their sexual orientation. While heterosexual women receive a significant motherhood wage penalty of 8-10%, homosexual mothers do not receive any significant wage bonus or penalty of motherhood, consistent with the hypothesis presented above.
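The empirical test can be sketched as a wage regression with a motherhood dummy and a motherhood-by-orientation interaction. The snippet below is a stylized OLS illustration on simulated data, not the paper's EU-SILC specification; all variable names and the simulated effect sizes are assumptions.

```python
import numpy as np

def motherhood_effects(log_wage, mother, homo, control):
    """OLS of log wage on a motherhood dummy, its interaction with a
    sexual-orientation dummy, the orientation dummy itself and a control;
    returns the implied motherhood wage effect for heterosexual and
    homosexual women.  Illustrative specification only."""
    X = np.column_stack([np.ones_like(log_wage), mother,
                         mother * homo, homo, control])
    beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
    return beta[1], beta[1] + beta[2]   # heterosexual, homosexual effect
```

On data simulated with a 9% penalty for heterosexual mothers and no effect for homosexual mothers, the two returned coefficients recover those values.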

Keywords: discrimination, gender, motherhood, sexual orientation, labor economics

Procedia PDF Downloads 152
1356 Event Data Representation Based on Time Stamp for Pedestrian Detection

Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita

Abstract:

With the wave of electric vehicles (EVs), low-energy-consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as high temporal resolution, which can achieve 1 Mframe/s, and a high dynamic range (120 dB). However, the property that contributes most to low energy consumption is its sparsity; to be more specific, this sensor only captures the pixels that have an intensity change. In other words, there is no signal in areas without any intensity change. This makes the sensor more energy efficient than conventional sensors such as RGB cameras, because redundant data are removed at the source. On the other hand, the data are difficult to handle because the data format is completely different from an RGB image: acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1) and a timestamp; it does not include intensity such as RGB values. Therefore, as existing algorithms cannot be used straightforwardly, a new processing algorithm has to be designed to cope with DVS data. In order to solve the difficulties caused by the data format differences, most prior art builds frame data and feeds them to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition purposes. However, even though the data can be fed this way, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of RGB pixel values, polarity information is clearly not rich enough. Considering this context, we propose to use the timestamp information as the data representation that is fed to deep learning.
Concretely, we first build frame data divided by a certain time period, then assign an intensity value to each pixel according to the timestamp within each frame; for example, a high value is given to a recent signal. We expected that this data representation could capture the features especially of moving objects, because the timestamp represents the movement direction and speed. Using this proposed method, we built our own dataset with a DVS fixed on a parked car in order to develop an application for a surveillance system that can detect persons around the car. We think the DVS is an ideal sensor for surveillance purposes because it can run for a long time with low energy consumption in largely static scenes. For comparison purposes, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way as ours but feeds polarity information to the CNN. Then, we measured the object detection performance of the benchmark and of our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
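The recency-based representation can be sketched as follows: events are binned into fixed-length frames, and each pixel stores the normalized timestamp of its most recent event, so newer activity appears brighter and motion leaves a gradient. This is a minimal illustration of the idea with our own function and parameter names; polarity is ignored here.

```python
import numpy as np

def timestamp_frames(events, sensor_hw, frame_len):
    """Convert a list of DVS events (t, x, y, polarity) into frames whose
    pixel values encode recency: within each frame window, a pixel holds
    the normalized timestamp (0..1) of its most recent event."""
    H, W = sensor_hw
    t_end = max(e[0] for e in events)
    n_frames = int(np.ceil((t_end + 1e-12) / frame_len))
    frames = np.zeros((n_frames, H, W))
    for t, x, y, p in events:                       # polarity p unused here
        k = min(int(t // frame_len), n_frames - 1)
        recency = (t - k * frame_len) / frame_len   # 0 (old) .. 1 (new)
        frames[k, y, x] = max(frames[k, y, x], recency)
    return frames
```

Frames built this way can be fed to a CNN in place of the polarity frames used by the benchmark.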

Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption

Procedia PDF Downloads 84
1355 Intelligent Indoor Localization Using WLAN Fingerprinting

Authors: Gideon C. Joseph

Abstract:

The ability to localize mobile devices is quite important, as some applications may require the location information of these devices to operate or to deliver better services to the users. Although there are several ways of acquiring the location data of mobile devices, the WLAN fingerprinting approach has been considered in this work. This approach uses the Received Signal Strength Indicator (RSSI) measurement as a function of the position of the mobile device. RSSI is a quantitative measure of the radio frequency power carried by a signal. RSSI may be used to determine RF link quality and is very useful in dense traffic scenarios where interference is of major concern, for example, indoor environments. This research aims to design a system that can predict the location of a mobile device when supplied with the mobile’s RSSIs. The developed system takes as input the RSSIs relating to the mobile device, and outputs parameters that describe the location of the device, such as the longitude, latitude, floor, and building. The relationship between the Received Signal Strengths (RSSs) of mobile devices and their corresponding locations is to be modelled, so that subsequent locations of mobile devices can be predicted using the developed model. Describing mathematical relationships between the RSSI measurements and the localization parameters is one option for modelling the problem, but the complexity of such an approach is prohibitive. In contrast, we propose an intelligent system that can learn the mapping from such RSSI measurements to the localization parameters to be predicted. The system is capable of upgrading its performance as more experiential knowledge is acquired.
The most appealing aspect of using such a system for this task is that complicated mathematical analysis and theoretical frameworks are not needed; the intelligent system on its own learns the underlying relationship in the supplied data (RSSI levels) that corresponds to the localization parameters. The localization parameters to be predicted fall into two different tasks: the longitude and latitude of mobile devices are real-valued (a regression problem), while the floor and building of the mobile devices are integer-valued or categorical (a classification problem). This research work presents artificial neural network-based intelligent systems that model the relationship between the RSSI predictors and the mobile device localization parameters. The designed systems were trained and validated on the collected WLAN fingerprint database. The trained networks were then tested with another supplied database to obtain the performance of the trained systems, in terms of the Mean Absolute Error (MAE) achieved for the regression tasks and the error rates for the classification tasks involved therein.
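As a much simpler stand-in for the neural networks used in this work, the regression and classification tasks can both be illustrated with a k-nearest-neighbour fingerprinting baseline; the snippet below is a sketch with assumed array shapes and names, not the system described here.

```python
import numpy as np

def knn_localize(train_rssi, train_pos, train_floor, query_rssi, k=3):
    """WLAN fingerprinting baseline: position (e.g. longitude/latitude) is
    regressed as the mean position of the k closest fingerprints in RSSI
    space, and the floor is classified by majority vote over the same
    neighbours."""
    d = np.linalg.norm(train_rssi - query_rssi, axis=1)
    idx = np.argsort(d)[:k]
    pos = train_pos[idx].mean(axis=0)                     # regression
    floors, counts = np.unique(train_floor[idx], return_counts=True)
    return pos, floors[np.argmax(counts)]                 # classification
```

The same split of outputs (real-valued position versus categorical floor/building) carries over directly to the neural-network formulation.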

Keywords: indoor localization, WLAN fingerprinting, neural networks, classification, regression

Procedia PDF Downloads 334
1354 Decomposition of Third-Order Discrete-Time Linear Time-Varying Systems into Its Second- and First-Order Pairs

Authors: Mohamed Hassan Abdullahi

Abstract:

Decomposition is used as a synthesis tool in several physical systems. It can also be used for tearing and restructuring, which is useful in large-scale system analysis. On the other hand, the commutativity of series-connected systems has attracted the interest of researchers, and its advantages have been emphasized in the literature. This work looks into the necessary conditions for decomposing any third-order discrete-time linear time-varying system into a commutative pair of first- and second-order systems. Additional requirements are derived in the case of nonzero initial conditions. MATLAB simulations are used to verify the findings. The work is unique and is being published for the first time. It is critical from the standpoints of synthesis and/or design, because many design techniques in engineering systems rely on tearing and reconstruction, the process of putting together simple components to create a finished product. Furthermore, it is demonstrated that, regarding sensitivity to initial conditions, some combinations may be better than others. The results of this work can be extended to the decomposition of fourth-order discrete-time linear time-varying systems into lower-order commutative pairs, either as two second-order commutative subsystems or as one first-order and one third-order commutative subsystem.
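The notion of commutativity at stake can be checked numerically for the simplest case: two cascaded first-order discrete-time systems y[n] = a[n] y[n-1] + x[n]. The sketch below is our illustration (not the paper's third-order decomposition procedure) and shows that constant-coefficient pairs commute while time-varying pairs generally do not.

```python
import numpy as np

def first_order(a, x, y0=0.0):
    """Simulate y[n] = a[n]*y[n-1] + x[n] for a first-order discrete-time
    (possibly time-varying) system with initial condition y0."""
    y = np.empty(len(x))
    prev = y0
    for n, (an, xn) in enumerate(zip(a, x)):
        prev = an * prev + xn
        y[n] = prev
    return y

def commute_error(a, b, x):
    """Maximum output difference between the cascades A->B and B->A."""
    return np.max(np.abs(first_order(b, first_order(a, x)) -
                         first_order(a, first_order(b, x))))
```

With zero initial conditions, the first mismatch for time-varying coefficients appears at n = 2 and is proportional to a[1]*b[2] - a[2]*b[1], so commutativity imposes conditions coupling the two coefficient sequences.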

Keywords: commutativity, decomposition, discrete time-varying systems, systems

Procedia PDF Downloads 97
1353 Interpretation as Ontological Determination and Negotiation

Authors: Nicolas Cuevas-Alvear

Abstract:

The subject of this paper is the concept of interpretation. Its purpose is to expose the need for a new concept of interpretation and to trace a route for constructing interpretation as determination and negotiation. The thesis it defends is that interpretation is the determination of events and the negotiation of those determinations in communication. To meet its objective, this manuscript is divided into five sections. The first section introduces the subject and the need for a new concept of interpretation. The second section explicitly formulates the research questions and the objectives of the project for the construction of a new concept of interpretation. The third section presents the state of the art, specifically the theory of radical interpretation proposed by Donald Davidson and the theory of the hermeneutic circle proposed by Hans-Georg Gadamer. In addition, this section reconstructs Ernst Cassirer's explanation of language as a symbolic form. The fourth section explains the proposal based on the theories presented. Specifically: language as a symbolic form explains interpretation as a determination of events using objective, subjective and intersubjective elements, and these three elements are negotiated in interpretation as communication. The last section is the bibliography proposed to carry out the project.

Keywords: interpretation, metaphysics, semantics, Donald Davidson, Ernst Cassirer

Procedia PDF Downloads 183
1352 Seismic Vulnerability Mitigation of Non-Engineered Buildings

Authors: Muhammad Tariq A. Chaudhary

Abstract:

The tremendous loss of life in the aftermath of recent earthquakes in developing countries is mostly due to the collapse of non-engineered and semi-engineered building structures. Such structures are used as houses, schools, primary healthcare centres and government offices. These buildings are classified structurally into two categories, viz. non-engineered and semi-engineered. Non-engineered structures include adobe, unreinforced masonry (URM) and wood buildings. Semi-engineered buildings are mostly low-rise (up to 3 stories) light concrete frame structures or masonry bearing walls with reinforced concrete slabs. This paper presents an overview of the typical damage observed in non-engineered structures and its most likely causes in past earthquakes, with specific emphasis on the performance of such structures in the 2005 Kashmir earthquake. It is demonstrated that the seismic performance of these structures can be improved, from a life-safety viewpoint, by adopting simple low-cost modifications to existing construction practices. The incorporation of some of these practices in the reconstruction efforts after the 2005 Kashmir earthquake is examined in the last section with a view to mitigating seismic risk.

Keywords: Kashmir earthquake, non-engineered buildings, seismic hazard, structural details, structural strengthening

Procedia PDF Downloads 279
1351 Signature Bridge Design for the Port of Montreal

Authors: Juan Manuel Macia

Abstract:

The Montreal Port Authority (MPA) wanted to build a new road link via Souligny Avenue to increase the fluidity of goods transported by truck in the Viau Street area of Montreal and to mitigate the current traffic problems on Notre-Dame Street. With the purpose of having a better integration and acceptance of this project with the neighboring residential surroundings, this project needed to include an architectural integration, bringing some artistic components to the bridge design along with some landscaping components. The MPA is required primarily to provide direct truck access to Port of Montreal with a direct connection to the future Assomption Boulevard planned by the City of Montreal and, thus, direct access to Souligny Avenue. The MPA also required other key aspects to be considered for the proposal and development of the project, such as the layout of road and rail configurations, the reconstruction of underground structures, the relocation of power lines, the installation of lighting systems, the traffic signage and communication systems improvement, the construction of new access ramps, the pavement reconstruction and a summary assessment of the structural capacity of an existing service tunnel. The identification of the various possible scenarios began by identifying all the constraints related to the numerous infrastructures located in the area of the future link between the port and the future extension of Souligny Avenue, involving interaction with several disciplines and technical specialties. Several viaduct- and tunnel-type geometries were studied to link the port road to the right-of-way north of Notre-Dame Street and to improve traffic flow at the railway corridor. The proposed design took into account the existing access points to Port of Montreal, the built environment of the MPA site, the provincial and municipal rights-of-way, and the future Notre-Dame Street layout planned by the City of Montreal. 
These considerations required an engineering structure with a span of over 60 m to free up a corridor for the future urban fabric of Notre-Dame Street. The best option for crossing this span was a curved bridge over Notre-Dame Street: a structure whose deck is formed by a reinforced concrete slab on steel box girders with a single span of 63.5 m. The foundation units were defined as pier-cap-type abutments on drilled shafts socketed into bedrock, with MSE-type walls at the approaches. The configuration of a single-span curved structure posed significant design and construction challenges, considering the major constraints of the project site, a design-for-durability approach, and the need to guarantee optimum performance over a 75-year service life in accordance with the client's needs and the recommendations and requirements of the standards used for the project. These aspects, together with the need to include architectural and artistic components, made it possible to design, build, and integrate a signature infrastructure project with a sustainable approach, from which the MPA, commuters, and the city of Montreal and its residents will benefit.

Keywords: curved bridge, steel box girder, medium span, simply supported, industrial and urban environment, architectural integration, design for durability

Procedia PDF Downloads 51
1350 Protection of Cultural Heritage against the Effects of Climate Change Using Autonomous Aerial Systems Combined with Automated Decision Support

Authors: Artur Krukowski, Emmanouela Vogiatzaki

Abstract:

The article presents ongoing work in research projects such as SCAN4RECO and ARCH, both funded by the European Commission under the Horizon 2020 programme. The former concerns multimodal and multispectral scanning of cultural heritage assets for their digitization and conservation via spatiotemporal reconstruction and 3D printing, while the latter aims to better preserve areas of cultural heritage from hazards and risks. It co-creates tools that help pilot cities save cultural heritage from the effects of climate change, and develops a disaster risk management framework for assessing and improving the resilience of historic areas to climate change and natural hazards. Tools and methodologies are designed for local authorities and practitioners, the urban population, and national and international expert communities, aiding authorities in knowledge-aware decision making. In this article we focus on 3D modelling of object geometry, primarily using photogrammetric methods, to achieve very high model accuracy with consumer-grade devices, attractive to professionals and hobbyists alike.

Keywords: 3D modelling, UAS, cultural heritage, preservation

Procedia PDF Downloads 113
1349 Integration of EEG and Motion Tracking Sensors for Objective Measure of Attention-Deficit Hyperactivity Disorder in Pre-Schoolers

Authors: Neha Bhattacharyya, Soumendra Singh, Amrita Banerjee, Ria Ghosh, Oindrila Sinha, Nairit Das, Rajkumar Gayen, Somya Subhra Pal, Sahely Ganguly, Tanmoy Dasgupta, Tanusree Dasgupta, Pulak Mondal, Aniruddha Adhikari, Sharmila Sarkar, Debasish Bhattacharyya, Asim Kumar Mallick, Om Prakash Singh, Samir Kumar Pal

Abstract:

Background: We aim to develop an integrated device comprising a single-probe EEG and a CCD-based motion sensor for a more objective measure of Attention-Deficit Hyperactivity Disorder (ADHD). While the integrated device (MAHD) relies on the EEG signal (spectral density of the beta wave) to assess attention during a given structured task (painting three segments of a circle using three different colors, namely red, green, and blue), the CCD sensor captures the movement pattern of subjects engaged in a continuous performance task (CPT). A statistical analysis of the attention and movement patterns was performed, and the accuracy of the completed tasks was analysed using indigenously developed software. The device with the embedded software, called MAHD, is intended to improve certainty with criterion E (i.e., whether symptoms are better explained by another condition). Methods: We used the EEG signal from a single-channel dry sensor placed on the frontal lobe of the subjects (pre-schoolers aged 3-5 years). During the painting of the three segments of a circle in three distinct colors (red, green, and blue), absolute power in the delta and beta EEG bands was found to correlate with relaxation and attention/cognitive-load conditions, respectively. While the relaxation condition of the subject hints at hyperactivity, a more direct CCD-based motion sensor tracks the physical movement of the subject engaged in a continuous performance task (CPT), i.e., moving variously colored balls from one table to another. We used our indigenously developed software for the statistical analysis to derive a scale for the objective assessment of ADHD, and compared our scale with conventional clinical ADHD evaluation. Results: In a limited clinical trial with preliminary statistical analysis, we found a significant correlation between the objective assessment of the ADHD subjects and the clinician's conventional evaluation.
Conclusion: MAHD, the integrated device, is intended as an auxiliary tool to improve the accuracy of ADHD diagnosis by supporting greater criterion E certainty.
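The theta/beta ratio used as MAHD's biological marker can be computed from a single EEG channel via band powers. The sketch below is a minimal illustration, assuming a 256 Hz sampling rate, Welch power spectral density estimation, and conventional band limits (theta 4-8 Hz, beta 13-30 Hz); the paper's exact processing pipeline is not specified.

```python
import numpy as np
from scipy.signal import welch

def band_power(freqs, psd, lo, hi):
    """Approximate power in [lo, hi) Hz as a Riemann sum over the PSD."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

def theta_beta_ratio(eeg, fs=256):
    """Theta (4-8 Hz) to beta (13-30 Hz) power ratio of one EEG channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    return band_power(freqs, psd, 4, 8) / band_power(freqs, psd, 13, 30)

# Synthetic check: a 6 Hz dominated signal should give a ratio well above 1,
# while a 20 Hz dominated signal should give a ratio below 1.
fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
theta_like = np.sin(2 * np.pi * 6 * t) + 0.1 * rng.standard_normal(t.size)
beta_like = np.sin(2 * np.pi * 20 * t) + 0.1 * rng.standard_normal(t.size)
```

An elevated theta/beta ratio is the pattern the marker is meant to flag; a threshold for clinical discrimination would have to come from the paper's own scale, not this sketch.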

Keywords: ADHD, CPT, EEG signal, motion sensor, psychometric test

Procedia PDF Downloads 88
1348 Fluorescing Aptamer-Gold Nanoparticle Complex for the Sensitive Detection of Bisphenol A

Authors: Eunsong Lee, Gae Baik Kim, Young Pil Kim

Abstract:

Bisphenol A (BPA) is one of the endocrine-disrupting chemicals (EDCs), which have been suspected of being associated with reproductive dysfunction and physiological abnormality in humans. Since BPA has been widely used to make plastics and epoxy resins, the leaching of BPA from the lining of plastic products has been of major concern due to environmental and human exposure issues. Simple detection of BPA based on the aptamer-mediated self-assembly of gold nanoparticles (AuNPs) has been reported elsewhere, yet the detection sensitivity remains challenging. Here we demonstrate an improved AuNP-based sensor for BPA that combines fluorescence with AuNP colorimetry in order to overcome the drawback of traditional AuNP sensors. While the anti-BPA aptamer (full-length or truncated ssDNA) triggered the self-assembly of unmodified, citrate-stabilized AuNPs in the presence of BPA at high salt concentrations, no fluorescence signal was observed upon the subsequent addition of SYBR Green, owing to the small amount of free anti-BPA aptamer. In contrast, in the absence of BPA the AuNPs did not self-assemble (no color change, owing to salt-bridged surface stabilization) and a high fluorescence signal was obtained with SYBR Green, owing to the large amount of free anti-BPA aptamer. As a result, quantitative analysis of BPA was achieved by combining the absorption of the AuNPs with the fluorescence intensity of SYBR Green as a function of BPA concentration, yielding better detection sensitivity (as low as 1 ppb) than AuNP colorimetric analysis alone. This method also enabled the detection of high BPA levels in water-soluble extracts from thermal papers, with high specificity against BPS and BPF. We suggest that this approach will be an alternative to traditional AuNP colorimetric assays in the field of aptamer-based molecular diagnosis.

Keywords: bisphenol A, colorimetric, fluorescence, gold-aptamer nanobiosensor

Procedia PDF Downloads 176
1347 Developing a Machine Learning-Based Cost Prediction Model for Construction Projects Using Particle Swarm Optimization

Authors: Soheila Sadeghi

Abstract:

Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. 
Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.
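The PSO-optimized ANN at the core of the study can be sketched compactly. The fragment below is an assumption-laden illustration rather than the authors' implementation: a global-best PSO fits the weights of a small one-hidden-layer network to synthetic data, with three stand-in features playing the role of cost estimate, resource allocation, and project progress. (In practice PSO is often used to tune hyperparameters or initial weights before gradient training; here it optimizes the weights directly for brevity.)

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for project features (cost estimate, resources, progress).
X = rng.uniform(0, 1, (200, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] - 1.0 * X[:, 2] + 0.05 * rng.standard_normal(200)

H = 5                      # hidden units
DIM = 3 * H + H + H + 1    # weights and biases of a 3-H-1 network

def unpack(p):
    i = 0
    W1 = p[i:i + 3 * H].reshape(3, H); i += 3 * H
    b1 = p[i:i + H]; i += H
    W2 = p[i:i + H]; i += H
    return W1, b1, W2, p[i]

def mse(p):
    """Mean squared error of the network encoded by parameter vector p."""
    W1, b1, W2, b2 = unpack(p)
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((pred - y) ** 2)

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Global-best particle swarm minimization of f over R^dim."""
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([f(p) for p in pos])
    g, g_val = pbest[np.argmin(pbest_val)].copy(), pbest_val.min()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        vals = np.array([f(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        if vals.min() < g_val:
            g, g_val = pos[np.argmin(vals)].copy(), vals.min()
    return g, g_val

best, best_mse = pso(mse, DIM)
```

The evaluation metrics mentioned in the abstract (RMSE, MAE, R-squared) follow directly from the residuals of `best` on a held-out split.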

Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction

Procedia PDF Downloads 35
1346 Cultivating Responsible AI: For Cultural Heritage Preservation in India

Authors: Varsha Rainson

Abstract:

Artificial intelligence (AI) has great potential and can be applied as a powerful tool in various domains and sectors. But with the application of AI comes a wide spectrum of concerns around bias, accountability, transparency, and privacy. Hence, there is a need for responsible AI, which can uphold ethical and accountable practices to ensure that things are transparent and fair. The paper brings together AI and cultural heritage preservation, with a particular focus on India because of the rich cultural legacy that it holds; India's cultural heritage contributes to both its identity and its economy. In this paper, along with discussing the impact culture has on the Indian economy, we discuss the threats that this cultural heritage is exposed to due to pollution, climate change, and urbanization. Furthermore, the paper reviews some of the exciting applications of AI in cultural heritage preservation, such as 3D scanning, photogrammetry, and other techniques that have enabled the reconstruction of cultural artifacts and sites. The paper then moves to the potential risks and challenges that AI poses in cultural heritage preservation, including the ethical, legal, and social issues to be addressed by organizations and government authorities. Overall, the paper argues strongly for responsible AI and the important role it can play in preserving India's cultural heritage while upholding its values and diversity.

Keywords: responsible AI, cultural heritage, artificial intelligence, biases, transparency

Procedia PDF Downloads 171
1345 Design and Creation of a BCI Videogame for Training and Measure of Sustained Attention in Children with ADHD

Authors: John E. Muñoz, Jose F. Lopez, David S. Lopez

Abstract:

Attention Deficit Hyperactivity Disorder (ADHD) is a disorder that affects 1 out of 5 Colombian children, making it a real public health problem in the country. Conventional treatments such as medication and neuropsychological therapy have proved insufficient to decrease the high incidence of ADHD in the principal Colombian cities. This work describes the design and development of a videogame that uses a brain-computer interface not only as an input device but also as a tool to monitor a neurophysiological signal. The videogame, named "The Harvest Challenge", is set in the cultural context of a Colombian coffee grower, where the player uses his/her avatar in three mini-games created to reinforce four fundamental abilities: i) waiting, ii) planning, iii) following instructions, and iv) achieving objectives. The details of the collaborative design process of this multimedia tool, driven by exact clinical needs, and the description of the interaction proposals are presented through the mental stages of attention and relaxation. The final videogame is presented as a tool for sustained attention training in children with ADHD, using as its action mechanism the neuromodulation of beta and theta waves through an electrode located over the central part of the frontal lobe. The electroencephalographic signal is processed automatically inside the videogame, allowing a report to be generated of the evolution of the theta/beta ratio, a biological marker that has been demonstrated to be a sufficient measure to discriminate between children with and without the deficit.

Keywords: BCI, neuromodulation, ADHD, videogame, neurofeedback, theta/beta ratio

Procedia PDF Downloads 359
1344 Structural Damage Detection Using Modal Data Employing Teaching Learning Based Optimization

Authors: Subhajit Das, Nirjhar Dhang

Abstract:

Structural damage detection is a challenging task in the field of structural health monitoring (SHM). Damage detection methods focus mainly on determining the location and severity of damage. Model updating is a well-known method to locate and quantify damage. In this method, an error function is defined in terms of the difference between the signal measured in the 'experiment' and the signal obtained from the undamaged finite element model. This error function is minimized with a suitable algorithm, and the finite element model is updated accordingly to match the measured response; the damage location and severity can then be identified from the updated model. In this paper, the error function is defined in terms of modal data, viz. frequencies and the modal assurance criterion (MAC), which is derived from the eigenvectors. The error function is minimized by the teaching-learning-based optimization (TLBO) algorithm, and the finite element model is updated accordingly to locate and quantify the damage. Damage is introduced in the model by reducing the stiffness of a structural member. The 'experimental' data are simulated by finite element modelling, and measurement error is introduced into the synthetic 'experimental' data by adding random noise that follows a Gaussian distribution. The efficiency and robustness of the method are demonstrated through three examples: a truss, a beam, and a frame problem. The results show that the TLBO algorithm efficiently detects both the location and the severity of damage using modal data.
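The modal assurance criterion that enters the error function has a standard closed form. A minimal sketch follows; the combined frequency/MAC weighting shown is a plausible illustration, not the paper's exact formulation.

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors:
    MAC = |phi_a . phi_b|^2 / ((phi_a . phi_a)(phi_b . phi_b)).
    Equals 1 for identical (or scaled) shapes, approaches 0 for dissimilar ones."""
    phi_a, phi_b = np.asarray(phi_a, float), np.asarray(phi_b, float)
    return np.dot(phi_a, phi_b) ** 2 / (np.dot(phi_a, phi_a) * np.dot(phi_b, phi_b))

def error_function(f_exp, f_mod, modes_exp, modes_mod, w_f=1.0, w_m=1.0):
    """One plausible error function mixing relative frequency residuals
    and (1 - MAC) terms; the weights w_f, w_m are illustrative assumptions."""
    freq_term = sum(((fe - fm) / fe) ** 2 for fe, fm in zip(f_exp, f_mod))
    mac_term = sum(1.0 - mac(pe, pm) for pe, pm in zip(modes_exp, modes_mod))
    return w_f * freq_term + w_m * mac_term
```

Because MAC is scale-invariant, the updated model's mode shapes need not be mass-normalized identically to the measured ones, which is one reason MAC-based error functions are popular in model updating.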

Keywords: damage detection, finite element model updating, modal assurance criteria, structural health monitoring, teaching learning based optimization

Procedia PDF Downloads 205
1343 Detection of Alzheimer's Protein on Nano Designed Polymer Surfaces in Water and Artificial Saliva

Authors: Sevde Altuntas, Fatih Buyukserin

Abstract:

Alzheimer’s disease causes irreversible neural damage to parts of the brain. One of the disease markers is the Amyloid-β 1-42 protein, which accumulates in the brain in the form of plaques. The basic problem in detecting this protein is its low concentration, which cannot be measured reliably in body fluids such as blood, saliva, or urine. To solve this problem, tests like ELISA or PCR are used, which are expensive, require specialized personnel, and can involve complex protocols. Surface-enhanced Raman Spectroscopy (SERS) is therefore a good candidate for the detection of the Amyloid-β 1-42 protein, because this spectroscopic technique can potentially allow even single-molecule detection from liquid and solid surfaces; moreover, the SERS signal can be enhanced by using nanopatterned surfaces and is specific to the molecules present. In this context, our study proposes diagnostic test models that utilize Au-coated nanopatterned polycarbonate (PC) surfaces modified with Thioflavin-T to detect low concentrations of the Amyloid-β 1-42 protein in water and artificial saliva by enhancement of the protein's SERS signal. The nanopatterned PC surface used to enhance the SERS signal was fabricated using anodic alumina membranes (AAMs) as a template. It is possible to produce AAMs with different column structures and varying thicknesses depending on voltage and anodization time, and after fabrication the pore diameter of the AAMs can be adjusted by treatment with a dilute acid solution. In this study, two different column structures were prepared. After a surface modification to decrease their surface energy, the AAMs were treated with PC solution. Following solvent evaporation, nanopatterned PC films with tunable pillared structures were peeled off from the membrane surface. The PC film was then modified with Au and Thioflavin-T for the detection of the Amyloid-β 1-42 protein. The protein detection studies were first conducted in water via this biosensor platform.
The same measurements were conducted in artificial saliva to detect the presence of the Amyloid-β 1-42 protein. SEM, SERS, and contact angle measurements were carried out for the characterization of the different surfaces and further demonstration of protein attachment. SERS enhancement factor calculations were also completed from the experimental results. In summary, our research group fabricated diagnostic test models that utilize Au-coated nanopatterned PC surfaces modified with Thioflavin-T to detect low concentrations of the Alzheimer's Amyloid-β protein in water and artificial saliva. This work was supported by The Scientific and Technological Research Council of Turkey (TUBITAK), Grant No: 214Z167.

Keywords: Alzheimer's disease, anodic aluminum oxide, nanotopography, surface enhanced Raman spectroscopy

Procedia PDF Downloads 282
1342 MRI Quality Control Using Texture Analysis and Spatial Metrics

Authors: Kumar Kanudkuri, A. Sandhya

Abstract:

Typically, in an MRI clinical setting, several protocols are run, each indicated for a specific anatomy and disease condition. However, these protocols, or parameters within them, can change over time due to changes in the recommendations of physician groups, updates in the software, or the availability of new technologies. Most of the time, the changes are made by the MRI technologist to account for time, coverage, physiological, or Specific Absorption Rate (SAR) considerations. It is therefore important to give MRI technologists proper guidelines so that they do not change parameters in ways that negatively impact image quality. Typically, a standard American College of Radiology (ACR) MRI phantom is used for Quality Control (QC) in order to guarantee that the primary objectives of MRI are met. Visual evaluation of quality depends on the operator/reviewer and may vary among operators, and even for the same operator at different times. Overcoming these constraints is essential for a more impartial evaluation of quality, which makes quantitative estimation of image quality (IQ) metrics very important for MRI quality control. To solve this problem, we propose a robust, open-source, and automated MRI image quality control tool. We designed and developed an automatic analysis tool for measuring MRI IQ metrics, including Signal-to-Noise Ratio (SNR), Signal-to-Noise Ratio Uniformity (SNRU), Visual Information Fidelity (VIF), Feature Similarity (FSIM), gray-level co-occurrence matrix (GLCM) texture measures, slice thickness accuracy, slice position accuracy, and high-contrast spatial resolution, which provided good accuracy of assessment. A standardized quality report is generated that incorporates the metrics that impact diagnostic quality.
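As an illustration of one of the listed metrics, SNR for a uniform phantom slice can be estimated from a signal ROI and a background (air) ROI. The ROI placement, the synthetic image, and the omission of Rayleigh-noise correction factors below are simplifying assumptions, not the tool's exact method.

```python
import numpy as np

def roi_snr(image, signal_roi, background_roi):
    """SNR as mean signal-ROI intensity over the standard deviation of a
    background ROI. ROIs are (row_slice, col_slice) pairs."""
    return image[signal_roi].mean() / image[background_roi].std()

# Synthetic 'phantom' slice: a bright uniform square on a noisy air
# background (purely illustrative data, not an ACR acquisition).
rng = np.random.default_rng(3)
img = 2.0 * rng.standard_normal((128, 128))   # background noise, sd = 2
img[32:96, 32:96] += 100.0                    # uniform 'phantom' signal

snr = roi_snr(img,
              signal_roi=(slice(48, 80), slice(48, 80)),    # phantom centre
              background_roi=(slice(0, 16), slice(0, 16)))  # air corner
```

SNRU would follow by repeating this per slice (or per sub-ROI) and reporting the spread of the resulting SNR values.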

Keywords: ACR MRI phantom, MRI image quality metrics, SNRU, VIF, FSIM, GLCM, slice thickness accuracy, slice position accuracy

Procedia PDF Downloads 150
1341 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI

Authors: James Rigor Camacho, Wansu Lim

Abstract:

Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms; they can collect, process, and store data on their own, and can run complicated algorithms such as localization, detection, and recognition in real-time applications, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano, was used in the implementation. The cEEGrid, integrated with the open-source brain-computer interface platform (OpenBCI), is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. Machine learning-based classifiers were used to perform graphical spectrogram categorization of EEG signals and to predict emotional states from input data properties. The EEG signals were analyzed using the K-Nearest Neighbor (KNN) technique, a supervised learning method, until the emotional state was identified. In EEG signal processing, after each EEG signal is received in real time, the Fast Fourier Transform (FFT) is used to translate it from the time to the frequency domain and to observe the frequency bands in each EEG signal. To appropriately capture the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and employed. The next stage is to use the selected features to predict emotion in the EEG data with the KNN technique; arousal and valence datasets are used to train the parameters of the KNN classifier. Because classification, recognition of specific classes, and emotion prediction are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device like the NVIDIA Jetson Nano. EEG-based emotion identification on the edge can be employed in applications that rapidly expand both research and industrial adoption.
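The FFT band-feature extraction followed by KNN classification described above can be sketched as follows. The band definitions, epoch length, and synthetic training data are assumptions for illustration, with scikit-learn's KNN standing in for whatever implementation ran on the Jetson Nano.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

FS = 128
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_features(epoch, fs=FS):
    """Log mean band power per EEG band via the FFT, mirroring the paper's
    frequency-domain feature step (power density per band)."""
    freqs = np.fft.rfftfreq(epoch.size, 1 / fs)
    power = np.abs(np.fft.rfft(epoch)) ** 2
    return np.array([np.log(power[(freqs >= lo) & (freqs < hi)].mean())
                     for lo, hi in BANDS.values()])

# Hypothetical training data: 2-second epochs labelled by state, with
# theta-dominant (class 0) vs beta-dominant (class 1) activity.
rng = np.random.default_rng(1)
t = np.arange(0, 2, 1 / FS)

def make_epoch(dominant_hz):
    return np.sin(2 * np.pi * dominant_hz * t) + 0.3 * rng.standard_normal(t.size)

X = np.array([band_features(make_epoch(hz)) for hz in [6] * 20 + [20] * 20])
y = np.array([0] * 20 + [1] * 20)

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
```

On real data, X would be built from arousal/valence-labelled epochs, and each incoming real-time epoch would pass through `band_features` before `clf.predict`.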

Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors

Procedia PDF Downloads 97
1340 Potential of Mineral Composition Reconstruction for Monitoring the Performance of an Iron Ore Concentration Plant

Authors: Maryam Sadeghi, Claude Bazin, Daniel Hodouin, Laura Perez Barnuevo

Abstract:

The performance of a separation process is usually evaluated using performance indices calculated from elemental assays readily available from the chemical analysis laboratory. However, the performance of a separation process is essentially related to the properties of the minerals that carry the elements, not to those of the elements themselves. Since elements or metals can be carried by both valuable and gangue minerals in the ore, and since each mineral responds differently to a mineral processing method, the use of elemental assays alone could lead to erroneous or uncertain conclusions about process performance. This paper discusses the advantages of using performance indices calculated from mineral contents, such as mineral recoveries, for process performance assessment. A method is presented that uses elemental assays to estimate the mineral content of the solids in various process streams. The method combines the stoichiometric composition of the minerals with constraints of mass conservation for the minerals through the concentration process to estimate mineral contents from elemental assays. The advantage of assessing a concentration process using mineral-based performance indices is illustrated for an iron ore concentration circuit.
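The element-to-mineral conversion at the heart of the method amounts to solving a constrained linear system: elemental assays a relate to mineral mass fractions m through a stoichiometric matrix S, a = S m with m >= 0. The sketch below uses a hypothetical chalcopyrite/pyrite/quartz ore and plain non-negative least squares; the paper's actual minerals and its mass-conservation constraints across the circuit are more elaborate.

```python
import numpy as np
from scipy.optimize import nnls

# Stoichiometric mass fractions of Cu, Fe, S, Si in chalcopyrite (CuFeS2),
# pyrite (FeS2), and quartz (SiO2) -- a hypothetical three-mineral ore.
S = np.array([
    [0.346, 0.000, 0.000],   # Cu
    [0.304, 0.465, 0.000],   # Fe
    [0.349, 0.535, 0.000],   # S
    [0.000, 0.000, 0.467],   # Si
])

def minerals_from_assays(assays, S):
    """Estimate mineral mass fractions m >= 0 from elemental assays a,
    solving a = S @ m in the non-negative least-squares sense."""
    m, _residual = nnls(S, assays)
    return m

true_m = np.array([0.20, 0.30, 0.50])   # chalcopyrite, pyrite, quartz
assays = S @ true_m                      # consistent synthetic assays
est = minerals_from_assays(assays, S)
```

With noisy real assays the system is overdetermined and inconsistent, which is where the paper's data-reconciliation (mass-conservation) constraints add value over a per-stream fit like this one.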

Keywords: data reconciliation, iron ore concentration, mineral composition, process performance assessment

Procedia PDF Downloads 200
1339 Virtual Chemistry Laboratory as Pre-Lab Experiences: Stimulating Student's Prediction Skill

Authors: Yenni Kurniawati

Abstract:

Prediction skill in chemistry experiments is important for pre-service chemistry students: it stimulates reflective thinking at each stage of many chemistry experiments, both qualitatively and quantitatively. A virtual chemistry laboratory was designed to give students opportunities and time to practice many kinds of chemistry experiments repeatedly, anywhere and at any time, before they perform a real experiment. The virtual chemistry laboratory content was constructed using the Model of Educational Reconstruction and developed to enhance students' ability to predict experiment results and analyze the causes of error, calculating accuracy and precision while using chemicals carefully. This research showed a change in students' decision-making and a strong awareness of accuracy, but still a low concern for precision. It enhanced students' reflective thinking related to their prediction skill by one to two stages on average. Most of them could predict the characteristics of the product of an experiment, and even whether the result was going to be erroneous. In addition, they took experiments more seriously and were more curious about the experimental results. This study recommends extending the approach to other subject matter to provide more opportunities for students to learn about other kinds of chemistry experiment designs.

Keywords: virtual chemistry laboratory, chemistry experiments, prediction skill, pre-lab experiences

Procedia PDF Downloads 328
1338 Short Term Distribution Load Forecasting Using Wavelet Transform and Artificial Neural Networks

Authors: S. Neelima, P. S. Subramanyam

Abstract:

The major tool for distribution planning is load forecasting, the anticipation of the load in advance. Artificial neural networks have found wide application in load forecasting as an efficient strategy for planning and management. In this paper, the application of neural networks to the design of short term load forecasting (STLF) systems is explored. Our work presents a pragmatic methodology for STLF using a proposed two-stage model combining the wavelet transform (WT) and an artificial neural network (ANN). It is a two-stage prediction system in which the input data are wavelet-decomposed at the first stage, and the decomposed data, together with additional inputs, are used to train a separate neural network to forecast the load; the forecasted load is obtained by reconstruction of the decomposed data. The hybrid model has been trained and validated using load data from the Telangana State Electricity Board.
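The decompose-forecast-reconstruct idea can be illustrated with a one-level Haar decomposition (the abstract does not state which mother wavelet or decomposition depth was used, so Haar is an assumption): the load series is split into approximation and detail sub-series, each of which would feed a separate neural network, and the final forecast is reassembled by the inverse transform.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar wavelet decomposition into approximation and
    detail coefficients (x must have even length)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    """Perfect reconstruction from one-level Haar coefficients."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

# Synthetic daily-periodic 'load' series standing in for utility data.
t = np.arange(256)
load = (100 + 20 * np.sin(2 * np.pi * t / 24)
        + np.random.default_rng(0).normal(0, 1, t.size))

approx, detail = haar_dwt(load)
# In the paper's pipeline, a separate ANN forecasts each sub-series and the
# forecasts are recombined with the inverse transform; here we only verify
# that the decomposition itself is lossless.
reconstructed = haar_idwt(approx, detail)
```

Because the approximation sub-series is smoother than the raw load, the ANN trained on it faces an easier regression problem, which is the usual motivation for WT+ANN hybrids.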

Keywords: electrical distribution systems, wavelet transform (WT), short term load forecasting (STLF), artificial neural network (ANN)

Procedia PDF Downloads 422
1337 A Structure-Switching Electrochemical Aptasensor for Rapid, Reagentless and Single-Step, Nanomolar Detection of C-Reactive Protein

Authors: William L. Whitehouse, Louisa H. Y. Lo, Andrew B. Kinghorn, Simon C. C. Shiu, Julian. A. Tanner

Abstract:

C-reactive protein (CRP) is an acute-phase reactant and a sensitive indicator for sepsis and other life-threatening pathologies, including systemic inflammatory response syndrome (SIRS). Currently, clinical turn-around times for established CRP detection methods range from 30 minutes to hours or even days from centralized laboratories. Here, we report the development of an electrochemical biosensor using redox-probe-tagged DNA aptamers functionalized onto cheap, commercially available screen-printed electrodes. Binding-induced conformational switching of the CRP-targeting aptamer induces a specific and selective signal-ON event, which enables single-step, reagentless detection of CRP in as little as 1 minute. The aptasensor dynamic range spans 5-1000 nM (R = 0.97), or 5-500 nM (R = 0.99) in 50% diluted human serum, with a LOD of 3 nM, corresponding to 2 orders of magnitude of sensitivity below the clinically relevant cut-off for CRP. The sensor is stable for up to one week and can be reused numerous times, as judged from repeated real-time dosing and dose-response assays. By decoupling binding events from the signal induction mechanism, structure-switching electrochemical aptamer-based sensors (SS-EABs) provide considerable advantages over their adsorption-based counterparts. Our work expands the range of such sensors reported in the literature and is the first instance of an SS-EAB for reagentless CRP detection. We hope this study can inspire further investigations into the suitability of SS-EABs for diagnostics, aiding translational R&D toward fully realized devices for point-of-care applications or broader public use.
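Figures of merit such as the LOD quoted above are conventionally derived from a calibration curve. The sketch below applies the common 3-sigma/slope estimate to entirely synthetic calibration data; the numbers, the linear response model, and the blank-replicate scheme are illustrative assumptions, not the paper's data or analysis.

```python
import numpy as np

# Hypothetical calibration: signal gain vs CRP concentration (nM), assuming
# an approximately linear response over the sensor's dynamic range.
conc = np.array([5.0, 10.0, 50.0, 100.0, 500.0, 1000.0])
rng = np.random.default_rng(5)
signal = 0.04 * conc + 1.0 + rng.normal(0, 0.2, conc.size)
blank_replicates = 1.0 + rng.normal(0, 0.2, 10)   # repeated zero-CRP readings

# Linear calibration fit.
slope, intercept = np.polyfit(conc, signal, 1)

# 3-sigma convention: LOD = 3 * (sd of blank) / calibration slope.
lod = 3.0 * blank_replicates.std(ddof=1) / slope
```

In practice, sensor-to-sensor variability would be folded into the blank standard deviation, and the fit would be restricted to the linear portion of the dose-response curve.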

Keywords: structure-switching, C-reactive protein, electrochemical, biosensor, aptasensor

Procedia PDF Downloads 50
1336 Analysis of Non-Uniform Characteristics of Small Underwater Targets Based on Clustering

Authors: Tianyang Xu

Abstract:

Small underwater targets generally have a non-centrosymmetric geometry, and under active sonar detection conditions the acoustic scattering field of the target is spatially inhomogeneous. In view of these problems, this paper takes a hemispherically capped cylindrical shell as the research object, considers the angular continuity implied in the echo characteristics, and proposes a cluster-driven method for studying the angular non-uniformity of the target echo. First, the target echo features are extracted and feature vectors are constructed. Second, the t-SNE algorithm is used to preserve the internal relationships of the feature vectors in a low-dimensional feature space and to construct a visual feature space. Finally, the implicit angular relationship between echo features is extracted under unsupervised conditions by cluster analysis. The reconstruction results for the local geometric structure of the target corresponding to the different categories show that the method can effectively partition the angular intervals of the target's local structure according to its natural acoustic scattering characteristics.
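The embed-then-cluster step can be sketched as follows, with synthetic echo features standing in for the real sonar data. The feature dimensionality, the t-SNE settings, and the use of k-means are assumptions for illustration; the abstract does not name its clustering algorithm.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)

# Hypothetical echo feature vectors drawn from three angular sectors of the
# target, each sector having a distinct scattering signature.
centers = rng.uniform(-5.0, 5.0, (3, 10))
X = np.vstack([c + 0.3 * rng.standard_normal((40, 10)) for c in centers])

# t-SNE embeds the high-dimensional features into a 2-D 'visual feature
# space' that preserves local neighbourhood structure.
embedding = TSNE(n_components=2, perplexity=20, random_state=7).fit_transform(X)

# Unsupervised clustering in the embedded space recovers the angular sectors.
labels = KMeans(n_clusters=3, n_init=10, random_state=7).fit_predict(embedding)
```

Each recovered cluster then corresponds to an angular interval over which the local scattering behaviour of the shell is approximately uniform.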

Keywords: underwater target, non-uniform characteristics, cluster-driven method, acoustic scattering characteristics

Procedia PDF Downloads 109