Search results for: neural tube defects
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2918

398 Nanoparticles in Drug Delivery and Therapy of Alzheimer's Disease

Authors: Nirupama Dixit, Anyaa Mittal, Neeru Sood

Abstract:

Alzheimer’s disease (AD) is a progressive form of dementia, accounting for up to 70% of dementia cases; it is mostly observed in the elderly but is not restricted to old age. The pathophysiology of the disease is characterized by specific pathological changes in the brain. These changes (i.e., accumulation of metal ions in the brain, formation of extracellular β-amyloid (Aβ) peptide aggregates and tangles of hyperphosphorylated Tau protein inside neurons) damage the neuronal connections irreversibly. The current obstacles to improving the quality of life of Alzheimer's patients lie in the fact that the diagnosis is made at a late stage of the disease and the medications do not treat the basic causes of Alzheimer's. The targeted delivery of drugs through the blood brain barrier (BBB) faces several limitations with traditional treatment approaches. To overcome these drug delivery limitations, nanoparticles provide a promising solution. This review focuses on current strategies for efficient targeted drug delivery using nanoparticles and on improving the quality of therapy provided to the patient. Nanoparticles can be used to encapsulate a drug (which is generally hydrophobic) to ensure its passage to the brain; they can be conjugated to metal ion chelators to reduce the metal load in neural tissue, thus lowering the harmful effects of oxidative damage; and they can be conjugated with a drug and monoclonal antibodies against endogenous BBB receptors. Finally, this review covers how nanoparticles can play a role in diagnosing the disease.

Keywords: Alzheimer's disease, β-amyloid plaques, blood brain barrier, metal chelators, nanoparticles

Procedia PDF Downloads 467
397 AutoML: Comprehensive Review and Application to Engineering Datasets

Authors: Parsa Mahdavi, M. Amin Hariri-Ardebili

Abstract:

The development of accurate machine learning and deep learning models traditionally demands hands-on expertise and a solid background to fine-tune hyperparameters. With the continuous expansion of datasets in various scientific and engineering domains, researchers increasingly turn to machine learning methods to unveil hidden insights that may elude classic regression techniques. This surge in adoption raises concerns about the adequacy of the resultant meta-models and, consequently, the interpretation of the findings. In response to these challenges, automated machine learning (AutoML) emerges as a promising solution, aiming to construct machine learning models with minimal intervention or guidance from human experts. AutoML encompasses crucial stages such as data preparation, feature engineering, hyperparameter optimization, and neural architecture search. This paper provides a comprehensive overview of the principles underpinning AutoML, surveying several widely-used AutoML platforms. Additionally, the paper offers a glimpse into the application of AutoML on various engineering datasets. By comparing these results with those obtained through classical machine learning methods, the paper quantifies the uncertainties inherent in the application of a single ML model versus the holistic approach provided by AutoML. These examples showcase the efficacy of AutoML in extracting meaningful patterns and insights, emphasizing its potential to revolutionize the way we approach and analyze complex datasets.
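
As a rough illustration of the kind of search AutoML automates, the sketch below loops over a few candidate model families and hyperparameter grids with scikit-learn; it is not one of the surveyed platforms, and the candidate models, grids, and synthetic data are assumptions made purely for illustration.

```python
# Minimal sketch of an AutoML-style model/hyperparameter search using
# scikit-learn only; candidate models and grids are illustrative.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "ridge": (Ridge(), {"alpha": [0.1, 1.0, 10.0]}),
    "random_forest": (RandomForestRegressor(random_state=0),
                      {"n_estimators": [100, 300], "max_depth": [None, 10]}),
    "mlp": (MLPRegressor(max_iter=2000, random_state=0),
            {"hidden_layer_sizes": [(50,), (100, 50)]}),
}

best_name, best_search = None, None
for name, (model, grid) in candidates.items():
    search = GridSearchCV(model, grid, cv=5, scoring="neg_mean_squared_error")
    search.fit(X_train, y_train)
    if best_search is None or search.best_score_ > best_search.best_score_:
        best_name, best_search = name, search

print(best_name, best_search.best_params_, best_search.score(X_test, y_test))
```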

Keywords: automated machine learning, uncertainty, engineering dataset, regression

Procedia PDF Downloads 38
396 Extraction and Quantification of Peramine Present in Dalaca pallens, a Pest of Grassland in Southern Chile

Authors: Leonardo Parra, Daniel Martínez, Jorge Pizarro, Fernando Ortega, Manuel Chacón-Fuentes, Andrés Quiroz

Abstract:

Control of Dalaca pallens, or blackworms, one of the most important hypogeous pests of grassland in southern Chile, is based on the use of broad-spectrum insecticides such as organophosphates and pyrethroids. However, the rapid development of insecticide resistance in field populations of this insect and public concern over the environmental impact of these insecticides have resulted in the search for other control methods. Specifically, the use of endophytic fungi for controlling pests has emerged as an interesting and promising strategy. Endophytes from ryegrass (Lolium perenne) establish a biotrophic relationship with the host, defined as mutualistic symbiosis. The plant-fungus association produces alkaloids, of which peramine is the main toxic substance against Listronotus bonariensis, the most important epigean pest of ryegrass. Nevertheless, the effect of peramine on other pest insects, such as D. pallens, has to our knowledge not been studied, nor has its possible metabolization in the body of the larvae. Therefore, we addressed the following research question: do larvae of D. pallens store peramine after consumption of endophyte-infected (E+) ryegrass? For this, specimens of blackworms were fed with ryegrass plants of seven experimental lines and one endophyte-free (E-) commercial cultivar sown at the Instituto de Investigaciones Agropecuarias Carillanca (Vilcún, Chile). Once the feeding period was over, ten larvae of each treatment were examined. Individuals were dissected, and their gut was removed to exclude any influence of remaining material. The rest of the larva's body was dried at 60°C for 24-48 h and ground into a fine powder using a mortar. 25 mg of dry powder was transferred to a microcentrifuge tube and extracted in 1 mL of a mixture of methanol:water:formic acid. Then, the samples were centrifuged at 16,000 rpm for 3 min, and the supernatant was collected and injected into a high-performance liquid chromatograph (HPLC). The results confirmed the presence of peramine in the larval body of D. pallens. The insects fed on the experimental lines LQE-2 and LQE-6 were those in which peramine was present in the highest proportion (0.205 and 0.199 ppm, respectively), while LQE-7 and LQE-3 gave the lowest concentrations of the alkaloid (0.047 and 0.053 ppm, respectively). Peramine was not detected in the insects when the control cultivar Jumbo (E-) was tested. These results evidence the storage of peramine by the larvae during consumption. However, the effect of this alkaloid present in 'future ryegrass cultivars' (LQE-2 and LQE-6) on the performance and survival of blackworms must be studied and confirmed experimentally.

Keywords: blackworms, HPLC, alkaloid, pest

Procedia PDF Downloads 278
395 Predicting Options Prices Using Machine Learning

Authors: Krishang Surapaneni

Abstract:

The goal of this project is to determine how to predict important aspects of options, including the ask price. We want to compare different machine learning models to learn the best model and the best hyperparameters for that model for this purpose and data set. Option pricing is a relatively new field, and it can be very complicated and intimidating, especially to inexperienced people, so we want to create a machine learning model that can predict important aspects of an option, which can aid in future research. We tested multiple models and experimented with hyperparameter tuning, trying to find some of the best parameters for a machine learning model. We tested three different models: a random forest regressor, a linear regressor, and an MLP (multi-layer perceptron) regressor. The most important feature in this experiment is the ask price; this is what we were trying to predict. In the field of stock price prediction, there is a large potential for error, so we could not judge the models on whether they predicted prices perfectly. Because of this, we determined the accuracy of each model by finding the average percentage difference between the predicted and actual values, comparing the actual results in the testing data with the predictions made by the models. The linear regression model performed worst, with an average percentage error of 17.46%. The MLP regressor had an average percentage error of 11.45%, and the random forest regressor had an average percentage error of 7.42%.
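
A minimal sketch of the comparison described in the abstract is shown below, assuming generic tabular features with the ask price as the target; the synthetic data, model settings, and train/test split are assumptions, while the error metric follows the average percentage difference described above.

```python
# Sketch: compare three regressors on ask-price prediction using the
# average percentage difference between predicted and actual values.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

def avg_pct_error(y_true, y_pred):
    """Mean absolute percentage difference, used to score each model."""
    return float(np.mean(np.abs(y_pred - y_true) / np.abs(y_true)) * 100)

# X, y would be the option features and ask prices from the data set;
# random values stand in here so the sketch runs on its own.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = np.abs(X @ rng.normal(size=8)) + 1.0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "linear": LinearRegression(),
    "mlp": MLPRegressor(max_iter=2000, random_state=0),
    "random_forest": RandomForestRegressor(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, round(avg_pct_error(y_te, model.predict(X_te)), 2), "%")
```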

Keywords: finance, linear regression model, machine learning model, neural network, stock price

Procedia PDF Downloads 58
394 Develop a Conceptual Data Model of Geotechnical Risk Assessment in Underground Coal Mining Using a Cloud-Based Machine Learning Platform

Authors: Reza Mohammadzadeh

Abstract:

The major challenges in geotechnical engineering in underground spaces arise from uncertainties and different probabilities. The collection, collation, and collaboration of existing data to incorporate them in analysis and design for a given prospect evaluation would be a reliable, practical problem-solving method under uncertainty. Machine learning (ML) is a subfield of artificial intelligence in statistical science which applies different techniques (e.g., regression, neural networks, support vector machines, decision trees, random forests, genetic programming, etc.) to data to automatically learn and improve from them without being explicitly programmed, and to make decisions and predictions. In this paper, a conceptual database schema of geotechnical risks in underground coal mining based on a cloud system architecture has been designed. A new approach to risk assessment using a three-dimensional risk matrix supported by the level of knowledge (LoK) has been proposed in this model. Subsequently, the stages of the model workflow methodology have been described. In order to train the data and deploy the LoK models, an ML platform has been implemented. IBM Watson Studio, a leading data science tool and data-driven cloud integration ML platform, is employed in this study. As a use case, a data set of geotechnical hazards and risk assessments in underground coal mining was prepared to demonstrate the performance of the model, and accordingly, the results have been outlined.
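
The abstract does not spell out the form of the three-dimensional risk matrix; one plausible encoding, shown purely as an assumption and not as the paper's actual model, scores risk as likelihood times consequence and lets the level of knowledge (LoK) act as a third axis that widens the risk class when knowledge is poor.

```python
# Hypothetical sketch of a three-dimensional risk matrix: likelihood x
# consequence gives a base score, and the level of knowledge (LoK) is a
# third axis that makes the class more conservative when knowledge is low.
# Thresholds and the LoK adjustment rule are assumptions.
def risk_class(likelihood: int, consequence: int, lok: int) -> str:
    """likelihood, consequence in 1..5; lok in 1 (poor) .. 3 (good)."""
    score = likelihood * consequence           # classic 2-D matrix score
    if lok == 1:                               # low knowledge: be conservative
        score = min(25, score + 5)
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

print(risk_class(likelihood=4, consequence=3, lok=1))  # -> "high"
```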

Keywords: data model, geotechnical risks, machine learning, underground coal mining

Procedia PDF Downloads 247
393 Hydrogen Induced Fatigue Crack Growth in Pipeline Steel API 5L X65: A Combined Experimental and Modelling Approach

Authors: H. M. Ferreira, H. Cockings, D. F. Gordon

Abstract:

Climate change is driving a transition in the energy sector, with low-carbon energy sources such as hydrogen (H2) emerging as an alternative to fossil fuels. However, the successful implementation of a hydrogen economy requires an expansion of hydrogen production, transportation and storage capacity. The costs associated with this transition are high but can be partly mitigated by adapting the current oil and natural gas networks, such as pipelines, an important component of the hydrogen infrastructure, to transport pure or blended hydrogen. Steel pipelines are designed to withstand fatigue, one of the most common causes of pipeline failure. However, it is well established that some materials, such as steel, can fail prematurely in service when exposed to hydrogen-rich environments. Therefore, it is imperative to evaluate how defects (e.g. inclusions, dents, and pre-existing cracks) will interact with hydrogen under cyclic loading and, ultimately, to what extent hydrogen induced failure will limit the service conditions of steel pipelines. This presentation will explore how the exposure of API 5L X65 to a hydrogen-rich environment and cyclic loads influences its susceptibility to hydrogen induced failure. That evaluation is performed by a combination of several techniques, such as hydrogen permeation testing (ISO 17081:2014) and fatigue crack growth (FCG) testing (ISO 12108:2018 and AFGROW modelling), combined with microstructural and fractographic analysis. The development of an FCG test setup coupled with an electrochemical cell will be discussed, along with the advantages and challenges of measuring crack growth rates in electrolytic hydrogen environments. A detailed assessment of several electrolytic charging conditions will also be presented, using hydrogen permeation testing as a method to correlate the different charging settings to equivalent hydrogen concentrations and effective diffusivity coefficients, not only for the base material but also for the heat affected zone and weld of the pipelines. The experimental work is being complemented with AFGROW, an FCG modelling software package that has helped inform testing parameters and which will also be developed to ultimately help industry experts perform structural integrity analysis and remnant life characterisation of pipeline steels under representative conditions. The results from this research will allow us to conclude whether there is an acceleration of the crack growth rate of API 5L X65 under the influence of a hydrogen-rich environment, an important aspect that needs to be addressed in standards and codes of practice on pipeline integrity evaluation and maintenance.

Keywords: AFGROW, electrolytic hydrogen charging, fatigue crack growth, hydrogen, pipeline, steel

Procedia PDF Downloads 74
392 A Dynamic Cardiac Single Photon Emission Computed Tomography Using Conventional Gamma Camera to Estimate Coronary Flow Reserve

Authors: Maria Sciammarella, Uttam M. Shrestha, Youngho Seo, Grant T. Gullberg, Elias H. Botvinick

Abstract:

Background: Myocardial perfusion imaging (MPI) is typically performed with static imaging protocols and visually assessed for perfusion defects based on the relative intensity distribution. Dynamic cardiac SPECT, on the other hand, is a new imaging technique that is based on time-varying information of radiotracer distribution, which permits quantification of myocardial blood flow (MBF). In this abstract, we report the progress and current status of dynamic cardiac SPECT using a conventional gamma camera (Infinia Hawkeye 4, GE Healthcare) for the estimation of myocardial blood flow and coronary flow reserve. Methods: A group of patients at high risk of coronary artery disease was enrolled to evaluate our methodology. A low-dose/high-dose rest/pharmacologic-induced-stress protocol was implemented. A standard rest and a standard stress radionuclide dose of ⁹⁹ᵐTc-tetrofosmin (140 keV) were administered. The dynamic SPECT data for each patient were reconstructed using the standard 4-dimensional maximum likelihood expectation maximization (ML-EM) algorithm. Acquired data were used to estimate the myocardial blood flow (MBF). The correspondence between flow values in the main coronary vasculature and myocardial segments defined by the standardized myocardial segmentation and nomenclature was derived. The coronary flow reserve, CFR, was defined as the ratio of stress to rest MBF values. CFR values estimated with SPECT were also validated against dynamic PET. Results: The range of territorial MBF in the LAD, RCA, and LCX was 0.44 ml/min/g to 3.81 ml/min/g. The MBF estimated with PET and SPECT in an independent cohort of 7 patients showed a statistically significant correlation, r = 0.71 (p < 0.001), but the corresponding CFR correlation was moderate, r = 0.39, yet statistically significant (p = 0.037). The mean stress MBF value was significantly lower for the angiographically abnormal than for the normal (normal mean MBF = 2.49 ± 0.61, abnormal mean MBF = 1.43 ± 0.62, p < 0.001). Conclusions: The visually assessed image findings in clinical SPECT are subjective and may not reflect direct physiologic measures of a coronary lesion. The MBF and CFR measured with dynamic SPECT are fully objective and available only with the data generated by the dynamic SPECT method. A quantitative approach such as measuring CFR using dynamic SPECT imaging is a better mode of diagnosing CAD than visual assessment of stress and rest images from static SPECT.
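
Since the abstract defines CFR as the ratio of stress to rest MBF, a small sketch of the per-territory calculation is shown below; the MBF values are placeholders, not patient data from the study.

```python
# Sketch: coronary flow reserve (CFR) per vascular territory, defined as
# stress MBF divided by rest MBF.  The numbers are placeholders.
rest_mbf = {"LAD": 0.80, "LCX": 0.95, "RCA": 0.70}      # ml/min/g
stress_mbf = {"LAD": 2.10, "LCX": 2.60, "RCA": 1.20}    # ml/min/g

cfr = {t: stress_mbf[t] / rest_mbf[t] for t in rest_mbf}
for territory, value in cfr.items():
    print(f"{territory}: CFR = {value:.2f}")
```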

Keywords: dynamic SPECT, clinical SPECT/CT, selective coronary angiography, ⁹⁹ᵐTc-tetrofosmin

Procedia PDF Downloads 134
391 Microstructure and Mechanical Properties of Nb: Si: (a-C) Thin Films Prepared Using Balanced Magnetron Sputtering System

Authors: Sara Khamseh, Elahe Sharifi

Abstract:

321 alloy steel is an austenitic stainless steel with high oxidation resistance and is commonly used to fabricate heat exchangers and steam generators. However, its low hardness and weak tribological performance can cause dangerous failures during industrial operations. Well-designed protective coatings with high hardness and good tribological performance on 321 alloy steel surfaces can guarantee its safe application. The surface protection of metal substrates using protective coatings has shown high efficiency in overcoming these problems. Carbon-based multicomponent coatings, such as metal-added amorphous carbon coatings, are crucially important because of their remarkable mechanical and tribological performance. In the current study, (Nb: Si: a-C) multicomponent coatings (a-C: amorphous carbon) were deposited on 321 alloy using a balanced magnetron (BM) sputtering system at room temperature. The effects of the Si/Nb ratio on the microstructure and the mechanical and tribological characteristics of the (Nb: Si: a-C) composite coatings were investigated. The XRD and Raman analysis results showed that the coatings formed a composite structure of cubic diamond (C-D), NbC, and graphite-like carbon (GLC). The abundance of the NbC phase decreased while that of the C-D phase increased with an increasing Si/Nb ratio. The coatings' indentation hardness and plasticity index (H³/E² ratio) increased with an increasing Si/Nb ratio. The better mechanical properties of the coatings with higher Si content can be attributed to the higher cubic diamond (C-D) content. Cubic diamond (C-D) is a hard phase and can positively affect the mechanical performance of the coatings. It is well documented that in hard protective coatings, Si encourages amorphization. In addition, studies have shown that Nb and Mo can act as catalysts for the nucleation and growth of hard cubic (C-D) and hexagonal (H-D) diamond phases in a-C coatings. In the current study, it appears that fully arranged nanocomposite coatings containing hard C-D and NbC phases embedded in the amorphous carbon (GLC) phase were formed. This unique structure decreased the grain boundary density and defects and resulted in a high hardness and H³/E² ratio. Moreover, the COF and wear rate of the coatings decreased with increasing Si/Nb ratio. This can be attributed to the good mechanical properties of the coatings and the formation of a graphite-like carbon (GLC) structure with a lamellar arrangement in the coatings. Complex and self-lubricating coatings were successfully formed on the surface of the 321 alloy. The results of the present study clarified that Si addition to (Nb: a-C) coatings improves the mechanical and tribological performance of the coatings on the 321 alloy.

Keywords: COF, mechanical properties, microstructure, (Nb: Si: a-C) coatings, wear rate

Procedia PDF Downloads 59
390 R-Killer: An Email-Based Ransomware Protection Tool

Authors: B. Lokuketagoda, M. Weerakoon, U. Madushan, A. N. Senaratne, K. Y. Abeywardena

Abstract:

Ransomware has become a common threat in the past few years, and recent threat reports show a growth in Ransomware infections. Researchers have identified different variants of Ransomware families since 2015. Lack of user knowledge about the threat is a major concern. Ransomware detection methodologies are still evolving across the industry. Email is the easiest method of sending Ransomware to its victims. Uninformed users tend to click on links and attachments without much consideration, assuming the emails are genuine. As a solution to this, the R-Killer Ransomware detection tool is introduced in this paper. The tool can be integrated with existing email services. The core detection engine (CDE) discussed in the paper focuses on separating suspicious samples from emails and handling them until a decision is made regarding the suspicious mail. It has the capability of preventing the execution of identified ransomware processes. On the other hand, the sandboxing and URL analyzing system has the capability of communicating with public threat intelligence services to gather known threat intelligence. R-Killer has its own mechanism, developed in its Proactive Monitoring System (PMS), which can monitor the processes created by downloaded email attachments and identify potential Ransomware activities. R-Killer is capable of gathering threat intelligence without exposing the user’s data to public threat intelligence services, hence protecting the confidentiality of user data.

Keywords: ransomware, deep learning, recurrent neural networks, email, core detection engine

Procedia PDF Downloads 183
389 Neuromyelitis Optica Area Postrema Syndrome (NMOSD-APS) in a Fifteen-Year-Old Girl: A Case Report

Authors: Merilin Ivanova Ivanova, Kalin Dimitrov Atanasov, Stefan Petrov Enchev

Abstract:

Background: Neuromyelitis optica spectrum disorder (NMOSD), also known as Devic’s disease, is a relapsing demyelinating autoimmune inflammatory disorder of the central nervous system associated with anti-aquaporin 4 (AQP4) antibodies that can manifest with devastating secondary neurological deficits. Most commonly affected are the optic nerves and the spinal cord; clinically this often presents with optic neuritis (loss of vision), transverse myelitis (weakness or paralysis of the extremities), lack of bladder and bowel control, and numbness. Area postrema syndrome (APS) is a core clinical entity of NMOSD and adds the following symptoms to the clinical picture: intractable nausea, vomiting and hiccups. It usually occurs in isolation at onset and can lead to a significant delay in the diagnosis. The condition may have features similar to multiple sclerosis (MS), but the episodes are worse in NMO, and it is treated differently. It can be relapsing or monophasic. Possible complications are visual field defects and motor impairment, with potential blindness and irreversible motor deficits. In severe cases, myogenic respiratory failure ensues. The incidence of reported cases is approximately 0.3–4.4 per 100,000. Paediatric cases of NMOSD are rare but have been reported occasionally, comprising less than 5% of the reported cases. Objective: The case serves to show the difficulty of the diagnostic process for a rare autoimmune disease with non-specific symptoms, which took a long interval of time to reveal the complete clinical manifestation of the aforementioned syndrome, as well as the necessity of a multidisciplinary approach in the setting of a general paediatric department in an emergency hospital. Methods: The patient's history, clinical presentation, and information from the diagnostic tools used (contrast-enhanced MRI of the central nervous system) led us to the diagnosis. This was later confirmed by the positive result of the anti-aquaporin 4 (AQP4) antibody serology test. Conclusion: APS is a common symptom of NMOSD and is considered a challenge in the differential-diagnostic plan. Gaining an increased awareness of this disease/syndrome, obtaining a detailed patient history, and performing thorough physical examinations are essential if we are to reduce and avoid misdiagnosis.

Keywords: neuromyelitis, devic's disease, hiccup, autoimmune, MRI

Procedia PDF Downloads 22
388 Stimulus-Response and the Innateness Hypothesis: Childhood Language Acquisition of “Genie”

Authors: Caroline Kim

Abstract:

Scholars have long disputed the relationship between the origins of language and human behavior. Historically, behaviorist psychologist B. F. Skinner argued that language is one instance of the general stimulus-response phenomenon that characterizes the essence of human behavior. Another, more recent approach argues, by contrast, that language is an innate cognitive faculty and does not arise from behavior, which might develop and reinforce linguistic facility but is not its source. Pinker, among others, proposes that linguistic defects arise from damage to the brain, both congenital and acquired in life. Much of his argument is based on case studies in which damage to the Broca’s and Wernicke’s areas of the brain results in loss of the ability to produce coherent grammatical expressions when speaking or writing; though affected speakers often utter quite fluent streams of sentences, the words articulated lack discernible semantic content. Pinker concludes on this basis that language is an innate component of specific, classically language-correlated regions of the human brain. Taking a notorious 1970s case of linguistic maladaptation, this paper queries the dominant materialist paradigm of language-correlated regions. Susan “Genie” Wiley was physically isolated from language interaction in her home and beaten by her father when she attempted to make any sort of sound. Though without any measurable resulting damage to the brain, Wiley was never able to develop the level of linguistic facility normally achieved in adulthood. Having received negative reinforcement of language acquisition from her father and lacking the usual language acquisition period, Wiley was able to develop language only to a quite limited level in later life. From a contemporary behaviorist perspective, this case confirms the possibility of language deficiency without brain pathology. Wiley’s potential language-determining areas of the brain were intact, and she was exposed to language later in her life, but she was unable to achieve the normal level of communication skills, deterring socialization. This phenomenon and others like it in the limited case literature on linguistic maladaptation pose serious clinical, scientific, and indeed philosophical difficulties for both of the major competing theories of language acquisition, innateness and linguistic stimulus-response. The implications of such cases for future research in language acquisition are explored, with a particular emphasis on the interaction of innate capacity and stimulus-based development in early childhood.

Keywords: behaviorism, innateness hypothesis, language, Susan "Genie" Wiley

Procedia PDF Downloads 268
387 An Efficient and Low Cost Protocol for Rapid and Mass in vitro Propagation of Hyssopus officinalis L.

Authors: Ira V. Stancheva, Ely G. Zayova, Maria P. Geneva, Marieta G. Hristozkova, Lyudmila I. Dimitrova, Maria I. Petrova

Abstract:

The study describes a highly efficient and low-cost protocol for rapid and mass in vitro propagation of the medicinal and aromatic plant species Hyssopus officinalis L. (Lamiaceae). Hyssop is an important aromatic herb used for its medicinal value because of its antioxidant, anti-inflammatory and antimicrobial properties. The protocol for large-scale multiplication of this aromatic plant was developed using young stem tip explants. The explants were sterilized with 0.04% mercuric chloride (HgCl₂) solution for 20 minutes and washed three times with sterile distilled water for 15 minutes. The culture media were full- and half-strength Murashige and Skoog medium containing indole-3-butyric acid. Full- and half-strength Murashige and Skoog media without auxin were used as controls. For each variant, 20 glass tubes with two plants were used. In each tube, two tip and nodal explants were inoculated. The maximum shoot and root numbers were both obtained on ½ Murashige and Skoog medium supplemented with 0.1 mg L-1 indole-3-butyric acid after four weeks of culture. The number of shoots per explant and shoot height were considered. The data on rooting percentage, the number of roots per plant and root length were collected after the same culture period. The highest survival percentage for this medicinal plant, 85%, was recorded in a mixture of soil, sand and perlite (2:1:1 v/v/v). This mixture was most suitable for acclimatization of all propagated plants. Ex vitro acclimatization was carried out at 24±1 °C and 70% relative humidity under 16 h illumination (50 μmol m⁻²s⁻¹). After the adaptation period, all plants were transferred to the field. The plants flowered within three months after transplantation. Phenotypic variations in the acclimatized plants were not observed. An average of 90% of the acclimatized plants survived after transfer to the field. All the in vitro propagated plants displayed normal development under field conditions. The developed in vitro techniques could provide a promising alternative tool for large-scale propagation that increases the number of homologous plants for field cultivation. Acknowledgments: This study was conducted with financial support from the National Science Fund at the Bulgarian Ministry of Education and Science, Project DN06/7 17.12.16.

Keywords: Hyssopus officinalis L., in vitro culture, micro propagation, acclimatization

Procedia PDF Downloads 294
386 DCDNet: Lightweight Document Corner Detection Network Based on Attention Mechanism

Authors: Kun Xu, Yuan Xu, Jia Qiao

Abstract:

Document detection plays an important role in optical character recognition and text analysis. Because traditional detection methods have weak generalization ability, and deep neural networks have complex structures and large numbers of parameters that cannot be well deployed on mobile devices, this paper proposes a lightweight Document Corner Detection Network (DCDNet). DCDNet is a two-stage architecture. The first stage, with an Encoder-Decoder structure, adopts depthwise separable convolution to greatly reduce the network parameters. After introducing the Feature Attention Union (FAU) module, the second stage enhances the feature information of the spatial and channel dimensions and adaptively adjusts the size of the receptive field to enhance the feature expression ability of the model. To address the large imbalance in the pixel distribution between corner and non-corner regions, a Weighted Binary Cross Entropy Loss (WBCE Loss) is proposed, defining corner detection as a classification problem to make the training process more efficient. To make up for the lack of datasets for document corner detection, a dataset containing 6,620 images, named the Document Corner Detection Dataset (DCDD), was created. Experimental results show that the proposed method can obtain fast, stable and accurate detection results on DCDD.
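
The abstract does not give the exact weighting used in the WBCE loss; the PyTorch sketch below shows one common weighted binary cross-entropy formulation for sparse corner maps, with the pos_weight rule being an assumption rather than DCDNet's actual definition.

```python
# Sketch of a weighted binary cross-entropy (WBCE) loss for corner
# detection, where corner pixels are far rarer than non-corner pixels.
# The weighting (pos_weight = #negatives / #positives) is a common choice
# and an assumption, not necessarily DCDNet's exact formulation.
import torch
import torch.nn.functional as F

def wbce_loss(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    pos = target.sum()
    neg = target.numel() - pos
    pos_weight = neg / pos.clamp(min=1.0)
    return F.binary_cross_entropy_with_logits(logits, target,
                                              pos_weight=pos_weight)

logits = torch.randn(2, 1, 64, 64)                   # predicted corner map
target = (torch.rand(2, 1, 64, 64) > 0.99).float()   # sparse corner labels
print(wbce_loss(logits, target))
```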

Keywords: document detection, corner detection, attention mechanism, lightweight

Procedia PDF Downloads 331
385 Preparation of Hydrophobic Silica Membranes Supported on Alumina Hollow Fibers for Pervaporation Applications

Authors: Ami Okabe, Daisuke Gondo, Akira Ogawa, Yasuhisa Hasegawa, Koichi Sato, Sadao Araki, Hideki Yamamoto

Abstract:

Membrane separation draws attention as an energy-saving technology. Pervaporation (PV) uses hydrophobic ceramic membranes to separate organic compounds from industrial wastewaters. PV makes it possible to separate organic compounds from azeotropic mixtures and from aqueous solutions. For the PV separation of low concentrations of organics from aqueous solutions, hydrophobic ceramic membranes are expected to have high separation performance compared with that of conventional hydrophilic membranes. Membrane separation performance is evaluated based on the pervaporation separation index (PSI), which depends on both the separation factor and the permeate flux. Ingenuity is required to increase the PSI such that the permeate flux increases without reducing the separation factor, or the separation factor increases without reducing the flux. A thin separation layer without defects and pinholes is required. In addition, it is known that the flux can be increased without reducing the separation factor by reducing the diffusion resistance of the membrane support. In a previous study, we prepared hydrophobic silica membranes by a molecular templating sol−gel method using cetyltrimethylammonium bromide (CTAB) to form pores suitable for permitting the passage of organic compounds through the membrane. We separated low concentrations of organics from aqueous solutions by PV using these membranes. In the present study, hydrophobic silica membranes were prepared on a porous alumina hollow-fiber support that is thinner than the previously used alumina support. Ethyl acetate (EA) is used in large industrial quantities, so it was selected as the organic substance to be separated. Hydrophobic silica membranes were prepared by dip-coating porous alumina supports with a γ-alumina interlayer into a silica sol containing CTAB and vinyltrimethoxysilane (VTMS) as the silica precursor. The membrane thickness increases with the lifting speed of the sol in the dip-coating process. Different thicknesses of the γ-alumina layer were prepared by dip-coating the support into a boehmite sol at different lifting speeds (0.5, 1, 3, and 5 mm s-1). Silica layers were subsequently formed by dip-coating using an immersion time of 60 s and a lifting speed of 1 mm s-1. PV measurements of the EA (5 wt.%)/water system were carried out using VTMS hydrophobic silica membranes prepared on γ-alumina layers of different thicknesses. The water and EA fluxes showed substantially constant values despite the change in the lifting speed used to form the γ-alumina interlayer. All prepared hydrophobic silica membranes showed a higher PSI than the hydrophobic membranes prepared on the previously used hollow-fiber alumina support.
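
For reference, the separation factor and the pervaporation separation index (PSI) mentioned above are commonly defined as follows for the EA/water system; these are the standard definitions and are stated here as an assumption about the authors' usage.

```latex
\alpha_{\mathrm{EA/water}} = \frac{y_{\mathrm{EA}}/y_{\mathrm{water}}}{x_{\mathrm{EA}}/x_{\mathrm{water}}},
\qquad
\mathrm{PSI} = J\left(\alpha_{\mathrm{EA/water}} - 1\right)
```

where x and y are the mass fractions of each component in the feed and permeate, respectively, and J is the total permeate flux.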

Keywords: membrane separation, pervaporation, hydrophobic, silica

Procedia PDF Downloads 383
384 Reconstructability Analysis for Landslide Prediction

Authors: David Percy

Abstract:

Landslides are a geologic phenomenon that affects a large number of inhabited places and are constantly being monitored and studied for the prediction of future occurrences. Reconstructability analysis (RA) is a methodology for extracting informative models from large volumes of data; it works exclusively with discrete data. While RA has been used extensively in medical applications and social science, we are introducing it to the spatial sciences through applications like landslide prediction. Since RA works exclusively with discrete data, such as soil classification or bedrock type, working with continuous data, such as porosity, requires that these data be binned for inclusion in the model. RA constructs models of the data which pick out the most informative elements, the independent variables (IVs), from each layer that predict the dependent variable (DV), landslide occurrence. Each layer included in the model retains its classification data as the primary encoding of the data. Unlike other machine learning algorithms that force the data into one-hot-encoding-type schemes, RA works directly with the data as they are encoded, with the exception of continuous data, which must be binned. The usual physical and derived layers are included in the model, and testing our results against other published methodologies, such as neural networks, yields similar accuracy but with the advantage of a completely transparent model. The results of an RA session with a data set are a report on every combination of variables and the corresponding probability of landslide events occurring. In this way, every informative combination of states can be examined.
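
A minimal sketch of the binning and state-combination tabulation described above is shown below using pandas; the layer names, bin edges, and toy rows are assumptions, and the full RA methodology goes well beyond this simple frequency table.

```python
# Sketch: bin a continuous layer, then tabulate landslide probability for
# every combination of discrete predictor states, mirroring the RA-style
# report of state combinations.  Layer names and bin edges are assumptions.
import pandas as pd

df = pd.DataFrame({
    "soil_class": ["A", "A", "B", "B", "B", "C", "C", "A"],
    "porosity":   [0.12, 0.35, 0.28, 0.41, 0.22, 0.18, 0.39, 0.30],
    "landslide":  [0, 1, 0, 1, 0, 0, 1, 1],
})
# Continuous porosity must be binned before inclusion in the model.
df["porosity_bin"] = pd.cut(df["porosity"], bins=[0.0, 0.25, 0.5],
                            labels=["low", "high"])

report = (df.groupby(["soil_class", "porosity_bin"], observed=True)["landslide"]
            .agg(p_landslide="mean", n="size")
            .reset_index())
print(report)
```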

Keywords: reconstructability analysis, machine learning, landslides, raster analysis

Procedia PDF Downloads 40
383 The Impact of a Prior Haemophilus influenzae Infection on the Incidence of Prostate Cancer

Authors: Maximiliano Guerra, Lexi Frankel, Amalia D. Ardeljan, Sarah Ghali, Diya Kohli, Omar M. Rashid

Abstract:

Introduction/Background: Haemophilus influenzae is present as a commensal organism in the nasopharynx of most healthy adults, from where it can spread to cause both systemic and respiratory tract infection. Pathogenic properties of this bacterium as well as defects in host defense may result in the spread of these bacteria throughout the body. This can result in a proinflammatory state and colonization, particularly in the lungs. Recent studies have failed to determine a link between H. influenzae colonization and prostate cancer, despite previous research demonstrating the presence of proinflammatory states in preneoplastic and neoplastic prostate lesions. Given these contradictory findings, the primary goal of this study was to evaluate the correlation between H. influenzae infection and the incidence of prostate cancer. Methods: To evaluate the incidence of Haemophilus influenzae infection and the subsequent development of prostate cancer, we used data provided by a Health Insurance Portability and Accountability Act (HIPAA) compliant national database. We were afforded access to this database by Holy Cross Health, Fort Lauderdale, for the express purpose of academic research. Standard statistical methods were employed in this study, including Pearson’s chi-square tests. Results: Between January 2010 and December 2019, the query resulted in 13,691 patients in each of the control and H. influenzae infected groups, respectively. The two groups were matched by age range and CCI score. In the Haemophilus influenzae infected group, the incidence of prostate cancer was 1.46%, while the incidence in the control group was 4.56%. The observed difference in cancer incidence was statistically significant (p < 2.2x10^-16). This suggests that patients with a history of H. influenzae infection have a lower risk of developing prostate cancer (OR 0.425, 95% CI: 0.382-0.472). Treatment bias was considered; the data were analyzed and resulted in two matched groups of 3,208 patients each, one infected with H. influenzae and treated and a control group that used the same medications for a different cause. Patients infected with H. influenzae and treated had a prostate cancer incidence of 2.49%, whereas the control group incidence of prostate cancer was 4.92%, with a p-value (< 2.2x10^-16) and OR 0.455, 95% CI (0.526-0.754), indicating that the initial results were not due to the use of medications. Conclusion: The findings of our study reveal a statistically significant correlation between H. influenzae infection and a decreased incidence of prostate cancer. Our findings suggest that prior infection with H. influenzae may confer some degree of protection to patients and reduce their risk of developing prostate cancer. Future research is recommended to further characterize the potential role of Haemophilus influenzae in the pathogenesis of prostate cancer.

Keywords: Haemophilus influenzae, incidence, prostate cancer, risk

Procedia PDF Downloads 175
382 Prenatal Use of Serotonin Reuptake Inhibitors (SRIs) and Congenital Heart Anomalies (CHA): An Exploratory Pharmacogenetics Study

Authors: Aizati N. A. Daud, Jorieke E. H. Bergman, Wilhelmina S. Kerstjens-Frederikse, Pieter Van Der Vlies, Eelko Hak, Rolf M. F. Berger, Henk Groen, Bob Wilffert

Abstract:

Prenatal use of SRIs was previously associated with congenital heart anomalies (CHA). The aim of the study is to explore whether pharmacogenetics plays a role in this teratogenicity using a gene-environment interaction study. A total of 33 case-mother dyads and 2 mothers-only (children deceased) registered in EUROCAT Northern Netherlands were included in a case-only study. Five case-mother dyads and two mothers-only were exposed to SRIs (paroxetine = 3, fluoxetine = 2, venlafaxine = 1, paroxetine and venlafaxine = 1) in the first trimester of pregnancy. The remaining 28 case-mother dyads were not exposed to SRIs. Ten genes that encode the enzymes or proteins important in determining fetal exposure to SRIs or their mechanism of action were selected: CYPs (CYP1A2, CYP2C9, CYP2C19, CYP2D6), ABCB1 (placental P-glycoprotein), SLC6A4 (serotonin transporter) and serotonin receptor genes (HTR1A, HTR1B, HTR2A, and HTR3B). All included subjects were genotyped for 58 genetic variations in these ten genes. Logistic regression analyses were performed to determine the interaction odds ratio (OR) between genetic variations and SRI exposure on the risk of CHA. Due to the low phenotype frequencies of CYP450 poor metabolizers among exposed cases, the OR could not be calculated. For ABCB1, there was no indication of changes in the risk of CHA with any of the ABCB1 SNPs in the children or their mothers. Several genetic variations of the serotonin transporter and receptors (SLC6A4 5-HTTLPR and 5-HTTVNTR, HTR1A rs1364043, HTR1B rs6296 & rs6298, HTR3B rs1176744) were associated with an increased risk of CHA, but the sample size was too limited to reach statistical significance. For the SLC6A4 genetic variations, the mean genetic score of the exposed case-mothers tended to be higher than that of the unexposed mothers (2.5 ± 0.8 and 1.88 ± 0.7, respectively; p = 0.061). For the SNPs of the serotonin receptors, the mean genetic score of the exposed cases (children) tended to be higher than that of the unexposed cases (3.4 ± 2.2 and 1.9 ± 1.6, respectively; p = 0.065). This study might be among the first to explore the potential gene-environment interaction between pharmacogenetic determinants and SRI use on the risk of CHA. With small sample sizes, it was not possible to find a significant interaction. However, there were indications of a role for serotonin receptor polymorphisms in fetuses exposed to SRIs on the fetal risk of CHA, which warrants further investigation.

Keywords: gene-environment interaction, heart defects, pharmacogenetics, serotonin reuptake inhibitors, teratogenicity

Procedia PDF Downloads 197
381 Health Trajectory Clustering Using Deep Belief Networks

Authors: Farshid Hajati, Federico Girosi, Shima Ghassempour

Abstract:

We present a Deep Belief Network (DBN) method for clustering health trajectories. A Deep Belief Network (DBN) is a deep architecture that consists of a stack of Restricted Boltzmann Machines (RBMs). In a deep architecture, each layer learns more complex features than the previous layers. The proposed method relies on the DBN for clustering without using the back-propagation learning algorithm. The proposed DBN has a better performance compared to a deep neural network due to the initialization of the connecting weights. We use the Contrastive Divergence (CD) method for training the RBMs, which increases the performance of the network. The performance of the proposed method is evaluated extensively on the Health and Retirement Study (HRS) database. The University of Michigan Health and Retirement Study (HRS) is a nationally representative longitudinal study that has surveyed more than 27,000 elderly and near-elderly Americans since its inception in 1992. Participants are interviewed every two years, and data are collected on physical and mental health, insurance coverage, financial status, family support systems, labor market status, and retirement planning. The dataset is publicly available, and we use the RAND HRS version L, which is an easy-to-use and cleaned-up version of the data. The size of the sample data set is 268, and the length of the trajectories is 10. The trajectories do not stop when the patient dies and represent 10 different interviews of live patients. Compared to state-of-the-art benchmarks, the experimental results show the effectiveness and superiority of the proposed method in clustering health trajectories.
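
scikit-learn does not ship a full DBN, but a rough stand-in for the pipeline described above (an illustrative assumption, not the authors' implementation) stacks BernoulliRBM layers, which scikit-learn trains with persistent contrastive divergence, and clusters the top-layer representation:

```python
# Rough stand-in for DBN-based trajectory clustering: stack two
# BernoulliRBM layers and cluster the top-layer representation.
# Layer sizes, iteration counts, and the random data are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import BernoulliRBM
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.random((268, 10))              # 268 trajectories of length 10
X = MinMaxScaler().fit_transform(X)    # RBMs expect values in [0, 1]

rbm1 = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=50,
                    random_state=0)
rbm2 = BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=50,
                    random_state=0)
h1 = rbm1.fit_transform(X)             # greedy layer-wise training
h2 = rbm2.fit_transform(h1)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(h2)
print(np.bincount(labels))             # cluster sizes
```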

Keywords: health trajectory, clustering, deep learning, DBN

Procedia PDF Downloads 345
380 The Classification Accuracy of Finance Data through Holder Functions

Authors: Yeliz Karaca, Carlo Cattani

Abstract:

This study focuses on the local Holder exponent as a measure of function regularity for time series related to finance data. In this study, the attributes of a finance dataset belonging to 13 countries (India, China, Japan, Sweden, France, Germany, Italy, Australia, Mexico, United Kingdom, Argentina, Brazil, USA) located on 5 different continents (Asia, Europe, Australia, North America and South America) have been examined. These countries are the ones most affected by the attributes with regard to financial development, covering the period from 2012 to 2017. Our study is concerned with the most important attributes that have an impact on the financial development of the countries identified. Our method comprises the following stages: (a) among the multifractal methods and Brownian motion Holder regularity functions (polynomial, exponential), significant and self-similar attributes have been identified; (b) the significant and self-similar attributes have been applied to Artificial Neural Network (ANN) algorithms (Feed Forward Back Propagation (FFBP) and Cascade Forward Back Propagation (CFBP)); (c) the outcomes of classification accuracy have been compared with respect to the attributes that affect the countries’ financial development. This study has revealed, through the application of ANN algorithms, how the most significant attributes are identified within the relevant dataset via the Holder functions (polynomial and exponential).
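
For readers unfamiliar with the regularity measure used here, the pointwise Hölder condition and the local Hölder exponent at a point x₀ are usually defined as follows; this is the standard definition, not a formula taken from the paper.

```latex
f \in C^{\alpha}(x_{0}) \iff \exists\, C > 0 \ \text{and a polynomial } P \text{ of degree} < \alpha :\quad
|f(x) - P(x - x_{0})| \le C\, |x - x_{0}|^{\alpha},
\qquad
\alpha_{f}(x_{0}) = \sup \{ \alpha : f \in C^{\alpha}(x_{0}) \}.
```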

Keywords: artificial neural networks, finance data, Holder regularity, multifractals

Procedia PDF Downloads 224
379 Human Immunodeficiency Virus (HIV) Test Predictive Modeling and Identification of Determinants of HIV Testing for People Aged above Fourteen Years in Ethiopia Using Data Mining Techniques: EDHS 2011

Authors: S. Abera, T. Gidey, W. Terefe

Abstract:

Introduction: Testing for HIV is the key entry point to HIV prevention, treatment, and care and support services. Hence, predictive data mining techniques can greatly help to analyze and discover new patterns in huge datasets like the EDHS 2011 data. Objectives: The objective of this study is to build a predictive model for HIV testing and identify determinants of HIV testing for adults above fourteen years of age using data mining techniques. Methods: The Cross-Industry Standard Process for Data Mining (CRISP-DM) was used to build the predictive model for HIV testing and explore association rules between HIV testing and the selected attributes among adult Ethiopians. Decision tree, Naïve Bayes, logistic regression and artificial neural network data mining techniques were used to build the predictive models. Results: The target dataset contained 30,625 study participants, of whom 16,515 (53.9%) were women. Nearly three-fifths, 17,719 (58%), had never been tested for HIV, while the rest, 12,906 (42%), had been tested. Ethiopians with a higher wealth index, a higher educational level, belonging to the 20 to 29 years age group, having no stigmatizing attitude towards HIV-positive persons, urban residence, having HIV-related knowledge, exposure to family planning information on mass media and knowing a place to get tested for HIV showed increased patterns of HIV testing. Conclusion and Recommendation: Public health interventions should consider the identified determinants to encourage people to get tested for HIV.
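
As a hedged sketch of the predictive-modeling step, the snippet below fits a decision tree to a handful of the determinants listed above; the column names, encodings, and toy rows are assumptions, and the study itself used the EDHS 2011 data with several additional classifiers.

```python
# Sketch: decision-tree prediction of HIV testing from a few determinants.
# Column names, encodings, and the toy rows are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

df = pd.DataFrame({
    "wealth_index":    [1, 5, 3, 4, 2, 5, 1, 3],
    "education_level": [0, 3, 2, 2, 1, 3, 0, 1],
    "age_20_29":       [0, 1, 1, 0, 1, 1, 0, 0],
    "urban":           [0, 1, 1, 1, 0, 1, 0, 0],
    "knows_test_site": [0, 1, 1, 1, 0, 1, 0, 1],
    "tested_for_hiv":  [0, 1, 1, 1, 0, 1, 0, 0],
})
X, y = df.drop(columns="tested_for_hiv"), df["tested_for_hiv"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))   # accuracy on the held-out rows
```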

Keywords: data mining, HIV, testing, Ethiopia

Procedia PDF Downloads 468
378 Biosensor for Determination of Immunoglobulin A, E, G and M

Authors: Umut Kokbas, Mustafa Nisari

Abstract:

Immunoglobulins, also known as antibodies, are glycoprotein molecules produced by activated B cells that transform into plasma cells. Antibodies are critical molecules of the immune response, helping the immune system specifically recognize and destroy antigens such as bacteria, viruses, and toxins. Immunoglobulin classes differ in their biological properties, structures, targets, functions, and distributions. Five major classes of antibodies have been identified in mammals: IgA, IgD, IgE, IgG, and IgM. Evaluation of the immunoglobulin isotype can provide useful insight into the complex humoral immune response. Evaluation and knowledge of immunoglobulin structure and classes are also important for the selection and preparation of antibodies for immunoassays and other detection applications. The immunoglobulin test measures the level of certain immunoglobulins in the blood. IgA, IgG, and IgM are usually measured together. In this way, they can provide doctors with important information, especially regarding immune deficiency diseases. Hypogammaglobulinemia (HGG) is one of the main groups of primary immunodeficiency disorders. HGG is caused by various defects in B cell lineage or function that result in low levels of immunoglobulins in the bloodstream. This affects the body's immune response, causing a wide range of clinical features, from asymptomatic disease to severe and recurrent infections, chronic inflammation and autoimmunity. Transient infant hypogammaglobulinemia (THGI), IgM deficiency (IgMD), Bruton agammaglobulinemia and IgA deficiency (SIgAD) are a few examples of HGG. Most patients can continue their normal lives by taking prophylactic antibiotics. However, patients with severe infections require intravenous immune serum globulin (IVIG) therapy. The IgE level may rise to fight off parasitic infections, and it can also be a sign that the body is overreacting to allergens. Also, since the immune response can vary with different antigens, measuring specific antibody levels also aids in the interpretation of the immune response after immunization or vaccination. Immune deficiencies usually occur in childhood. In immunology and allergy clinics, apart from the classical methods, a method that is fast and reliable and that allows more convenient and uncomplicated sampling from children will be more useful for the diagnosis and follow-up of diseases, especially childhood hypogammaglobulinemia. The antibodies were attached to the electrode surface via the poly hydroxyethyl methacrylamide cysteine nanopolymer. The anodic peak results obtained in the electrochemical study were used for the evaluation. According to the data obtained, immunoglobulin determination can be made with the biosensor. However, in further studies, it will be useful to develop a medical diagnostic kit through biomedical engineering and to increase its sensitivity.

Keywords: biosensor, immunosensor, immunoglobulin, infection

Procedia PDF Downloads 70
377 Wind Speed Forecasting Based on Historical Data Using Modern Prediction Methods in Selected Sites of Geba Catchment, Ethiopia

Authors: Halefom Kidane

Abstract:

This study aims to assess the wind resource potential and characterize the urban wind patterns in Hawassa City, Ethiopia. The estimation and characterization of wind resources are crucial for sustainable urban planning, renewable energy development, and climate change mitigation strategies. A secondary data collection method was used to carry out the study. The data collected at 2 meters were analyzed statistically and extrapolated to the standard heights of 10 meters and 30 meters using the power law equation. The standard deviation method was used to calculate the values of the scale and shape factors. From the analysis presented, the maximum and minimum mean daily wind speeds at 2 meters were 1.33 m/s and 0.05 m/s in 2016, 1.67 m/s and 0.14 m/s in 2017, and 1.61 m/s and 0.07 m/s in 2018, respectively. The maximum monthly average wind speed of Hawassa City at 2 meters in 2016 was noticed in December, at around 0.78 m/s; in 2017 the maximum wind speed was recorded in January, with a magnitude of 0.80 m/s; and in 2018 June had the maximum speed, at 0.76 m/s. On the other hand, October was the month with the minimum mean wind speed in all years, with values of 0.47 m/s in 2016, 0.47 m/s in 2017 and 0.34 m/s in 2018. The annual mean wind speed at a height of 2 meters was 0.61 m/s in 2016, 0.64 m/s in 2017 and 0.57 m/s in 2018. From the extrapolation, the annual mean wind speeds for 2016, 2017 and 2018 were 1.17 m/s, 1.22 m/s and 1.11 m/s at a height of 10 meters, and 3.34 m/s, 3.78 m/s and 3.01 m/s at a height of 30 meters, respectively. Thus, the site consists mainly of Class I wind speeds, even at the extrapolated heights.
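
The power-law extrapolation and the standard-deviation (empirical) Weibull method referred to above follow standard formulas; the sketch below illustrates them with an assumed shear exponent and placeholder wind speeds rather than the station data.

```python
# Sketch: power-law extrapolation of wind speed to other heights and the
# standard-deviation (empirical) method for the Weibull shape k and scale c.
# The shear exponent alpha and the sample speeds are assumptions.
import math

def extrapolate(v_ref, h_ref, h, alpha=0.30):
    """Power law: v(h) = v_ref * (h / h_ref) ** alpha."""
    return v_ref * (h / h_ref) ** alpha

def weibull_params(speeds):
    """Empirical method: k from sigma/mean, c from mean and gamma function."""
    mean = sum(speeds) / len(speeds)
    sigma = (sum((v - mean) ** 2 for v in speeds) / (len(speeds) - 1)) ** 0.5
    k = (sigma / mean) ** -1.086
    c = mean / math.gamma(1.0 + 1.0 / k)
    return k, c

v2m = [0.61, 0.64, 0.57]                      # annual means at 2 m (m/s)
print([round(extrapolate(v, 2.0, 10.0), 2) for v in v2m])
print(weibull_params([0.5, 0.7, 1.2, 0.9, 0.6, 1.0]))
```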

Keywords: artificial neural networks, forecasting, min-max normalization, wind speed

Procedia PDF Downloads 45
376 Effects of Implementing the Three-Level Quality Management Institution on the Construction Quality of Public Construction in Taiwan

Authors: Hsin-Hung Lai, Wei Lo

Abstract:

Whether the construction quality of a public construction project is good or poor is one of the important indicators of national economic development and overall construction, and its impact on the quality of national life is very deep. In recent years, a number of scandals involving public construction projects have occurred, so government agencies and the public require the construction quality of public construction projects to be stricter than ever, and the three-level construction quality control system for public construction projects implemented by the government has a profound impact. This study mainly aggregated the evolution of the ISO 9000 quality control system and the differences between the construction quality management practices of many countries and the three-level quality control of our country. We explored and found that almost all programs for enhancing construction quality are dominated by civil organizations in foreign countries, whereas in our country they are driven by national power; our country developed its three-level quality control system and audit mechanism based on the ISO system and implements the works by legislation. We also explored their enhancement of, and relevance to, the construction quality of public construction projects governed by such a system and national power, and the audited results really present the effectiveness of the enhanced construction quality. The three-level quality control system that our country promotes for public construction projects is almost the same as the quality control systems of many developed countries; however, our country mainly implements such a system on public construction projects only. We promote the three-level quality control system to enhance the quality of public construction projects and to establish an effective quality management system, so as to urge, correct and prevent defects in quality management by manufacturers, whereas those developed countries promote such systems comprehensively (for both public construction projects and civil construction). Therefore, this study explores the scope of public construction projects only. The most important factor is the quality recognition of the executor; either good quality or deterioration is not a single event. There is a certain procedure that extends from demand and feasibility analysis, design, tendering, contracting, construction performance, inspection, continuous improvement, completion and acceptance, and transfer to meeting the needs of the users; all of the above have a causal relationship, and it is a systemic problem. So the best construction quality can be manufactured and managed at a reasonable cost if the thinking is extensive and preventive. We aggregated the implemented results of the past 10 years (2005 to 2015): the audited results of both central units and local ones slightly increased in A-grade, while those listed in B-grade decreased. Although the levels were not evidently upgraded, this result shows that manufacturers' concept of construction quality is improving and that construction quality has been established in the design stage; thus, it is relatively beneficial to the enhancement of the construction quality of overall public construction projects.

Keywords: ISO 9000, three-level quality control system, audit and review mechanism for construction implementation, quality of construction implementation

Procedia PDF Downloads 315
375 Theoretical Study of Gas Adsorption in Zirconium Clusters

Authors: Rasha Al-Saedi, Anthony Meijer

Abstract:

The progress of new porous materials has increased rapidly over the past decade for use in applications such as catalysis, gas storage and removal of environmentally unfriendly species due to their high surface area and high thermal stability. In this work, a theoretical study of zirconium-based metal organic framework (MOF) clusters was carried out in order to determine their potential for gas adsorption of various guest molecules: CO2, N2, CH4 and H2. The zirconium cluster consists of an inner Zr6O4(OH)4 core in which the triangular faces of the Zr6 octahedron are alternately capped by O and OH groups, bound to nine formate groups and three benzoate group linkers. The general formula is [Zr(μ-O)4(μ-OH)4(HCOO)9((phyO2C)3X))], where X = CH2OH, CH2NH2, CH2CONH2, n(NH2); (n = 1-3). Three types of adsorption sites on the Zr metal center have been studied, named according to the capped chemical groups: the '−O site', where the H of a (μ-OH) site is removed and added to a (μ-O) site; the '−OH site', where a (μ-OH) site is removed; and the 'void site', where an H2O molecule is removed ((μ-OH) from one site and H from another (μ-OH) site), in addition to the no-defect versions. A series of investigations have been performed aiming to address this important issue. First, the density functional theory DFT-B3LYP method with the 6-311G(d,p) basis set was employed using the Gaussian 09 package in order to evaluate the gas adsorption performance of missing-linker defects in the zirconium cluster. Next, the gas adsorption behaviour on different functionalised zirconium clusters was studied. The functional groups mentioned above include amines, alcohol and amide, in comparison with non-substituted clusters. Then, dispersion-corrected density functional theory (DFT-D) calculations were performed to further understand the enhanced gas binding on zirconium clusters. Finally, the effect of water on CO2 and N2 adsorption was studied. The small functionalized Zr clusters were found to result in good CO2 adsorption over N2, CH4, and H2 due to the quadrupole moment of CO2, while N2, CH4 and H2 are weakly polar or non-polar. The adsorption efficiency was determined using the dispersion-corrected method, where the adsorption binding improved because most of the relevant interactions, for example van der Waals interactions, are missing with the conventional DFT method. The calculated gas binding strengths on the no-defect site are higher than those on the −O site, the −OH site and the void site; this difference is especially notable for CO2. It has been stated that the enhanced affinity for CO2 of the no-defect versions is most likely due to the electrostatic interactions between the negatively charged O of CO2 and the positively charged H of the (μ-OH) metal site. The uptake of the gas molecules is not enhanced in the presence of water, as the latter binds to the Zr clusters more strongly than the gas species, which is attributed to competition for the adsorption sites.
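
The gas binding strengths discussed above are conventionally quantified as an adsorption (binding) energy; the usual expression, stated here as an assumption about the authors' convention, is

```latex
E_{\mathrm{ads}} = E_{\mathrm{cluster+gas}} - \left( E_{\mathrm{cluster}} + E_{\mathrm{gas}} \right),
```

where a more negative E_ads indicates stronger binding; in the dispersion-corrected (DFT-D) calculations, each total energy additionally includes a pairwise dispersion term.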

Keywords: density functional theory, gas adsorption, metal-organic frameworks, molecular simulation, porous materials, theoretical chemistry

Procedia PDF Downloads 161
374 Alpha: A Groundbreaking Avatar Merging User Dialogue with OpenAI's GPT-3.5 for Enhanced Reflective Thinking

Authors: Jonas Colin

Abstract:

Standing at the vanguard of AI development, Alpha represents an unprecedented synthesis of logical rigor and human abstraction, meticulously crafted to mirror the user's unique persona and personality, a feat previously unattainable in AI development. Alpha, an avant-garde artefact in the realm of artificial intelligence, epitomizes a paradigmatic shift in personalized digital interaction, amalgamating user-specific dialogic patterns with the sophisticated algorithmic prowess of OpenAI's GPT-3.5 to engender a platform for enhanced metacognitive engagement and individualized user experience. Underpinned by a sophisticated algorithmic framework, Alpha integrates vast datasets through a complex interplay of neural network models and symbolic AI, facilitating a dynamic, adaptive learning process. This integration enables the system to construct a detailed user profile, encompassing linguistic preferences, emotional tendencies, and cognitive styles, tailoring interactions to align with individual characteristics and conversational contexts. Furthermore, Alpha incorporates advanced metacognitive elements, enabling real-time reflection and adaptation in communication strategies. This self-reflective capability ensures continuous refinement of its interaction model, positioning Alpha not just as a technological marvel but as a harbinger of a new era in human-computer interaction, where machines engage with us on a deeply personal and cognitive level, transforming our interaction with the digital world.
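At the implementation level, the tailoring described above amounts to conditioning GPT-3.5 on a continuously updated user profile. The sketch below is a minimal illustration, not Alpha's actual architecture: an assumed profile dictionary is injected into the system prompt of an OpenAI chat-completion call, and the prompt wording, profile fields and helper function are illustrative assumptions.

```python
# A minimal sketch, not Alpha's implementation: a stored user profile
# (linguistic preferences, emotional tendencies, cognitive style) is folded
# into the system prompt so that GPT-3.5 replies are tailored to the user and
# end with a reflective prompt. Profile fields and wording are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

user_profile = {
    "linguistic_preferences": "short sentences, informal tone",
    "emotional_tendencies": "responds well to encouragement",
    "cognitive_style": "prefers concrete examples before abstractions",
}

def reply(history: list[dict], user_message: str) -> str:
    system_prompt = (
        "You are Alpha, a reflective conversational partner. "
        f"Adapt your style to this user profile: {user_profile}. "
        "After answering, ask one short question that prompts the user to reflect."
    )
    messages = [{"role": "system", "content": system_prompt}] + history + [
        {"role": "user", "content": user_message}
    ]
    response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    return response.choices[0].message.content

# print(reply([], "I keep putting off my thesis writing."))
```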

Keywords: chatbot, GPT-3.5, metacognition, symbiosis

Procedia PDF Downloads 35
373 Development of Fault Diagnosis Technology for Power System Based on Smart Meter

Authors: Chih-Chieh Yang, Chung-Neng Huang

Abstract:

In power systems, improving the fault diagnosis technology for transmission lines has always been a primary goal of grid operators. In recent years, with the rise of green energy, the addition of many kinds of distributed generation has also affected the stability of the power system. Smart meters provide data recording and bidirectional transmission, while the adaptive neuro-fuzzy inference system (ANFIS) is an artificial intelligence technique with learning and estimation capabilities. To avoid misjudging the type and location of transmission network faults caused by the input of these unstable power sources, this study combines the advantages of smart meters and ANFIS and proposes a method for identifying fault types and fault locations. In ANFIS training, the bus voltage and current information collected by the smart meters is trained through the ANFIS tool in MATLAB to generate fault codes that identify different types of faults and their locations. In addition, because of the uncertainty of distributed generation, a wind power system is added to the transmission network to verify the correctness of the proposed diagnosis. Simulation results show that the proposed method can correctly and efficiently identify the fault type and fault location, and can cope with the interference caused by the addition of unstable power sources.
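The core of the method is a supervised mapping from smart-meter measurements (bus voltages and currents) to discrete fault codes. The sketch below is a minimal stand-in, not the authors' MATLAB/ANFIS implementation: scikit-learn's MLPClassifier replaces the ANFIS tool, and the feature layout, placeholder data and fault-code labels are assumptions.

```python
# A minimal sketch: classifying fault codes from smart-meter features.
# A neural-network classifier stands in for the ANFIS tool named in the
# abstract; the synthetic data below only fixes the shapes of the problem.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: one row per recorded event, e.g. [V_bus1, ..., V_bus4, I_bus1, ..., I_bus4]
# y: integer fault codes, e.g. 0 = no fault, 1 = single line-to-ground, ...
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))       # placeholder measurements
y = rng.integers(0, 4, size=500)    # placeholder fault codes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```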

Keywords: ANFIS, fault diagnosis, power system, smart meter

Procedia PDF Downloads 120
372 Generalized Synchronization in Systems with a Complex Topology of Attractor

Authors: Olga I. Moskalenko, Vladislav A. Khanadeev, Anastasya D. Koloskova, Alexey A. Koronovskii, Anatoly A. Pivovarov

Abstract:

Generalized synchronization is one of the most intricate phenomena in nonlinear science. It can be observed in systems with both unidirectional and mutual coupling, including complex networks. The phenomenon has a number of practical applications, for example the secure transmission of information through a communication channel with a high level of noise. Known methods for secure information transmission require greater privacy of the transmitted data, which raises the question of observing this phenomenon in systems whose chaotic attractor has a complex topology and possesses two or more positive Lyapunov exponents. The present report is devoted to the study of this phenomenon in pairs of unidirectionally and mutually coupled dynamical systems operating in chaotic (one positive Lyapunov exponent) and hyperchaotic (two or more positive Lyapunov exponents) regimes, respectively. As the systems under study, we have used two mutually coupled modified Lorenz oscillators and two unidirectionally coupled time-delayed generators. We have shown that in both cases the generalized synchronization regime can be detected by calculating the Lyapunov exponents and by the phase tube approach, whereas the nearest-neighbour method is misleading because of the complex topology of the attractor. Moreover, the auxiliary system approach, the standard method for detecting the synchronous regime, gives incorrect results for the mutual type of coupling. To calculate the Lyapunov exponents in time-delayed systems, we have proposed an approach based on a modification of the Gram-Schmidt orthogonalization procedure for the time-delayed case. We have studied in detail the mechanisms leading to the onset of the generalized synchronization regime, paying particular attention to the region where one positive Lyapunov exponent has already become negative while the second one is still positive. We have found intermittency in this region and studied its characteristics. To detect the laminar phase lengths, a method based on the calculation of local Lyapunov exponents has been proposed; its efficiency has been verified using the example of two unidirectionally coupled Rössler systems in the band-chaos regime. We have determined the main characteristics of the intermittency, i.e. the distribution of the laminar phase lengths and the dependence of the mean laminar phase length on the criticality parameter, for all systems studied in the report. This work has been supported by the Russian President's Council grant for the state support of young Russian scientists (project MK-531.2018.2).
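The Lyapunov spectrum is the workhorse of this analysis: exponents are accumulated from the growth rates of tangent vectors that are periodically re-orthonormalised by a Gram-Schmidt (QR) step. The sketch below is a minimal illustration of this standard Benettin-type procedure applied to the classic Lorenz system, not the authors' modification for time-delayed systems; the integrator, step sizes and parameters are our assumptions.

```python
# A minimal sketch: Lyapunov spectrum of the Lorenz flow via integration of
# the variational equations with periodic Gram-Schmidt (QR) re-orthonormalisation.
import numpy as np

def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def jacobian(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return np.array([[-sigma, sigma, 0.0],
                     [rho - x[2], -1.0, -x[0]],
                     [x[1], x[0], -beta]])

def lyapunov_spectrum(x0, dt=0.005, n_steps=200_000, renorm_every=10):
    x = np.asarray(x0, dtype=float)
    Q = np.eye(3)              # orthonormal tangent-space basis
    sums = np.zeros(3)
    t_accum = 0.0
    for step in range(1, n_steps + 1):
        # simple Euler step for the trajectory and the tangent vectors
        # (a higher-order integrator would be used in practice)
        Q = Q + dt * jacobian(x) @ Q
        x = x + dt * lorenz(x)
        if step % renorm_every == 0:
            Q, R = np.linalg.qr(Q)                 # Gram-Schmidt step
            sums += np.log(np.abs(np.diag(R)))     # accumulate stretching rates
            t_accum += renorm_every * dt
    return sums / t_accum

# Literature values for these parameters are roughly (0.9, 0.0, -14.6);
# the crude Euler stepping here gives only an approximate estimate.
print(lyapunov_spectrum([1.0, 1.0, 1.0]))
```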

Keywords: complex topology of attractor, generalized synchronization, hyperchaos, Lyapunov exponents

Procedia PDF Downloads 250
371 MIMIC: A Multi Input Micro-Influencers Classifier

Authors: Simone Leonardi, Luca Ardito

Abstract:

Micro-influencers are effective elements in the marketing strategies of companies and institutions because of their ability to create a hyper-engaged audience around a specific topic of interest. In recent years, many scientific approaches and commercial tools have addressed the task of detecting this type of social media user. These strategies adopt solutions ranging from rule-based machine learning models to deep neural networks and graph analysis applied to text, images, and account information. This work compares the existing solutions and proposes an ensemble method to generalize them across different input data and social media platforms. The deployed solution combines deep learning models on unstructured data with statistical machine learning models on structured data. We retrieve both social media account information and multimedia posts from Twitter and Instagram. These data are mapped into feature vectors for an eXtreme Gradient Boosting (XGBoost) classifier. Sixty different topics have been analyzed to build a rule-based gold-standard dataset and to compare the performance of our approach against baseline classifiers. We demonstrate the effectiveness of our work by comparing the accuracy, precision, recall, and F1 score of our model across different configurations and architectures. We obtained an accuracy of 0.91 with our best performing model.
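Concretely, the fusion step amounts to concatenating embeddings derived from the unstructured posts with structured account features before training the boosted-tree classifier. The sketch below illustrates this with synthetic placeholder data, not the authors' Twitter/Instagram dataset; the embedding dimension, feature names and hyperparameters are assumptions.

```python
# A minimal sketch: feature vectors from deep models on unstructured data are
# concatenated with structured account features, then fed to XGBoost to flag
# micro-influencer accounts. All data below are synthetic placeholders.
import numpy as np
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
n_accounts = 1000
post_embeddings = rng.normal(size=(n_accounts, 128))   # from text/image deep models
account_features = rng.normal(size=(n_accounts, 6))    # e.g. followers, posts, engagement
y = rng.integers(0, 2, size=n_accounts)                # 1 = micro-influencer (gold standard)

X = np.hstack([post_embeddings, account_features])     # fusion of the two inputs
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1, eval_metric="logloss")
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```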

Keywords: deep learning, gradient boosting, image processing, micro-influencers, NLP, social media

Procedia PDF Downloads 151
370 An Investigation into Computer Vision Methods to Identify Material Other Than Grapes in Harvested Wine Grape Loads

Authors: Riaan Kleyn

Abstract:

Mass wine production companies across the globe are supplied with grapes by winegrowers who predominantly use mechanical harvesting machines to harvest wine grapes. Mechanical harvesting accelerates the rate at which grapes are harvested, allowing grapes to be delivered faster to meet the demands of wine cellars. The disadvantage of the mechanical harvesting method is the inclusion of material-other-than-grapes (MOG) in the harvested wine grape loads arriving at the cellar, which degrades the quality of wine that can be produced. Currently, wine cellars do not have a method to determine the amount of MOG present within wine grape loads. This paper seeks to find an optimal computer vision method capable of detecting the amount of MOG within a wine grape load. A MOG detection method will encourage winegrowers to deliver MOG-free wine grape loads to avoid penalties, which will indirectly enhance the quality of the wine to be produced. Traditional image segmentation methods were compared to deep learning segmentation methods based on images of wine grape loads that were captured at a wine cellar. The Mask R-CNN model with a ResNet-50 convolutional neural network backbone emerged as the optimal method for this study to determine the amount of MOG in an image of a wine grape load. Furthermore, a statistical analysis was conducted to determine how the MOG on the surface of a grape load relates to the mass of MOG within the corresponding grape load.
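A natural way to reproduce the segmentation step described above is torchvision's off-the-shelf Mask R-CNN with a ResNet-50 FPN backbone. The sketch below is an inference-only illustration, not the authors' trained model: it assumes weights fine-tuned on annotated grape-load images, and the class id, score threshold and file names are placeholders.

```python
# A minimal inference sketch: segment a grape-load image with Mask R-CNN
# (ResNet-50 FPN backbone) and estimate the fraction of surface covered by MOG.
# MOG_CLASS_ID, the threshold and the file names are illustrative assumptions.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

MOG_CLASS_ID = 2          # assumed label id for material-other-than-grapes
SCORE_THRESHOLD = 0.5

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
# model.load_state_dict(torch.load("mog_finetuned.pth"))  # assumed fine-tuned weights
model.eval()

image = convert_image_dtype(read_image("grape_load.jpg"), torch.float)
with torch.no_grad():
    output = model([image])[0]

keep = (output["scores"] > SCORE_THRESHOLD) & (output["labels"] == MOG_CLASS_ID)
if keep.any():
    mog_mask = (output["masks"][keep, 0] > 0.5).any(dim=0)   # union of instance masks
    mog_fraction = mog_mask.float().mean().item()
else:
    mog_fraction = 0.0
print(f"Estimated MOG surface fraction: {mog_fraction:.1%}")
```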

Keywords: computer vision, wine grapes, machine learning, machine harvested grapes

Procedia PDF Downloads 64
369 Preprocessing and Fusion of Multiple Representations of Finger Vein Patterns Using Conventional and Machine Learning Techniques

Authors: Tomas Trainys, Algimantas Venckauskas

Abstract:

The application of biometric features to cryptography for human identification and authentication is a widely studied and promising area in the development of high-reliability cryptosystems. Biometric cryptosystems are typically designed for pattern recognition: they acquire biometric data from an individual, extract feature sets, compare the feature set against the set stored in the vault, and return the result of the comparison. Preprocessing and fusion of biometric data are the most important phases in generating a feature vector for key generation or authentication. Fusion of biometric features is critical for achieving a higher level of security and helps prevent possible spoofing attacks. The paper focuses on the tasks of initial processing and fusion of multiple representations of finger vein modality patterns. These tasks are solved by applying conventional image preprocessing methods and machine learning techniques, including a convolutional neural network (CNN) for image segmentation and feature extraction. The article presents a method for generating sets of biometric features from a finger vein network using several instances of the same modality. The extracted feature sets were fused at the feature level. The proposed method was tested and compared with the performance and accuracy results of other authors.
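Feature-level fusion here simply means concatenating the per-instance feature vectors into a single vector before comparison or key generation. The sketch below is a minimal illustration, not the authors' pipeline: a simple block-intensity extractor stands in for the CNN-based segmentation and extraction, and the image sizes, grid size and cosine-similarity matcher are assumptions.

```python
# A minimal sketch: several captures of the same finger vein pattern are each
# reduced to a feature vector, the vectors are fused by concatenation
# (feature-level fusion), and a probe is matched against an enrolled template.
import numpy as np

def extract_features(vein_image: np.ndarray, grid: int = 8) -> np.ndarray:
    """Placeholder extractor: grid-wise mean intensities, L2-normalised."""
    h, w = vein_image.shape
    img = vein_image[: h - h % grid, : w - w % grid].astype(float)
    feats = img.reshape(grid, img.shape[0] // grid,
                        grid, img.shape[1] // grid).mean(axis=(1, 3)).ravel()
    return feats / (np.linalg.norm(feats) + 1e-12)

def fuse_at_feature_level(instances: list[np.ndarray]) -> np.ndarray:
    """Concatenate per-instance feature vectors into one fused vector."""
    return np.concatenate([extract_features(img) for img in instances])

def match_score(probe: np.ndarray, template: np.ndarray) -> float:
    """Cosine similarity between a fused probe and an enrolled template."""
    return float(probe @ template /
                 (np.linalg.norm(probe) * np.linalg.norm(template) + 1e-12))

# Usage with placeholder images standing in for preprocessed vein captures:
rng = np.random.default_rng(1)
enrolled = fuse_at_feature_level([rng.random((96, 128)) for _ in range(3)])
probe = fuse_at_feature_level([rng.random((96, 128)) for _ in range(3)])
print("match score:", match_score(probe, enrolled))
```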

Keywords: bio-cryptography, biometrics, cryptographic key generation, data fusion, information security, SVM, pattern recognition, finger vein method

Procedia PDF Downloads 123