Search results for: fused deep representations
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2545

1945 Current Methods for Drug Property Prediction in the Real World

Authors: Jacob Green, Cecilia Cabrera, Maximilian Jakobs, Andrea Dimitracopoulos, Mark van der Wilk, Ryan Greenhalgh

Abstract:

Predicting drug properties is key in drug discovery to enable de-risking of assets before expensive clinical trials and to find highly active compounds faster. Interest from the machine learning community has led to the release of a variety of benchmark datasets and proposed methods. However, it remains unclear for practitioners which method or approach is most suitable, as different papers benchmark on different datasets and methods, leading to varying conclusions that are not easily compared. Our large-scale empirical study links together numerous earlier works on different datasets and methods, thus offering a comprehensive overview of the existing property classes, datasets, and their interactions with different methods. We emphasise the importance of uncertainty quantification and the time and, therefore, cost of applying these methods in the drug development decision-making cycle. We observe that the optimal approach varies depending on the dataset and that engineered features combined with classical machine learning methods often outperform deep learning. Specifically, QSAR datasets are typically best analysed with classical methods such as Gaussian Processes, while ADMET datasets are sometimes better described by trees or deep learning methods such as graph neural networks or language models. Our work highlights that practitioners do not yet have a straightforward, black-box procedure to rely on and sets a precedent for creating practitioner-relevant benchmarks. Deep learning approaches must be proven on these benchmarks to become the practical method of choice in drug property prediction.
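The engineered-feature approach the abstract favours can be illustrated with the Tanimoto coefficient, the standard similarity measure for binary molecular fingerprints that classical methods such as Gaussian Processes and trees typically consume. The sketch below is illustrative only (the bit indices are hypothetical, not from the study's data):

```python
# Minimal sketch: Tanimoto (Jaccard) similarity between two binary
# molecular fingerprints, each represented as a set of "on" bit indices.
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto coefficient: |A & B| / |A | B|."""
    if not fp_a and not fp_b:
        return 1.0  # convention: two empty fingerprints are identical
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)

# Hypothetical on-bit indices for two compounds.
mol_x = {1, 5, 9, 42, 77}
mol_y = {1, 5, 10, 42, 80}
print(round(tanimoto(mol_x, mol_y), 3))  # 3 shared bits of 7 total -> 0.429
```

A Gaussian Process with a Tanimoto kernel built from exactly this quantity is one of the classical baselines such studies compare against deep models.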

Keywords: activity (QSAR), ADMET, classical methods, drug property prediction, empirical study, machine learning

Procedia PDF Downloads 53
1944 Innocent Victims and Immoral Women: Sex Workers in the Philippines through the Lens of Mainstream Media

Authors: Sharmila Parmanand

Abstract:

This paper examines dominant media representations of prostitution in the Philippines and interrogates sex workers’ interactions with the media establishment. This analysis of how sex workers are constituted in media, often as both innocent victims and immoral actors, contributes to an understanding of public discourse on sex work in the Philippines, where decriminalisation has recently been proposed and sex workers are currently classified as potential victims under anti-trafficking laws but also as criminals under the penal code. The first part is an analysis of media coverage of two prominent themes on prostitution: first, raid and rescue operations conducted by law enforcement; and second, prostitution on military bases and tourism hotspots. As a result of pressure from activists and international donors, these two themes often define the policy conversations on sex work in the Philippines. The discourses in written and televised news reports and documentaries from established local and international media sources that address these themes are explored through content analysis. Conclusions are drawn based on specific terms commonly used to refer to sex workers, how sex workers are seen as performing their cultural roles as mothers and wives, how sex work is depicted, associations made between sex work and public health, representations of clients and managers and ‘rescuers’ such as the police, anti-trafficking organisations, and faith-based groups, and which actors are presumed to be issue experts. Images of how prostitution is used as a metaphor for relations between the Philippines and foreign nations are also deconstructed, along with common tropes about developing world female subjects. In general, sex workers are simultaneously portrayed as bad mothers who endanger their family’s morality but also as long-suffering victims who endure exploitation for the sake of their children. They are also depicted as unclean, drug-addicted threats to public health. 
Their managers and clients are portrayed as cold, abusive, and sometimes violent, and their rescuers as moral and altruistic agents who are essential for sex workers’ rehabilitation and restoration as virtuous citizens. The second part explores sex workers’ own perceptions of their interactions with media, through interviews with members of the Philippine Sex Workers Collective, a loose organisation of sex workers around the Philippines. They reveal that they are often excluded by media practitioners and that they do not feel that they have space for meaningful self-revelation about their work when they do engage with journalists, who seem to have an overt agenda of depicting them as either victims or women of loose morals. In their assessment, media narratives do not necessarily reflect their lived experiences, and in some cases, coverage of rescues and raid operations endangers their privacy and instrumentalises their suffering. Media representations of sex workers may produce subject positions such as ‘victims’ or ‘criminals’ and legitimize specific interventions while foreclosing other ways of thinking. Further, in light of media’s power to reflect and shape public consciousness, it is a valuable academic and political project to examine whether sex workers are able to assert agency in determining how they are represented.

Keywords: discourse analysis, news media, sex work, trafficking

Procedia PDF Downloads 364
1943 Parkinson’s Disease Hand-Eye Coordination and Dexterity Evaluation System

Authors: Wann-Yun Shieh, Chin-Man Wang, Ya-Cheng Shieh

Abstract:

This study aims to develop an objective scoring system to evaluate hand-eye coordination and hand dexterity in Parkinson’s disease. The system contains three boards, each implemented with sensors that sense a user’s finger operations. The operations include the peg test, the block test, and the blind block test. A user has to use vision, hearing, and tactile abilities to finish these operations, and the board records the results automatically. These results can help physicians evaluate a user’s reaction, coordination, and dexterity functions. The results are collected in a cloud database for further analysis and statistics. A researcher can use this system to obtain systematic, graphic reports for an individual or a group of users. In particular, a deep learning model is developed to learn the features of the data from different users. This model will help physicians assess Parkinson’s disease symptoms with a more intelligent algorithm.

Keywords: deep learning, hand-eye coordination, reaction, hand dexterity

Procedia PDF Downloads 45
1942 An Adaptive Conversational AI Approach for Self-Learning

Authors: Airy Huang, Fuji Foo, Aries Prasetya Wibowo

Abstract:

In recent years, the focus of Natural Language Processing (NLP) development has been gradually shifting from the semantics-based approach to the deep learning one, which performs faster with fewer resources. Although it performs well in many applications, the deep learning approach, due to its lack of semantic understanding, has difficulty noticing and expressing a novel business case outside a pre-defined scope. Meeting the requirements of specific robotic services with a deep learning approach is therefore very labor-intensive and time-consuming. It is very difficult to improve the capabilities of conversational AI in a short time, and it is even more difficult to self-learn from experience to deliver the same service in a better way. In this paper, we present an adaptive conversational AI algorithm that combines both semantic knowledge and deep learning to address this issue by learning new business cases through conversations. After self-learning from experience, the robot adapts to business cases originally out of scope. The idea is to build new or extended robotic services in a systematic and fast-training manner with self-configured programs and constructed dialog flows. For every cycle in which a chatbot (conversational AI) delivers a given set of business cases, it is prompted to self-measure its performance and rethink every unknown dialog flow to improve the service by retraining with those new business cases. If the training process reaches a bottleneck and incurs difficulties, human personnel are informed and can give further instructions. They may retrain the chatbot with newly configured programs or new dialog flows for new services. One approach employs semantic analysis to learn the dialogues for new business cases and then establish the necessary ontology for the new service. With the newly learned programs, it completes the understanding of the reaction behavior and finally uses dialog flows to connect all the understanding results and programs, achieving the goal of the self-learning process. We have developed a chatbot service mounted on a kiosk, with a camera for facial recognition and a directional microphone array for voice capture. The chatbot serves as a concierge with polite conversation for visitors. As a proof of concept, we demonstrated completion of 90% of reception services with limited self-learning capability.

Keywords: conversational AI, chatbot, dialog management, semantic analysis

Procedia PDF Downloads 117
1941 SNR Classification Using Multiple CNNs

Authors: Thinh Ngo, Paul Rad, Brian Kelley

Abstract:

Noise estimation is essential in today’s wireless systems for power control, adaptive modulation, interference suppression, and quality of service. Deep learning (DL) has already been applied in the physical layer for modulation and signal classification. However, unacceptably low accuracy of less than 50% undermines the traditional application of DL classification to SNR prediction. In this paper, we use a divide-and-conquer algorithm and a classifier fusion method to simplify SNR classification and therefore enhance DL learning and prediction. Specifically, multiple CNNs are used for classification rather than a single CNN. Each CNN performs a binary classification of a single SNR with two labels: less than, or greater than or equal. Together, the multiple CNNs effectively classify over a range of SNR values from −20 dB ≤ SNR ≤ 32 dB. We use pre-trained CNNs to predict SNR over a wide range of joint channel parameters, including multiple Doppler shifts (0, 60, 120 Hz), power-delay profiles, and signal-modulation types (QPSK, 16-QAM, 64-QAM). The approach achieves individual SNR prediction accuracy of 92%, composite accuracy of 70%, and prediction convergence one order of magnitude faster than that of traditional estimation.
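The fusion logic described above can be sketched as follows. Each per-threshold binary CNN is replaced here by a stub that simply compares a measured SNR to its threshold, purely to show how the binary "greater than or equal" decisions combine into one SNR class (threshold spacing and function names are illustrative assumptions, not the paper's):

```python
# Candidate SNR thresholds spanning -20..32 dB (spacing is an assumption).
THRESHOLDS_DB = list(range(-20, 33, 4))

def binary_decision(measured_snr_db: float, threshold_db: float) -> int:
    """Stand-in for one per-threshold binary CNN: 1 if SNR >= threshold."""
    return 1 if measured_snr_db >= threshold_db else 0

def fuse_classifiers(measured_snr_db: float) -> float:
    """Classifier fusion: the highest threshold still voted '>=' is the
    fused SNR class; below every threshold, fall back to the lowest."""
    votes = [binary_decision(measured_snr_db, t) for t in THRESHOLDS_DB]
    passed = [t for t, v in zip(THRESHOLDS_DB, votes) if v == 1]
    return passed[-1] if passed else THRESHOLDS_DB[0]

print(fuse_classifiers(13.7))  # snaps to the highest passed threshold, 12
```

In the paper's setting each `binary_decision` is a trained CNN acting on raw signal features, but the divide-and-conquer fusion step is the same.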

Keywords: classification, CNN, deep learning, prediction, SNR

Procedia PDF Downloads 114
1940 Media, Politics and Power in the Representation of the Refugee and Migration Crisis in Europe

Authors: Evangelia-Matroni Tomara

Abstract:

This thesis answers the question of whether the media representations and reporting in 2015-2016 - especially after the image of the drowned three-year-old Syrian boy in the Mediterranean Sea made global headlines at the beginning of September 2015 - together with the European Commission regulatory source material and related reporting, have the power to challenge the conceptualization of humanitarianism or even redefine it. The theoretical foundations of the thesis are based on humanitarianism and its core definitions, the power of media representations and the related portrayal of migrants, refugees, and/or asylum seekers, as well as the dominant migration discourse and EU migration governance. Using content analysis of the media portrayal of migrants (436 newspaper articles) and qualitative content analysis of the European Commission Communication documents from May 2015 until June 2016, which required various depths of interpretation, this thesis allowed us to revise the concept of humanitarianism, realizing that the current crisis may seem to be a turning point for Europe but is not enough to overcome past hostile media discourses and suppress the historical perspective of security- and control-oriented EU migration policies. In particular, the crisis helped to shift the intensity of hostility and the persistence of state-centric, border-oriented securitization in Europe into a narrative of victimization rather than threat, where mercy and charity dynamics dominate, and into operational mechanisms, noting the urgency of immediate management of the massive migration flows. Although a rights-based response to the ongoing migration crisis is being followed discursively on both the political and the media stage, the nexus described points out that the binary between ‘us’ and ‘them’ still exists, with the only difference that the ‘invaders’ are now ‘pathetic’ but still ‘invaders’. In this context, the migration crisis challenges the concept of humanitarianism because rights dignify migrants as individuals only at a discursive or secondary level, while humanitarian work is mostly related to the geopolitical and economic interests of the ‘savior’ states.

Keywords: European Union politics, humanitarianism, immigration, media representation, policy-making, refugees, security studies

Procedia PDF Downloads 272
1939 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines

Authors: Alexander Guzman Urbina, Atsushi Aoyama

Abstract:

The sustainability of traditional technologies employed in energy and chemical infrastructure poses a big challenge for our society. In making decisions related to the safety of industrial infrastructure, the values of accidental risk are becoming relevant points for discussion. However, the challenge is the reliability of the models employed to obtain the risk data. Such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome those problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as Neuro-Fuzzy algorithms. Therefore, this paper aims to introduce a hybrid algorithm for risk assessment trained using near-miss accident data. As mentioned above, the sustainability of traditional technologies related to energy and chemical infrastructure constitutes one of the major challenges that today’s societies and firms are facing. Besides that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. Regarding this issue, it can be argued that the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition to the social consequences described above, and considering the industrial sector as critical infrastructure due to its large impact on the economy in case of a failure, the relevance of industrial safety has become a critical issue for society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in attempts to evaluate accurately the probabilities of failure of the infrastructure and the consequences associated with those failures. However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper aims to introduce a well-trained algorithm for risk assessment using deep learning, which is capable of dealing efficiently with the complexity and uncertainty. The advantage of deep learning using near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of using a near-miss deep learning approach for neuro-fuzzy risk assessment in pipelines is to improve the validity of the risk values by learning from near-miss accidents and imitating human expertise in scoring risks and setting tolerance levels. In summary, the method of deep learning for neuro-fuzzy risk assessment involves a regression analysis called the group method of data handling (GMDH), which consists of determining the optimal configuration of the risk assessment model and its parameters employing polynomial theory.
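The GMDH building block mentioned at the end can be sketched directly: for each pair of input risk factors (x1, x2), GMDH fits a partial polynomial by least squares and keeps the best-scoring pairs per layer. Below is a minimal sketch of one such fit using a simplified partial polynomial y = a0 + a1·x1 + a2·x2 + a3·x1·x2 (the full Ivakhnenko polynomial also includes squared terms); the data are synthetic and all names are illustrative:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_partial_poly(x1s, x2s, ys):
    """Least-squares fit of y = a0 + a1*x1 + a2*x2 + a3*x1*x2
    via the normal equations (A^T A) a = A^T y."""
    feats = [[1.0, a, b, a * b] for a, b in zip(x1s, x2s)]
    AtA = [[sum(f[i] * f[j] for f in feats) for j in range(4)] for i in range(4)]
    Atb = [sum(f[i] * y for f, y in zip(feats, ys)) for i in range(4)]
    return solve(AtA, Atb)

# Synthetic near-miss-style data generated from y = 1 + 2*x1 + 3*x2,
# so the fit should recover [1, 2, 3, 0].
x1s = [0.0, 1.0, 2.0, 3.0, 1.5, 2.5]
x2s = [1.0, 0.0, 1.0, 2.0, 2.5, 0.5]
ys = [1 + 2 * a + 3 * b for a, b in zip(x1s, x2s)]
coeffs = fit_partial_poly(x1s, x2s, ys)
```

A full GMDH layer would fit one such polynomial per factor pair, score each on held-out data, and pass the survivors' outputs to the next layer.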

Keywords: deep learning, risk assessment, neuro fuzzy, pipelines

Procedia PDF Downloads 275
1938 Evaluation of Mutagenic and Antimutagenic Activities of Some Biological Active Benzoxazoles in the Ames Test

Authors: Özlem Arpacı, Zeliha Soysal, Nuran Diril

Abstract:

Benzoxazoles are heterocyclic compounds that have a fused benzene ring and an oxazole ring. These heterocyclic compounds are reported to have antibacterial, antitubercular, antifungal, antiviral, antioxidant, and anticancer activities. In this study, some benzoxazole derivatives that have broad antimicrobial and potent antitumoral activities were investigated for their mutagenic activities using the Ames test. The Ames test was conducted using Salmonella typhimurium TA98, TA100, and TA102 tester strains in the standard plate incorporation assay in the absence of the liver S9 fraction. The results were evaluated using SPSS, and none of the benzoxazole derivatives showed mutagenic activity in the Ames test in the absence of the S9 liver fraction.

Keywords: benzoxazoles, Ames test, mutagenic activity, antimutagenic activity, antitumoral activity

Procedia PDF Downloads 314
1937 Credit Card Fraud Detection with Ensemble Model: A Meta-Heuristic Approach

Authors: Gong Zhilin, Jing Yang, Jian Yin

Abstract:

The purpose of this paper is to develop a novel system for credit card fraud detection based on sequential modeling of data using hybrid deep learning models. The projected model encapsulates five major phases: pre-processing, imbalanced-data handling, feature extraction, optimal feature selection, and fraud detection with an ensemble classifier. The collected raw data (input) is pre-processed to enhance its quality through alleviation of missing data, noisy data, and null values. The pre-processed data are class-imbalanced in nature and are therefore handled with the K-means clustering-based SMOTE model. From the class-balanced data, the most relevant features are extracted: improved Principal Component Analysis (PCA) features, statistical features (mean, median, standard deviation), and higher-order statistical features (skewness and kurtosis). Among the extracted features, the most optimal features are selected with the Self-improved Arithmetic Optimization Algorithm (SI-AOA). This SI-AOA model is a conceptual improvement of the standard Arithmetic Optimization Algorithm. The detection stage uses deep learning models: Long Short-Term Memory (LSTM), a Convolutional Neural Network (CNN), and an optimized Quantum Deep Neural Network (QDNN). The LSTM and CNN are trained with the extracted optimal features. The outcomes from the LSTM and CNN enter as input to the optimized QDNN, which provides the final detection outcome. Since the QDNN is the ultimate detector, its weight function is fine-tuned with the Self-improved Arithmetic Optimization Algorithm (SI-AOA).
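The imbalance-handling step can be sketched as follows. After a K-means pass has grouped the minority (fraud) samples into clusters, SMOTE generates synthetic samples by interpolating between a minority point and a neighbour within the same cluster. The sketch below shows only that interpolation step for one cluster (the cluster assignment is assumed given, and the data points are illustrative):

```python
import random

def smote_within_cluster(cluster_points, n_synthetic, rng):
    """Generate synthetic minority samples by linear interpolation between
    pairs of points drawn from the same K-means cluster."""
    synthetic = []
    for _ in range(n_synthetic):
        a, b = rng.sample(cluster_points, 2)  # two minority neighbours
        gap = rng.random()                    # interpolation factor in [0, 1)
        synthetic.append(tuple(x + gap * (y - x) for x, y in zip(a, b)))
    return synthetic

rng = random.Random(0)  # fixed seed for reproducibility
minority_cluster = [(1.0, 2.0), (1.2, 1.9), (0.9, 2.3)]  # illustrative points
new_points = smote_within_cluster(minority_cluster, 5, rng)
print(len(new_points))  # 5 synthetic minority samples
```

Restricting the interpolation to a single cluster is what distinguishes K-means SMOTE from plain SMOTE: it avoids generating synthetic points in the empty space between distant minority groups.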

Keywords: credit card, data mining, fraud detection, money transactions

Procedia PDF Downloads 110
1936 Mechanical Properties of D2 Tool Steel Cryogenically Treated Using Controllable Cooling

Authors: A. Rabin, G. Mazor, I. Ladizhenski, R. Shneck, Z.

Abstract:

The hardness and hardenability of AISI D2 cold-work tool steel subjected to conventional quenching (CQ), deep cryogenic quenching (DCQ), and rapid deep cryogenic quenching assisted by a temporary porous coating based on magnesium sulfate were investigated. Each of the cooling processes was examined from the perspective of full process efficiency and heat flux in the austenite-martensite transformation range, followed by characterization of the temporary porous layer made of magnesium sulfate using confocal laser scanning microscopy (CLSM), and of surface and core hardness and hardenability using the Vickers hardness technique. The results show that the cooling rate (CR) in the austenite-martensite transformation range has a high influence on the hardness of the studied steel.

Keywords: AISI D2, controllable cooling, magnesium sulfate coating, rapid cryogenic heat treatment, temporary porous layer

Procedia PDF Downloads 116
1935 Integrable Heisenberg Ferromagnet Equations with Self-Consistent Potentials

Authors: Gulgassyl Nugmanova, Zhanat Zhunussova, Kuralay Yesmakhanova, Galya Mamyrbekova, Ratbay Myrzakulov

Abstract:

In this paper, we consider some integrable Heisenberg ferromagnet equations with self-consistent potentials. We study their Lax representations. In particular, we derive their equivalent counterparts in the form of nonlinear Schrödinger-type equations. We present the integrable reductions of the Heisenberg ferromagnet equations with self-consistent potentials. These integrable Heisenberg ferromagnet equations with self-consistent potentials describe nonlinear waves in ferromagnets with some additional physical fields.
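For orientation, the standard (potential-free) continuous Heisenberg ferromagnet equation and its textbook Lax representation, to which the abstract's generalizations reduce, can be written as follows (this is the classical form, shown only to fix notation; the self-consistent-potential terms of the paper are not reproduced here):

```latex
\begin{align}
  \mathbf{S}_t &= \mathbf{S} \times \mathbf{S}_{xx},
  \qquad \mathbf{S} \in \mathbb{R}^3,\ |\mathbf{S}| = 1, \\
  S_t &= \frac{1}{2i}\,[S, S_{xx}],
  \qquad S = \sum_{k} S_k \sigma_k,\ S^2 = I, \\
  \Phi_x &= i\lambda S\,\Phi, \qquad
  \Phi_t = \bigl(2i\lambda^2 S + \lambda\, S S_x\bigr)\Phi,
\end{align}
```

where the compatibility condition $\Phi_{xt} = \Phi_{tx}$ of the last pair reproduces the matrix equation, and a gauge transformation maps the system to the focusing nonlinear Schrödinger equation $i q_t + q_{xx} + 2|q|^2 q = 0$, which is the equivalence the abstract refers to.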

Keywords: Heisenberg Ferromagnet equations, soliton equations, equivalence, Lax representation

Procedia PDF Downloads 436
1934 Mathematical Competence as It Is Defined through Learners' Errors in Arithmetic and Algebra

Authors: Michael Lousis

Abstract:

Mathematical competence is the overarching aim of every mathematical teaching and learning endeavour. It can be defined as an idealised conceptualisation of the quality of cognition and the ability to implement in practice the mathematical subject matter included in the curriculum, displayed only through the performance of doing mathematics. The present study gives a clear definition of mathematical competence in the domains of arithmetic and algebra that stems from the explanation of learners’ errors in these domains. The learners whose errors are explained were Greek and English participants in a large, international, longitudinal, comparative research program entitled the Kassel Project. The participants’ errors emerged as results of their work in dealing with the mathematical questions and problems of the tests presented to them. The tests were constructed so that only the outcomes of the participants’ work were encompassed, not the course of thinking that led to those outcomes. The intention was that the tests had to provide directly comparable results and simultaneously avoid any probable bias. Such bias could stem from obtaining results by involving many markers from different countries and cultures, with many different belief systems concerning the assessment of learners’ course of thinking. In this way the validity of the research was protected. This fact forced the implementation of specific research methods and theoretical perspectives in order to disclose the participants’ erroneous ways of thinking. These were Methodological Pragmatism, Symbolic Interactionism, Philosophy of Mind, and the ideas of Computationalism, which were used for deciding and establishing the grounds of the adequacy and legitimacy of the kinds of knowledge obtained through the explanations given by the error analysis. The employment of this methodology and of these theoretical perspectives resulted in the definition of learners’ mathematical competence, which is the thesis of the present study. Thus, learners’ mathematical competence depends upon three key elements that should be developed in their minds: appropriate representations, appropriate meaning, and appropriately developed schemata. This definition then determined the development of appropriate teaching practices and interventions conducive to the achievement, and finally the entailment, of mathematical competence.

Keywords: representations, meaning, appropriately developed schemata, computationalism, error analysis, explanations for the probable causes of the errors, Kassel Project, mathematical competence

Procedia PDF Downloads 248
1933 Lower Limb Oedema in Beckwith-Wiedemann Syndrome

Authors: Mihai-Ionut Firescu, Mark A. P. Carson

Abstract:

We present a case of inferior vena cava agenesis (IVCA) associated with bilateral deep venous thrombosis (DVT) in a patient with Beckwith-Wiedemann syndrome (BWS). In adult patients with BWS presenting with bilateral lower limb oedema, specific aetiological factors should be considered. These include cardiomyopathy and intra-abdominal tumours. Congenital malformations of the IVC, by causing relative venous stasis, can lead to lower limb oedema either directly or indirectly by favouring lower limb venous thromboembolism; however, they are yet to be reported as an associated feature of BWS. Given its life-threatening potential, the prompt initiation of treatment for bilateral DVT is paramount. In BWS patients, however, this can prove more complicated. Due to overgrowth, the above-average birth weight can continue throughout childhood. In this case, the patient’s weight reached 170 kg, impacting anticoagulation choice, as direct oral anticoagulants have a limited evidence base in patients with a body mass above 120 kg. Furthermore, the presence of IVCA leads to a long-term increased venous thrombosis risk. Therefore, patients with IVCA and bilateral DVT warrant specialist consideration and may benefit from multidisciplinary team management, with haematology and vascular surgery input. Conclusion: Here, we showcased a rare cause of bilateral lower limb oedema, namely bilateral deep venous thrombosis complicating IVCA in a patient with Beckwith-Wiedemann syndrome. The importance of this case lies in its novelty, as the association between IVC agenesis and BWS has not yet been described. Furthermore, the treatment of DVT in such situations requires special consideration, taking into account the patient’s weight and the presence of a significant predisposing vascular abnormality.

Keywords: Beckwith-Wiedemann syndrome, bilateral deep venous thrombosis, inferior vena cava agenesis, venous thromboembolism

Procedia PDF Downloads 211
1932 Medical Diagnosis of Retinal Diseases Using Artificial Intelligence Deep Learning Models

Authors: Ethan James

Abstract:

Over one billion people worldwide suffer from some level of vision loss or blindness as a result of progressive retinal diseases. Many patients, particularly in developing areas, are incorrectly diagnosed or not diagnosed at all due to unconventional diagnostic tools and screening methods. Artificial intelligence (AI) based on deep learning (DL) convolutional neural networks (CNNs) has recently gained high interest in ophthalmology for computer-aided imaging diagnosis, disease prognosis, and risk assessment. Optical coherence tomography (OCT) is a popular imaging technique used to capture high-resolution cross-sections of retinas. In ophthalmology, DL has been applied to fundus photographs, optical coherence tomography, and visual fields, achieving robust classification performance in the detection of various retinal diseases, including macular degeneration, diabetic retinopathy, and retinitis pigmentosa. However, there is no complete diagnostic model that analyses these retinal images with a diagnostic accuracy above 90%. Thus, the purpose of this project was to develop an AI model that utilizes machine learning techniques to automatically diagnose specific retinal diseases from OCT scans. The algorithm consists of a neural network architecture based on residual neural networks with cyclic pooling, trained on a dataset of over 20,000 real-world OCT images. This DL model can ultimately aid ophthalmologists in diagnosing patients with these retinal diseases more quickly and more accurately, therefore facilitating earlier treatment, which results in improved post-treatment outcomes.

Keywords: artificial intelligence, deep learning, imaging, medical devices, ophthalmic devices, ophthalmology, retina

Procedia PDF Downloads 155
1931 Deep Learning-Based Approach to Automatic Abstractive Summarization of Patent Documents

Authors: Sakshi V. Tantak, Vishap K. Malik, Neelanjney Pilarisetty

Abstract:

A patent is an exclusive right granted for an invention. It can be a product or a process that provides an innovative method of doing something, or offers a new technical perspective or solution to a problem. A patent can be obtained by making the technical information and details about the invention publicly available. The patent owner has exclusive rights to prevent or stop anyone from using the patented invention for commercial uses. Any commercial usage, distribution, import or export of a patented invention or product requires the patent owner’s consent. It has been observed that the central and important parts of patents are scripted in idiosyncratic and complex linguistic structures that can be difficult to read, comprehend or interpret for the masses. The abstracts of these patents tend to obfuscate the precise nature of the patent instead of clarifying it via direct and simple linguistic constructs. This makes it necessary to have an efficient access to this knowledge via concise and transparent summaries. However, as mentioned above, due to complex and repetitive linguistic constructs and extremely long sentences, common extraction-oriented automatic text summarization methods should not be expected to show a remarkable performance when applied to patent documents. Other, more content-oriented or abstractive summarization techniques are able to perform much better and generate more concise summaries. This paper proposes an efficient summarization system for patents using artificial intelligence, natural language processing and deep learning techniques to condense the knowledge and essential information from a patent document into a single summary that is easier to understand without any redundant formatting and difficult jargon.

Keywords: abstractive summarization, deep learning, natural language processing, patent document

Procedia PDF Downloads 105
1930 Design, Prototyping, Integration, Flight Testing of a 20 cm Span Fully Autonomous Fixed Wing Micro Air Vehicle

Authors: Vivek Paul, Abel Nelly, Shoeb A Adeel, R. Tilak, S. Maheshwaran, S. Pulikeshi, Roshan Antony, C. S. Suraj

Abstract:

This paper presents the complete design and development cycle of a 20 cm span fixed-wing micro air vehicle (MAV) that was developed at CSIR-NAL under the micro air vehicle development program. The design is a cropped-delta flying-wing MAV with a modified N22 airfoil of 12.3% thickness. The airframe was fabricated using the fused deposition method (RPT technique). COTS components were procured and integrated into this RPT prototype. A commercial autopilot that was proven in earlier MAV designs was used for this MAV. The MAV was flown fully autonomously for 14 minutes at an open field. The flight data showed good performance, as expected from the MAV design. The paper also describes the process involved in the design of MAVs.

Keywords: autopilot, autonomous mode, flight testing, MAV, RPT

Procedia PDF Downloads 502
1929 Correlation of SPT N-Value and Equipment Drilling Parameters in Deep Soil Mixing

Authors: John Eric C. Bargas, Ma. Cecilia M. Marcos

Abstract:

One of the most common ground improvement techniques is Deep Soil Mixing (DSM). As the technique has progressed, depth control has remained underdeveloped, an issue experienced during DSM installation in one of the national projects in the Philippines. This study assesses the feasibility of using equipment drilling parameters such as hydraulic pressure, drilling speed, and rotational speed to determine the Standard Penetration Test (SPT) N-value of a specific soil. Hydraulic pressure and drilling speed, at a constant rotational speed of 30 rpm, correlate positively with the SPT N-value for both cohesive soil and sand. For cohesive soil, a linear trend was observed: the correlation of SPT N-value with hydraulic pressure yielded R² = 0.5377, and with drilling speed R² = 0.6355. For sand, the best-fitted model is a polynomial trend: the correlation of SPT N-value with hydraulic pressure yielded R² = 0.7088, and with drilling speed R² = 0.4354. The lower correlation may be attributed to the behavior of sand as the auger penetrates: very loose to medium dense sand tends to follow the rotation of the auger rather than resist it. Specific energy and the product of hydraulic pressure and drilling speed yielded the same R² with a positive correlation, following a linear trend for cohesive soil and a polynomial trend for sand. Cohesive soil yielded R² = 0.7320 and sand R² = 0.7203, both strong relationships. It is therefore feasible to use hydraulic pressure and drilling speed to estimate the SPT N-value of a soil, and their product can substitute for specific energy in that estimation.
However, additional considerations are necessary to account for other influencing factors, such as groundwater and the physical and mechanical properties of the soil.
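The trend fitting described above can be sketched in a few lines: fit a least-squares line and compute the coefficient of determination R². The drilling data below are hypothetical placeholders, not values from the study:

```python
def linear_fit(x, y):
    """Least-squares line y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def r_squared(x, y, predict):
    """Coefficient of determination of a fitted model."""
    my = sum(y) / len(y)
    ss_res = sum((yi - predict(xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Hypothetical hydraulic pressures (bar) vs. SPT N-values.
pressure = [40, 55, 70, 90, 110]
n_value = [4, 7, 10, 15, 19]
slope, intercept = linear_fit(pressure, n_value)
r2 = r_squared(pressure, n_value, lambda x: slope * x + intercept)
```

The polynomial trend fitted for sand follows the same pattern, with an extra squared term added to the regression.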

Keywords: ground improvement, equipment drilling parameters, standard penetration test, deep soil mixing

Procedia PDF Downloads 6
1928 Experimental Study of Hyperparameter Tuning a Deep Learning Convolutional Recurrent Network for Text Classification

Authors: Bharatendra Rai

Abstract:

The sequence of words in text data has long-term dependencies and is known to suffer from vanishing gradients when deep learning models are developed. Although recurrent networks such as long short-term memory (LSTM) networks help to overcome this problem, achieving high text classification performance remains difficult. Convolutional recurrent networks, which combine the advantages of LSTM networks and convolutional neural networks, can improve text classification performance. However, arriving at suitable hyperparameter values for convolutional recurrent networks is still a challenging task, and fitting a model requires significant computing resources. This paper illustrates the advantages of using convolutional recurrent networks for text classification with the help of statistically planned computer experiments for hyperparameter tuning.
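As a rough illustration of the statistically planned experiment idea (not the authors' actual design), a full-factorial grid over hypothetical hyperparameter levels can be enumerated with the standard library; each configuration would be one training run whose measured accuracy feeds the ANOVA and Tukey HSD comparison:

```python
import itertools

# Hypothetical two-level factors for a convolutional recurrent network.
factors = {
    "conv_filters": [64, 128],
    "kernel_size": [3, 5],
    "lstm_units": [32, 64],
    "dropout": [0.2, 0.4],
}

def full_factorial(factors):
    """Yield every combination of factor levels as a config dict."""
    names = list(factors)
    for levels in itertools.product(*(factors[n] for n in names)):
        yield dict(zip(names, levels))

runs = list(full_factorial(factors))  # 2**4 = 16 training runs
```

A fractional design would sample a structured subset of these runs when full enumeration is too expensive.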

Keywords: long short-term memory networks, convolutional recurrent networks, text classification, hyperparameter tuning, Tukey honest significant differences

Procedia PDF Downloads 98
1927 Efficient Synthesis of Highly Functionalized Biologically Important Spirocarbocyclic Oxindoles via Hauser Annulation

Authors: Kanduru Lokesh, Venkitasamy Kesavan

Abstract:

The unique structural features of spiro-oxindoles with diverse biological activities have made them privileged structures in new drug discovery. The key structural characteristic of these compounds is the spiro ring fused at the C-3 position of the oxindole core with varied heterocyclic motifs. Structural diversification of heterocyclic scaffolds to synthesize new chemical entities as pharmaceuticals and agrochemicals is one of the important goals of synthetic organic chemists. Nitrogen- and oxygen-containing heterocycles are by far the most widely occurring privileged structures in medicinal chemistry. The structural complexity and distinct three-dimensional arrangement of functional groups of these privileged structures are generally responsible for their specificity against biological targets. Structurally diverse compound libraries have proved to be valuable assets for drug discovery against challenging biological targets. Thus, identifying a new combination of substituents at the C-3 position of the oxindole moiety is of great importance in drug discovery to improve the efficiency and efficacy of drugs. The development of suitable methodology for the synthesis of spiro-oxindole compounds has attracted much interest, often in response to the significant biological activity displayed by both natural and synthetic compounds. Creating structural diversity of oxindole scaffolds is therefore a need of the decade and a formidable challenge. A general way to improve synthetic efficiency, and also to access diversified molecules, is through annulation reactions. Annulation reactions allow the formation of complex compounds from simple substrates in a single transformation consisting of several steps, in an ecologically and economically favorable way. These observations motivated us to develop an annulation reaction protocol for the synthesis of a new class of spiro-oxindole motifs, which in turn would enhance molecular diversity.
As part of our enduring interest in the development of novel, efficient synthetic strategies for biologically important oxindole-fused spirocarbocyclic systems, we have developed an efficient methodology for the construction of highly functionalized spirocarbocyclic oxindoles through [4+2] annulation of phthalides via the Hauser annulation. The synthesis of functionalized spirocarbocyclic oxindoles was accomplished for the first time in the literature using the Hauser annulation strategy. The reaction between methyleneindolinones and arylsulfonylphthalides, catalyzed by cesium carbonate, gave access to a new class of biologically important spiro[indoline-3,2'-naphthalene] derivatives in very good yields. The synthetic utility of the annulated product was further demonstrated by fluorination using NFSI as the fluorinating agent to furnish the corresponding fluorinated product.

Keywords: Hauser-Kraus annulation, spirocarbocyclic oxindoles, oxindole-ester, fluorination

Procedia PDF Downloads 182
1926 Colorectal Resection in Endometriosis: A Study on Conservative Vascular Approach

Authors: A. Zecchin, E. Vallicella, I. Alberi, A. Dalle Carbonare, A. Festi, F. Galeone, S. Garzon, R. Raffaelli, P. Pomini, M. Franchi

Abstract:

Introduction: Severe endometriosis is a multiorgan disease that involves the bowel in 31% of cases. Disabling symptoms and deep infiltration can lead to bowel obstruction, so surgical bowel treatment may be needed. In these cases, colorectal segment resection is usually performed with ligature of the inferior mesenteric artery, as radically as in oncological surgery. This study examined surgery based on preservation of the intestinal vascular axis. Postoperative complication risks (mainly the rate of dehiscence of intestinal anastomoses) were assessed, and results were compared with those reported in the literature for classical colorectal resection. Materials and methods: This was a retrospective study of 62 patients with deep infiltrating endometriosis of the bowel who underwent segmental resection with preservation of the intestinal vascular axis between 2013 and 2016. Complications related to the intervention were assessed both during hospitalization and 30-60 days after resection, with particular attention to anastomotic dehiscence. 52 patients were then interviewed by telephone to investigate the presence or absence of intestinal constipation. Results and Conclusion: The segmental intestinal resection performed in this study ensured a more conservative vascular approach, with a lower rate of anastomotic dehiscence (1.6%) than classical literature data (10.0% to 11.4%). No complications were observed regarding spontaneous recovery of intestinal motility and bladder emptying. Constipation in some patients, even years after the intervention, is not assessable in the absence of a preoperative assessment of constipation.

Keywords: anastomotic dehiscence, deep infiltrating endometriosis, colorectal resection, vascular axis preservation

Procedia PDF Downloads 180
1925 Multi-Impairment Compensation Based Deep Neural Networks for 16-QAM Coherent Optical Orthogonal Frequency Division Multiplexing System

Authors: Ying Han, Yuanxiang Chen, Yongtao Huang, Jia Fu, Kaile Li, Shangjing Lin, Jianguo Yu

Abstract:

In long-haul, high-speed optical transmission systems, the orthogonal frequency division multiplexing (OFDM) signal suffers various linear and non-linear impairments. In recent years, researchers have proposed compensation schemes for specific impairments, with remarkable effects. However, chaining different impairment compensation algorithms increases transmission delay. With the widespread application of deep neural networks (DNN) in communication, multi-impairment compensation based on a DNN is a promising scheme. In this paper, we propose and apply a DNN to compensate multiple impairments of a 16-QAM coherent optical OFDM signal, thereby improving the performance of the transmission system. The trained DNN models are applied in the offline digital signal processing (DSP) module of the transmission system. The models optimize the constellation mapping signals at the transmitter and compensate multiple impairments of the OFDM decoded signal at the receiver. Furthermore, the models reduce the peak-to-average power ratio (PAPR) of the transmitted OFDM signal and the bit error rate (BER) of the received signal. We verify the effectiveness of the proposed scheme for a 16-QAM coherent optical OFDM signal and demonstrate and analyze transmission performance in different transmission scenarios. The experimental results show that the PAPR and BER of the transmission system are significantly reduced after using the trained DNN. This shows that a DNN with a specific loss function and network structure can optimize the transmitted signal, learn the channel features, and compensate multiple impairments in fiber transmission effectively.
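For reference, the peak-to-average power ratio that the models reduce is the peak instantaneous power of the complex baseband signal divided by its mean power. A minimal sketch of the metric (illustrative only, not the authors' DSP code):

```python
import math

def papr_db(samples):
    """Peak-to-average power ratio of complex baseband samples, in dB."""
    powers = [abs(s) ** 2 for s in samples]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

# A constant-envelope signal has 0 dB PAPR; amplitude peaks raise it.
flat = [1 + 0j, 0 + 1j, -1 + 0j, 0 - 1j]
peaky = [2 + 0j, 0.5 + 0j, 0.5j, -0.5 + 0j]
```

OFDM signals are prone to high PAPR because many subcarriers can add constructively, which is why reducing it at the transmitter matters.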

Keywords: coherent optical OFDM, deep neural network, multi-impairment compensation, optical transmission

Procedia PDF Downloads 123
1924 Some Results on Generalized Janowski Type Functions

Authors: Fuad Al Sarari

Abstract:

The purpose of the present paper is to study subclasses of analytic functions which generalize the classes of Janowski functions introduced by Polatoglu. We study certain convolution conditions. This leads to a study of sufficient conditions and neighborhood results related to functions in the class S(T; H; F), and of new subclasses of analytic functions defined using notions of the generalized Janowski classes and -symmetrical functions. In the context of analytic representations of starlikeness and convexity with respect to symmetric points, certain inherent properties are pointed out.

Keywords: convolution conditions, subordination, Janowski functions, starlike functions, convex functions

Procedia PDF Downloads 50
1923 Extraction of Nutraceutical Bioactive Compounds from the Native Algae Using Solvents with a Deep Natural Eutectic Point and Ultrasonic-assisted Extraction

Authors: Seyedeh Bahar Hashemi, Alireza Rahimi, Mehdi Arjmand

Abstract:

Food is the source of energy and growth through the breakdown of its vital components and plays a vital role in human health and nutrition. Many natural compounds found in plant and animal materials play a special role in biological systems, and many such compounds originate, directly or indirectly, from algae. Algae are an enormous source of polysaccharides and have gained much interest for human well-being. In this study, algal biomass extraction is conducted using natural deep eutectic solvents (NADES) and ultrasound-assisted extraction (UAE). The aim of this research is to extract bioactive compounds, including total carotenoids, antioxidant activity, and polyphenolic contents. For this purpose, the influence of three important extraction parameters, namely biomass-to-solvent ratio, temperature, and time, is studied with respect to their impact on the recovery of carotenoids and phenolics and on the extracts' antioxidant activity. We employ Response Surface Methodology for process optimization, and the influence of the independent parameters on each dependent variable is determined through Analysis of Variance. Our results show that ultrasound-assisted extraction for 50 min is the best extraction condition, and proline:lactic acid (1:1) and choline chloride:urea (1:2) extracts show the highest total phenolic contents (50.00 ± 0.70 mgGAE/gdw) and antioxidant activity [60.00 ± 1.70 mgTE/gdw and 70.00 ± 0.90 mgTE/gdw in the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) assays]. Our results confirm that the combination of UAE and NADES provides an excellent alternative to organic solvents for sustainable and green extraction and has huge potential for industrial applications involving the extraction of bioactive compounds from algae.
This study is among the first attempts to optimize the effects of ultrasound-assisted extraction, ultrasonic devices, and natural deep eutectic solvents and to investigate their application in extracting bioactive compounds from algae. We also discuss the future perspective of ultrasound technology, which helps in understanding the complex mechanism of ultrasound-assisted extraction and further guides its application to algae.

Keywords: natural deep eutectic solvents, ultrasound-assisted extraction, algae, antioxidant activity, phenolic compounds, carotenoids

Procedia PDF Downloads 147
1922 Exploring Pre-Trained Automatic Speech Recognition Model HuBERT for Early Alzheimer’s Disease and Mild Cognitive Impairment Detection in Speech

Authors: Monica Gonzalez Machorro

Abstract:

Dementia is hard to diagnose because of the lack of early physical symptoms. Early dementia recognition is key to improving the living conditions of patients. Speech technology is considered a valuable biomarker for this challenge. Recent works have utilized conventional acoustic features and machine learning methods to detect dementia in speech, with BERT-like classifiers reporting the most promising performance. One constraint, nonetheless, is that these studies are based either on human transcripts or on transcripts produced by automatic speech recognition (ASR) systems. The contribution of this research is to explore a method that does not require transcriptions to detect early Alzheimer’s disease (AD) and mild cognitive impairment (MCI). This is achieved by fine-tuning a pre-trained ASR model for the downstream early AD and MCI detection tasks. To do so, a subset of the thoroughly studied Pitt Corpus is customized; the subset is balanced for class, age, and gender, and data processing involves cropping the samples into 10-second segments. For comparison purposes, a baseline model is defined by training and testing a Random Forest with 20 acoustic features extracted using the librosa library in Python: zero-crossing rate, MFCCs, spectral bandwidth, spectral centroid, root mean square, and short-time Fourier transform. The baseline model achieved 58% accuracy. To fine-tune HuBERT as a classifier, an average pooling strategy is employed to merge the 3D representations of the audio into 2D representations, and a linear layer is added. The pre-trained model used is ‘hubert-large-ls960-ft’. Empirically, the number of epochs selected is 5, and the batch size is 1. Experiments show that our proposed method reaches 69% balanced accuracy. This suggests that the linguistic and speech information encoded in the self-supervised ASR-based model is able to learn acoustic cues of AD and MCI.
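The pooling step described above, merging frame-level (time × dimension) embeddings into a single utterance vector before the added linear layer, can be sketched in plain Python; this is a toy stand-in for the actual HuBERT tensors, not the authors' implementation:

```python
def mean_pool(frames):
    """Average T frame embeddings (a T x D list of lists) into one D-dim vector."""
    T, D = len(frames), len(frames[0])
    return [sum(f[d] for f in frames) / T for d in range(D)]

def linear_head(vec, weights, bias):
    """Added linear layer: one logit per class (e.g., AD vs. control)."""
    return [sum(w * x for w, x in zip(row, vec)) + b
            for row, b in zip(weights, bias)]
```

In a deep learning framework the same operations would be a mean over the time axis followed by a trainable fully connected layer.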

Keywords: automatic speech recognition, early Alzheimer’s recognition, mild cognitive impairment, speech impairment

Procedia PDF Downloads 107
1921 Learning Physics Concepts through Language Syntagmatic Paradigmatic Relations

Authors: C. E. Laburu, M. A. Barros, A. F. Zompero, O. H. M. Silva

Abstract:

The work presents a teaching strategy that employs syntagmatic and paradigmatic linguistic relations in order to monitor physics students’ understanding of concepts. Syntagmatic and paradigmatic relations are theoretical elements of semiotic studies, and our research situates and justifies them within the research program of multi-modal representations. Among the multi-modal representations used to learn scientific knowledge, syntagmatic and paradigmatic relations operate in the written discursive form. The purpose of using such relations is to pursue innovative didactic work with discourse representation in written form before translating it into a different representational form. The research was conducted with a sample of first-year high school students, who were asked to produce syntagmatic and paradigmatic relations for the statement of Newton’s first law. The statement was delivered on paper to each student, who individually wrote the relations, and the students’ records were collected for analysis. In one student, used here as an example, the moneme replacements and rearrangements produced by, respectively, paradigmatic and syntagmatic relations kept the original meaning of the law. In the paradigmatic production, he specified relevant significant units of the linguistic signs, the monemes, which constitute the first articulation, and each substituted word kept its equivalence to the meaning of the original moneme. A large and diverse set of monemes was chosen, with a balanced combination of grammatical and lexical monemes (a grammatical moneme is one that changes the meaning of a word in certain positions of the syntagma, along with a relatively small number of other monemes; it is the smallest linguistic unit with grammatical meaning; a lexical moneme belongs to unlimited inventories and is endowed with lexical meaning).
In the syntagmatic production, moneme orderings were syntactically coherent, linked with semantic conservation and preserved number. In general, the results showed that the written representation mode based on paradigmatic and syntagmatic linguistic relations qualifies for classroom use as a potential identifier and accompanist of the meanings students acquire in the process of scientific inquiry.

Keywords: semiotics, language, high school, physics teaching

Procedia PDF Downloads 114
1920 When Pain Becomes Love For God: The Non-Object Self

Authors: Roni Naor-Hofri

Abstract:

This paper shows how self-inflicted pain enabled the expression of love for God among Christian monastic ascetics in medieval central Europe. As scholars have shown, being in a state of pain leads to a change in or destruction of language, an essential feature of the self. The author argues that this transformation allows the self to transcend its boundaries as an object, even if only temporarily and in part. The epistemic achievement of love for God, a non-object, would not otherwise have been possible. To substantiate her argument, the author shows that the self’s transformation into a non-object enables the imitation of God: not solely in the sense of imitatio Christi, of physical and visual representations of God incarnate in the flesh of His son Christ, but also in the sense of the self’s experience of being a non-object, just like God, the target of the self’s love.

Keywords: love for God, pain, philosophy, religion

Procedia PDF Downloads 230
1919 The Presence of Carnism on Portuguese Television

Authors: Rui Pedro Fonseca

Abstract:

This paper presents the results of research on carnism on Portuguese television. It begins with a case study of the MasterChef program (TVI), which conveys carnism in both practices and language, and describes some characteristics of its dominant representations. Subsequently, the paper presents indicators of the presence of carnism in Portuguese television programming between 2013 and 2014 on the TVI, RTP1, and SIC channels. The data reveal a hegemony of carnist ideology on the main channels of Portuguese television. Moreover, the samples collected and viewed make no mention of the impacts of carnism in its various dimensions (non-human animals, the environment, human health, and sustainability).

Keywords: carnism, speciesism, television, Portugal

Procedia PDF Downloads 330
1918 Highlighting Strategies Implemented by Migrant Parents to Support Their Child's Educational and Academic Success in the Host Society

Authors: Josee Charette

Abstract:

The academic and educational success of migrant students is a current issue in education, especially in Western societies such as the province of Quebec, Canada. For people who immigrate with school-age children, the success of the family’s migratory project is often measured by the benefits the children draw from the educational institutions of the host society. In order to support their children’s academic achievement, migrant parents try to develop practices derived from their representations of school and its related challenges, inspired by the socio-cultural context of their country of origin. These findings lead us to the following question: How do the strategies implemented by migrant parents to manage the representational distance between the school of their country of origin and the school of their host society support, or fail to support, their child’s academic and educational success? In the context of a qualitative exploratory approach, we conducted interviews in French, English, and Spanish with 32 newly immigrated parents and 10 of their children. Parents were invited to complete a network of free associations about «School in Quebec» as a premise for the interview. The objective of this paper is to present the strategies implemented by migrant parents to manage the distance between their representations of schools in their country of origin and in the host society, and to explore the influence of this management on their children’s academic and educational trajectories. Data analysis led us to identify various types of strategies, such as continuity, adaptation, resource mobilization, compensation, and "return to basics" strategies. These strategies seem to lie on a continuum from an oppositional-conflict scenario, in which parental strategies act as a risk factor, to a conciliatory-integrative scenario, in which parental strategies act as a protective factor for migrant students’ academic and educational success.
In conclusion, we believe that our research helps to highlight the strategies implemented by migrant parents to support their children’s academic and educational success in the host society, helps provide more efficient support to migrant parents, and contributes to a wider portrait of migrant students’ academic achievement.

Keywords: academic and educational achievement of immigrant students, family’s migratory project, immigrants parental strategies, representational distance between school of origin and school of host society

Procedia PDF Downloads 430
1917 DocPro: A Framework for Processing Semantic and Layout Information in Business Documents

Authors: Ming-Jen Huang, Chun-Fang Huang, Chiching Wei

Abstract:

With the recent advances in deep neural networks, we observe new applications of NLP (natural language processing) and CV (computer vision), powered by deep neural networks, to processing business documents. However, creating a real-world document processing system requires integrating several NLP and CV tasks rather than treating them separately. There is a need for a unified approach to processing documents containing textual and graphical elements with rich formats, diverse layout arrangements, and distinct semantics. In this paper, a framework that fulfills this unified approach is presented. The framework includes a representation model definition for holding the information generated by the various tasks, and specifications defining the coordination between these tasks. The framework is a blueprint for building a system that can process documents with rich formats, styles, and multiple types of elements. The flexible and lightweight design of the framework can help build systems for diverse business scenarios, such as contract monitoring and review.
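A representation model of the kind the framework describes, one container holding both layout and semantic information for each document element, might look like the following; this is an illustrative sketch, not the actual DocPro definition:

```python
from dataclasses import dataclass, field

@dataclass
class BBox:
    """Layout information: element position on the page."""
    x0: float
    y0: float
    x1: float
    y1: float

@dataclass
class Element:
    """One document element, carrying results from both CV and NLP tasks."""
    kind: str                 # e.g. "paragraph", "table", "figure"
    text: str                 # OCR / extracted text
    bbox: BBox                # from a layout-analysis (CV) task
    label: str = ""           # semantic label from an NLP task
    children: list = field(default_factory=list)  # nested elements

# A hypothetical contract-review element combining both kinds of output.
clause = Element("paragraph", "Term of agreement...", BBox(50, 100, 550, 160),
                 label="contract_term")
```

Keeping layout and semantics on the same object lets downstream tasks coordinate instead of each re-deriving structure from raw input.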

Keywords: document processing, framework, formal definition, machine learning

Procedia PDF Downloads 191
1916 Advances of Image Processing in Precision Agriculture: Using Deep Learning Convolution Neural Network for Soil Nutrient Classification

Authors: Halimatu S. Abdullahi, Ray E. Sheriff, Fatima Mahieddine

Abstract:

Agriculture is essential to the continuous existence of human life, as humans depend directly on it for the production of food. The exponential rise in population calls for a rapid increase in food production, with technology applied to reduce laborious work and maximize output. Technology can aid and improve agriculture in several ways, through pre-planning and post-harvest, by using computer vision and image processing to determine the soil nutrient composition and the right amount, right time, and right place for the application of farm inputs such as fertilizers, herbicides, and water, as well as weed detection and early detection of pests and diseases. This is precision agriculture, which is thought to be the solution required to achieve these goals. There has been significant improvement in image processing and data processing, which had been a major challenge. A database of images is collected through remote sensing and analyzed, and a model is developed to determine the right treatment plans for different crop types and different regions. Features of vegetation images need to be extracted, classified, segmented, and finally fed into the model. Different techniques have been applied to these processes, from neural networks, support vector machines, and fuzzy logic approaches to, most recently, the most effective approach, generating excellent results: the deep learning approach of convolutional neural networks for image classification. A deep convolutional neural network is used to determine the soil nutrients required in a plantation for maximum production. The experimental results of the developed model yielded an average accuracy of 99.58%.
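The core operation of the convolutional network mentioned above, sliding a kernel over an image, can be illustrated in a few lines. This is a valid-mode sketch in plain Python; real systems use optimized libraries:

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution as CNNs compute it (cross-correlation,
    no padding, stride 1) over nested-list 'image' and 'kernel'."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + m][j + n] * kernel[m][n]
                 for m in range(kh) for n in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]
```

A CNN stacks many such filters, learning kernel weights that respond to features (edges, textures) relevant to the classification task.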

Keywords: convolution, feature extraction, image analysis, validation, precision agriculture

Procedia PDF Downloads 294