Search results for: automatic impedance matching
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1806

396 Influence of Crystal Orientation on Electromechanical Behaviors of Relaxor Ferroelectric P(VDF-TRFE-CTFE) Terpolymer

Authors: Qing Liu, Jean-fabien Capsal, Claude Richard

Abstract:

In this contribution, the authors investigate the influence of crystal lamellae orientation on the electromechanical behavior of relaxor ferroelectric poly(vinylidene fluoride-trifluoroethylene-chlorotrifluoroethylene) (P(VDF-TrFE-CTFE)) films by controlling the polymer microstructure, aiming to map the full structure-property relationship. To set their crystal orientation, terpolymer films were fabricated by solution-casting, stretching, and hot-pressing processes. Differential scanning calorimetry, impedance analysis, and tensile testing were employed to characterize the crystallographic parameters, dielectric permittivity, and elastic Young's modulus, respectively. In addition, large electrically induced out-of-plane electrostrictive strain was measured in cantilever beam mode. The as-cast pristine films exhibited a surprisingly high electrostrictive strain of 0.1774%, owing to a considerably low Young's modulus despite a relatively low dielectric permittivity; this combination produced a large mechanical elastic energy density. By contrast, the fully crystallized film, with a two-fold increase in Young's modulus but less than a 50% increase in dielectric constant, showed weak electrostrictive behavior and a low mechanical energy density. After mechanical stretching, Film C exhibited a higher dielectric constant and outperformed Film B in electrostrictive strain because of the edge-on crystal lamellae orientation induced by uniaxial stretching. Hot-pressed films were compared in terms of cooling rate. A rather large electrostrictive strain of 0.2788% was observed for the quenched hot-pressed Film D, even though its dielectric permittivity was equivalent to that of the pristine as-cast Film A, giving the highest mechanical elastic energy density of 359.5 J/m³. In the hot-press cooling process, the dielectric permittivity of Film E reached 48.8, concomitant with a ca. 100% increase in Young's modulus; films with intermediate mechanical energy density were thus obtained.
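As a worked illustration of how the reported strains and energy densities relate, the standard small-signal relations for electrostrictive polymers can be sketched as follows; this is a hedged sketch using textbook expressions, the authors' exact definitions may differ, and the back-calculated modulus is only indicative.

```latex
% Hedged sketch: relations commonly used for electrostrictive polymers
% (not necessarily the exact definitions adopted by the authors).
\[
  S \;=\; M\,E^{2} \;\approx\; Q\,\varepsilon_{0}^{2}\,(\varepsilon_{r}-1)^{2}\,E^{2},
  \qquad
  u_{\mathrm{elastic}} \;=\; \tfrac{1}{2}\,Y\,S^{2}.
\]
% Illustration with the numbers reported for Film D: if the paper uses the 1/2 Y S^2
% definition, then S = 0.2788% = 2.788e-3 and u = 359.5 J/m^3 imply
% Y ~ 2u / S^2 ~ 9.3e7 Pa (about 93 MPa).
```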

Keywords: crystal orientation, electrostrictive strain, mechanical energy density, permittivity, relaxor ferroelectric

Procedia PDF Downloads 371
395 DNA-Polycation Condensation by Coarse-Grained Molecular Dynamics

Authors: Titus A. Beu

Abstract:

Many modern gene-delivery protocols rely on condensed complexes of DNA with polycations to introduce the genetic payload into cells by endocytosis. In particular, polyethyleneimine (PEI) stands out for its high buffering capacity (enabling the efficient condensation of DNA) and relatively simple fabrication. Realistic computational studies can offer essential insights into the formation process of DNA-PEI polyplexes, providing hints on efficient designs and engineering routes. We present comprehensive computational investigations of solvated PEI and DNA-PEI polyplexes involving calculations at three levels: ab initio, all-atom (AA), and coarse-grained (CG) molecular mechanics. In the first stage, we developed a rigorous AA CHARMM (Chemistry at Harvard Macromolecular Mechanics) force field (FF) for PEI on the basis of accurate ab initio calculations on protonated model pentamers. We validated this atomistic FF by matching the results of extensive molecular dynamics (MD) simulations of structural and dynamical properties of PEI with experimental data. In a second stage, we developed a CG MARTINI FF for PEI by Boltzmann inversion techniques from bead-based probability distributions obtained from AA simulations, ensuring an optimal match between the AA and CG structural and dynamical properties. In a third stage, we combined the developed CG FF for PEI with the standard MARTINI FF for DNA and performed comprehensive CG simulations of DNA-PEI complex formation and condensation. Various technical aspects which are crucial for the realistic modeling of DNA-PEI polyplexes, such as options for treating electrostatics and the relevance of polarizable water models, are discussed in detail. Massive CG simulations (with up to 500 000 beads) shed light on the mechanism and provide time scales for DNA polyplex formation in dependence on PEI chain size and protonation pattern. The DNA-PEI condensation mechanism is shown to rely primarily on the formation of DNA bundles, rather than on changes of the DNA-strand curvature. The gained insights are expected to be of significant help for designing effective gene-delivery applications.
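The Boltzmann inversion step mentioned above has a compact standard form worth spelling out; the sketch below states the generic relation only, not the authors' specific implementation or MARTINI bead choices.

```latex
% Direct Boltzmann inversion (standard form): the CG potential for an internal
% coordinate q (bond length, angle, dihedral) follows from its AA distribution P(q).
\[
  V_{\mathrm{CG}}(q) \;=\; -\,k_{\mathrm{B}}T \,\ln P(q) \;+\; C,
\]
% with P(q) normalized by the appropriate Jacobian (e.g., r^2 for bonds,
% sin(theta) for angles) and C an arbitrary additive constant.
```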

Keywords: DNA condensation, gene-delivery, polyethylene-imine, molecular dynamics.

Procedia PDF Downloads 114
394 Association of the Frequency of the Dairy Products Consumption by Students and Health Parameters

Authors: Radyah Ivan, Khanferyan Roman

Abstract:

Milk and dairy products are an important component of a balanced diet. Dairy products represent a heterogeneous food group of solid, semi-solid and liquid, fermented or non-fermented foods, each differing in nutrients such as fat and micronutrient content. A deficiency of milk and dairy products affects the main health parameters of various age groups of the population. The goal of this study was to analyze the frequency of consumption of milk and various groups of dairy products by students and its association with their body mass index (BMI), body composition and other physiological parameters. 388 full-time students of the Medical Institute of RUDN University (185 male and 203 female; average age 20.4±2.2 and 21.9±1.7 years, respectively) took part in the cross-sectional study. Anthropometric measurements were taken, and BMI and body composition were estimated by bioelectrical impedance analysis. The frequency of consumption of milk and various groups of dairy products was studied using a modified food frequency questionnaire. The questionnaire data showed that only 11% of respondents consume milk daily, 5% cottage cheese, 4% and 1% natural and filled fermented milk products, respectively, and 4% hard cheese. The study also showed that about 16% of the respondents did not consume milk at all over the past month, about one third did not consume cottage cheese, 22% did not consume natural sour-milk products and 18% sour-milk products with various fillers; hard cheeses and pickled cheeses were not consumed by 9% and 26% of respondents, respectively. Gender differences in consumer preferences were revealed: female students are less likely than male students to consume cream, sour cream, soft cheese and milk. Among female students the prevalence of overweight was higher (25%) than among male students (19%). A modest inverse relationship was demonstrated between daily milk and dairy product intake and BMI and body composition parameters (r=-0.61 and r=-0.65). The study showed insufficient daily consumption of milk and dairy products by students and demonstrated a relationship between low or infrequent consumption of dairy products and the main indicators of physical activity and health.

Keywords: frequency of consumption, milk, dairy products, physical development, nutrition, body mass index.

Procedia PDF Downloads 32
393 Classifier for Liver Ultrasound Images

Authors: Soumya Sajjan

Abstract:

Liver cancer is one of the most common cancers worldwide in men and women, and is one of the few cancers still on the rise. Liver disease is the fourth leading cause of death. According to new NHS (National Health Service) figures, deaths from liver diseases have reached record levels, rising by 25% in less than a decade; heavy drinking, obesity, and hepatitis are believed to be behind the rise. In this study, we focus on the development of a diagnostic classifier for ultrasound liver lesions. Ultrasound (US) sonography is an easy-to-use and widely popular imaging modality because of its ability to visualize many human soft tissues/organs without any harmful effect. This paper provides an overview of the underlying concepts, along with algorithms for processing liver ultrasound images. Ultrasound liver lesion images naturally contain considerable speckle noise, which makes developing a classifier for them a challenging task. We approach this with a fully automatic machine learning system. First, we segment the liver image and calculate textural features from the gray-level co-occurrence matrix and the run-length method. For classification, a Support Vector Machine (SVM) is used, based on the risk bounds of statistical learning theory. The textural features from the different feature methods are given as input to the SVM individually. Performance analysis on training and test datasets is carried out separately using the SVM model. Whenever an ultrasonic liver lesion image is given to the SVM classifier system, the features are calculated and the image is classified as a normal or diseased liver lesion. We hope the result will help physicians to identify liver cancer non-invasively.
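As a concrete illustration of the co-occurrence-texture-plus-SVM pipeline described above, a minimal hedged Python sketch follows. It covers only gray-level co-occurrence (GLCM) features (run-length features are omitted), substitutes toy random patches for segmented liver images, and all function names and parameter choices are illustrative rather than taken from the paper.

```python
# Hedged sketch (not the authors' code): GLCM texture features + SVM classifier
# for ultrasound liver-lesion patches. Data, names and parameters are illustrative.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def glcm_features(patch, levels=32):
    """Quantize a grayscale patch and extract co-occurrence texture features."""
    q = (patch.astype(float) / max(patch.max(), 1) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1, 2],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# toy data: replace with real segmented liver-lesion patches and labels
rng = np.random.default_rng(0)
patches = rng.integers(0, 255, size=(40, 64, 64))
labels = rng.integers(0, 2, size=40)          # 0 = normal, 1 = diseased (illustrative)

X = np.array([glcm_features(p) for p in patches])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```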

Keywords: segmentation, Support Vector Machine, ultrasound liver lesion, co-occurrence matrix

Procedia PDF Downloads 404
392 Exploring the Design of Prospective Human Immunodeficiency Virus Type 1 Reverse Transcriptase Inhibitors through a Comprehensive Approach of Quantitative Structure Activity Relationship Study, Molecular Docking, and Molecular Dynamics Simulations

Authors: Mouna Baassi, Mohamed Moussaoui, Sanchaita Rajkhowa, Hatim Soufi, Said Belaaouad

Abstract:

The objective of this paper is to address the challenging task of targeting Human Immunodeficiency Virus type 1 Reverse Transcriptase (HIV-1 RT) in the treatment of AIDS. Reverse Transcriptase inhibitors (RTIs) have limitations due to the development of Reverse Transcriptase mutations that lead to treatment resistance. In this study, a combination of statistical analysis and bioinformatics tools was adopted to develop a mathematical model that relates the structure of compounds to their inhibitory activities against HIV-1 Reverse Transcriptase. Our approach was based on a series of compounds recognized for their HIV-1 RT enzymatic inhibitory activities. These compounds were designed via software, with their descriptors computed using multiple tools. The most statistically promising model was chosen, and its domain of application was ascertained. Furthermore, compounds exhibiting comparable biological activity to existing drugs were identified as potential inhibitors of HIV-1 RT. The compounds underwent evaluation based on their chemical absorption, distribution, metabolism, excretion, toxicity properties, and adherence to Lipinski's rule. Molecular docking techniques were employed to examine the interaction between the Reverse Transcriptase (Wild Type and Mutant Type) and the ligands, including a known drug available in the market. Molecular dynamics simulations were also conducted to assess the stability of the RT-ligand complexes. Our results reveal some of the new compounds as promising candidates for effectively inhibiting HIV-1 Reverse Transcriptase, matching the potency of the established drug. This necessitates further experimental validation. This study, beyond its immediate results, provides a methodological foundation for future endeavors aiming to discover and design new inhibitors targeting HIV-1 Reverse Transcriptase.
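Since the screening step above includes Lipinski's rule, a small hedged Python sketch of such a filter is given below; it uses RDKit with the common "at most one violation" convention, and the example molecule is a placeholder, not one of the candidate inhibitors from the study.

```python
# Hedged sketch (not the authors' pipeline): a Lipinski rule-of-five screen with RDKit.
# The SMILES string below is an illustrative placeholder, not a studied compound.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def lipinski_pass(smiles: str) -> bool:
    """Return True if the molecule violates at most one of Lipinski's four criteria."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError("could not parse SMILES")
    violations = sum([
        Descriptors.MolWt(mol) > 500,        # molecular weight
        Descriptors.MolLogP(mol) > 5,        # lipophilicity (cLogP)
        Lipinski.NumHDonors(mol) > 5,        # H-bond donors
        Lipinski.NumHAcceptors(mol) > 10,    # H-bond acceptors
    ])
    return violations <= 1

print(lipinski_pass("CC(=O)Oc1ccccc1C(=O)O"))   # aspirin, used only as a placeholder
```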

Keywords: QSAR, ADMET properties, molecular docking, molecular dynamics simulation, reverse transcriptase inhibitors, HIV type 1

Procedia PDF Downloads 84
391 A New Obesity Index Derived from Waist Circumference and Hip Circumference Well-Matched with Other Indices in Children with Obesity

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Anthropometric obesity indices such as waist circumference (WC), indices derived from anthropometric measurements such as the waist-to-hip ratio (WHR), and indices created from body fat mass composition such as the trunk-to-leg fat ratio (TLFR) are commonly used for the evaluation of mild or severe forms of obesity. Their clinical utilities are compared using body mass index (BMI) percentiles to classify obesity groups. Which of them best makes a clear-cut discrimination between healthy normal-weight individuals (N-BMI) and overweight, obese (OB) or morbidly obese patients is still being investigated. The aim of this study is to derive a new index that best discriminates children with N-BMI from OB children. A total of eighty-three children participated in the study, and two groups were constituted. The first group comprised 42 children with N-BMI, and the second group was composed of 41 OB children, whose age- and sex-adjusted BMI percentile values vary between 95 and 99. The corresponding values for the first group were between 15 and 85. This classification was based upon the tables created by the World Health Organization. The institutional ethics committee approved the study protocol. Informed consent forms were filled in by the parents of the participants. Anthropometric measurements were taken and recorded following a detailed physical examination. Within this context, weight, height (Ht), WC, hip circumference (HC) and neck circumference (NC) values were taken. Body mass index, WHR, (WC+HC)/2, WC/Ht, (WC/HC)/Ht and WC*NC were calculated. Bioelectrical impedance analysis was performed to obtain the body's fat compartments in terms of total fat, trunk fat, leg fat and arm fat masses. The trunk-to-leg fat ratio, the trunk-to-appendicular fat ratio (TAFR) and (trunk fat+leg fat)/2 ((TF+LF)/2) were calculated. Fat mass index (FMI) and diagnostic obesity notation model assessment-II (D2I) index values were calculated. Statistical analysis of the data was performed. Significantly increased values of (WC+HC)/2, (TF+LF)/2, D2I and FMI were observed in the OB group in comparison with those of the N-BMI group. Significant correlations were calculated between BMI and WC, (WC+HC)/2, (TF+LF)/2, TLFR, TAFR, D2I as well as FMI in both the N-BMI and OB groups. The same correlations were obtained for WC. (WC+HC)/2 was correlated with TLFR, TAFR, (TF+LF)/2, D2I and FMI in the N-BMI group. In the OB group, the correlations were the same except those with TLFR and TAFR. These correlations were not present with WHR. Correlations were observed between TLFR and BMI, WC, (WC+HC)/2, (TF+LF)/2, D2I as well as FMI in the N-BMI group. The same correlations were also observed with TAFR. In the OB group, correlations between TLFR or TAFR and BMI, WC as well as (WC+HC)/2 were missing. None was noted with WHR. From these findings, it was concluded that (WC+HC)/2, but not WHR, was much more suitable as an anthropometric obesity index. The only correlation valid in both groups was the one between (WC+HC)/2 and (TF+LF)/2. This index was suggested as a link between anthropometric and fat-based indices.
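For clarity, the derived indices named above can be written out directly; the hedged Python sketch below simply restates those formulas for a single made-up set of measurements (the study's data and cut-offs are not reproduced).

```python
# Hedged sketch: the anthropometric indices named in the abstract, computed for one
# illustrative (made-up) child; the study's group comparisons are not reproduced here.
def obesity_indices(weight_kg, height_m, wc_cm, hc_cm, nc_cm):
    ht_cm = height_m * 100
    return {
        "BMI": weight_kg / height_m**2,
        "WHR": wc_cm / hc_cm,                    # waist-to-hip ratio
        "(WC+HC)/2": (wc_cm + hc_cm) / 2,        # proposed index
        "WC/Ht": wc_cm / ht_cm,
        "(WC/HC)/Ht": (wc_cm / hc_cm) / ht_cm,
        "WC*NC": wc_cm * nc_cm,
    }

print(obesity_indices(weight_kg=45.0, height_m=1.40, wc_cm=70.0, hc_cm=82.0, nc_cm=30.0))
```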

Keywords: children, hip circumference, obesity, waist circumference

Procedia PDF Downloads 166
390 Mild Hypothermia Versus Normothermia in Patients Undergoing Cardiac Surgery: A Propensity Matched Analysis

Authors: Ramanish Ravishankar, Azar Hussain, Mahmoud Loubani, Mubarak Chaudhry

Abstract:

Background and Aims: Currently, there are no strict guidelines on cardiopulmonary bypass temperature management in cardiac surgery not involving the aortic arch. The aim of this study was to compare outcomes between patients undergoing mild hypothermia and normothermia during on-pump cardiac surgery not involving the aortic arch. Methods: This was a retrospective cohort study from January 2015 until May 2023. Patients who underwent cardiac surgery with cardiopulmonary bypass temperatures ≥32°C were included and stratified into mild hypothermia (32°C-35°C) and normothermia (>35°C) cohorts. Propensity matching was applied through the nearest neighbour method (1:1) in RStudio, using the risk factors detailed in the EuroSCORE. The primary outcome was mortality. Secondary outcomes included post-operative stay, intensive care unit readmission, re-admission, stroke, and renal complications. Patients who had major aortic surgery and off-pump operations were excluded. Results: Each cohort had 1675 patients. There was a significant increase in overall mortality in the mild hypothermia cohort (3.59% vs. 2.32%; p=0.04912). There was also a greater stroke incidence (2.09% vs. 1.13%; p=0.0396) and transient ischaemic attack (TIA) risk (3.1% vs. 1.49%; p=0.0027). There was no significant difference in renal complications (9.13% vs. 7.88%; p=0.2155). Conclusions: Patients who underwent mild hypothermia during cardiopulmonary bypass had a significantly greater incidence of mortality, stroke, and transient ischaemic attack. Mild hypothermia does not appear to provide any benefit over normothermia, including any neuroprotective benefit. These results differ from those of other major studies; further trials and studies are needed to reach a consensus.
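The abstract states that matching was performed in RStudio; purely to illustrate the 1:1 nearest-neighbour idea on the propensity score, here is a hedged Python sketch on synthetic data. The covariates are stand-ins rather than the EuroSCORE variables, and the greedy matching-without-replacement rule is an assumption.

```python
# Hedged sketch of 1:1 nearest-neighbour propensity-score matching on synthetic data.
# The study itself used RStudio; this only illustrates the matching idea.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 4))                        # stand-ins for EuroSCORE risk factors
treated = rng.integers(0, 2, size=n).astype(bool)  # mild hypothermia vs normothermia

# 1) propensity score: probability of receiving mild hypothermia given the covariates
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# 2) greedy 1:1 nearest-neighbour matching on the propensity score, without replacement
available = set(np.flatnonzero(~treated))
pairs = []
for t in np.flatnonzero(treated):
    if not available:
        break
    pool = np.array(sorted(available))
    c = pool[np.argmin(np.abs(ps[pool] - ps[t]))]
    pairs.append((t, c))
    available.remove(c)

print(f"matched pairs: {len(pairs)}")
```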

Keywords: cardiac surgery, therapeutic hypothermia, neuroprotection, cardiopulmonary bypass

Procedia PDF Downloads 66
389 iCount: An Automated Swine Detection and Production Monitoring System Based on Sobel Filter and Ellipse Fitting Model

Authors: Jocelyn B. Barbosa, Angeli L. Magbaril, Mariel T. Sabanal, John Paul T. Galario, Mikka P. Baldovino

Abstract:

The use of technology has become ubiquitous in different areas of business today. With the advent of digital imaging and database technology, business owners have been motivated to integrate technology into their business operations, from small and medium to large enterprises. Technology has been found to bring many benefits that can make a business grow. Hog or swine raising, for example, is a very popular enterprise in the Philippines, whose challenges in production monitoring can be addressed through technology integration. Swine production monitoring can become a tedious task as the enterprise grows. Specifically, problems such as delayed and inconsistent reports are likely to happen if the counting of swine per pen in each building is done manually. In this study, we present iCount, which aims to ensure efficient swine detection and counting and thereby hasten the swine production monitoring task. We develop a system that automatically detects and counts swine based on a Sobel filter and an ellipse fitting model, given still photos of a group of swine captured in a pen. We improve the Sobel filter detection result through an 8-neighborhood rule implementation. An ellipse fitting technique is then employed for proper swine detection. Furthermore, the system can generate periodic production reports and can identify the specific consumables to be served to the swine according to schedules. Experiments reveal that our algorithm provides an efficient way of detecting swine, thereby providing a significant amount of accuracy in production monitoring.
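To make the detection pipeline concrete, here is a hedged Python/OpenCV sketch of the Sobel-edge plus ellipse-fitting idea; the thresholds, axis limits and the morphological closing (standing in for the paper's 8-neighborhood refinement) are illustrative assumptions, not the iCount implementation.

```python
# Hedged sketch (not the iCount implementation): Sobel edges + ellipse fitting with OpenCV
# to count roughly elliptical blobs in a top-down pen photo. All thresholds are illustrative.
import cv2
import numpy as np

def count_swine(image_path, min_axis=40, max_axis=400):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)

    # Sobel gradient magnitude as the edge map
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    edges = (mag > mag.mean() + 2 * mag.std()).astype(np.uint8) * 255

    # close small gaps (a stand-in for the paper's 8-neighborhood refinement)
    edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))

    count = 0
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if len(c) < 5:                      # cv2.fitEllipse needs at least 5 points
            continue
        (_, _), axes, _ = cv2.fitEllipse(c)
        minor, major = sorted(axes)
        if minor > min_axis and major < max_axis:
            count += 1
    return count
```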

Keywords: automatic swine counting, swine detection, swine production monitoring, ellipse fitting model, sobel filter

Procedia PDF Downloads 310
388 Effects of Fe Addition and Process Parameters on the Wear and Corrosion Characteristics of Icosahedral Al-Cu-Fe Coatings on Ti-6Al-4V Alloy

Authors: Olawale S. Fatoba, Stephen A. Akinlabi, Esther T. Akinlabi, Rezvan Gharehbaghi

Abstract:

The performance demands placed on material surfaces in wear and corrosion environments cannot be met by conventional surface modifications and coatings, so different industrial sectors need alternative techniques for enhanced surface properties. Titanium and its alloys possess poor tribological properties, which limits their use in certain industries. This paper focuses on the effect of hybrid Al-Cu-Fe coatings on a grade five titanium alloy produced using the laser metal deposition (LMD) process. Icosahedral Al-Cu-Fe quasicrystals are a relatively new class of materials which exhibit an unusual atomic structure and useful physical and chemical properties. A 3 kW continuous wave ytterbium laser system (YLS), attached to a KUKA robot that controls the movement of the cladding process, was utilized for the fabrication of the coatings. The titanium cladded surfaces were investigated for hardness, corrosion and tribological behaviour at different laser processing conditions. The samples were cut into corrosion coupons and immersed in 3.65% NaCl solution at 28 °C, using Electrochemical Impedance Spectroscopy (EIS) and Linear Polarization (LP) techniques. The cross-sectional view of the samples was analysed. It was found that the geometrical properties of the deposits, such as the width, height and Heat Affected Zone (HAZ) of each sample, increased remarkably with increasing laser power due to the laser-material interaction. A higher amount of aluminium and titanium was observed in the formation of the composite. The indentation testing reveals that, for both scanning speeds of 0.8 m/min and 1 m/min, the mean hardness value decreases with increasing laser power. The low coefficient of friction, excellent wear resistance and high microhardness were attributed to the formation of hard intermetallic compounds (TiCu, Ti2Cu, Ti3Al, Al3Ti) produced through in situ metallurgical reactions during the LMD process. The load-bearing capability of the substrate was improved due to the excellent wear resistance of the coatings. The cladded layer showed a uniform, crack-free surface due to optimized laser process parameters, which led to the refinement of the coatings.

Keywords: Al-Cu-Fe coating, corrosion, intermetallics, laser metal deposition, Ti-6Al-4V alloy, wear resistance

Procedia PDF Downloads 173
387 The Efficacy of Pre-Hospital Packed Red Blood Cells in the Treatment of Severe Trauma: A Retrospective, Matched, Cohort Study

Authors: Ryan Adams

Abstract:

Introduction: Major trauma is the leading cause of death in 15-45 year olds and carries significant human, social and economic costs. Resuscitation is a mainstay of trauma management, especially in the pre-hospital environment, and packed red blood cells (pRBC) are being used increasingly with the advent of permissive hypotension. The evidence in this area is lacking, and further research is required to determine its efficacy. Aim: The aim of this retrospective, matched cohort study was to determine whether major trauma patients who received pre-hospital pRBC show a difference in their initial emergency department cardiovascular status when compared with injury-profile-matched controls. Methods: The trauma databases of the Royal Brisbane and Women's Hospital, the Royal Children's Hospital (Herston) and the Queensland Ambulance Service were accessed, and data on major trauma patients (ISS>12) who received pre-hospital pRBC from January 2011 to August 2014 were collected. Patients were then matched by injury profile against control patients who had not received pRBC. The primary outcome was cardiovascular status, defined by the shock index and the Revised Trauma Score (RTS). Results: Data for 25 patients who received pre-hospital pRBC were accessed and their injury profiles matched against suitable controls. On admission to the emergency department, a statistically significant difference in shock index was seen between the groups (blood group = 1.42 vs. control = 0.97, p-value = 0.0449). However, the same was not seen for the RTS (blood group = 4.15 vs. control = 5.56, p-value = 0.291). Discussion: A worsening shock index and Revised Trauma Score were associated with pre-hospital administration of pRBC. However, due to the small sample size, the limited matching protocol and associated confounding factors, it is difficult to draw any solid conclusions. Further studies with larger patient numbers are required to enable adequate conclusions to be drawn on the efficacy of pre-hospital packed red blood cell transfusion.
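For readers unfamiliar with the two endpoints, their standard textbook definitions are sketched below; the study does not restate them, so the exact coded variants used may differ.

```latex
% Standard definitions of the two cardiovascular endpoints (textbook forms).
\[
  \mathrm{SI} \;=\; \frac{\mathrm{HR}}{\mathrm{SBP}},
  \qquad
  \mathrm{RTS} \;=\; 0.9368\,\mathrm{GCS}_c \;+\; 0.7326\,\mathrm{SBP}_c \;+\; 0.2908\,\mathrm{RR}_c,
\]
% where HR is heart rate, SBP systolic blood pressure, and GCS_c, SBP_c, RR_c are the
% coded (0-4) values of the Glasgow Coma Scale, systolic blood pressure and respiratory rate.
```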

Keywords: pre-hospital, packed red blood cells, severe trauma, emergency medicine

Procedia PDF Downloads 391
386 Speech Detection Model Based on Deep Neural Networks Classifier for Speech Emotions Recognition

Authors: Aisultan Shoiynbek, Darkhan Kuanyshbay, Paulo Menezes, Akbayan Bekarystankyzy, Assylbek Mukhametzhanov, Temirlan Shoiynbek

Abstract:

Speech emotion recognition (SER) has received increasing research interest in recent years. It is common practice to use emotional speech collected under controlled conditions, recorded by actors imitating and artificially producing emotions in front of a microphone. There are four issues with that approach: the emotions are not natural, meaning that machines are learning to recognize fake emotions; the data are limited in quantity and poor in speaking variety; there is some language dependency in SER; and, consequently, each time researchers want to start work on SER, they need to find a good emotional database in their language. This paper proposes an approach to creating an automatic tool for speech emotion extraction based on facial emotion recognition and describes the sequence of actions involved in the proposed approach. One of the first objectives in this sequence is the speech detection task. The paper provides a detailed description of a speech detection model based on a fully connected deep neural network for Kazakh and Russian. Despite the high speech detection results for Kazakh and Russian, the described process is suitable for any language. To investigate the working capacity of the developed model, an analysis of speech detection and extraction on real tasks has been performed.
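As an illustration of the frame-level MFCC plus fully connected network idea (the keywords list Mel-frequency cepstrum coefficients), here is a hedged Python sketch; the synthetic "speech-like" and "noise" signals, the network size and every other choice are placeholders, not the authors' Kazakh/Russian model.

```python
# Hedged sketch (not the authors' model): frame-level speech/non-speech detection from
# MFCC features with a small fully connected network. Data below are synthetic placeholders.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def mfcc_frames(y, sr, n_mfcc=13):
    """Return one MFCC feature vector per short-time frame (frames as rows)."""
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

sr = 16000
rng = np.random.default_rng(0)
speechlike = 0.5 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr) * rng.normal(1, 0.1, sr)
noise = 0.05 * rng.normal(size=sr)

X = np.vstack([mfcc_frames(speechlike.astype(np.float32), sr),
               mfcc_frames(noise.astype(np.float32), sr)])
y = np.hstack([np.ones(len(X) // 2, dtype=int),       # speech-like frames
               np.zeros(len(X) - len(X) // 2, dtype=int)])  # noise frames

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```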

Keywords: deep neural networks, speech detection, speech emotion recognition, Mel-frequency cepstrum coefficients, collecting speech emotion corpus, collecting speech emotion dataset, Kazakh speech dataset

Procedia PDF Downloads 19
385 Regeneration Nature of Rumex Species Root Fragment as Affected by Desiccation

Authors: Khalid Alshallash

Abstract:

Small fragments of the roots of some Rumex species, including R. obtusifolius and R. crispus, have been found to regenerate readily, contributing to the severity of infestations by these very common, widespread and difficult-to-control perennial weeds of agricultural crops and grasslands. Their root fragments are usually created during routine agricultural practices. We found that fresh root fragments of both species, containing 65-70% moisture, progressively lose their moisture content when desiccated under controlled growth room conditions matching summer weather in southeast England, with the greatest reduction occurring in the first 48 hours. The probability of shoot emergence and the time taken for emergence under glasshouse conditions were also reduced significantly by desiccation, with R. obtusifolius least affected up to 48 hours; however, the effects converged after 120 hours. In contrast, R. obtusifolius was significantly slower to emerge after up to 48 hours of desiccation, with effects again converging after longer periods, and R. crispus entirely failed to emerge at 120 hours. The dry weight of emerged shoots was not significantly different between the species until fragments were desiccated for 96 hours, when that of R. obtusifolius was significantly reduced. At 120 hours, R. obtusifolius did not emerge. In outdoor trials, desiccation for 24 or 48 hours had less effect on emergence when fragments were planted at the soil surface or at up to 10 cm depth, compared with deeper plantings. In both species, emergence was significantly lower when desiccated fragments were planted at 15 or 20 cm. The time taken for emergence was not significantly different between the species until fragments were planted at 15 or 20 cm, when R. obtusifolius was slower than R. crispus and was slowed further by increasing desiccation. Similar interactions between increasing soil depth and increasing desiccation were found for reductions in dry weight, the number of tillers and leaf area, with R. obtusifolius generally, but not exclusively, better able to withstand the more extreme trial conditions. Our findings suggest that infestations of these highly troublesome weeds may be partly controlled by appropriate agricultural practices, notably exposing cut fragments to drying environmental conditions followed by deep burial.

Keywords: regeneration, root fragment, Rumex crispus, Rumex obtusifolius

Procedia PDF Downloads 95
384 Real-World Comparison of Adherence to and Persistence with Dulaglutide and Liraglutide in UAE e-Claims Database

Authors: Ibrahim Turfanda, Soniya Rai, Karan Vadher

Abstract:

Objectives— The study aims to compare real-world adherence to and persistence with dulaglutide and liraglutide in patients with type 2 diabetes (T2D) initiating treatment in UAE. Methods— This was a retrospective, non-interventional study (observation period: 01 March 2017–31 August 2019) using the UAE Dubai e-Claims database. Included: adult patients initiating dulaglutide/liraglutide 01 September 2017–31 August 2018 (index period) with: ≥1 claim for T2D in the 6 months before index date (ID); ≥1 claim for dulaglutide/liraglutide during index period; and continuous medical enrolment for ≥6 months before and ≥12 months after ID. Key endpoints, assessed 3/6/12 months after ID: adherence to treatment (proportion of days covered [PDC; PDC ≥80% considered ‘adherent’], per-group mean±standard deviation [SD] PDC); and persistence (number of continuous therapy days from ID until discontinuation [i.e., >45 days gap] or end of observation period). Patients initiating dulaglutide/liraglutide were propensity score matched (1:1) based on baseline characteristics. Between-group comparison of adherence was analysed using the McNemar test (α=0.025). Persistence was analysed using Kaplan–Meier estimates with log-rank tests (α=0.025) for between-group comparisons. This study presents 12-month outcomes. Results— Following propensity score matching, 263 patients were included in each group. Mean±SD PDC for all patients at 12 months was significantly higher in the dulaglutide versus the liraglutide group (dulaglutide=0.48±0.30, liraglutide=0.39±0.28, p=0.0002). The proportion of adherent patients favored dulaglutide (dulaglutide=20.2%, liraglutide=12.9%, p=0.0302), as did the probability of being adherent to treatment (odds ratio [97.5% CI]: 1.70 [0.99, 2.91]; p=0.03). Proportion of persistent patients also favoured dulaglutide (dulaglutide=15.2%, liraglutide=9.1%, p=0.0528), as did the probability of discontinuing treatment 12 months after ID (p=0.027). Conclusions— Based on the UAE Dubai e-Claims database data, dulaglutide initiators exhibited significantly greater adherence in terms of mean PDC versus liraglutide initiators. The proportion of adherent patients and the probability of being adherent favored the dulaglutide group, as did treatment persistence.
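The two endpoints above have simple operational definitions; the hedged Python sketch below computes a proportion of days covered (PDC) over a 12-month window and applies the >45-day gap rule for persistence to a toy claims list. The field names and dates are illustrative, not the study's data model.

```python
# Hedged sketch: proportion of days covered (PDC) and a >45-day-gap persistence rule,
# computed from a toy list of (fill_date, days_supply) claims; not the study's code.
from datetime import date, timedelta

def pdc_and_persistence(claims, index_date, window_days=365, gap_days=45):
    """claims: list of (fill_date, days_supply). Returns (PDC, persistent_days)."""
    end = index_date + timedelta(days=window_days)
    covered = set()
    for fill, supply in sorted(claims):
        for d in range(supply):
            day = fill + timedelta(days=d)
            if index_date <= day < end:
                covered.add(day)
    pdc = len(covered) / window_days

    # persistence: continuous therapy until the first gap longer than `gap_days`
    persistent_days, day = 0, index_date
    while day < end:
        if day in covered:
            persistent_days += 1
            day += timedelta(days=1)
        else:
            gap_end = day
            while gap_end < end and gap_end not in covered:
                gap_end += timedelta(days=1)
            if (gap_end - day).days > gap_days:
                break                       # treatment discontinued
            persistent_days += (gap_end - day).days
            day = gap_end
    return pdc, persistent_days

claims = [(date(2018, 1, 1), 30), (date(2018, 2, 5), 30), (date(2018, 5, 1), 90)]
print(pdc_and_persistence(claims, index_date=date(2018, 1, 1)))
```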

Keywords: adherence, dulaglutide, effectiveness, liraglutide, persistence

Procedia PDF Downloads 120
383 Approximation of Geodesics on Meshes with Implementation in Rhinoceros Software

Authors: Marian Sagat, Mariana Remesikova

Abstract:

In civil engineering, there is the problem of how to industrially produce tensile membrane structures that are non-developable surfaces. Non-developable surfaces can only be developed with a certain error, and we want to minimize this error. To that end, the non-developable surfaces are cut into plates along geodesic curves. We propose a numerical algorithm for finding approximations of open geodesics on meshes and surfaces based on geodesic curvature flow. For practical reasons, it is important to automate the choice of the time step. We propose a method for the automatic setting of the time step based on the diagonal dominance criterion for the matrix of the linear system obtained by discretization of our partial differential equation model. Practical experiments show the reliability of this method. Because the model is approximated by a numerical method based on classical derivatives, it is necessary to overcome obstacles which occur for meshes with sharp corners. We solve this problem for a large family of meshes with sharp corners via special rotations which can be seen as a partial unfolding of the mesh. In practical applications, it is required that the approximation of the geodesic has its vertices only on the edges of the mesh. This problem is solved by a specially designed point tracking algorithm. We also partially solve the problem of finding geodesics on meshes with holes. We implemented the whole algorithm in Rhinoceros (a commercial 3D computer graphics and computer-aided design software package). This is done using the C# language as a C# assembly library for Grasshopper, which is a plugin for Rhinoceros.
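For reference, the continuous form of the evolution named above can be sketched as follows; this is the standard formulation, and the paper's discrete scheme and diagonal-dominance time-step rule are not reproduced here.

```latex
% Geodesic curvature flow in its standard continuous form (a sketch only).
\[
  \frac{\partial \mathbf{x}}{\partial t} \;=\; k_g\,\mathbf{n}_g ,
\]
% where x parametrizes the curve on the surface, k_g is its geodesic curvature and
% n_g is the curve normal lying in the tangent plane of the surface; geodesics are
% the steady states with k_g = 0.
```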

Keywords: geodesic, geodesic curvature flow, mesh, Rhinoceros software

Procedia PDF Downloads 143
382 Extracting Terrain Points from Airborne Laser Scanning Data in Densely Forested Areas

Authors: Ziad Abdeldayem, Jakub Markiewicz, Kunal Kansara, Laura Edwards

Abstract:

Airborne Laser Scanning (ALS) is one of the main technologies for generating high-resolution digital terrain models (DTMs). DTMs are crucial to several applications, such as topographic mapping, flood zone delineation, geographic information systems (GIS), hydrological modelling, spatial analysis, etc. A laser scanning system generates an irregularly spaced three-dimensional cloud of points. Raw ALS data consist mainly of ground points (which represent the bare earth) and non-ground points (which represent buildings, trees, cars, etc.). Removing all the non-ground points from the raw data is referred to as filtering. Filtering heavily forested areas is considered a difficult and challenging task, as the canopy stops laser pulses from reaching the terrain surface. This research presents an approach for removing non-ground points from raw ALS data in densely forested areas. Smoothing splines are exploited to interpolate and fit the noisy ALS data. The presented filter utilizes a weight function to allocate a weight to each point of the data. Furthermore, unlike most methods, the presented filtering algorithm is designed to be automatic. Three different forested areas in the United Kingdom are used to assess the performance of the algorithm. The results show that the DTMs generated from the filtered data are accurate (when compared against reference terrain data) and that the performance of the method is stable for all the heavily forested data samples. The average root mean square error (RMSE) value is 0.35 m.
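A hedged Python sketch of the general idea, iteratively fitting a weighted smoothing spline and down-weighting points that sit well above the fitted surface, is given below; the weight rule, smoothing factor and synthetic point cloud are illustrative assumptions, not the published filter.

```python
# Hedged sketch (not the published filter): iterative ground filtering of an ALS point
# cloud with a weighted smoothing spline; the asymmetric weight rule is illustrative.
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def filter_ground(x, y, z, iterations=5, smoothing=None):
    if smoothing is None:
        smoothing = 100.0 * len(z)          # generous smoothing keeps the surface stiff
    w = np.ones_like(z)
    for _ in range(iterations):
        spline = SmoothBivariateSpline(x, y, z, w=w, s=smoothing)
        residual = z - spline.ev(x, y)
        sigma = residual.std() + 1e-9
        # points far above the surface are probably canopy -> tiny weight;
        # points on or below the surface keep full weight
        w = np.where(residual > 0.5 * sigma, 0.01, 1.0)
    return residual < 0.5 * sigma            # final ground mask

# toy cloud: gentle terrain plus random "canopy" points 5-20 m above it
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 2000), rng.uniform(0, 100, 2000)
terrain = 0.05 * x + 2 * np.sin(y / 15)
z = terrain + np.where(rng.random(2000) < 0.4,
                       rng.uniform(5, 20, 2000),
                       rng.normal(0, 0.1, 2000))

mask = filter_ground(x, y, z)
print(f"kept {mask.sum()} of {len(z)} points as ground")
```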

Keywords: airborne laser scanning, digital terrain models, filtering, forested areas

Procedia PDF Downloads 135
381 Liver and Liver Lesion Segmentation From Abdominal CT Scans

Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid

Abstract:

The interpretation of medical images benefits from anatomical and physiological priors to optimize computer-aided diagnosis applications. Segmentation of the liver and liver lesions is regarded as a major primary step in the computer-aided diagnosis of liver diseases, and precise liver segmentation in abdominal CT images is one of the most important steps in the computer-aided diagnosis of liver pathology. In this paper, a semi-automated method for the segmentation of the liver and liver lesions in medical image data using mathematical morphology is presented. Our algorithm proceeds in two parts. In the first, we determine the region of interest by applying morphological filters to extract the liver. The second part consists of detecting the liver lesions. For this task, we propose a new method developed for the semi-automatic segmentation of the liver and hepatic lesions, based on anatomical information and the mathematical morphology tools used in the image processing field. First, we improve the quality of the original image and of the image gradient by applying a spatial filter followed by morphological filters. The second step consists of calculating the internal and external markers of the liver and hepatic lesions. Thereafter, we proceed to the segmentation of the liver and hepatic lesions by the watershed transform controlled by markers. The developed algorithm is validated using several images. The obtained results show the good performance of our proposed algorithm.
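A minimal hedged Python sketch of the marker-controlled watershed idea follows; it uses scikit-image on a synthetic image, and the median filtering, threshold values and marker rules are illustrative stand-ins for the morphological pipeline described above.

```python
# Hedged sketch (not the authors' pipeline): marker-controlled watershed segmentation
# with simple morphological pre-processing, shown on a synthetic blob image.
import numpy as np
from skimage import filters, morphology, segmentation

# synthetic "organ on a darker background" image standing in for a CT slice
rng = np.random.default_rng(0)
img = np.zeros((256, 256), dtype=float)
yy, xx = np.mgrid[:256, :256]
img[(yy - 128) ** 2 + (xx - 128) ** 2 < 80 ** 2] = 0.8        # bright organ-like region
img += 0.1 * rng.normal(size=img.shape)

# 1) smooth the image, then take the gradient as the watershed landscape
smooth = filters.median(img, morphology.disk(3))
gradient = filters.sobel(smooth)

# 2) internal/external markers from conservative intensity thresholds
markers = np.zeros_like(img, dtype=int)
markers[smooth < 0.2] = 1          # external marker: clearly background
markers[smooth > 0.6] = 2          # internal marker: clearly inside the region

# 3) watershed of the gradient, controlled by the markers
labels = segmentation.watershed(gradient, markers)
region = morphology.remove_small_holes(labels == 2)
print("segmented pixels:", int(region.sum()), "of", region.size)
```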

Keywords: anisotropic diffusion filter, CT images, hepatic lesion segmentation, Liver segmentation, morphological filter, the watershed algorithm

Procedia PDF Downloads 445
380 Yawning Computing Using Bayesian Networks

Authors: Serge Tshibangu, Turgay Celik, Zenzo Ncube

Abstract:

Road crashes kill over a million people every year and leave millions more injured or permanently disabled. Various annual reports reveal that the percentage of fatal crashes due to fatigue or the driver falling asleep comes directly after the percentage of fatal crashes due to intoxicated drivers, and is higher than the combined percentage of fatal crashes due to illegal/unsafe U-turns and illegal/unsafe reversing. Although a relatively small percentage of police reports on road accidents highlight drowsiness and fatigue, the importance of these factors is greater than we might think, hidden by the undercounting of such events. Some scenarios show that these factors are significant in accidents with killed and injured people. Hence the need for an automatic driver fatigue detection system in order to considerably reduce the number of accidents owing to fatigue. This research approaches the driver fatigue detection problem in an innovative way by combining cues collected from both temporal analysis of drivers' faces and the environment. Monotony in the driving environment is inter-related with visual symptoms of fatigue on drivers' faces to achieve fatigue detection. Optical and infrared (IR) sensors are used to analyse the monotony of the driving environment and to detect the visual symptoms of fatigue on the human face. Internal cues from drivers' faces and external cues from the environment are combined using machine learning algorithms to automatically detect fatigue.

Keywords: intelligent transportation systems, bayesian networks, yawning computing, machine learning algorithms

Procedia PDF Downloads 453
379 HLB Disease Detection in Omani Lime Trees using Hyperspectral Imaging Based Techniques

Authors: Jacintha Menezes, Ramalingam Dharmalingam, Palaiahnakote Shivakumara

Abstract:

In recent years, Omani acid lime cultivation and production has been affected by citrus greening, or Huanglongbing (HLB), disease. HLB is one of the most destructive diseases of citrus, with no remedies or countermeasures to stop it. The currently used polymerase chain reaction (PCR) and enzyme-linked immunosorbent assay (ELISA) HLB detection tests require lengthy and labor-intensive laboratory procedures. Furthermore, the equipment and staff needed to carry out these procedures are frequently specialized, making them a less than optimal solution for detection of the disease. The current research uses hyperspectral imaging technology for the automatic detection of citrus trees with HLB disease. Omani citrus tree leaf images were captured with a portable Specim IQ hyperspectral camera. The research considered healthy, nutrient-deficient, and HLB-infected leaf samples, classified on the basis of the polymerase chain reaction (PCR) test. The high-resolution image samples were sliced into sub-cubes. The sub-cubes were further processed to obtain RGB images with spatial features. Similarly, RGB spectral slices were obtained through a moving window over the wavelength dimension. The resized spectral-spatial RGB images were given to a convolutional neural network for deep feature extraction. The current research was able to classify a given sample into the appropriate class with 92.86% accuracy, indicating the effectiveness of the proposed techniques. The significant bands showing a difference between the three types of leaves were found to be 560 nm, 678 nm, 726 nm and 750 nm.

Keywords: huanglongbing (HLB), hyperspectral imaging (HSI), Omani citrus, CNN

Procedia PDF Downloads 73
378 Thresholding Approach for Automatic Detection of Pseudomonas aeruginosa Biofilms from Fluorescence in situ Hybridization Images

Authors: Zonglin Yang, Tatsuya Akiyama, Kerry S. Williamson, Michael J. Franklin, Thiruvarangan Ramaraj

Abstract:

Pseudomonas aeruginosa is an opportunistic pathogen that forms surface-associated microbial communities (biofilms) on artificial implant devices and on human tissue. Biofilm infections are difficult to treat with antibiotics, in part, because the bacteria in biofilms are physiologically heterogeneous. One measure of biological heterogeneity in a population of cells is to quantify the cellular concentrations of ribosomes, which can be probed with fluorescently labeled nucleic acids. The fluorescent signal intensity following fluorescence in situ hybridization (FISH) analysis correlates with the cellular level of ribosomes. The goals here are to provide computationally and statistically robust approaches to automatically quantify cellular heterogeneity in biofilms from a large library of epifluorescent microscopy FISH images. In this work, initial steps toward these goals were taken by developing an automated biofilm detection approach for use with FISH images. The approach allows rapid identification of biofilm regions from FISH images that are counterstained with fluorescent dyes. This methodology provides advances over other computational methods, allowing subtraction of spurious signals and non-biological fluorescent substrata. The method is a robust and user-friendly approach which will enable users to semi-automatically detect biofilm boundaries and extract intensity values from fluorescent images for quantitative analysis of biofilm heterogeneity.
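The paper does not publish code, so the following Python sketch only illustrates the general idea of a threshold-based biofilm mask followed by per-region intensity extraction; the Otsu threshold, the size filters and the synthetic two-channel image are all assumptions.

```python
# Hedged sketch: Otsu thresholding of a counterstain channel to delineate biofilm,
# then per-region FISH intensity statistics. Synthetic two-channel data.
import numpy as np
from skimage import filters, morphology, measure

rng = np.random.default_rng(0)
counterstain = rng.normal(0.1, 0.02, (512, 512))
fish = rng.normal(0.05, 0.01, (512, 512))
counterstain[100:300, 150:400] += 0.5                        # synthetic biofilm region
fish[100:300, 150:400] += rng.uniform(0, 0.6, (200, 250))    # heterogeneous ribosome signal

# 1) biofilm mask from the counterstain channel
mask = counterstain > filters.threshold_otsu(counterstain)
mask = morphology.remove_small_objects(mask, min_size=200)   # drop spurious specks
mask = morphology.remove_small_holes(mask, area_threshold=200)

# 2) per-region intensity statistics from the FISH channel
labels = measure.label(mask)
for region in measure.regionprops(labels, intensity_image=fish):
    print(f"region {region.label}: area={region.area}, "
          f"mean FISH intensity={region.mean_intensity:.3f}")
```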

Keywords: image informatics, Pseudomonas aeruginosa, biofilm, FISH, computer vision, data visualization

Procedia PDF Downloads 128
377 Voltage and Frequency Regulation Using the Third-Party Mid-Size Battery

Authors: Roghieh A. Biroon, Zoleikha Abdollahi

Abstract:

The recent growth of renewables, e.g., solar panels, batteries, and electric vehicles (EVs), in the residential and small commercial sectors has potential impacts on the stability and operation of power grids. Considering that the residential and commercial sectors account for approximately 50 percent of electricity demand, the significance of these impacts, and the necessity of addressing them, is all the greater. Utilities and power system operators should manage the integration of renewable electricity sources with power systems in such a way as to extract the greatest possible advantage for the power systems. The most common effect of a high penetration level of renewables is reverse power flow in the distribution feeders when customers generate more power than they need. The reverse power flow causes voltage rise and thermal issues in the power grids. To overcome the voltage rise issues in the distribution system, several techniques have been proposed, including reducing transformer short-circuit resistance and feeder impedance, installing autotransformers/voltage regulators along the line, absorbing reactive power by distributed generators (DGs), and limiting PV and battery sizes. In this study, we consider medium-scale battery energy storage to manage power flow and address the aforementioned issues of voltage deviation and increased power loss. We propose an optimization algorithm to find the optimum size and location for the battery. The optimization determines the battery location and size such that the battery maintains the feeder voltage deviation and power loss at a certain desired level. Moreover, the proposed optimization algorithm controls the charging/discharging profile of the battery so as to absorb the reverse power flow from residential and commercial customers in the feeder during the peak time and sell the power back to the system during the off-peak time. The proposed battery mitigates the voltage problem in the distribution system, while it can also play a frequency regulation role in islanded microgrids. This battery can be regulated and controlled by the utilities, or by a third-party ancillary service provider on behalf of the utilities, to reduce power system losses and regulate the distribution feeder voltage and frequency at standard levels.

Keywords: ancillary services, battery, distribution system and optimization

Procedia PDF Downloads 129
376 Continuous FAQ Updating for Service Incident Ticket Resolution

Authors: Kohtaroh Miyamoto

Abstract:

As enterprise computing becomes more and more complex, the costs and technical challenges of IT system maintenance and support are increasing rapidly. One popular approach to managing IT system maintenance is to prepare and use an FAQ (Frequently Asked Questions) system to manage and reuse systems knowledge. Such an FAQ system can help reduce the resolution time for each service incident ticket. However, there is a major problem where over time the knowledge in such FAQs tends to become outdated. Much of the knowledge captured in the FAQ requires periodic updates in response to new insights or new trends in the problems addressed in order to maintain its usefulness for problem resolution. These updates require a systematic approach to define the exact portion of the FAQ and its content. Therefore, we are working on a novel method to hierarchically structure the FAQ and automate the updates of its structure and content. We use structured information and the unstructured text information with the timelines of the information in the service incident tickets. We cluster the tickets by structured category information, by keywords, and by keyword modifiers for the unstructured text information. We also calculate an urgency score based on trends, resolution times, and priorities. We carefully studied the tickets of one of our projects over a 2.5-year time period. After the first 6 months, we started to create FAQs and confirmed they improved the resolution times. We continued observing over the next 2 years to assess the ongoing effectiveness of our method for the automatic FAQ updates. We improved the ratio of tickets covered by the FAQ from 32.3% to 68.9% during this time. Also, the average time reduction of ticket resolution was between 31.6% and 43.9%. Subjective analysis showed more than 75% reported that the FAQ system was useful in reducing ticket resolution times.
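To make the clustering-plus-urgency idea concrete, here is a hedged Python sketch on a toy ticket list; the TF-IDF/k-means choice, the field names and the urgency weights are illustrative assumptions, not the system described above.

```python
# Hedged sketch (not the described system): cluster ticket texts with TF-IDF + k-means
# and rank clusters by a toy urgency score combining trend, resolution time and priority.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

tickets = [
    {"text": "login page times out after password reset", "hours": 6.0, "priority": 1, "recent": True},
    {"text": "password reset email never arrives",        "hours": 5.0, "priority": 2, "recent": True},
    {"text": "report export to PDF fails with error 500", "hours": 12.0, "priority": 2, "recent": False},
    {"text": "PDF export truncates long tables",          "hours": 10.0, "priority": 3, "recent": False},
    {"text": "login blocked after too many attempts",     "hours": 4.0, "priority": 1, "recent": True},
]

X = TfidfVectorizer(stop_words="english").fit_transform([t["text"] for t in tickets])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for k in sorted(set(labels)):
    members = [t for t, lab in zip(tickets, labels) if lab == k]
    trend = sum(t["recent"] for t in members) / len(members)    # share of recent tickets
    mean_hours = np.mean([t["hours"] for t in members])
    mean_prio = np.mean([1 / t["priority"] for t in members])   # priority 1 = most urgent
    urgency = 0.5 * trend + 0.3 * mean_hours / 12 + 0.2 * mean_prio   # illustrative weights
    print(f"cluster {k}: {len(members)} tickets, urgency={urgency:.2f} -> candidate FAQ update")
```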

Keywords: FAQ system, resolution time, service incident tickets, IT system maintenance

Procedia PDF Downloads 335
375 The Hallmarks of War Propaganda: The Case of Russia-Ukraine Conflict

Authors: Veronika Solopova, Oana-Iuliana Popescu, Tim Landgraf, Christoph Benzmüller

Abstract:

Beginning in 2014, slowly building geopolitical tensions in Eastern Europe led to a full-blown conflict between the Russian Federation and Ukraine that generated an unprecedented amount of news articles and social media data, reflecting the opposing ideologies and narratives that form the background and the essence of the ongoing war. These polarized informational campaigns have led to countless mutual accusations of misinformation and fake news, shaping an atmosphere of confusion and mistrust for many readers all over the world. In this study, we analyzed scraped news articles from Ukrainian, Russian, Romanian and English-speaking news outlets on the eve of the 24th of February 2022, compared with day five of the conflict (28th of February), to see how the media influenced and mirrored the changes in public opinion. We also contrast the sources opposing and supporting the stance of the Russian government in the Ukrainian, Russian and Romanian media spaces. In a data-driven way, we describe how the narratives are spread throughout Eastern and Central Europe. We present predictive linguistic features surrounding war propaganda. Our results indicate that there are strong similarities in terms of rhetorical strategies in the pro-Kremlin media in both Ukraine and Russia, which, while being relatively neutral in surface structure, use aggressive vocabulary. This suggests that automatic propaganda identification systems have to be tailored for each new case, as they have to rely on situationally specific words. Both Ukrainian and Russian outlets lean towards strongly opinionated news, pointing towards the use of war propaganda in order to achieve strategic goals.

Keywords: linguistics, news, propaganda, Russia, Ukraine

Procedia PDF Downloads 115
374 An Ancient Rule for Constructing Dodecagonal Quasi-Periodic Formations

Authors: Rima A. Ajlouni

Abstract:

The discovery of quasi-periodic structures in material science is revealing an exciting new class of symmetries, which has never been explored before. Due to their unique structural and visual properties, these symmetries are drawing interest from many scientific and design disciplines. Especially, in art and architecture, these symmetries can provide a rich source of geometry for exploring new patterns, forms, systems, and structures. However, the structural systems of these complicated symmetries are still posing a perplexing challenge. While much of their local order has been explored, the global governing system is still unresolved. Understanding their unique global long-range order is essential to their generation and application. The recent discovery of dodecagonal quasi-periodic patterns in historical Islamic architecture is generating a renewed interest into understanding the mathematical principles of traditional Islamic geometry. Astonishingly, many centuries before its description in the modern science, ancient artists, by using the most primitive tools (a compass and a straight edge), were able to construct patterns with quasi-periodic formations. These ancient patterns can be found all over the ancient Islamic world, many of which exhibit formations with 5, 8, 10 and 12 quasi-periodic symmetries. Based on the examination of these historical patterns and derived from the generating principles of Islamic geometry, a global multi-level structural model is presented that is able to describe the global long-range order of dodecagonal quasi-periodic formations in Islamic Architecture. Furthermore, this method is used to construct new quasi-periodic tiling systems as well as generating their deflation and inflation rules. This method can be used as a general guiding principle for constructing infinite patches of dodecagon-based quasi-periodic formations, without the need for local strategies (tiling, matching, grid, substitution, etc.) or complicated mathematics; providing an easy tool for scientists, mathematicians, teachers, designers and artists, to generate and study a wide range of dodecagonal quasi-periodic formations.

Keywords: dodecagonal, Islamic architecture, long-range order, quasi-periodic

Procedia PDF Downloads 401
373 Artificial Intelligence-Generated Previews of Hyaluronic Acid-Based Treatments

Authors: Ciro Cursio, Giulia Cursio, Pio Luigi Cursio, Luigi Cursio

Abstract:

Communication between practitioner and patient is of the utmost importance in aesthetic medicine: as of today, images of previous treatments are the most common tool used by doctors to describe and anticipate future results for their patients. However, using photos of other people often reduces the engagement of the prospective patient and is further limited by the number and quality of pictures available to the practitioner. Pre-existing work addresses this issue in two ways: 3D scanning of the area with manual editing of the 3D model by the doctor, or automatic prediction of the treatment by warping the image with hand-written parameters. The first approach requires manual intervention by the doctor, while the second generates results that are not always realistic. Thus, in one case there is significant manual work required of the doctor, and in the other the prediction looks artificial. We propose an AI-based algorithm that autonomously generates a realistic prediction of treatment results. For the purpose of this study, we focus on hyaluronic acid treatments in the facial area. Our approach takes into account the individual characteristics of each face, and the prediction system furthermore allows the patient to decide which area of the face she wants to modify. We show that the predictions generated by our system are realistic: first, the quality of the generated images is on par with that of real images; second, the prediction matches the actual results obtained after the treatment is completed. In conclusion, the proposed approach provides a valid tool for doctors to show patients what they will look like before deciding on the treatment.

Keywords: prediction, hyaluronic acid, treatment, artificial intelligence

Procedia PDF Downloads 111
372 Multi-Walled Carbon Nanotubes Doped Poly (3,4 Ethylenedioxythiophene) Composites Based Electrochemical Nano-Biosensor for Organophosphate Detection

Authors: Navpreet Kaur, Himkusha Thakur, Nirmal Prabhakar

Abstract:

One of the most publicized and controversial issues in crop production is the use of agrichemicals, also known as pesticides. Many reports indicate that organophosphate (OP) insecticides, among the broad range of pesticides, are mainly involved in acute and chronic poisoning cases. Therefore, detection of OPs is very necessary for health protection and food and environmental safety. In our study, a nanocomposite of poly(3,4-ethylenedioxythiophene) (PEDOT) and multi-walled carbon nanotubes (MWCNTs) has been deposited electrochemically onto the surface of fluorine-doped tin oxide (FTO) sheets for the analysis of the OP malathion. The MWCNTs were -COOH functionalized for covalent binding with the amino groups of the AChE enzyme. The PEDOT-MWCNT films exhibited excellent conductivity, enabled fast transfer kinetics and provided a favourable, biocompatible microenvironment for AChE, allowing significant detection of malathion. The prepared PEDOT-MWCNT/FTO and AChE/PEDOT-MWCNT/FTO nano-biosensors were characterized by Fourier transform infrared spectrometry (FTIR), field emission scanning electron microscopy (FE-SEM) and electrochemical studies. The electrochemical studies were done using cyclic voltammetry (CV) or differential pulse voltammetry (DPV), and electrochemical impedance spectroscopy (EIS). Optimization studies were carried out for different parameters, including pH (7.5), AChE concentration (50 mU), substrate concentration (0.3 mM) and inhibition time (10 min). The detection limit for malathion was calculated to be 1 fM within the linear range of 1 fM to 1 µM. The activity of the inhibited AChE enzyme was restored to 98% of its original value by treatment with 2-pyridine aldoxime methiodide (2-PAM) (5 mM) for 11 min. The oxime 2-PAM is able to remove malathion from the active site of AChE by means of a trans-esterification reaction. The storage stability and reusability of the prepared nano-biosensor are observed to be 30 days and seven uses, respectively. The application of the developed nano-biosensor has also been evaluated for a spiked lettuce sample. Recoveries of malathion from the spiked lettuce sample ranged between 96-98%. The low detection limit obtained makes the developed nano-biosensor reliable, sensitive and low-cost.

Keywords: PEDOT-MWCNT, malathion, organophosphates, acetylcholinesterase, nano-biosensor, oxime (2-PAM)

Procedia PDF Downloads 429
371 Designing Self-Healing Lubricant-Impregnated Surfaces for Corrosion Protection

Authors: Sami Khan, Kripa Varanasi

Abstract:

Corrosion is a widespread problem in several industries, and developing surfaces that resist corrosion has been an area of interest for the last several decades. Superhydrophobic surfaces, which combine hydrophobic coatings with surface texture, have been shown to improve corrosion resistance by creating voids filled with air that minimize the contact area between the corrosive liquid and the solid surface. However, these air voids can incorporate corrosive liquids over time, and any mechanical faults such as cracks can compromise the coating and provide pathways for corrosion. As such, there is a need for self-healing corrosion-resistant surfaces. In this work, the anti-corrosion properties of textured surfaces impregnated with a lubricant have been systematically studied. Since corrosion resistance depends on the area and physico-chemical properties of the material exposed to the corrosive medium, lubricant-impregnated surfaces (LIS) have been designed based on the surface tension, viscosity and chemistry of the lubricant and its spreading coefficient on the solid. All corrosion experiments were performed in a standard three-electrode cell using iron, which readily corrodes in a 3.5% sodium chloride solution. In order to obtain textured iron surfaces, thin films (~500 nm) of iron were sputter-coated onto silicon wafers textured using photolithography and subsequently impregnated with lubricants. The results show that the corrosion rate on LIS is greatly reduced, offering an over hundred-fold improvement in corrosion protection. Furthermore, the spreading characteristics of the lubricant are found to be significant in ensuring corrosion protection: a spreading lubricant (e.g., Krytox 1506) that covers both the inside of the texture and the texture tops provides a two-fold improvement in corrosion protection compared to a non-spreading lubricant (e.g., silicone oil) that does not cover the texture tops. To enhance the corrosion protection of surfaces coated with a non-spreading lubricant, pyramid-shaped textures have been developed that minimize exposure to the corrosive solution, and a consequent twenty-fold increase in corrosion protection is observed. Increasing the viscosity of the lubricant scales with greater corrosion protection. Finally, an equivalent cell-circuit model is developed for the lubricant-impregnated systems using electrochemical impedance spectroscopy. Lubricant-impregnated surfaces find attractive applications in harsh corrosive environments, especially where the ability to self-heal is advantageous.

Keywords: lubricant-impregnated surfaces, self-healing surfaces, wettability, nano-engineered surfaces

Procedia PDF Downloads 131
370 On the Internal Structure of the ‘Enigmatic Electrons’

Authors: Natarajan Tirupattur Srinivasan

Abstract:

Quantum mechanics (QM) and special relativity (SR) have revolutionized the very thinking of physicists, and the spectacular successes achieved over a century thanks to these two theories are mind-boggling. However, there is still a strong disquiet among some physicists. While the mathematical structure of the two theories has been established beyond any doubt, their physical interpretations are still contested by many. Even after a hundred years of their existence, we cannot answer a very simple question: “What is an electron?” Physicists are struggling even now to come to grips with the different interpretations of quantum mechanics and all their ramifications. It is, however, strange that Einstein’s special relativity enjoys a far greater degree of “acceptance”, even though both theories have their own stock of weird results, such as time dilation, mass increase with velocity, the collapse of the wave function, quantum jumps, tunnelling, etc. In this paper, it will be shown that by postulating an intrinsic internal motion for these enigmatic electrons, one can build a fairly consistent picture of reality, revealing a very simple picture of nature. This is also evidenced by Schrodinger’s ‘Zitterbewegung’ motion, about which so much has been written. The internal motion leads to a helical trajectory when electrons move in a laboratory frame. It will be shown that the helix is a three-dimensional wave having all the characteristics of the familiar 2D wave. Moreover, the helix, being a geodesic on an imaginary cylinder, supports ‘quantization’, and its representation is just the complex exponential, matching the wave function of quantum mechanics. By postulating that the instantaneous velocity of the electron is always ‘c’, the velocity of light, the whole of relativity comes alive, and ‘time dilation’, ‘mass increase with velocity’, etc., can be interpreted in a very simple way. Thus, this model unifies QM and SR without the need for Einstein’s counterintuitive postulate of the constancy of the velocity of light for all inertial observers. After all, if the motion of an inertial frame cannot affect the velocity of light, the converse, that this constant also cannot affect events in the frame, should hold; yet the whole of relativity is about how ‘c’ affects time, length, mass, etc., in different frames.
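
The following lines are a hedged reconstruction of the kind of argument the abstract gestures at (they are not taken from the paper itself): if the electron follows a helix whose instantaneous speed is always c, the internal circulation rate falls with the lab drift velocity v, reproducing the time-dilation factor.

```latex
% Hedged reconstruction, not the paper's own derivation.
\begin{align*}
\mathbf{r}(t) &= \bigl(R\cos\omega t,\; R\sin\omega t,\; vt\bigr)
  && \text{(helix: internal circulation plus lab drift } v\text{)}\\
|\dot{\mathbf{r}}|^2 &= R^2\omega^2 + v^2 = c^2
  && \text{(postulate: instantaneous speed is always } c\text{)}\\
\Rightarrow\quad R\,\omega &= c\sqrt{1 - v^2/c^2}
  && \text{(internal circulation slows as } v \text{ grows)}\\
\frac{\omega(v)}{\omega(0)} &= \sqrt{1 - v^2/c^2} = \frac{1}{\gamma}
  && \text{(internal ``clock'' rate: time dilation)}
\end{align*}
```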

Keywords: quantum reconstruction, special theory of relativity, quantum mechanics, zitterbewegung, complex wave function, helix, geodesic, Schrodinger’s wave equations

Procedia PDF Downloads 69
369 Code Embedding for Software Vulnerability Discovery Based on Semantic Information

Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson

Abstract:

Deep learning methods are seeing increasing application to the long-standing security research goal of automatic vulnerability detection in source code. Attention, however, must still be paid to the task of producing vector representations of source code (code embeddings) as input for these deep learning models. Graphical representations of code, most prominently Abstract Syntax Trees and Code Property Graphs, have recently received some use for this task; however, for very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning the input to only vulnerability-relevant information, yet little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph, at the expense of the information contained in the graph's nodes. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification model. It uses information from the nodes as well as the structure of the code graph to select the features that are most indicative of the presence or absence of vulnerabilities. The model is implemented and experimentally tested on the SARD Juliet vulnerability test suite to determine its efficacy. It improves on existing code-graph feature selection methods, as demonstrated by its greater ability to discover vulnerabilities.
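
To illustrate the general idea of semantics-aware pruning of a code graph (a toy sketch only, not the SCEVD method: the token list, node attributes, and graph are assumed for the example), one can keep nodes whose code text matches vulnerability-relevant tokens, plus a small neighbourhood, before computing embeddings.

```python
# Illustrative sketch only (not the SCEVD implementation): prune a code
# property graph to nodes whose code text looks "vulnerability relevant",
# plus everything within a small hop distance of them.
import networkx as nx

RISKY_TOKENS = {"strcpy", "memcpy", "malloc", "free", "sprintf", "gets"}  # assumed list

def prune_code_graph(g: nx.DiGraph, hops: int = 1) -> nx.DiGraph:
    """Keep nodes containing risky tokens and everything within `hops` of them."""
    seeds = {n for n, data in g.nodes(data=True)
             if any(tok in data.get("code", "") for tok in RISKY_TOKENS)}
    keep, frontier = set(seeds), set(seeds)
    undirected = g.to_undirected(as_view=True)
    for _ in range(hops):
        frontier = {nbr for n in frontier for nbr in undirected.neighbors(n)} - keep
        keep |= frontier
    return g.subgraph(keep).copy()

# Toy example: only the strcpy call and its immediate neighbours survive;
# the final printf node is pruned away.
g = nx.DiGraph()
g.add_node(0, code="int main(void)")
g.add_node(1, code="strcpy(buf, argv[1]);")
g.add_node(2, code="return 0;")
g.add_node(3, code='printf("done");')
g.add_edges_from([(0, 1), (1, 2), (2, 3)])
print(sorted(prune_code_graph(g, hops=1).nodes()))  # -> [0, 1, 2]
```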

Keywords: code representation, deep learning, source code semantics, vulnerability discovery

Procedia PDF Downloads 153
368 Executive Functions Directly Associated with Severity of Perceived Pain above and beyond Depression in the Context of Medical Rehabilitation

Authors: O. Elkana, O. Heyman, S. Hamdan, M. Franko, J. Vatine

Abstract:

Objective: To investigate whether a direct link exists between perceived pain (PP) and executive functions (EF), above and beyond the influence of depression symptoms, in the context of medical rehabilitation. Design: Cross-sectional study. Setting: Rehabilitation hospital. Participants: 125 medical records of hospitalized patients were screened against our inclusion criteria. Sixty patients met the criteria and were asked to participate; 19 declined for personal reasons. The 41 neurologically intact patients who participated (mean age 46, SD 14.96) were in the sub-acute stage of recovery, spoke fluent Hebrew, had an intact upper limb (to neutralize influences on psychomotor performance), and had no organic brain damage. Main Outcome Measures: EF were assessed using the Wisconsin Card Sorting Test (WCST) and the Stop-Signal Test (SST). PP was measured using three well-known pain questionnaires: the Pain Disability Index (PDI), the Short-Form McGill Pain Questionnaire (SF-MPQ), and the Pain Catastrophizing Scale (PCS). A perceived pain index (PPI) was calculated as the mean composite score of the three pain questionnaires. Depression symptoms were assessed using the Patient Health Questionnaire (PHQ-9). Results: Irrespective of the presence of depression symptoms, PP was directly correlated with response inhibition (SST partial correlation: r=0.5; p=0.001) and mental flexibility (WCST partial correlation: r=-0.37; p=0.021), suggesting decreased EF performance as PP severity increases. High correlations were found between the three pain measures: SF-MPQ with PDI (r=0.62, p<0.001), SF-MPQ with PCS (r=0.58, p<0.001), and PDI with PCS (r=0.38, p=0.016); each questionnaire alone was also significantly associated with EF, so no single questionnaire ‘pulled’ the results obtained by the general index (PPI). Conclusion: Examining the direct association between PP and EF, beyond the contribution of depression symptoms, provides further clinical evidence that EF and PP share underlying mediating neuronal mechanisms. Clinically, the results underscore the importance of assessing patients' EF abilities as well as PP severity during rehabilitation.
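
For readers unfamiliar with the composite-index and partial-correlation approach used here, the sketch below shows the general recipe on synthetic data (it is not the study's dataset or analysis pipeline): the PPI is taken as the mean of z-scored questionnaire totals, and the PP-EF association is computed after regressing PHQ-9 scores out of both variables.

```python
# Illustrative sketch on synthetic data (not the study's data): composite
# perceived-pain index and partial correlation controlling for depression.
import numpy as np

rng = np.random.default_rng(0)
n = 41
phq9 = rng.normal(size=n)                                # depression symptoms
pdi, mpq, pcs = (rng.normal(size=n) for _ in range(3))   # pain questionnaires
ef = rng.normal(size=n)                                  # e.g. an SST score

def zscore(x):
    return (x - x.mean()) / x.std(ddof=1)

ppi = np.mean([zscore(pdi), zscore(mpq), zscore(pcs)], axis=0)

def residualize(y, covariate):
    """Residuals of y after regressing out a single covariate (plus intercept)."""
    X = np.column_stack([np.ones_like(covariate), covariate])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

r_partial = np.corrcoef(residualize(ppi, phq9), residualize(ef, phq9))[0, 1]
print(f"Partial correlation PPI-EF controlling for PHQ-9: {r_partial:.2f}")
```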

Keywords: depression, executive functions, mental-flexibility, neuropsychology, pain perception, perceived pain, response inhibition

Procedia PDF Downloads 244
367 The Processing of Implicit Stereotypes in Everyday Scene Perception

Authors: Magali Mari, Fabrice Clement

Abstract:

The present study investigated the influence of implicit stereotypes on adults’ visual information processing, using an eye-tracking device. Implicit stereotyping is an automatic process: it happens relatively quickly and outside of awareness. In the presence of a member of a social group, a set of expectations about the characteristics of that group appears automatically in people’s minds. The study aimed to shed light on the cognitive processes involved in stereotyping and to further investigate the use of eye movements to measure implicit stereotypes. With an eye-tracking device, the eye movements of participants were analyzed while they viewed everyday scenes depicting women and men in gender-congruent or gender-incongruent role activities (e.g., a woman ironing or a man ironing). The settings of these scenes had to be analyzed in order to infer the character’s role. Participants also completed an implicit association test combining the concept of gender with occupation attributes (home/work); reaction times were measured to assess their implicit gender stereotypes. The results showed that implicit stereotypes do influence visual attention: within a fraction of a second, the number of returns differed significantly between stereotypical and counter-stereotypical scenes, indicating that participants interpreted the scene as a whole before identifying the character and anticipated whether, in such a situation, the character was supposed to be a woman or a man. The study also showed that eye movements can be used as a fast and reliable supplement to traditional implicit association tests for measuring implicit stereotypes. Altogether, this research provides further understanding of implicit stereotype processing as well as a natural method for studying implicit stereotypes.
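
As background on the reaction-time measure, the sketch below shows a simplified IAT effect size: the difference in mean reaction time between incongruent and congruent blocks divided by the pooled standard deviation. The reaction times are hypothetical and this is not the study's scoring code.

```python
# Simplified sketch (hypothetical RTs in ms, not the study's analysis):
# a basic IAT D-score from congruent vs. incongruent block reaction times.
import numpy as np

congruent_rt   = np.array([612, 580, 655, 598, 640, 571])  # e.g. woman + home
incongruent_rt = np.array([701, 745, 688, 720, 760, 699])  # e.g. woman + work

def iat_d_score(congruent, incongruent):
    pooled_sd = np.concatenate([congruent, incongruent]).std(ddof=1)
    return (incongruent.mean() - congruent.mean()) / pooled_sd

print(f"IAT D-score: {iat_d_score(congruent_rt, incongruent_rt):.2f}")
# A larger positive D suggests a stronger implicit association between
# gender and the stereotypical occupation pairing.
```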

Keywords: eye-tracking, implicit stereotypes, social cognition, visual attention

Procedia PDF Downloads 155