Search results for: computer applications
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8253

1383 A POX Controller Module to Collect Web Traffic Statistics in SDN Environment

Authors: Wisam H. Muragaa, Kamaruzzaman Seman, Mohd Fadzli Marhusin

Abstract:

Software Defined Networking (SDN) is a new networking paradigm. It is designed to facilitate the way networks are managed, measured, debugged, and controlled dynamically, and to make them suitable for modern applications. Measurement methods can generally be divided into two categories: active and passive. Active measurement injects test packets into the network in order to monitor their behaviour (the ping tool is an example), while passive measurement monitors existing traffic for the purpose of deriving measurement values. Both methods are useful for collecting traffic statistics and monitoring network traffic. Although there has been work on measuring traffic statistics in SDN environments, it was only meant to measure packet and byte rates for non-web traffic. In this study, a feasible method is designed to measure the number of packets and bytes over a given time and to facilitate obtaining statistics for both web traffic and non-web traffic. Web traffic refers to HTTP requests at the application layer, while non-web traffic refers to ICMP and TCP requests; this work is therefore more comprehensive than previous works. With a module developed on the POX OpenFlow controller, information is collected from each active flow in the OpenFlow switch and presented on the Command Line Interface (CLI) and in the Wireshark interface. The statistics displayed on the CLI and in Wireshark include the type of protocol, the number of bytes, and the number of packets, among others. The module also shows, in the same statistics list, the number of flows added to the switch whenever traffic is generated from and to hosts. To carry out this work effectively, our Python module sends a statistics request message to the switch every five seconds, requesting its current port and flow statistics, and the switch replies with the required information in a statistics reply message. The POX controller is thus notified of and updated with any changes that occur in the network within a very short time. The aim of this study is therefore to prepare a list of the important statistics elements collected from the whole network for use in further research, particularly research dealing with the detection of network attacks that cause a sudden rise in the number of packets and bytes, such as Distributed Denial of Service (DDoS).
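
A minimal sketch of such a polling module is shown below, using the standard POX OpenFlow 1.0 API; the module layout and the web/non-web split on TCP port 80 are illustrative, not the authors' actual code.

```python
# Sketch: poll flow and port statistics from every switch every 5 seconds.
from pox.core import core
from pox.lib.recoco import Timer
import pox.openflow.libopenflow_01 as of

log = core.getLogger()

def _poll_stats():
    # Send flow-stats and port-stats requests to every connected switch
    for con in core.openflow.connections:
        con.send(of.ofp_stats_request(body=of.ofp_flow_stats_request()))
        con.send(of.ofp_stats_request(body=of.ofp_port_stats_request()))

def _handle_flow_stats(event):
    # Separate web traffic (TCP, port 80) from other flows and log counters
    for f in event.stats:
        is_web = (f.match.nw_proto == 6 and
                  80 in (f.match.tp_src, f.match.tp_dst))
        log.info("%s flow: %s packets, %s bytes",
                 "web" if is_web else "non-web",
                 f.packet_count, f.byte_count)

def launch():
    core.openflow.addListenerByName("FlowStatsReceived", _handle_flow_stats)
    Timer(5, _poll_stats, recurring=True)  # statistics request every 5 s
```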

Keywords: mininet, OpenFlow, POX controller, SDN

Procedia PDF Downloads 228
1382 Multiscale Modeling of Damage in Textile Composites

Authors: Jaan-Willem Simon, Bertram Stier, Brett Bednarcyk, Evan Pineda, Stefanie Reese

Abstract:

Textile composites, in which the reinforcing fibers are woven or braided, have become very popular in numerous applications in the aerospace, automotive, and maritime industries. These textile composites are advantageous due to their ease of manufacture, damage tolerance, and relatively low cost. However, physics-based modeling of the mechanical behavior of textile composites is challenging. Compared to their unidirectional counterparts, textile composites introduce additional geometric complexities, which cause significant local stress and strain concentrations. Since these internal concentrations are primary drivers of nonlinearity, damage, and failure within textile composites, they must be taken into account in order for the models to be predictive. The macro-scale approach to modeling textile-reinforced composites treats the whole composite as an effective, homogenized material. This approach is very computationally efficient, but it cannot be considered predictive beyond the elastic regime because the complex microstructural geometry is not considered; further, it can, at best, offer a phenomenological treatment of nonlinear deformation and failure. In contrast, the mesoscale approach explicitly considers the internal geometry of the reinforcing tows, so their interaction and the effects of their curved paths can be modeled. The tows are treated as effective (homogenized) materials, requiring the use of anisotropic material models to capture their behavior. Finally, the micro-scale approach goes one level lower, modeling the individual filaments that constitute the tows. This paper will compare meso- and micro-scale approaches to modeling the deformation, damage, and failure of textile-reinforced polymer matrix composites. For the mesoscale approach, the woven composite architecture will be modeled using the finite element method, and an anisotropic damage model for the tows will be employed to capture the local nonlinear behavior. For the micro-scale, two different models will be used: one based on the finite element method and the other on an embedded semi-analytical approach. The goal will be the comparison and evaluation of these approaches to modeling textile-reinforced composites in terms of accuracy, efficiency, and utility.
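
As a hedged, one-dimensional illustration of the continuum damage idea used for the homogenized tows, the sketch below applies a simple scalar damage law with exponential evolution; the law and the parameter values are assumptions for illustration, not the authors' anisotropic model.

```python
# Sketch: scalar continuum damage, sigma = (1 - D) * E * eps.
import numpy as np

def damaged_stress(strain, E=70e3, eps0=0.01, eps_f=0.05):
    """1D damaged stress with an exponential damage evolution law."""
    kappa = np.maximum.accumulate(np.abs(strain))  # loading history variable
    kap = np.maximum(kappa, eps0)                  # damage starts at eps0
    D = 1.0 - (eps0 / kap) * np.exp(-(kap - eps0) / eps_f)
    return (1.0 - D) * E * strain

strain = np.linspace(0.0, 0.08, 50)
print(damaged_stress(strain)[-5:])  # softening branch past the damage onset
```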

Keywords: multiscale modeling, continuum damage model, damage interaction, textile composites

Procedia PDF Downloads 344
1381 A Relative Entropy Regularization Approach for Fuzzy C-Means Clustering Problem

Authors: Ouafa Amira, Jiangshe Zhang

Abstract:

Clustering is an unsupervised machine learning technique whose aim is to extract the structure of the data, grouping similar data objects in the same cluster and dissimilar objects in different clusters. Clustering methods are widely utilized in different fields, such as image processing, computer vision, and pattern recognition. Fuzzy c-means (FCM) is one of the best-known fuzzy clustering methods. It is based on solving an optimization problem that minimizes a given cost function; this minimization aims to decrease the dissimilarity inside clusters, where dissimilarity is measured by the distances between data objects and cluster centers. The degree of belonging of a data point to a cluster is measured by a membership function whose values lie in the interval [0, 1]. In FCM clustering, the membership degree is constrained by the condition that the sum of a data object’s memberships in all clusters must equal one. This constraint can cause several problems, especially when the data are noisy. Regularization has been incorporated into the fuzzy c-means technique; it introduces additional information in order to solve an ill-posed optimization problem. In this study, we focus on regularization by a relative entropy approach, in which the optimization problem minimizes the dissimilarity inside clusters. Our objective is to find an appropriate membership degree for each data object, because appropriate membership degrees lead to an accurate clustering result. Our clustering results on synthetic data sets, Gaussian-based data sets, and real-world data sets show that the proposed model achieves good accuracy.
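
A minimal sketch of entropy-regularized fuzzy clustering is shown below, assuming the common closed-form softmax membership update; the regularization weight and update rule are illustrative and not necessarily the authors' exact relative-entropy formulation.

```python
# Sketch: fuzzy clustering with an entropy regularizer on the memberships.
import numpy as np

def entropy_fcm(X, k, gamma=1.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # squared distances of every point to every center, shape (n, k)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        # softmax memberships: each row sums to one
        u = np.exp(-d2 / gamma)
        u /= u.sum(axis=1, keepdims=True)
        # centers as membership-weighted means
        centers = (u.T @ X) / u.sum(axis=0)[:, None]
    return u, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
u, c = entropy_fcm(X, k=2)
print(np.round(c, 2))  # two centers near (0, 0) and (4, 4)
```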

Keywords: clustering, fuzzy c-means, regularization, relative entropy

Procedia PDF Downloads 257
1380 Preparation of Sorbent Materials for the Removal of Hardness and Organic Pollutants from Water and Wastewater

Authors: Thanaa Abdel Moghny, Mohamed Keshawy, Mahmoud Fathy, Abdul-Raheim M. Abdul-Raheim, Khalid I. Kabel, Ahmed F. El-Kafrawy, Mahmoud Ahmed Mousa, Ahmed E. Awadallah

Abstract:

Ecological pollution is of great concern for human health and the environment. Numerous organic and inorganic pollutants discharged into water cause carcinogenic or toxic effects in humans and other life forms. In this respect, this work aims to treat water contaminated by organic and inorganic waste using sorbents based on polystyrene. Two different series of adsorbent materials were therefore prepared: the first involved the preparation of a polymeric sorbent from the reaction of styrene acrylate ester and alkyl acrylate; the second involved the synthesis of composite ion exchange resins of waste polystyrene and amorphous carbon thin film (WPS/ACTF) by solvent evaporation using microemulsion polymerization. The produced ACTF/WPS nanocomposite was sulfonated to produce the cation exchange resin ACTF/WPSS nanocomposite. The sorbents of the first series were characterized using FTIR, 1H NMR, and gel permeation chromatography. The thermal properties of the cross-linked sorbents were investigated using thermogravimetric analysis, and the morphology was characterized by scanning electron microscopy (SEM). The removal of organic pollutants was determined through absorption tests in various organic solvents. The chemical and crystalline structure of the nanocomposites of the second series was characterized by FTIR spectroscopy, X-ray diffraction, thermal analysis, and SEM and TEM analysis, to study the morphology of the resins and of the ACTF assembled with the polystyrene chains. The composite resins ACTF/WPSS were found to be thermally stable and to show higher chemical stability than the ion exchange WPSS resins. The composite resin was evaluated for calcium hardness removal, and the results show that the ACTF/WPSS composite removes inorganic pollutants more effectively than the WPSS resin. We therefore recommend the nanocomposite resin for new potential applications in water treatment processes.

Keywords: nanocomposite, sorbent materials, waste water, waste polystyrene

Procedia PDF Downloads 422
1379 An End-to-End Piping and Instrumentation Diagram Information Recognition System

Authors: Taekyong Lee, Joon-Young Kim, Jae-Min Cha

Abstract:

A piping and instrumentation diagram (P&ID) is an essential design drawing describing the interconnection of process equipment and the instrumentation installed to control the process. P&IDs are modified and managed throughout the whole life cycle of a process plant. For ease of data transfer, P&IDs are generally handed over from a design company to an engineering company in portable document format (PDF), which is hard to modify. Engineering companies therefore have to devote a great deal of time and human resources solely to manually converting P&ID images into a computer-aided design (CAD) file format. To reduce the inefficiency of this conversion, the various symbols and texts in P&ID images should be recognized automatically. However, recognizing the information in P&ID images is not an easy task: a P&ID image usually contains hundreds of symbol and text objects, most of which are quite small compared to the size of the whole image and are densely packed together. Traditional recognition methods based on geometrical features are not capable of recognizing every element of a P&ID image. To overcome these difficulties, state-of-the-art deep learning models, RetinaNet and the connectionist text proposal network (CTPN), were used to build a system for recognizing the symbols and texts in a P&ID image. Using RetinaNet and CTPN models carefully modified and tuned for a P&ID image dataset, the developed system recognizes texts, equipment symbols, piping symbols, and instrumentation symbols from an input P&ID image and saves the recognition results in a pre-defined extensible markup language (XML) format. In a test using a commercial P&ID image, the P&ID information recognition system correctly recognized 97% of the symbols and 81.4% of the texts.
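
As a hedged illustration of the detection-and-export step, the sketch below runs an off-the-shelf torchvision RetinaNet on a stand-in image and writes the detections to a simple XML file; the class mapping, score threshold, and XML schema are assumptions, and the CTPN text branch is omitted.

```python
# Sketch: symbol detection with a pretrained RetinaNet, exported to XML.
import torch
import xml.etree.ElementTree as ET
from torchvision.models.detection import retinanet_resnet50_fpn

model = retinanet_resnet50_fpn(weights="DEFAULT").eval()  # downloads weights
image = torch.rand(3, 800, 800)  # stand-in for a rasterized P&ID image

with torch.no_grad():
    pred = model([image])[0]  # dict with "boxes", "labels", "scores"

root = ET.Element("pid")
for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if float(score) < 0.5:
        continue  # drop low-confidence detections
    sym = ET.SubElement(root, "symbol",
                        {"class": str(int(label)),
                         "score": f"{float(score):.2f}"})
    sym.text = ",".join(f"{v:.1f}" for v in box.tolist())  # x1,y1,x2,y2
ET.ElementTree(root).write("pid_symbols.xml")
```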

Keywords: object recognition system, P&ID, symbol recognition, text recognition

Procedia PDF Downloads 147
1378 Profiling the Volatile Metabolome in Pear Leaves with Different Resistance to the Pear Psylla Cacopsylla bidens (Sulc) and Characterization of Phenolic Acid Decarboxylase

Authors: Mwafaq Ibdah, Mossab Yahyaa, Dor Rachmany, Yoram Gerchman, Doron Holland, Liora Shaltiel-Harpaz

Abstract:

Pear psylla is the most important pest of pear in all pear-growing regions of Asia, Europe, and the USA. Pear psylla damages pears in several ways: high-density populations of these insects can cause premature leaf and fruit drop, diminish plant growth, and reduce fruit size. In addition, their honeydew promotes sooty mold on leaves and russeting on fruit. Pear psyllas are also considered vectors of pear pathogens such as Candidatus Phytoplasma pyri, the cause of pear decline, which can lead to loss of crop and tree vigor and sometimes loss of trees. Psylla control is a major challenge for efficient integrated pest management. Recently, we identified two naturally resistant pear accessions (Py.760-261 and Py.701-202) in the Newe Ya’ar live collection. GC-MS volatile metabolic profiling identified several volatile compounds common to these accessions but lacking, or much less common, in a sensitive accession, the commercial Spadona variety. Among these volatiles were styrene and its derivatives. When the resistant accessions were used as inter-stock, the volatile compounds appeared in the leaves of the commercial Spadona scion, which showed reduced susceptibility to pear psylla. In laboratory experiments, applications of some of these volatile compounds were very effective against psylla eggs, nymphs, and adults. The genes and enzymes involved in the specific reactions that lead to the biosynthesis of styrene in plants are unknown. We identified a phenolic acid decarboxylase that catalyzes the formation of p-hydroxystyrene, which occurs as a styrene analog in resistant pear genotypes. The His-tagged, affinity-chromatography-purified, E. coli-expressed pear PyPAD1 protein could decarboxylate p-coumaric acid and ferulic acid to p-hydroxystyrene and 3-methoxy-4-hydroxystyrene, with the highest activity toward p-coumaric acid. Expression analysis of the PyPAD gene revealed that it is expressed as expected, i.e., highly when styrene levels and psylla resistance were high.

Keywords: pear Psylla, volatile, GC-MS, resistance

Procedia PDF Downloads 136
1377 Development of a Framework for Assessment of Market Penetration of Oil Sands Energy Technologies in Mining Sector

Authors: Saeidreza Radpour, Md. Ahiduzzaman, Amit Kumar

Abstract:

Alberta’s mining sector consumed 871.3 PJ in 2012, which is 67.1% of the energy consumed in the industrial sector and about 40% of all the energy consumed in the province of Alberta. Natural gas, petroleum products, and electricity supplied 55.9%, 20.8%, and 7.7%, respectively, of the total energy use in this sector. Oil sands mining and upgrading to crude oil make up most of the mining sector activities in Alberta. Crude oil is produced from the oil sands either by in situ methods or by the mining and extraction of bitumen from oil sands ore. In this research, the factors affecting oil sands production were assessed, and a framework was developed for the market penetration of new, efficient technologies in this sector. Oil sands production is a complex function of many different factors, broadly categorized into technical, economic, political, and global clusters. The statistical analysis developed and implemented in this research ranks the key factors affecting oil sands production in Alberta as follows: global energy consumption (94% consistency), global crude oil price (86% consistency), and crude oil exports (80% consistency). A framework for modeling oil sands energy technologies’ market penetration (OSETMP) was developed to cover the related technical, economic, and environmental factors in this sector. It is assumed that the impact of political and social constraints is reflected in the model by changes in the global oil price or the crude oil price in Canada. The market share of novel in situ mining technologies with low energy and water use is assessed and calculated in the market penetration framework, including: 1) partial upgrading, 2) liquid addition to steam to enhance recovery (LASER), 3) solvent-assisted process (SAP), also called solvent-cyclic steam-assisted gravity drainage (SC-SAGD), 4) cyclic solvent, 5) heated solvent, 6) wedge well, 7) enhanced modified steam and gas push (EMSAGP), 8) electro-thermal dynamic stripping process (ET-DSP), 9) Harris electro-magnetic heating applications (EMHA), and 10) paraffin froth separation. The results of the study will show the penetration profile of these technologies over a long-term planning horizon.
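
The abstract does not name the diffusion formulation used in OSETMP; as a hedged illustration, the sketch below uses the classic Bass model, a common choice for technology market-penetration curves. All parameter values are illustrative, not values estimated in the study.

```python
# Sketch: cumulative Bass diffusion curve for technology market penetration.
import numpy as np

def bass_adoption(t, p=0.03, q=0.38, m=1.0):
    """Cumulative adoption F(t): p = innovation, q = imitation, m = market size."""
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

years = np.arange(0, 30)
share = bass_adoption(years)
print(share[:5])   # slow early uptake
print(share[-1])   # saturation near the market size m
```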

Keywords: appliances efficiency improvement, diffusion models, market penetration, residential sector

Procedia PDF Downloads 325
1376 Developing an Exhaustive and Objective Definition of Social Enterprise through Computer Aided Text Analysis

Authors: Deepika Verma, Runa Sarkar

Abstract:

One of the prominent debates in the social entrepreneurship literature has been whether entrepreneurial work for social well-being by for-profit organizations can be classified as social entrepreneurship. Of late, the scholarship has reached a consensus: there seems little sense in confining social entrepreneurship to non-profit organizations. Boosted by this research, a growing number of businesses engaged in filling the social infrastructure gaps in developing countries are calling themselves social enterprises. These organizations are diverse in their ownership, size, objectives, operations, and business models. The lack of a comprehensive definition of social enterprise leads to three issues. Firstly, researchers may face difficulty in creating a database of social enterprises, because the choice of an entity as a social enterprise becomes subjective, or is based on parameters pre-defined by the researcher, which is not replicable. Secondly, practitioners who use ‘social enterprise’ in their vision or mission statements may find it difficult to adjust their business models accordingly, especially when they face the dilemma of choosing social well-being over business viability. Thirdly, social enterprise and social entrepreneurship attract a lot of donor funding and venture capital; in the absence of a comprehensive definitional guide, donors or investors may find it difficult to assign grants and investments. It is therefore necessary to develop an exhaustive and objective definition of social enterprise and to examine whether the understandings of academics and practitioners about social enterprise match. This paper develops a dictionary of words often associated with social enterprise and/or social entrepreneurship. It further compares two lexicographic definitions of social enterprise imputed from the abstracts of academic journal papers and trade publications extracted from the EBSCO database, using the ‘tm’ package in R.
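
The authors work in R with the ‘tm’ package; as a rough Python stand-in, the sketch below builds term-frequency lists for two toy corpora, which is the kind of term-document computation underlying a lexicographic definition. The documents here are illustrative, not the EBSCO abstracts.

```python
# Sketch: per-corpus term frequencies as the basis of a word dictionary.
from sklearn.feature_extraction.text import CountVectorizer

academic = ["social enterprise blends a social mission with market revenue"]
trade = ["social enterprise startups attract impact investors and grants"]

def top_terms(corpus, n=5):
    vec = CountVectorizer(stop_words="english")
    tdm = vec.fit_transform(corpus)          # term-document matrix
    freqs = tdm.sum(axis=0).A1               # total count per term
    terms = vec.get_feature_names_out()
    return sorted(zip(terms, freqs), key=lambda t: -t[1])[:n]

print("academic:", top_terms(academic))
print("trade:   ", top_terms(trade))
```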

Keywords: EBSCO database, lexicographic definition, social enterprise, text mining

Procedia PDF Downloads 384
1375 Structural Development and Multiscale Design Optimization of Additively Manufactured Unmanned Aerial Vehicle with Blended Wing Body Configuration

Authors: Malcolm Dinovitzer, Calvin Miller, Adam Hacker, Gabriel Wong, Zach Annen, Padmassun Rajakareyar, Jordan Mulvihill, Mostafa S.A. ElSayed

Abstract:

The research work presented in this paper was developed by the Blended Wing Body (BWB) Unmanned Aerial Vehicle (UAV) team, a fourth-year capstone project at the Carleton University Department of Mechanical and Aerospace Engineering. A clean-sheet UAV with a BWB configuration is designed and optimized using a Multiscale Design Optimization (MSDO) approach employing lattice materials, taking design-for-additive-manufacturing constraints into consideration. The BWB-UAV is being developed with a mission profile designed for surveillance purposes with a minimum payload of 1000 grams. To demonstrate the design methodology, a single design loop of a sample rib from the airframe is shown in detail. This includes the conceptual design, materials selection, experimental characterization and residual thermal stress distribution analysis of additively manufactured materials, manufacturing constraint identification, critical load computations, stress analysis, and design optimization. A dynamic turbulent critical load case was identified, composed of a 1-g static maneuver with an incremental Power Spectral Density (PSD) gust, which was used as a deterministic design load case for the design optimization. A 2D flat-plate Doublet Lattice Method (DLM) was used to simulate the aerodynamics in the aeroelastic analysis. The aerodynamic results were verified against a 3D CFD analysis of the rigid UAV applying the Spalart-Allmaras and SST k-omega turbulence models, and against a vortex lattice method applied in the OpenVSP environment. Design optimization of a single rib was conducted using topology optimization as well as MSDO. Compared to a solid rib, weight savings of 36.44% and 59.65% were obtained for the topology optimization and the MSDO, respectively. These results suggest that MSDO is an acceptable alternative to topology optimization in weight-critical applications while preserving the functional requirements.

Keywords: blended wing body, multiscale design optimization, additive manufacturing, unmanned aerial vehicle

Procedia PDF Downloads 359
1374 Best-Performing Color Space for Land-Sea Segmentation Using Wavelet Transform Color-Texture Features and Fusion of Over Segmentation

Authors: Seynabou Toure, Oumar Diop, Kidiyo Kpalma, Amadou S. Maiga

Abstract:

Color and texture are the two most determinant elements for the perception and recognition of objects in an image. For this reason, color and texture analysis find a large field of application, for example in image classification and segmentation. However, the pioneering work in texture analysis was conducted on grayscale images, thus discarding color information. Many grey-level texture descriptors have been proposed and successfully used in numerous domains of image classification: face recognition, industrial inspection, food science, and medical imaging, among others. Taking color into account in the definition of these descriptors makes it possible to characterize images better. Color texture is thus the subject of recent work, and the analysis of color texture images is increasingly attracting interest in the scientific community. In optical remote sensing systems, sensors separately measure different parts of the electromagnetic spectrum, both visible and invisible to the human eye. The amounts of light reflected by the earth in these spectral bands are transformed into grayscale images, and the primary natural colors Red (R), Green (G), and Blue (B) are used in mixtures of different spectral bands to produce RGB images. Thus, good color texture discrimination can be achieved using RGB under controlled illumination conditions. Some previous works have investigated the effect of using different color spaces for color texture classification. However, the selection of the best-performing color space for land-sea segmentation is an open question, and its resolution may bring considerable improvements in applications like coastline detection, where the detection result depends strongly on the performance of the land-sea segmentation. The aim of this paper is to present the results of a study conducted on different color spaces in order to identify the best-performing one for land-sea segmentation. An experimental analysis is carried out using five color spaces (RGB, XYZ, Lab, HSV, YCbCr). For each color space, the Haar wavelet decomposition is used to extract different color-texture features, which are then used for Fusion of Over Segmentation (FOOS) based classification; this allows the land part to be segmented from the sea. Analysis of the results shows that the HSV color space gives the best classification performance when using color and texture features, which is perfectly coherent with the results presented in the literature.
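
A minimal sketch of the feature-extraction step is given below, assuming OpenCV for the RGB-to-HSV conversion and PyWavelets for the Haar decomposition; the sub-band energy feature and the file path are illustrative, not necessarily the authors' exact descriptor.

```python
# Sketch: Haar wavelet color-texture features extracted per HSV channel.
import cv2
import numpy as np
import pywt

img = cv2.imread("scene.jpg")              # placeholder path; BGR as loaded
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

features = []
for ch in cv2.split(hsv):                  # H, S, V channels
    cA, (cH, cV, cD) = pywt.dwt2(ch.astype(float), "haar")
    # mean energy of each sub-band as a simple texture feature
    features += [np.mean(b ** 2) for b in (cA, cH, cV, cD)]
print(len(features), "features:", np.round(features, 1))
```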

Keywords: classification, coastline, color, sea-land segmentation

Procedia PDF Downloads 237
1373 Don't Just Guess and Slip: Estimating Bayesian Knowledge Tracing Parameters When Observations Are Scant

Authors: Michael Smalenberger

Abstract:

Intelligent tutoring systems (ITS) are computer-based platforms which can incorporate artificial intelligence to provide step-by-step guidance as students practice problem-solving skills. ITS can replicate and even exceed some benefits of one-on-one tutoring, foster transactivity in collaborative environments, and lead to substantial learning gains when used to supplement the instruction of a teacher or when used as the sole method of instruction. A common facet of many ITS is their use of Bayesian Knowledge Tracing (BKT) to estimate parameters necessary for the implementation of the artificial intelligence component, and for the probability of mastery of a knowledge component relevant to the ITS. While various techniques exist to estimate these parameters and probability of mastery, none directly and reliably ask the user to self-assess these. In this study, 111 undergraduate students used an ITS in a college-level introductory statistics course for which detailed transaction-level observations were recorded, and users were also routinely asked direct questions that would lead to such a self-assessment. Comparisons were made between these self-assessed values and those obtained using commonly used estimation techniques. Our findings show that such self-assessments are particularly relevant at the early stages of ITS usage while transaction level data are scant. Once a user’s transaction level data become available after sufficient ITS usage, these can replace the self-assessments in order to eliminate the identifiability problem in BKT. We discuss how these findings are relevant to the number of exercises necessary to lead to mastery of a knowledge component, the associated implications on learning curves, and its relevance to instruction time.
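
The Bayesian Knowledge Tracing update at the heart of this comparison is compact; a minimal sketch follows, assuming the usual guess/slip/learn parameterization, with the prior mastery seeded, for example, from a student's self-assessment. All values are illustrative.

```python
# Sketch: one BKT step, updating P(mastery) from an observed response.
def bkt_update(p_know, correct, guess=0.2, slip=0.1, learn=0.15):
    if correct:
        num = p_know * (1 - slip)                 # mastered and did not slip
        den = num + (1 - p_know) * guess          # or unmastered but guessed
    else:
        num = p_know * slip                       # mastered but slipped
        den = num + (1 - p_know) * (1 - guess)    # or unmastered, no guess
    posterior = num / den
    return posterior + (1 - posterior) * learn    # chance of learning now

p = 0.3  # prior mastery, e.g. seeded from a self-assessment
for obs in [1, 1, 0, 1]:
    p = bkt_update(p, obs)
    print(round(p, 3))
```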

Keywords: Bayesian Knowledge Tracing, Intelligent Tutoring System, in vivo study, parameter estimation

Procedia PDF Downloads 166
1372 Removal of Heavy Metals from Municipal Wastewater Using Constructed Rhizofiltration System

Authors: Christine A. Odinga, G. Sanjay, M. Mathew, S. Gupta, F. M. Swalaha, F. A. O. Otieno, F. Bux

Abstract:

Wastewater discharged from municipal treatment plants contains an amalgamation of trace metals. The presence of metal pollutants in wastewater poses a huge challenge to the choice and application of the preferred treatment method, as conventional treatment methods are inefficient in removing trace metals due to their design approach. This study evaluated the treatment performance of a constructed rhizofiltration system in the removal of heavy metals from municipal wastewater. The study was conducted at an eThekwini municipal wastewater treatment plant in Kingsburgh, Durban, in the province of KwaZulu-Natal. The pilot-scale rhizofiltration unit included three different layers of substrate consisting of medium stones, coarse gravel, and fine sand. One section of the system was planted with Phragmites australis L. and Kyllinga nemoralis L., while the other section was unplanted and acted as the control. Influent, effluent, and sediment from the system were sampled and assessed for the presence and removal of selected trace heavy metals using standard methods. The efficiency of metal removal was established by gauging the transfer of metals into the leaves, roots, and stems of the plants, with calculations based on standard statistical packages. The Langmuir model was used to assess the heavy metal adsorption mechanisms of the plants. Heavy metals accumulated in the entire rhizofiltration system at varying percentages: 96.69% on the planted side and 48.98% on the control side for cadmium; 81% and 24% for chromium; 23.4% and 1.1% for copper; 72% and 46.5% for nickel; 63% and 31% for lead; and 76% and 84% for zinc, in the water and sediment of the planted and control sides of the rhizofilter, respectively. The decrease in metal adsorption efficiencies on the planted side followed the pattern Cd>Cr>Zn>Ni>Pb>Cu, and Ni>Cd>Pb>Cr>Cu>Zn on the control side. Confirmatory analysis using scanning electron microscopy revealed that higher amounts of metals were deposited in the root system, with values of 0.015 mg/kg (Cr), 0.250 mg/kg (Cu), and 0.030 mg/kg (Pb) for P. australis, and 0.055 mg/kg (Cr), 0.470 mg/kg (Cu), and 0.210 mg/kg (Pb) for K. nemoralis. The system was found to be efficient in removing and reducing metals from wastewater, and further research is necessary to establish the immediate mechanisms that the plants display in order to achieve these reductions.
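
As a hedged illustration of the Langmuir analysis mentioned above, the sketch below fits the standard isotherm to illustrative uptake data; the data points are stand-ins, not the study's measurements.

```python
# Sketch: fitting the Langmuir isotherm q = q_max * K * C / (1 + K * C).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, K):
    return q_max * K * C / (1 + K * C)

C = np.array([0.5, 1, 2, 5, 10, 20])          # equilibrium conc. (mg/L)
q = np.array([0.9, 1.6, 2.5, 3.6, 4.2, 4.6])  # metal uptake (mg/g)

(q_max, K), _ = curve_fit(langmuir, C, q, p0=(5, 0.5))
print(f"q_max = {q_max:.2f} mg/g, K = {K:.2f} L/mg")
```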

Keywords: wastewater treatment, Phragmites australis L., Kyllinga nemoralis L., heavy metals, pathogens, rhizofiltration

Procedia PDF Downloads 255
1371 Synthesis and Prediction of Activity Spectra of Substances-Assisted Evaluation of Heterocyclic Compounds Containing Hydroquinoline Scaffolds

Authors: Gizachew Mulugeta Manahelohe, Khidmet Safarovich Shikhaliev

Abstract:

There has been a significant surge in interest in the synthesis of heterocyclic compounds that contain hydroquinoline fragments, attributable to the broad range of pharmaceutical and industrial applications that these compounds possess. The present study provides a comprehensive account of the synthesis of both linear and fused heterocyclic systems that incorporate hydroquinoline fragments. Furthermore, the pharmacological activity spectra of the synthesized compounds were assessed in silico using the prediction of activity spectra of substances (PASS) program. Hydroquinoline nitriles 7 and 8 were prepared through the reaction of the corresponding hydroquinolinecarbaldehydes using a hydroxylammonium chloride/pyridine/toluene system and iodine in aqueous ammonia under ambient conditions, respectively. 2-Phenyl-1,3-oxazol-5(4H)-ones 9a,b and 10a,b were synthesized via the condensation of compounds 5a,b and 6a,b with hippuric acid in acetic acid in 30–60% yield. When activated, 7-methylazolopyrimidines 11a and b were reacted with N-alkyl-2,2,4-trimethyl-1,2,3,4-tetrahydroquinoline-6-carbaldehydes 6a and b, and triazolo/pyrazolo[1,5-a]pyrimidin-6-yl carboxylic acids 12a and b were obtained in 60–70% yield. The condensation of 7-hydroxy-1,2,3,4-tetramethyl-1,2-dihydroquinoline 3h with dimethyl acetylenedicarboxylate (DMAD) and ethyl acetoacetate afforded cyclic products 16 and 17, respectively. The condensation reaction of 6-formyl-7-hydroxy-1,2,2,4-tetramethyl-1,2-dihydroquinoline 5e with methylene-active compounds such as ethyl cyanoacetate, dimethyl 3-oxopentanedioate, ethyl acetoacetate, diethyl malonate, and Meldrum’s acid afforded 3-substituted coumarins containing dihydroquinolines 19 and 21. Pentacyclic coumarin 22 was obtained via the condensation of malononitrile with 5e in the presence of a catalytic amount of piperidine in ethanol. The biological activities of the synthesized compounds were assessed using the PASS program. Based on the prognosis, compounds 13a, b, and 14 exhibited a high likelihood of being active as inhibitors of gluconate 2-dehydrogenase, as well as possessing antiallergic, antiasthmatic, and antiarthritic properties, with probability values (Pa) ranging from 0.849 to 0.870. Furthermore, hydroquinoline carbonitriles 7 and 8 tended to act as effective progesterone antagonists and displayed antiallergic, antiasthmatic, and antiarthritic effects (Pa = 0.276–0.827). Among the hydroquinolines containing coumarin moieties, compounds 17, 19a, and 19c were predicted to be potent progesterone antagonists, with Pa values of 0.710, 0.630, and 0.615, respectively.

Keywords: heterocyclic compound, hydroquinoline, Vilsmeier–Haack formylation, quinolone

Procedia PDF Downloads 32
1370 Repeatable Surface Enhanced Raman Spectroscopy Substrates from SERSitive for Wide Range of Chemical and Biological Substances

Authors: Monika Ksiezopolska-Gocalska, Pawel Albrycht, Robert Holyst

Abstract:

Surface Enhanced Raman Spectroscopy (SERS) is a technique used to analyze very low concentrations of substances in solutions, even aqueous solutions, which is its advantage over IR. The technique can be used in pharmacy (to check the purity of products), forensics (to determine whether illegal substances were present at a crime scene), medicine (as a medical test), and many other fields. Due to the high potential of this technique and its increasing popularity in analytical laboratories, and given the absence of appropriate platforms enhancing the SERS signal (crucial for observing the Raman effect at low analyte concentrations in solution (1 ppm)), we decided to develop our own SERS platforms. As the enhancing layer, we chose gold and silver nanoparticles, because these two have the best SERS properties and each has an affinity for the other kind of particles, which increases the range of research capabilities. The next step was to commercialize them, which resulted in the creation of the company ‘SERSitive.eu’, focusing on the production of highly sensitive (Ef = 10⁵ – 10⁶), homogeneous, and reproducible (70 - 80%) substrates. SERSitive SERS substrates are made using the electrodeposition of silver or silver-gold nanoparticles. Thanks to a very detailed analysis of data from studies optimizing parameters such as deposition time, reaction solution temperature, applied potential, reducer, and reagent concentrations, using the standard compound p-mercaptobenzoic acid (PMBA) at a concentration of 10⁻⁶ M, we developed a high-performance process for depositing precious metal nanoparticles on the surface of ITO glass. To check the quality of the SERSitive platforms, we examined a wide range of chemical compounds and biological substances. Apart from analytes that have a great affinity for metal surfaces (e.g., PMBA), we obtained very good results for those less suited to SERS measurements. We successfully recorded intense and, more importantly, highly reproducible spectra for amino acids (phenylalanine, 10⁻³ M), drugs (amphetamine, 10⁻⁴ M), designer drugs (cathinone derivatives, 10⁻³ M), and medicines, as well as for bacteria (Listeria, Salmonella, Escherichia coli) and fungi.

Keywords: nanoparticles, Raman spectroscopy, SERS, SERS applications, SERS substrates, SERSitive

Procedia PDF Downloads 146
1369 An Adaptive Conversational AI Approach for Self-Learning

Authors: Airy Huang, Fuji Foo, Aries Prasetya Wibowo

Abstract:

In recent years, the focus of Natural Language Processing (NLP) development has been gradually shifting from semantics-based approaches to deep learning, which performs faster with fewer resources. Although it performs well in many applications, the deep learning approach, due to its lack of semantic understanding, has difficulty noticing and expressing a novel business case within a pre-defined scope. In order to meet the requirements of specific robotic services, the deep learning approach is very labor-intensive and time-consuming: it is very difficult to improve the capabilities of a conversational AI in a short time, and even more difficult to self-learn from experience to deliver the same service in a better way. In this paper, we present an adaptive conversational AI algorithm that combines semantic knowledge and deep learning to address this issue by learning new business cases through conversations. After self-learning from experience, the robot adapts to business cases originally out of scope. The idea is to build new or extended robotic services in a systematic and fast-training manner with self-configured programs and constructed dialog flows. In every cycle in which a chatbot (conversational AI) delivers a given set of business cases, it is set to self-measure its performance and rethink every unknown dialog flow, improving the service by retraining with those new business cases. If the training process reaches a bottleneck or encounters difficulties, human personnel are informed and may retrain the chatbot with newly configured programs or new dialog flows for new services. One approach employs semantic analysis to learn the dialogues for new business cases and then establish the necessary ontology for the new service. With the newly learned programs, it completes the understanding of the reaction behavior and finally uses dialog flows to connect all the understanding results and programs, achieving the goal of the self-learning process. We have developed a chatbot service mounted on a kiosk, with a camera for facial recognition and a directional microphone array for voice capture. The chatbot serves as a concierge offering polite conversation to visitors. As a proof of concept, we have demonstrated completion of 90% of reception services with limited self-learning capability.

Keywords: conversational AI, chatbot, dialog management, semantic analysis

Procedia PDF Downloads 130
1368 Viability of EBT3 Film in Small Dimensions to Be Used for in-Vivo Dosimetry in Radiation Therapy

Authors: Abdul Qadir Jangda, Khadija Mariam, Usman Ahmed, Sharib Ahmed

Abstract:

The Gafchromic EBT3 film has the characteristics of high spatial resolution, weak energy dependence, and near tissue equivalence, which make it viable for in-vivo dosimetry in external beam radiotherapy and brachytherapy applications. The aim of this study is to assess the smallest film dimension that may be feasible for use in in-vivo dosimetry. To evaluate this, film sizes from 3 x 3 mm to 20 x 20 mm were calibrated with 6 MV photon and 6 MeV electron beams. The Gafchromic EBT3 film (Lot no. A05151201, Make: ISP) was cut into five different sizes in order to establish the relationship between absorbed dose and film dimension: 3 x 3, 5 x 5, 10 x 10, 15 x 15, and 20 x 20 mm. The films were irradiated on a Varian Clinac® 2100C linear accelerator over a dose range from 0 to 1000 cGy using a PTW solid water phantom. The irradiation was performed as per the clinical absolute dose rate calibration setup, i.e., 100 cm SAD, 5.0 cm depth, and a 10 x 10 cm² field size for photons, and 100 cm SSD, 1.4 cm depth, and a 15 x 15 cm² applicator for electrons. The irradiated films were scanned in landscape orientation after a post-development time of at least 48 hours. Scanning was accomplished using an Epson Expression 10000 XL flatbed scanner, and quantitative analysis was carried out with the ImageJ freeware software. Results show that the dose variation across film dimensions from 3 x 3 mm to 20 x 20 mm is very small, with a maximum standard deviation in optical density of 0.0058 for a dose level of 3000 cGy, and the standard deviation increases with dose level; precautions must therefore be taken when using small-dimension films for higher doses. The analysis shows insignificant variation in absorbed dose with a change in EBT3 film dimension. The study concludes that film dimensions down to 3 x 3 mm can safely be used up to a dose level of 3000 cGy without the need for recalibration for the particular dimension in use. For higher dose levels, however, the films may need to be calibrated for the particular dimension in use to achieve higher accuracy. It was also noticed that the crystalline structure of the film was damaged at the edges while cutting, which can contribute to an incorrect dose reading if the region of interest includes the damaged area of the film.
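
As an illustration of the calibration step, the sketch below fits a commonly used power-law dose-response form to illustrative net optical density (netOD) and dose pairs; the functional form and the data points are assumptions, not the paper's fit.

```python
# Sketch: film calibration fit, dose = a*netOD + b*netOD**n.
import numpy as np
from scipy.optimize import curve_fit

def dose_from_netOD(netOD, a, b, n):
    return a * netOD + b * netOD ** n

netOD = np.array([0.05, 0.10, 0.18, 0.27, 0.35, 0.42])  # illustrative
dose = np.array([50, 120, 280, 500, 750, 1000])          # cGy

(a, b, n), _ = curve_fit(dose_from_netOD, netOD, dose, p0=(1000, 4000, 2.5))
print(f"D(netOD=0.30) ≈ {dose_from_netOD(0.30, a, b, n):.0f} cGy")
```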

Keywords: external beam radiotherapy, film calibration, film dosimetery, in-vivo dosimetery

Procedia PDF Downloads 487
1367 Performance of High Efficiency Video Codec over Wireless Channels

Authors: Mohd Ayyub Khan, Nadeem Akhtar

Abstract:

Due to recent advances in wireless communication technologies and hand-held devices, there is a huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, and interactive video games. However, raw video possesses very high bandwidth, which makes compression a must before transmission over wireless channels. The High Efficiency Video Codec (HEVC, also called H.265) is the latest state-of-the-art video coding standard, developed jointly by the ITU-T and ISO/IEC teams. HEVC targets high-resolution videos, such as 4K or 8K, that can fulfil recent demands for video services. The compression ratio achieved by HEVC is twice that of its predecessor H.264/AVC at the same quality level. The compression efficiency is generally increased by removing more correlation between frames and pixels using complex techniques such as extensive intra and inter prediction. As more correlation is removed, the interdependency among coded bits increases, so bit errors may have a large effect on the reconstructed video; sometimes even a single bit error can lead to catastrophic failure of the reconstructed video. In this paper, we study the performance of the HEVC bitstream over an additive white Gaussian noise (AWGN) channel. Moreover, HEVC over Quadrature Amplitude Modulation (QAM) combined with forward error correction (FEC) schemes is also explored over the noisy channel. The video is encoded using HEVC, and the coded bitstream is channel coded to provide some redundancy. The channel-coded bitstream is then modulated using QAM and transmitted over the AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream, which is then used to reconstruct the video using the HEVC decoder. It is observed that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video decreases drastically; using proper FEC codes, the quality can be restored to a certain extent. Thus, the performance analysis of HEVC presented in this paper may assist in designing an optimized FEC code rate such that the quality of the reconstructed video is maximized over wireless channels.
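
A minimal sketch of the uncoded modulation leg of this pipeline is shown below: Gray-mapped QPSK (4-QAM) over AWGN with hard-decision demodulation. The HEVC bitstream and FEC stages are omitted, and all parameters are illustrative.

```python
# Sketch: bit error rate of QPSK over an AWGN channel at several SNRs.
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 200_000)

# Gray-mapped QPSK: one bit on each of the I and Q rails
sym = (2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)
sym /= np.sqrt(2)  # normalize to unit average symbol energy

for snr_db in (0, 5, 10):
    n0 = 10 ** (-snr_db / 10)                       # noise spectral density
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(sym.size)
                               + 1j * rng.standard_normal(sym.size))
    r = sym + noise
    hat = np.empty_like(bits)
    hat[0::2] = (r.real > 0).astype(int)            # hard decisions
    hat[1::2] = (r.imag > 0).astype(int)
    print(f"SNR {snr_db:2d} dB  BER = {np.mean(hat != bits):.4f}")
```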

Keywords: AWGN, forward error correction, HEVC, video coding, QAM

Procedia PDF Downloads 142
1366 Intrusion Detection in Cloud Computing Using Machine Learning

Authors: Faiza Babur Khan, Sohail Asghar

Abstract:

With the emergence of distributed environments, cloud computing is proving to be the most stimulating paradigm shift in computer technology, resulting in spectacular expansion of the IT industry. Many companies have augmented their technical infrastructure by adopting cloud resource sharing architectures. Cloud computing has opened doors to unlimited opportunities, from application and platform availability to expandable storage and the provision of computing environments. From a security viewpoint, however, clouds introduce an added level of risk, weakening protection mechanisms and making privacy, data security, and on-demand service harder to guarantee. Issues of trust, confidentiality, and integrity are elevated due to the multitenant resource sharing architecture of the cloud. Trust, or reliability, of the cloud refers to its capability to provide the needed services precisely and unfailingly. Confidentiality is the ability of the architecture to ensure that only authorized parties access its private data; it also guarantees integrity, protecting the data from being fabricated by an unauthorized user. In order to assure the provision of a secured cloud, a roadmap or model is needed to analyze security problems, design mitigation strategies, and evaluate solutions. The aim of the paper is twofold: first, to highlight the factors that make cloud security critical, along with alleviation strategies; and second, to propose an intrusion detection model that identifies attackers in a preventive way using a machine learning Random Forest classifier with an accuracy of 99.8%. This model uses a small number of features, and a comparison with other classifiers is also presented.
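
As an illustration of the classification stage, the sketch below trains a Random Forest on synthetic flow-like features; the dataset, feature set, and preprocessing used by the authors are not specified here, so everything below is a stand-in (the 99.8% figure is the paper's own result, not reproduced by this toy example).

```python
# Sketch: Random Forest classifier on synthetic intrusion-like data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Imbalanced two-class data standing in for normal vs. attack traffic
X, y = make_classification(n_samples=5000, n_features=12, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```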

Keywords: cloud security, threats, machine learning, random forest, classification

Procedia PDF Downloads 315
1365 Navigating through Organizational Change: TAM-Based Manual for Digital Skills and Safety Transitions

Authors: Margarida Porfírio Tomás, Paula Pereira, José Palma Oliveira

Abstract:

Robotic grasping is advancing rapidly, but transferring techniques from rigid to deformable objects remains a challenge. Deformable and flexible items, such as food containers, demand nuanced handling due to their changing shapes, and bridging this gap is crucial for applications in food processing, surgical robotics, and household assistance. AGILEHAND, a Horizon project, focuses on developing advanced technologies for sorting, handling, and packaging soft and deformable products autonomously. These technologies serve as strategic tools to enhance flexibility, agility, and reconfigurability within the production and logistics systems of European manufacturing companies. Key components include intelligent detection, self-adaptive handling, efficient sorting, and agile, rapid reconfiguration. The overarching goal is to optimize work environments and equipment, ensuring both efficiency and safety. As new technologies emerge in the food industry, there will be implications for the labour force, for safety, and for the acceptance of the new technologies. To address these implications, AGILEHAND emphasizes the integration of the social sciences and humanities, for example through the application of the Technology Acceptance Model (TAM). The project aims to create a change management manual that will outline strategies for developing digital skills and managing health and safety transitions; it will also provide best practices and models for organizational change. Additionally, AGILEHAND will design effective training programs to enhance employee skills and knowledge. This information will be obtained through a combination of case studies, structured interviews, questionnaires, and a comprehensive literature review. The project will explore how organizations adapt during periods of change and identify factors influencing employee motivation and job satisfaction. This project received funding from the European Union’s Horizon 2020/Horizon Europe research and innovation program under grant agreement No 101092043 (AGILEHAND).

Keywords: change management, technology acceptance model, organizational change, health and safety

Procedia PDF Downloads 37
1364 Shaped Crystal Growth of Fe-Ga and Fe-Al Alloy Plates by the Micro-Pulling-Down Method

Authors: Kei Kamada, Rikito Murakami, Masahiko Ito, Mototaka Arakawa, Yasuhiro Shoji, Toshiyuki Ueno, Masao Yoshino, Akihiro Yamaji, Shunsuke Kurosawa, Yuui Yokota, Yuji Ohashi, Akira Yoshikawa

Abstract:

Energy harvesting techniques have been widely developed in recent years due to the high demand for power supplies for ‘Internet of Things’ devices such as wireless sensor nodes. In these applications, techniques for converting mechanical vibration energy into electrical energy using magnetostrictive materials have been brought to attention. Among magnetostrictive materials, Fe-Ga and Fe-Al alloys are attractive due to figures of merit such as price, mechanical strength, and a high magnetostriction constant. Up to now, bulk crystals of these alloys have been produced by the Bridgman–Stockbarger method or the Czochralski method, with which big bulk crystals up to 2–3 inches in diameter can be grown. However, non-uniformity of chemical composition along the crystal growth direction cannot be avoided, which results in non-uniformity of the magnetostriction constant and a reduction of the production yield. The micro-pulling-down (μ-PD) method has been developed as a shaped crystal growth technique. Our group has reported shaped crystal growth of oxide and fluoride single crystals with different shapes such as rods, plates, tubes, and thin fibers. The advantages of this method are low segregation, due to the high growth rate and the small diffusion of melt at the solid-liquid interface, and small kerf loss, due to the near-net-shape crystal. In this presentation, we report the shaped growth of long plate crystals of Fe-Ga and Fe-Al alloys using the μ-PD method. Alloy crystals were grown using a calcium oxide crucible and an induction heating system under a nitrogen atmosphere. The bottom hole of the crucibles was 5 x 1 mm² in size. A <100>-oriented iron-based alloy was used as a seed crystal, and 5 x 1 x 320 mm³ alloy crystal plates were successfully grown. The results of the crystal growth, chemical composition analysis, magnetostrictive properties, and a prototype vibration energy harvester are reported. Furthermore, continuous crystal growth using a powder supply system will be reported, aimed at minimizing the chemical composition non-uniformity along the growth direction.

Keywords: crystal growth, micro-pulling-down method, Fe-Ga, Fe-Al

Procedia PDF Downloads 327
1363 The Routine Use of a Negative Pressure Incision Management System in Vascular Surgery: A Case Series

Authors: Hansraj Bookun, Angela Tan, Rachel Xuan, Linheng Zhao, Kejia Wang, Animesh Singla, David Kim, Christopher Loupos

Abstract:

Introduction: Incisional wound complications in vascular surgery patients represent a significant clinical and economic burden of morbidity and mortality. The objective of this study was to trial the feasibility of applying the Prevena negative pressure incision management system as a routine dressing in patients who had undergone arterial surgery. Conventionally, Prevena has been applied to groin incisions, but this study features applications to multiple wound sites such as the thigh or major amputation stumps. Method: This was a cross-sectional, observational, single-centre case series of 12 patients who had undergone major vascular surgery. Their wounds were managed with the Prevena system, applied either intra-operatively or on the first post-operative day. Demographic and operative details were collated, as well as length of stay and complication rates. Results: There were 9 males (75%) with a mean age of 66 years, and the comorbid burden was as follows: ischaemic heart disease (92%), diabetes (42%), hypertension (100%), stage 4 or greater kidney impairment (17%), and current or ex-smoking (83%). The main indications were acute ischaemia (33%), claudication (25%), and gangrene (17%). There were single instances of an occluded popliteal artery aneurysm, diabetic foot infection, and rest pain. The majority of patients (50%) had hybrid operations with iliofemoral endarterectomies, patch arterioplasties, and further peripheral endovascular treatment. There were 4 complex arterial bypass operations and 2 major amputations. The mean length of stay was 17 ± 10 days, with a range of 4 to 35 days. A single complication, in the form of a lymphocoele, was encountered in the context of an iliofemoral endarterectomy and patch arterioplasty; this was managed conservatively. There were no deaths. Discussion: The Prevena wound management system shows that, in conjunction with safe vascular surgery, absolute wound complication rates remain low, and it remains a valuable adjunct in the treatment of vascular patients.

Keywords: wound care, negative pressure, vascular surgery, closed incision

Procedia PDF Downloads 121
1362 Integrating Knowledge Distillation of Multiple Strategies

Authors: Min Jindong, Wang Mingxia

Abstract:

With the widespread use of artificial intelligence in everyday life, computer vision, and especially deep convolutional neural network models, has developed rapidly. With the increasing complexity of real visual target detection tasks and improvements in recognition accuracy, target detection network models have also become very large. Huge deep neural network models are not conducive to deployment on edge devices with limited resources, and the timeliness of network model inference is poor. In this paper, knowledge distillation is used to compress a huge and complex deep neural network model, comprehensively transferring the knowledge contained in the complex network model to a lightweight network model. Different from traditional knowledge distillation methods, we propose a novel knowledge distillation that incorporates multi-faceted features, called M-KD. When training and optimizing the deep neural network model for target detection, the soft target output of the teacher network, the relationships between the layers of the teacher network, and the feature attention maps of the hidden layers of the teacher network are all transferred to the student network as knowledge. At the same time, we introduce an intermediate transition layer, that is, an intermediate guidance layer, between the teacher network and the student network to make up for the large difference between them. Finally, this paper adds an exploration module to the traditional teacher-student knowledge distillation model: the student network not only inherits the knowledge of the teacher network but also explores some new knowledge and characteristics. Comprehensive experiments using different distillation parameter configurations across multiple datasets and convolutional neural network models demonstrate that our proposed network model achieves substantial improvements in speed and accuracy.
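
The paper's full M-KD loss adds layer-relation and attention-map terms; the sketch below shows only the standard soft-target distillation component in PyTorch, with illustrative temperature and weighting.

```python
# Sketch: soft-target knowledge distillation loss (Hinton-style).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # KL divergence between temperature-softened teacher/student outputs,
    # scaled by T^2 to keep gradient magnitudes comparable
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)  # ground-truth term
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(8, 10)           # student outputs for a batch of 8
t = torch.randn(8, 10)           # teacher outputs for the same batch
y = torch.randint(0, 10, (8,))   # ground-truth class labels
print(distillation_loss(s, t, y).item())
```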

Keywords: object detection, knowledge distillation, convolutional network, model compression

Procedia PDF Downloads 270
1361 Investigation of Poly P-Dioxanone as Promising Biodegradable Polymer for Short-Term Medical Application

Authors: Stefanie Ficht, Lukas Schübel, Magdalena Kleybolte, Markus Eblenkamp, Jana Steger, Dirk Wilhelm, Petra Mela

Abstract:

Although 3D printing, as a transformative technology, has become of increasing interest in the medical field and the demand for biodegradable polymers has grown considerably, there are only a few additively manufactured, biodegradable implants on the market. Additionally, the sterilization of such implants and its side effects on degradation have still not been sufficiently studied. In this work, thermosensitive poly p-dioxanone (PPDO) samples were printed with fused filament fabrication (FFF) and investigated. Subsequently, H₂O₂ plasma and gamma radiation were used as low-temperature sterilization techniques and compared with each other and with the control group (no sterilization). In order to assess the effect of the different sterilization methods on the degradation behavior of PPDO, the samples were immersed in phosphate-buffered solution (PBS) over 28 days, and surface morphology, thermal properties, molecular weight, inherent viscosity, and mechanical properties were examined at regular time intervals. The study demonstrates that PPDO was printed with great success and that its thermal properties, molecular weight (Mw), and inherent viscosity (IV) were not significantly affected by the printing process itself. H₂O₂ plasma sterilization did not significantly harm the thermosensitive polymer, while gamma radiation lowered IV and Mw statistically significantly compared to the control group (p < 0.001). During immersion in PBS, a decrease in Mw and mechanical strength occurred for all samples; however, gamma-sterilized samples were affected to a much greater extent than the two other groups, both in final values and over time. This was confirmed by scanning electron microscopy, which showed no changes in the surface morphology of (non-sterilized) control samples, while the first microcracks appeared on plasma-sterilized samples after two weeks and were present on gamma-sterilized samples immediately after radiation, further deteriorating over the immersion period. To conclude, we demonstrated that FFF and H₂O₂ plasma sterilization are well suited for processing thermosensitive, biodegradable polymers used for the development of innovative short-term medical applications.

Keywords: additive manufacturing, sterilization, biodegradable, thermosensitive, medical application

Procedia PDF Downloads 117
1360 Text Mining of Veterinary Forums for Epidemiological Surveillance Supplementation

Authors: Samuel Munaf, Kevin Swingler, Franz Brülisauer, Anthony O’Hare, George Gunn, Aaron Reeves

Abstract:

Web scraping and text mining are popular computer science methods deployed by public health researchers to augment traditional epidemiological surveillance. However, within veterinary disease surveillance, such techniques are still in the early stages of development and have not yet been fully utilised. This study explores the utility of incorporating internet-based data to better understand the smallholder farming communities within Scotland, using online text extraction and the subsequent mining of this data. Livestock forums were web-scraped, and the resulting text was mined in search of common themes, words, and topics. Results from bi-grams and topic modelling uncover four main topics of interest pertaining to aspects of livestock husbandry: feeding, breeding, slaughter, and disposal. These topics were found in both the poultry and pig sub-forums. Topic modelling appears to be a useful method of unsupervised classification for this form of data, as it produced clusters relating to biosecurity and animal welfare. Internet data can be a very effective tool in aiding traditional veterinary surveillance methods, but human validation of such data remains crucial. This opens avenues of research via the incorporation of other dynamic social media data, namely Twitter and Facebook/Meta, in addition to time series analysis to highlight temporal patterns.
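
A minimal sketch of this kind of n-gram extraction and topic modelling is shown below, using scikit-learn; the example posts, the library choice, and the number of topics are assumptions for illustration, not the study's actual pipeline.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical forum posts standing in for the scraped data.
posts = [
    "feeding layers pellets and grit through winter",
    "breeding pens ready for the new boar arriving",
    "slaughter dates booked at the local abattoir",
    "disposal of fallen stock following the rules",
]

# Unigram and bigram counts (ngram_range=(1, 2)) capture common phrases.
vectorizer = CountVectorizer(ngram_range=(1, 2), stop_words="english")
counts = vectorizer.fit_transform(posts)

# Unsupervised topic discovery; four topics mirrors the reported result.
lda = LatentDirichletAllocation(n_components=4, random_state=0)
lda.fit(counts)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-3:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```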

Keywords: veterinary epidemiology, disease surveillance, infodemiology, infoveillance, smallholding, social media, web scraping, sentiment analysis, geolocation, text mining, NLP

Procedia PDF Downloads 88
1359 Influence of Genotype, Explant, and Hormone Treatment on Agrobacterium-Transformation Success in Salix Callus Culture

Authors: Lukas J. Evans, Danilo D. Fernando

Abstract:

Shrub willows (Salix spp.) have many characteristics that make them suitable for a variety of applications such as riparian-zone buffers, environmental contaminant sequestration, living snow fences, and biofuel production. In some cases, these functions are limited by the physical or financial obstacles associated with the number of individuals needed to reasonably serve the purpose. One way to increase the efficiency of willows is to bioengineer them with genetic improvements suited to the desired use. To accomplish this goal, an optimized in vitro transformation protocol via Agrobacterium tumefaciens is necessary to reliably express genes of interest. Therefore, the aim of this study is to observe the influence of tissue culture with different willow cultivars, hormones, and explants on the percentage of calli expressing the green fluorescent protein (GFP) reporter gene, in order to find ideal transformation conditions. Calli were produced from 1-month-old open-pollinated seedlings of three Salix miyabeana cultivars ('SX61', 'WT1', and 'WT2') from three different explants (lamina, petiole, and internodes). Explants were cultured for 1 month on MS medium with different concentrations of 6-benzylaminopurine (BAP) and 1-naphthaleneacetic acid (NAA) (no hormones; 1 mg L⁻¹ BAP only; 3 mg L⁻¹ NAA only; 1 mg L⁻¹ BAP and 3 mg L⁻¹ NAA; or 3 mg L⁻¹ BAP and 1 mg L⁻¹ NAA) to produce a callus. Samples were then treated for 30 minutes with Agrobacterium tumefaciens at an OD₆₀₀ of 0.6-0.8 to insert the GFP transgene, co-cultivated for 72 hours, and selected for 1 week on the same medium on which they were cultured, supplemented with 7.5 mg L⁻¹ hygromycin, before GFP visualization under a UV dissecting scope. The percentage of GFP-expressing calli as well as the average number of fluorescing GFP units per callus were recorded, and results were evaluated with an ANOVA test (α = 0.05). The WT1 internode-derived calli on media with 3 mg L⁻¹ NAA + 1 mg L⁻¹ BAP or 1 mg L⁻¹ BAP alone produced a significantly higher percentage of GFP-expressing calli than every other group (19.1% and 19.4%, respectively). Additionally, the WT1 internode group cultured with 3 mg L⁻¹ NAA + 1 mg L⁻¹ BAP produced an average of 2.89 GFP units per callus, while the group cultivated with 1 mg L⁻¹ BAP produced an average of 0.84 GFP units per callus. In conclusion, genotype, explant choice, and hormones all play a significant role in increasing transformation success in willows. Future studies to produce whole-callus GFP expression and subsequent plantlet regeneration are necessary for a complete willow transformation protocol.
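
The reported group comparison can be illustrated with a short sketch: a one-way ANOVA at α = 0.05 over hypothetical per-callus GFP-unit counts; the numbers and group labels below are invented for the example and are not the study's data.

```python
from scipy import stats

# Hypothetical GFP units per callus for three treatment groups.
nab = [3, 2, 4, 3, 2]    # 3 mg/L NAA + 1 mg/L BAP
bap = [1, 0, 1, 2, 0]    # 1 mg/L BAP only
none = [0, 0, 1, 0, 0]   # no hormones

f_stat, p_value = stats.f_oneway(nab, bap, none)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:  # alpha = 0.05, as in the study
    print("At least one treatment mean differs significantly.")
```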

Keywords: agrobacterium, callus, Salix, tissue culture

Procedia PDF Downloads 118
1358 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)-Graphics Processing Unit (GPU) Heterogeneous Computing

Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou

Abstract:

The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, embedded systems-on-a-chip (SoCs) have come to contain coarse-grained multi-core CPUs (central processing units) and mobile GPUs (graphics processing units) that can be used as general-purpose accelerators. The motivation is that algorithms with various parallel characteristics can be efficiently mapped onto a heterogeneous architecture coupling these three processors. The CPU and GPU offload some of the computationally intensive tasks from the FPGA, reducing resource consumption and lowering the overall cost of the system. However, in common present-day scenarios, applications usually utilize only one type of accelerator, because development approaches that support collaboration among heterogeneous processors face challenges. A systematic approach is therefore needed that provides write-once-run-anywhere portability and high execution performance for modules mapped to various architectures, and that facilitates design-space exploration. In this paper, a servant-execution-flow model is proposed to abstract the cooperation of the heterogeneous processors, supporting task partition, communication, and synchronization. At first run, the intermediate language, represented as a data-flow diagram, can generate executable code for the target processor or be converted into high-level programming languages. Instantiation parameters efficiently control the relationship between modules and computational units, including the mapping of two hierarchical levels of processing units and the adjustment of data-level parallelism. An embedded system for a three-dimensional waveform oscilloscope is selected as a case study, and the performance of algorithms such as contrast stretching is analyzed for implementations on various combinations of these processors. The experimental results show that the heterogeneous computing system achieves performance similar to that of a pure-FPGA implementation, with comparable energy efficiency, while using less than 35% of the resources.
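
To make the idea of parameterized instantiation concrete, the sketch below models a toy task graph whose nodes carry a target-processor label and a data-level-parallelism parameter; the class names, device labels, and scheduling logic are assumptions for illustration and do not reproduce the paper's intermediate language.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    device: str                  # "fpga", "cpu", or "gpu"
    parallelism: int = 1         # data-level parallelism knob
    depends_on: list = field(default_factory=list)

# A toy dataflow loosely inspired by the oscilloscope case study.
tasks = [
    Task("acquire", device="fpga"),
    Task("stretch", device="gpu", parallelism=4, depends_on=["acquire"]),
    Task("render",  device="cpu", depends_on=["stretch"]),
]

def schedule(tasks):
    """Emit tasks in dependency order with their processor mapping."""
    done, order = set(), []
    while len(order) < len(tasks):
        for t in tasks:
            if t.name not in done and all(d in done for d in t.depends_on):
                order.append(t)
                done.add(t.name)
    return order

for t in schedule(tasks):
    print(f"{t.name} -> {t.device} (x{t.parallelism})")
```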

Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation

Procedia PDF Downloads 107
1357 Development of a Telemedical Network Supporting an Automated Flow Cytometric Analysis for the Clinical Follow-up of Leukaemia

Authors: Claude Takenga, Rolf-Dietrich Berndt, Erling Si, Markus Diem, Guohui Qiao, Melanie Gau, Michael Brandstoetter, Martin Kampel, Michael Dworzak

Abstract:

In patients with acute lymphoblastic leukaemia (ALL), treatment response is increasingly evaluated with minimal residual disease (MRD) analyses. Flow cytometry (FCM) is a fast and sensitive method to detect MRD. However, the interpretation of these multi-parametric data requires intensive operator training and experience. This paper presents pipeline software as a ready-to-use, FCM-based MRD-assessment tool for daily clinical practice with ALL patients. The new tool increases the accuracy of FCM-MRD assessment in samples that are difficult to analyse by conventional operator-based gating, since computer-aided analysis potentially offers superior resolution by utilizing the whole multi-parametric FCM data space at once, instead of step-wise, two-dimensional, plot-based visualization. Developed as a telemedical network, the system reduces the workload and laboratory costs, as well as the staff time needed for training, continuous quality control, and operator-based data interpretation. It allows automated FCM-MRD analysis to be disseminated to medical centres that have no established expertise, for the benefit of an even larger community of affected children worldwide. We established a telemedical network system for the analysis, clinical follow-up, and treatment monitoring of leukaemia. The system is scalable and suited to linking several centres and laboratories worldwide.
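
The contrast with step-wise two-dimensional gating can be illustrated with a small sketch that clusters synthetic multi-parametric events in one pass using a Gaussian mixture model; the synthetic data, the two-component model, and the use of scikit-learn are assumptions for illustration, not the pipeline described above.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic 4-parameter FCM events: a "normal" cloud and a small blast-like cloud.
normal = rng.normal(loc=[2, 2, 2, 2], scale=0.5, size=(980, 4))
blasts = rng.normal(loc=[5, 1, 4, 3], scale=0.3, size=(20, 4))
events = np.vstack([normal, blasts])

# Model the full 4-D space at once instead of step-wise 2-D gates.
gmm = GaussianMixture(n_components=2, random_state=0).fit(events)
labels = gmm.predict(events)

# Report the fraction of events in the smaller (candidate MRD) component.
minority = labels == np.argmin(np.bincount(labels))
print(f"candidate MRD fraction: {minority.mean():.3%}")
```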

Keywords: data security, flow cytometry, leukaemia, telematics platform, telemedicine

Procedia PDF Downloads 975
1356 Automated Detection of Targets and Retrieve the Corresponding Analytics Using Augmented Reality

Authors: Suvarna Kumar Gogula, Sandhya Devi Gogula, P. Chanakya

Abstract:

Augmented reality is defined as the overlaying of digital or computer-generated information, such as images, audio, video, and 3D models, onto the real-time environment. It can be thought of as a blend between the completely synthetic and the completely real. Augmented reality provides scope for a wide range of industries, such as manufacturing, retail, gaming, advertisement, and tourism, and brings new dimensions to the modern digital world. By blending content with the real world, it helps users extend their knowledge. In this application, we integrated augmented reality with data analytics and with the cloud, so that virtual content is generated based on the data present in the database. We used marker-based augmented reality, in which every marker is stored in the database with a corresponding unique ID. This application can be used in a wide range of industries for different business processes, but in this paper, we mainly focus on the marketing industry, where it helps customers gain knowledge about products on the market, chiefly their prices, customer feedback, quality, and other benefits. The application also provides better market-strategy information for marketing managers, who obtain data about stock, sales, customer response to the product, and so on. In this paper, we also include reports on the feedback gathered from different people after the demonstration, and finally, we present the future scope of augmented reality in different business processes through integration with new technologies such as cloud, big data, and artificial intelligence.
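
A minimal sketch of the marker-to-analytics lookup is given below; the table layout, field names, and the in-memory SQLite store are assumptions for the example, not the application's actual cloud backend.

```python
import sqlite3

# In-memory stand-in for the cloud database keyed by marker ID.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE products (
    marker_id TEXT PRIMARY KEY,
    name TEXT, price REAL, rating REAL, stock INTEGER)""")
db.execute("INSERT INTO products VALUES ('MKR-001', 'Kettle', 24.99, 4.3, 120)")

def analytics_for_marker(marker_id):
    """Return the overlay content for a recognized AR marker."""
    row = db.execute(
        "SELECT name, price, rating, stock FROM products WHERE marker_id = ?",
        (marker_id,),
    ).fetchone()
    if row is None:
        return None
    name, price, rating, stock = row
    return f"{name}: ${price:.2f} | rating {rating}/5 | {stock} in stock"

# A detected marker triggers the overlay text shown to the user.
print(analytics_for_marker("MKR-001"))
```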

Keywords: augmented reality, data analytics, catch room, marketing and sales

Procedia PDF Downloads 229
1355 The Dangers of Attentional Inertia in the Driving Task

Authors: Catherine Thompson, Maryam Jalali, Peter Hills

Abstract:

The allocation of visual attention is critical when driving, and anything that limits attention will have a detrimental impact on safety. Engaging in a secondary task reduces the amount of attention directed to the road because drivers allocate resources towards this task, leaving fewer resources to process driving-relevant information. Yet the dangers associated with a secondary task do not end when the driver returns their attention to the road. Instead, the attentional settings adopted to complete the secondary task may persist when attention returns to the road, affecting where drivers look and therefore how they perform. This 'attentional inertia' effect was investigated in the current work. Forty drivers searched for hazards in driving video clips while their eye movements were recorded. At varying intervals they were instructed to attend to a secondary task displayed on a tablet situated to their left-hand side. The secondary task consisted of three separate computer games that induced horizontal, vertical, and random eye movements. Visual search and hazard detection in the driving clips were compared across the three conditions of the secondary task. Results showed that the layout of information in the secondary task, and therefore the allocation of attention within it, had an impact on subsequent search in the driving clips. Vertically presented information reduced the wide horizontal spread of search usually associated with accurate driving and had a negative influence on the detection of hazards. The findings show the additional dangers of engaging in a secondary task while driving. The attentional inertia effect has significant implications for semi-autonomous and autonomous vehicles, in which drivers have greater opportunity to direct their attention away from the driving task.

Keywords: attention, eye-movements, hazard perception, visual search

Procedia PDF Downloads 159
1354 Stretchable and Flexible Thermoelectric Polymer Composites for Self-Powered Volatile Organic Compound Vapors Detection

Authors: Petr Slobodian, Pavel Riha, Jiri Matyas, Robert Olejnik, Nuri Karakurt

Abstract:

Thermoelectric devices generate an electrical current when there is a temperature gradient between the hot and cold junctions of two dissimilar conductive materials, typically n-type and p-type semiconductors. Consequently, polymeric semiconductors, composed of a polymer matrix filled with different forms of carbon nanotubes in a proper structural hierarchy, can also exhibit thermoelectric properties that convert a temperature difference into electricity. Despite the lower thermoelectric efficiency of polymeric thermoelectrics in terms of the figure of merit, properties such as stretchability, flexibility, light weight, low thermal conductivity, easy processing, and low manufacturing cost are advantages in many technological and ecological applications. Highly elastic composites based on a polyethylene-octene copolymer filled with multi-walled carbon nanotubes (MWCNTs) were prepared by sonicating a nanotube dispersion in a copolymer solution and then precipitating the composite by pouring the dispersion into a non-solvent. The electronic properties of the MWCNTs were tuned by different treatment techniques, such as chemical oxidation, decoration with Ag clusters, or the addition of low-molecular-weight dopants. For example, the oxygenated functional groups attached to the MWCNT surface by HNO₃ oxidation increase the number of p-type charge carriers, which can be increased further by doping with triphenylphosphine molecules. To partially alter p-type MWCNTs into less p-type ones, Ag nanoparticles were deposited on the MWCNT surface, which was then doped with 7,7,8,8-tetracyanoquinodimethane. The two types of MWCNTs with the largest difference in generated thermoelectric power were combined to manufacture a polymer-based thermoelectric module that generates a thermoelectric voltage when a temperature difference is applied between its hot and cold ends. Moreover, the voltage generated by the module at a constant temperature gradient was significantly affected when it was exposed to vapors of different volatile organic compounds, so the module also represents a self-powered thermoelectric sensor for chemical vapor detection.
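
For reference, the open-circuit voltage of such a module follows the Seebeck relation V = N * (S_p - S_n) * ΔT for N thermocouples; the sketch below evaluates it for invented coefficient values, since the abstract does not report the composite's actual Seebeck coefficients.

```python
# Seebeck relation for a module of N thermocouples: V = N * (S_p - S_n) * dT.
# The coefficient values below are invented for illustration only.
def module_voltage(n_couples, s_p, s_n, delta_t):
    """Open-circuit voltage in volts, for Seebeck coefficients in V/K."""
    return n_couples * (s_p - s_n) * delta_t

# e.g. 10 couples, S_p = +40 uV/K, S_n = -15 uV/K, dT = 20 K.
v = module_voltage(10, 40e-6, -15e-6, 20.0)
print(f"open-circuit voltage: {v * 1e3:.2f} mV")  # prints 11.00 mV
```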

Keywords: carbon nanotubes, polymer composites, thermoelectric materials, self-powered gas sensor

Procedia PDF Downloads 144