Search results for: discourse processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4644

2814 Path-Spin to Spin-Spin Hybrid Quantum Entanglement: A Conversion Protocol

Authors: Indranil Bayal, Pradipta Panchadhyayee

Abstract:

Path-spin hybrid entanglement generated and confined in a single spin-1/2 particle is converted into spin-spin hybrid interparticle entanglement, which finds important applications in quantum information processing. The protocol uses a beam splitter, a spin flipper, spin measurement, a classical channel, and unitary transformations, and requires no collective operation on the pair of particles whose spin variables share complete entanglement once the protocol is complete. The specialty of the protocol lies in the fact that the path-spin entanglement is transferred between the spin degrees of freedom of two separate particles initially possessed by a single party.
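
The conversion can be illustrated abstractly with a toy state-vector model. The Python sketch below captures only the logical skeleton (a CNOT from the path qubit onto a second particle's spin, a Hadamard-basis path measurement, and a classically communicated correction), not the authors' actual protocol with beam splitters and spin flippers; all names and values are illustrative.

```python
import numpy as np

# Qubit order: |path, spin_A, spin_B>; 0 = spin-up, 1 = spin-down.
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

def apply(state, gate, targets, n=3):
    """Apply a 1- or 2-qubit gate to the chosen qubits of an n-qubit state."""
    k = len(targets)
    psi = np.moveaxis(state.reshape([2] * n), targets, range(k))
    psi = gate.reshape(2 ** k, 2 ** k) @ psi.reshape(2 ** k, -1)
    return np.moveaxis(psi.reshape([2] * n), range(k), targets).reshape(-1)

# Path-spin hybrid entanglement inside particle A, particle B's spin at |up>:
# (|0>_p |up>_A + |1>_p |down>_A)/sqrt(2)  (tensor)  |up>_B
psi = (np.kron(np.kron(ket0, ket0), ket0) +
       np.kron(np.kron(ket1, ket1), ket0)) / np.sqrt(2)

psi = apply(psi, CNOT, [0, 2])        # path qubit controls a flip of spin B
psi = apply(psi, H, [0])              # rotate path into the X basis
outcome = 1                           # suppose the path measurement yields |1>
spins = psi.reshape(2, 4)[outcome]
spins /= np.linalg.norm(spins)
if outcome == 1:                      # classically communicated correction
    spins = np.kron(np.eye(2), np.diag([1, -1])) @ spins

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print("overlap with Bell state:", abs(bell @ spins))   # -> 1.0
```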

Keywords: entanglement, path-spin entanglement, spin-spin entanglement, CNOT operation

Procedia PDF Downloads 180
2813 Detection Characteristics of the Random and Deterministic Signals in Antenna Arrays

Authors: Olesya Bolkhovskaya, Alexey Davydov, Alexander Maltsev

Abstract:

In this paper, approaches to incoherent signal detection in a multi-element antenna array are investigated and modeled. Two types of useful signals with unknown wavefronts were considered: the first is deterministic (a Barker code), the second random (Gaussian distributed). The derivation of the sufficient statistics takes into account the linearity of the antenna array. The performance characteristics and detection curves are modeled and compared for different signal parameters and for different numbers of array elements. Under some additional conditions, the results of this research can be applied to digital communication systems.
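
A rough numerical counterpart to such detection curves can be obtained by Monte Carlo simulation. The Python/numpy sketch below is an assumption-laden stand-in for the paper's sufficient statistics: a Barker-13 signal with an unknown per-element phase is detected by quadrature (noncoherent) matched filtering summed across M elements, with the threshold calibrated on noise-only trials at a fixed false-alarm rate.

```python
import numpy as np

rng = np.random.default_rng(0)
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

def statistic(snr_db, M, n_trials=5000, signal_present=True):
    """Quadrature statistic for a known code with unknown per-element phase:
    noncoherent sum of |matched-filter output|^2 over the array elements."""
    N = barker13.size
    a = 10 ** (snr_db / 20.0)                       # per-sample amplitude
    x = (rng.standard_normal((n_trials, M, N))
         + 1j * rng.standard_normal((n_trials, M, N))) / np.sqrt(2)
    if signal_present:
        phase = np.exp(2j * np.pi * rng.random((n_trials, M, 1)))
        x = x + a * phase * barker13
    mf = np.abs(x @ barker13) ** 2                  # per-element MF energy
    return mf.sum(axis=1)                           # combine across the array

def detection_prob(snr_db, M, pfa=1e-2):
    t0 = statistic(snr_db, M, signal_present=False)
    thr = np.quantile(t0, 1.0 - pfa)                # threshold from noise-only runs
    return np.mean(statistic(snr_db, M) > thr)

for M in (1, 4, 16):
    print(M, [round(detection_prob(s, M), 3) for s in (-10, -5, 0)])
```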

Keywords: antenna array, detection curves, performance characteristics, quadrature processing, signal detection

Procedia PDF Downloads 395
2812 Building Data Infrastructure for Public Use and Informed Decision Making in Developing Countries: Nigeria

Authors: Busayo Fashoto, Abdulhakeem Shaibu, Justice Agbadu, Samuel Aiyeoribe

Abstract:

Data has gone from just rows and columns to being an infrastructure itself. Traditionally, data infrastructure has been managed by individuals in different industries and saved on personal work tools such as laptops. This hinders data sharing and works against Sustainable Development Goal (SDG) 9 on infrastructure sustainability across all countries and regions. At the same time, there has been constant demand for data across different agencies and ministries by investors and decision-makers. The rapid development and adoption of open-source technologies that promote the collection and processing of data in new ways and in ever-increasing volumes are creating new data infrastructure in sectors such as lands and health, among others. This paper examines the process of developing data infrastructure and, by extension, a data portal to provide baseline data for sustainable development and decision making in Nigeria. It employs the FAIR principles (Findable, Accessible, Interoperable, and Reusable) of data management, using open-source technology tools to develop data portals for public use. eHealth Africa, an organization that uses technology to drive public health interventions in Nigeria, developed a data portal, a typical data infrastructure that serves as a repository for various datasets on administrative boundaries, points of interest, settlements, social infrastructure, amenities, and others. The portal makes it possible for users to access datasets of interest at any time at no cost. The skeletal infrastructure of this data portal is built on open-source technology such as a PostgreSQL database, GeoServer, GeoNetwork, and CKAN. These tools make the infrastructure sustainable, thus promoting the achievement of SDG 9 (Industry, Innovation, and Infrastructure). As of 6th August 2021, 8192 user accounts had been created, 2262 datasets had been downloaded, and 817 maps had been created on the platform. This paper shows how the rapid development and adoption of such technologies facilitates data collection, processing, and publishing in new ways and in ever-increasing volumes. In addition, the paper is explicit about new data infrastructure in sectors such as health, social amenities, and agriculture. Furthermore, it highlights the importance of cross-sectional data infrastructures for planning and decision making, which in turn can form a central data repository for sustainable development across developing countries.
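
As an illustration of how a CKAN-based portal of this kind is typically queried, the following Python sketch uses CKAN's standard action API; the portal URL is a hypothetical placeholder, not the actual eHealth Africa endpoint.

```python
import requests

# Hypothetical portal URL; CKAN exposes its standard "action" API under /api/3.
PORTAL = "https://data.example-ehealth.org"

def list_datasets():
    # package_list is a stock CKAN action returning all dataset identifiers.
    r = requests.get(f"{PORTAL}/api/3/action/package_list", timeout=30)
    r.raise_for_status()
    return r.json()["result"]

def dataset_metadata(name):
    # package_show returns the full metadata record for one dataset.
    r = requests.get(f"{PORTAL}/api/3/action/package_show",
                     params={"id": name}, timeout=30)
    r.raise_for_status()
    return r.json()["result"]

for name in list_datasets()[:5]:
    meta = dataset_metadata(name)
    print(name, "-", len(meta.get("resources", [])), "resource(s)")
```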

Keywords: data portal, data infrastructure, open source, sustainability

Procedia PDF Downloads 78
2811 Grid Pattern Recognition and Suppression in Computed Radiographic Images

Authors: Igor Belykh

Abstract:

Anti-scatter grids used in radiographic imaging for contrast enhancement leave specific artifacts. Those artifacts may be directly visible or may cause a Moiré effect when a digital image is resized on a diagnostic monitor. In this paper, we propose an automated grid artifact detection and suppression algorithm for what remains an open problem. Grid artifact detection is based on a statistical approach in the spatial domain. Grid artifact suppression is based on the design and application of a Kaiser bandstop filter transfer function that avoids ringing artifacts. Experimental results are discussed, and the advantages over existing approaches are described.
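
A minimal Python/SciPy sketch of the two-stage idea follows. The detector here, a dominant-peak search in the averaged row spectrum, is a simpler stand-in for the paper's statistical spatial-domain detection, and the ripple, transition-width, and band-edge values are illustrative assumptions; the Kaiser window's controlled ripple is what keeps ringing down.

```python
import numpy as np
from scipy import signal

def suppress_grid(img):
    """Detect and suppress a stationary grid frequency along image rows."""
    # Detection: dominant non-DC peak of the row spectra, averaged over rows
    # (frequencies in cycles/pixel; Nyquist is 0.5).
    spec = np.abs(np.fft.rfft(img - img.mean(), axis=1)).mean(axis=0)
    freqs = np.fft.rfftfreq(img.shape[1])
    f_grid = freqs[1:][np.argmax(spec[1:])]

    # Suppression: Kaiser-window FIR bandstop around the detected frequency.
    # kaiserord trades stopband attenuation against transition width.
    numtaps, beta = signal.kaiserord(ripple=60.0, width=0.05)
    numtaps += 1 - numtaps % 2                     # force odd length (type I FIR)
    lo, hi = max(f_grid - 0.02, 0.01), min(f_grid + 0.02, 0.49)
    taps = signal.firwin(numtaps, [lo, hi], window=("kaiser", beta),
                         pass_zero="bandstop", fs=1.0)
    # filtfilt gives zero-phase filtering, so grid lines are removed without
    # shifting image structures.
    return signal.filtfilt(taps, [1.0], img, axis=1)
```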

Keywords: grid, computed radiography, pattern recognition, image processing, filtering

Procedia PDF Downloads 262
2810 Fermentation of Tolypocladium inflatum to Produce Cyclosporin in Dairy Waste Culture Medium

Authors: Fereshteh Falah, Alireza Vasiee, Farideh Tabatabaei-Yazdi

Abstract:

In this research, we investigated the use of dairy sludge as a fermentation medium for cyclosporin production. This bioactive compound is a metabolite produced by Tolypocladium inflatum. Results showed that about 200 ppm of cyclosporin can be produced in this fermentation. To have a proper and specific function, cyclosporin A (CyA) must be free of any impurities, so purification is required. For this downstream processing, we used chromatographic extraction, followed by evaluation of the pharmacological activities of CyA. The results showed that the obtained metabolite has very high activity against Aspergillus niger (25 mm clear zone). The isolated cyclosporin is intended for use as an antibiotic. The current research shows that this drug is vital and commercially very important.

Keywords: fermentation, cyclosporin A, Tolypocladium inflatum, TLC

Procedia PDF Downloads 106
2809 The Composition and Activity of Germinated Broccoli Seeds and Their Extract

Authors: Boris Nemzer, Tania Reyes-Izquierdo, Zbigniew Pietrzkowski

Abstract:

Glucosinolates are a family of glucosides found in brassica vegetables. Upon damage to the plant, glucosinolates are broken down by the internal enzyme myrosinase (thioglucosidase; EC 3.2.3.1) into isothiocyanates, such as sulforaphane. Sulforaphane is formed when myrosinase cleaves the sugar from glucoraphanin and the product rearranges. Sulforaphane nitrile is formed in the same reaction when epithiospecifier protein (ESP) is active. Most common food processing procedures break the plant tissue and mix glucoraphanin and myrosinase together, and the sulforaphane formed is then further degraded. The purpose of this study was to characterize the glucoraphanin/sulforaphane content and myrosinase activity of broccoli seeds germinated for different times, and to identify processing conditions that preserve the enzyme activity needed to form sulforaphane. Broccoli seeds were germinated in-house. Myrosinase activity was assayed as glucose released, using a glucose assay kit and a UV-Vis spectrophotometer. Glucosinolates were measured by HPLC/DAD. Sulforaphane was measured using HPLC-DAD and GC/MS. The 6-hour germinated sprouts showed a myrosinase activity of 32.2 mg glucose/g, comparable with that of 12- and 24-hour germinated seeds and higher than that of dry seeds. The glucoraphanin content of 6-hour germinated sprouts was 13,935 µg/g, comparable to that of 24-hour germinated seeds and lower than that of dry seeds. GC/MS results show that the amount of sulforaphane is higher than the amount of sulforaphane nitrile in dry seeds and in 6-hour and 24-hour germinated seeds. The ratio of sulforaphane to sulforaphane nitrile is high in 6-hour germinated seeds, indicating that ESP was inactive in the reaction. Based on these results, short-time germinated seeds can be used as a source of glucoraphanin and myrosinase to yield potentially higher sulforaphane contents. Broccoli contains the glucosinolate glucoraphanin (4-methylsulfinylbutyl glucosinolate), an important metabolite with health-promoting effects. In a pilot clinical study, we observed the effects of a glucosinolate/glucoraphanin-rich extract from short-time germinated broccoli seeds on blood adenosine triphosphate (ATP), reactive oxygen species (ROS), and lactate levels. A single 50 mg dose of broccoli sprout extract increased blood levels of ATP by up to 61% (p=0.0092) during the first 2 hours after ingestion. Interestingly, this effect was not associated with an increase in blood ROS or lactate. When compared to the placebo group, levels of lactate were reduced by 10% (p=0.006). These results indicate that germinated broccoli seed extract may positively affect the generation of ATP in humans. Given the preliminary nature of this work and the promising results, larger clinical trials are justified.

Keywords: broccoli glucosinolates, glucoraphanin, germinated seeds, myrosinase, adenosine triphosphate

Procedia PDF Downloads 277
2808 Structured-Ness and Contextual Retrieval Underlie Language Comprehension

Authors: Yao-Ying Lai, Maria Pinango, Ashwini Deo

Abstract:

While grammatical devices are essential to language processing, how comprehension utilizes cognitive mechanisms is less emphasized. This study addresses this issue by probing the complement coercion phenomenon: an entity-denoting complement following verbs like begin and finish receives an eventive interpretation. For example, (1) “The queen began the book” receives an agentive reading like (2) “The queen began [reading/writing/etc.] the book.” Such sentences engender additional processing cost in real-time comprehension. The traditional account attributes this cost to an operation that coerces the entity-denoting complement to an event, assuming that these verbs require eventive complements. On closer examination, however, examples like “Chapter 1 began the book” undermine this assumption. An alternative, the Structured Individual (SI) hypothesis, proposes that the complement following aspectual verbs (AspV; e.g., begin, finish) is conceptualized as a structured individual, construed as an axis along various dimensions (e.g., spatial, eventive, temporal, informational). The composition of an animate subject and an AspV such as (1) engenders an ambiguity between an agentive reading along the eventive dimension like (2), and a constitutive reading along the informational/spatial dimension like (3) “[The story of the queen] began the book,” in which the subject is interpreted as a subpart of the complement denotation. Comprehenders need to resolve the ambiguity by searching contextual information, resulting in additional cost. To evaluate the SI hypothesis, a questionnaire was employed. Method: Target AspV sentences such as “Shakespeare began the volume.” were preceded by one of the following types of context sentence: (A) Agentive-biasing, in which an event was mentioned (…writers often read…), (C) Constitutive-biasing, in which a constitutive meaning was hinted (Larry owns collections of Renaissance literature.), (N) Neutral context, which allowed both interpretations. Thirty-nine native speakers of English were asked to (i) rate each context-target sentence pair on a 1-5 scale (5 = fully understandable), and (ii) choose possible interpretations for the target sentence given the context. The SI hypothesis predicts that comprehension is harder for the Neutral condition than for the biasing conditions, because no contextual information is provided to resolve the ambiguity. Also, comprehenders should obtain the specific interpretation corresponding to the context type. Results: The (A) Agentive-biasing and (C) Constitutive-biasing conditions were rated higher than the (N) Neutral condition (p < .001), while all conditions were within the acceptable range (> 3.5 on the 1-5 scale). This suggests that when relevant contextual information is lacking, semantic ambiguity decreases comprehensibility. The interpretation task shows that the participants selected the biased agentive/constitutive reading for conditions (A) and (C), respectively. For the Neutral condition, the agentive and constitutive readings were chosen equally often. Conclusion: These findings support the SI hypothesis: the meaning of AspV sentences is conceptualized as a parthood relation involving structured individuals. We argue that semantic representation makes reference to spatial structured-ness (an abstracted axis). To obtain an appropriate interpretation, comprehenders utilize contextual information to enrich the conceptual representation of the sentence in question. This study connects semantic structure to human conceptual structure, and provides a processing model that incorporates contextual retrieval.

Keywords: ambiguity resolution, contextual retrieval, spatial structured-ness, structured individual

Procedia PDF Downloads 315
2807 Comparison of Processing Conditions for Plasticized PVC and PVB

Authors: Michael Tupý, Jaroslav Císař, Pavel Mokrejš, Dagmar Měřínská, Alice Tesaříková-Svobodová

Abstract:

A worldwide problem is that recycled PVB is widely consigned to landfills. However, PVB has chemical properties very similar to those of PVC, and both materials are used in plasticized form. Thus, the thermal properties of plasticized PVC obtained from primary production are compared with those of PVB obtained by recycling windshields. This is carried out in order to find the degradation conditions and to decide whether PVB/PVC blends can be processed together. The tested PVC contained 38% of the plasticizer diisononyl phthalate (DINP), and the PVB was plasticized with 28% triethylene glycol bis(2-ethylhexanoate) (3GO). The thermal and thermo-oxidative decomposition of the two vinyl polymers is compared using DSC and OOT analyses, supplemented by tensile strength analysis.

Keywords: polyvinyl chloride, polyvinyl butyral, recycling, reprocessing, thermal analysis, decomposition

Procedia PDF Downloads 494
2806 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method

Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek

Abstract:

Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., Kolmogorov’s scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions for approximating the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases on the order of billions of solution points. Running big simulations requires a considerable amount of RAM. Therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information of the domain for communication, must be done sequentially on a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It stores the minimum amount of information required for the DSEM code to start in parallel, extracted from the mesh file, into text files (pre-files). It packs integer-type information in a stream binary format into pre-files that are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O, for Lustre, in a way that each MPI rank acquires its information from the file in parallel. In the case of GPFS, on each computational node, a single MPI rank reads data from the file, which is specifically generated for that computational node, and sends it to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node, and signals do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory’s Mira (GPFS), the National Center for Supercomputing Applications’ Blue Waters (Lustre), the San Diego Supercomputer Center’s Comet (Lustre), and UIC’s Extreme (Lustre). The tests showed that one file per node is suited to GPFS, and parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations such as matrix-matrix and matrix-vector products for calculating the solution at every time step. For this, the code can use either its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact and the discontinuous nature of the method make the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed a scalable and efficient performance of the code in parallel computing.
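
The Lustre-style startup read maps naturally onto collective MPI-IO. The mpi4py sketch below assumes a pre-file of packed 32-bit integers named startup.pre (a made-up name); each rank computes its byte offset and issues one collective read. The GPFS path described above would instead have a single rank per node read its node file and redistribute it with non-blocking point-to-point messages.

```python
import numpy as np
from mpi4py import MPI

# Run with, e.g.: mpiexec -n 8 python read_pre.py
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# All ranks open the same pre-file; each reads its own contiguous slice.
fh = MPI.File.Open(comm, "startup.pre", MPI.MODE_RDONLY)
n_total = fh.Get_size() // 4                  # file holds 32-bit integers
counts = [n_total // size + (r < n_total % size) for r in range(size)]
offset = 4 * sum(counts[:rank])               # byte offset of this rank's slice

buf = np.empty(counts[rank], dtype=np.int32)
fh.Read_at_all(offset, buf)                   # collective, non-overlapping reads
fh.Close()
print(f"rank {rank} read {buf.size} integers")
```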

Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow

Procedia PDF Downloads 122
2805 Implementation of Iterative Algorithm for Earthquake Location

Authors: Hussain K. Chaiel

Abstract:

Developments in the field of digital signal processing (DSP) and microelectronics technology reduce the complexity of iterative algorithms that need large numbers of arithmetic operations. Virtex Field Programmable Gate Arrays (FPGAs) are programmable silicon devices that offer an important solution for addressing the needs of high-performance DSP designers. In this work, Virtex-7 FPGA technology is used to implement an iterative algorithm to estimate the earthquake location. Simulation results show that an implementation based on the block RAMB36E1 and DSP48E1 slices of the Virtex-7 reduces the number of clock cycles required. This makes the algorithm fast enough for near-real-time earthquake location.
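
The abstract does not spell out the iteration, so as a plausible floating-point reference model, here is a Geiger-style Gauss-Newton location loop in Python/numpy (homogeneous velocity, epicentre plus origin time). It is the kind of multiply-accumulate-dominated kernel that maps onto DSP48E1 slices, but it should not be read as the authors' exact algorithm; station geometry, velocity, and all values are invented.

```python
import numpy as np

def locate(stations, t_obs, v=6.0, n_iter=10):
    """Gauss-Newton iteration for epicentre (x, y) and origin time t0 in a
    homogeneous medium with P velocity v (km/s), from arrival times t_obs."""
    m = np.array([stations[:, 0].mean(), stations[:, 1].mean(), 0.0])
    for _ in range(n_iter):
        d = np.linalg.norm(stations - m[:2], axis=1)   # epicentral distances
        r = t_obs - (m[2] + d / v)                     # travel-time residuals
        # Jacobian of predicted arrival time w.r.t. (x, y, t0)
        J = np.column_stack([-(stations - m[:2]) / (v * d[:, None]),
                             np.ones(len(t_obs))])
        m += np.linalg.lstsq(J, r, rcond=None)[0]      # Gauss-Newton update
    return m

rng = np.random.default_rng(1)
true_xy, t0, v = np.array([12.0, -3.0]), 0.5, 6.0
stations = rng.uniform(-50, 50, size=(8, 2))
t_obs = t0 + np.linalg.norm(stations - true_xy, axis=1) / v
print(locate(stations, t_obs, v))                      # ~ [12.0, -3.0, 0.5]
```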

Keywords: DSP, earthquake, FPGA, iterative algorithm

Procedia PDF Downloads 372
2804 Oleic Acid Enhances Hippocampal Synaptic Efficacy

Authors: Rema Vazhappilly, Tapas Das

Abstract:

Oleic acid is a cis-unsaturated fatty acid and is known to be a partially essential fatty acid due to its limited endogenous synthesis during pregnancy and lactation. Previous studies have demonstrated the role of oleic acid in neuronal differentiation and brain phospholipid synthesis. This evidence indicates a major role for oleic acid in learning and memory. Interestingly, oleic acid has been shown to enhance hippocampal long-term potentiation (LTP), the physiological correlate of long-term synaptic plasticity. However, the effect of oleic acid on short-term synaptic plasticity has not been investigated. Short-term potentiation (STP) is the physiological correlate of short-term synaptic plasticity, which is the key molecular mechanism underlying short-term memory and neuronal information processing. STP in the hippocampal CA1 region is known to require the activation of N-methyl-D-aspartate receptors (NMDARs). NMDAR-dependent hippocampal STP as a potential mechanism for short-term memory has been a subject of intense interest in the past few years. Therefore, in the present study, the effect of oleic acid on NMDAR-dependent hippocampal STP was determined in mouse hippocampal slices (in vitro) using a multi-electrode array system. STP was induced by weak tetanic stimulation (one train of 100 Hz stimulation for 0.1 s) of the Schaffer collaterals of the CA1 region of the hippocampus in slices treated with different concentrations of oleic acid in the presence or absence of the NMDAR antagonist D-AP5 (30 µM). Oleic acid at 20 µM (mean increase in fEPSP amplitude = ~135% vs. control = 100%; P<0.001) and 30 µM (mean increase in fEPSP amplitude = ~280% vs. control = 100%; P<0.001) significantly enhanced the STP following weak tetanic stimulation. A lower oleic acid concentration of 10 µM did not modify the hippocampal STP induced by weak tetanic stimulation. The hippocampal STP induced by weak tetanic stimulation was completely blocked by D-AP5 (30 µM) in both oleic acid-treated and control hippocampal slices, leading to the conclusion that the STP elicited by weak tetanic stimulation and enhanced by oleic acid was NMDAR-dependent. Together, these findings suggest that oleic acid may enhance short-term memory and neuronal information processing through the modulation of NMDAR-dependent hippocampal short-term synaptic plasticity. In conclusion, this study suggests a possible role for oleic acid in preventing short-term memory loss and impaired neuronal function during development.

Keywords: oleic acid, short-term potentiation, memory, field excitatory post synaptic potentials, NMDA receptor

Procedia PDF Downloads 319
2803 Agenesis of the Corpus Callosum: The Role of Neuropsychological Assessment with Implications to Psychosocial Rehabilitation

Authors: Ron Dick, P. S. D. V. Prasadarao, Glenn Coltman

Abstract:

Agenesis of the corpus callosum (ACC) is a failure to develop the corpus callosum, the large bundle of fibers that connects the two cerebral hemispheres. It can occur as a partial or complete absence of the corpus callosum. Its estimated prevalence in the general population is 1 in 4000, and a wide range of genetic, infectious, vascular, and toxic causes have been attributed to this heterogeneous condition. The diagnosis of ACC is usually achieved by neuroimaging procedures. Though persons with ACC can perform normally on intelligence tests, they generally present with a range of neuropsychological and social deficits. The deficit profile is characterized by poor coordination of motor movements, slow reaction time and processing speed, and poor memory. Socially, they present with deficits in communication, language processing, theory of mind, and interpersonal relationships. The present paper illustrates the role of neuropsychological assessment, with implications for psychosocial management, in a case of agenesis of the corpus callosum. Method: A 27-year-old left-handed Caucasian male with a history of ACC was self-referred for a neuropsychological assessment to assist him with his employment options. His parents had noted significant difficulties with coordination and balance at the early age of 2-3 years, and he was diagnosed with dyspraxia at the age of 14 years. History also indicated visual impairment, hypotonia, poor muscle coordination, and delayed development of motor milestones. An MRI scan indicated agenesis of the corpus callosum with altered ventricular morphology, widely spaced parallel lateral ventricles, and mild dilatation of the posterior horns; it also showed colpocephaly, a disproportionate enlargement of the occipital horns of the lateral ventricles, which might be affecting his motor abilities and underlie his visual defects. The MRI scan ruled out other structural abnormalities or neonatal brain injury. At the time of assessment, the subject presented with problems such as poor coordination, slowed processing speed, poor organizational skills and time management, and difficulty with social cues and facial expressions. A comprehensive neuropsychological assessment was planned and conducted to identify his current neuropsychological profile and to facilitate the formulation of a psychosocial and occupational rehabilitation programme. Results: General intellectual functioning was within the average range, and his performance on memory-related tasks was adequate. Significant visuospatial and visuoconstructional deficits were evident across tests; constructional difficulties were seen in tasks such as copying a complex figure, building a tower, and manipulating blocks. Poor visual scanning ability and visual motor speed were evident. Socially, the subject reported heightened social anxiety, difficulty responding to cues in the social environment, and difficulty developing intimate relationships. Conclusion: Persons with ACC are known to present with specific cognitive deficits and problems in social situations. Findings from the current neuropsychological assessment indicated significant visuospatial difficulties, poor visual scanning, and problems in social interaction, with general intellectual functioning within the average range. Based on these findings, a structured psychosocial rehabilitation programme was developed and recommended.

Keywords: agenesis, callosum, corpus, neuropsychology, psychosocial, rehabilitation

Procedia PDF Downloads 266
2802 Regulatory and Economic Challenges of AI Integration in Cyber Insurance

Authors: Shreyas Kumar, Mili Shangari

Abstract:

Integrating artificial intelligence (AI) in the cyber insurance sector represents a significant advancement, offering the potential to revolutionize risk assessment, fraud detection, and claims processing. However, this integration introduces a range of regulatory and economic challenges that must be addressed to ensure responsible and effective deployment of AI technologies. This paper examines the multifaceted regulatory landscape governing AI in cyber insurance and explores the economic implications of compliance, innovation, and market dynamics. AI's capabilities in processing vast amounts of data and identifying patterns make it an invaluable tool for insurers in managing cyber risks. Yet, the application of AI in this domain is subject to stringent regulatory scrutiny aimed at safeguarding data privacy, ensuring algorithmic transparency, and preventing biases. Regulatory bodies, such as the European Union with its General Data Protection Regulation (GDPR), mandate strict compliance requirements that can significantly impact the deployment of AI systems. These regulations necessitate robust data protection measures, ethical AI practices, and clear accountability frameworks, all of which entail substantial compliance costs for insurers. The economic implications of these regulatory requirements are profound. Insurers must invest heavily in upgrading their IT infrastructure, implementing robust data governance frameworks, and training personnel to handle AI systems ethically and effectively. These investments, while essential for regulatory compliance, can strain financial resources, particularly for smaller insurers, potentially leading to market consolidation. Furthermore, the cost of regulatory compliance can translate into higher premiums for policyholders, affecting the overall affordability and accessibility of cyber insurance. Despite these challenges, the potential economic benefits of AI integration in cyber insurance are significant. AI-enhanced risk assessment models can provide more accurate pricing, reduce the incidence of fraudulent claims, and expedite claims processing, leading to overall cost savings and increased efficiency. These efficiencies can improve the competitiveness of insurers and drive innovation in product offerings. However, balancing these benefits with regulatory compliance is crucial to avoid legal penalties and reputational damage. The paper also explores the potential risks associated with AI integration, such as algorithmic biases that could lead to unfair discrimination in policy underwriting and claims adjudication. Regulatory frameworks need to evolve to address these issues, promoting fairness and transparency in AI applications. Policymakers play a critical role in creating a balanced regulatory environment that fosters innovation while protecting consumer rights and ensuring market stability. In conclusion, the integration of AI in cyber insurance presents both regulatory and economic challenges that require a coordinated approach involving regulators, insurers, and other stakeholders. By navigating these challenges effectively, the industry can harness the transformative potential of AI, driving advancements in risk management and enhancing the resilience of the cyber insurance market. This paper provides insights and recommendations for policymakers and industry leaders to achieve a balanced and sustainable integration of AI technologies in cyber insurance.

Keywords: artificial intelligence (AI), cyber insurance, regulatory compliance, economic impact, risk assessment, fraud detection, cyber liability insurance, risk management, ransomware

Procedia PDF Downloads 14
2801 Differential Approach to Technology Aided English Language Teaching: A Case Study in a Multilingual Setting

Authors: Sweta Sinha

Abstract:

The rapid evolution of technology has changed language pedagogy as well as perspectives on language use, leading to strategic changes in discourse studies. We are now firmly embedded in a time when digital technologies have become an integral part of our daily lives. This has led to generalized approaches to English Language Teaching (ELT), which raises two concerns in linguistically diverse settings: a) the diverse linguistic backgrounds of the learners might interfere with the learning process, and b) differential levels of already acquired knowledge of the target language might make classroom practices too easy or too difficult for the target group of learners. ELT needs a more systematic and differential pedagogical approach for greater efficiency and accuracy. The present research analyses the need to identify learner groups at different levels of target language proficiency, based on a longitudinal study of 150 undergraduate students. The learners were divided into five groups based on their performance on a twenty-point scale in Listening, Speaking, Reading, and Writing (LSRW). The groups were then subjected to varying durations of technology-aided language learning sessions, and their performance was recorded again on the same scale. Identifying groups and introducing differential teaching and learning strategies led to better results compared to generalized teaching strategies. Language teaching has several aspects: organizational, technological, sociological, psychological, pedagogical, and linguistic, and a facilitator must account for all of these in a carefully devised differential approach that meets the challenge of learner diversity. Apart from justifying the formation of differential groups, the paper attempts to devise a framework that accounts for all these aspects in order to make ELT in multilingual settings much more effective.

Keywords: differential groups, English language teaching, language pedagogy, multilingualism, technology aided language learning

Procedia PDF Downloads 378
2800 Incidence of Fungal Infections and Mycotoxicosis in Pork Meat and Pork By-Products in Egyptian Markets

Authors: Ashraf Samir Hakim, Randa Mohamed Alarousy

Abstract:

The consumption of food contaminated with molds (microscopic filamentous fungi) and their toxic metabolites results in the development of food-borne mycotoxicosis. The spores of molds are ubiquitous in the environment and can be detected everywhere. Ochratoxin A, a potentially carcinogenic fungal toxin found in a variety of food commodities, is not only the most abundant and hence most commonly detected of the ochratoxins, but also the most toxic. Very limited research concerning foods of porcine origin has been done in Egypt, despite the presence of a considerable swine population and consumer base. In this study, the quality of various ready-to-eat local and imported pork meat and meat byproducts sold in Egyptian markets, as well as edible organs such as liver and kidney, was assessed for the presence of various molds and their toxins in the raw material. Mycological analysis was conducted on 110 samples: pig livers (n=10) and kidneys (n=10) from the Basateen slaughterhouse, and 70 local and 20 imported processed pork meat byproducts. The isolates were identified using traditional mycological and biochemical tests, while ochratoxin A levels were quantitatively analyzed using high-performance liquid chromatography (HPLC). Conventional mycological tests for the presence of fungal growth (yeasts or molds) were negative, while ochratoxin A concentrations in local pork and pork byproducts were greatly above the permissible limits, the "tolerable weekly intake" (TWI) established by EFSA in 2006; the imported samples showed only a slight elevation. Since ochratoxin A is stable and generally resistant to heat and processing, control of ochratoxin A contamination lies in controlling the growth of the toxin-producing fungi. Effective prevention of ochratoxin A contamination therefore depends on good farming and agricultural practices. Good Agricultural Practices (GAP), including methods to reduce fungal infection and growth during harvest, storage, transport, and processing, provide the primary line of defense against contamination with ochratoxin A. To the best of our knowledge, this is the first report of mycological assessment, especially of mycotoxins, in pork byproducts in Egypt.

Keywords: Egyptian markets, mycotoxicosis, ochratoxin A, pork meat, pork by-products

Procedia PDF Downloads 451
2799 Denoising of Magnetotelluric Signals by Filtering

Authors: Rodrigo Montufar-Chaveznava, Fernando Brambila-Paz, Ivette Caldelas

Abstract:

In this paper, we present advances in the denoising of magnetotelluric signals using several filters. In particular, we use the most common spatial-domain filters, median and mean, as well as the Fourier and wavelet transforms for frequency-domain filtering. We employ three datasets obtained at different sampling rates (128, 4096, and 8192 bps) and evaluate the mean square error, signal-to-noise ratio, and peak signal-to-noise ratio to compare the kernels and determine the most suitable one for each case. The magnetotelluric signals come from earth exploration surveys for groundwater. The objective is to find a denoising strategy different from the one built into the commercial equipment employed in this task.
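
As an illustration of the frequency-domain branch of the comparison, the Python sketch below applies soft-threshold wavelet denoising with PyWavelets and scores it with a signal-to-noise ratio; the db4 wavelet, decomposition level, and universal threshold are common defaults assumed here, not parameters taken from the paper.

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Noise scale estimated from the finest detail band (robust MAD),
    # then the universal threshold sqrt(2 log N) * sigma.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(x.size))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: x.size]

def snr_db(reference, estimate):
    noise = reference - estimate
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

t = np.linspace(0, 1, 4096)
clean = np.sin(2 * np.pi * 7 * t) + 0.5 * np.sin(2 * np.pi * 31 * t)
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(t.size)
print(f"{snr_db(clean, noisy):.1f} dB -> "
      f"{snr_db(clean, wavelet_denoise(noisy)):.1f} dB")
```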

Keywords: denoising, filtering, magnetotelluric signals, wavelet transform

Procedia PDF Downloads 349
2798 Moving beyond the Social Model of Disability by Engaging in Anti-Oppressive Social Work Practice

Authors: Irene Carter, Roy Hanes, Judy MacDonald

Abstract:

Considering that disability is universal and people with disabilities are part of all societies; that there is a connection between the disabled individual and the societal; and that it is society and social arrangements that disable people with impairments, contemporary disability discourse emphasizes the social model of disability to counter medical and rehabilitative models of disability. However, the social model does not go far enough in addressing the issues of oppression and inclusion. The authors indicate that the social model does not specifically or adequately denote the oppression of persons with disabilities, which is a central component of progressive social work practice with people with disabilities. The social model of disability does not go far enough in deconstructing disability and offering social workers, as well as people with disabilities, a way of moving forward in terms of practice anchored in individual, familial, and societal change. The social model of disability is therefore expanded by incorporating principles of anti-oppressive social work practice. Although the contextual analysis of the social model of disability is an important component, there remains a need for social workers to provide service to individuals and their families, which will be illustrated through anti-oppressive practice (AOP). By applying an anti-oppressive model of practice to the above definitions, the authors not only deconstruct disability paradigms but illustrate how AOP offers a framework for social workers to engage with people with disabilities at the individual, familial, and community levels of practice, promoting an emancipatory focus in working with people with disabilities. An anti-oppressive social work model of disability connects the day-to-day hardships of people with disabilities to the direct consequences of oppression in the form of ableism. AOP theory finds many of its basic concepts within social-oppression theory and the social model of disability. It is often the case that practitioners, including social workers and psychologists, define people with disabilities as having or being a problem, with the focus placed upon adjustment and coping. A case example is used to illustrate how an AOP paradigm offers social work a more comprehensive and critical analysis and practice model for social work practice with and for people with disabilities than the traditional medical, rehabilitative, and social model approaches.

Keywords: anti-oppressive practice, disability, people with disabilities, social model of disability

Procedia PDF Downloads 1039
2797 The Importance of Artificial Intelligence in Various Healthcare Applications

Authors: Joshna Rani S., Ahmadi Banu

Abstract:

Artificial Intelligence (AI) has a significant role to play in the healthcare offerings of the future. In the form of machine learning, it is the primary capability behind the development of precision medicine, widely agreed to be a sorely needed advance in care. Although early efforts at providing diagnosis and treatment recommendations have proven challenging, we expect that AI will ultimately master that domain as well. Given the rapid advances in AI for imaging analysis, it seems likely that most radiology and pathology images will eventually be examined by a machine. Speech and text recognition are already employed for tasks like patient communication and the capture of clinical notes, and their usage will increase. The greatest challenge to AI in these healthcare domains is not whether the technologies will be capable enough to be useful, but rather ensuring their adoption in daily clinical practice. For widespread adoption to take place, AI systems must be approved by regulators, integrated with EHR systems, standardized to a sufficient degree that similar products work similarly, taught to clinicians, paid for by public or private payer organizations, and updated over time in the field. These challenges will ultimately be overcome, but doing so will take much longer than it will take for the technologies themselves to mature. As a result, we expect to see limited use of AI in clinical practice within 5 years, and more extensive use within 10 years. It also seems increasingly clear that AI systems will not replace human clinicians on a large scale, but rather will augment their efforts to care for patients. Over time, human clinicians may move toward tasks and work designs that draw on uniquely human skills like empathy, persuasion, and big-picture integration. Perhaps the only healthcare providers who will risk their careers over time will be those who refuse to work alongside AI.

Keywords: artificial intelligence, health care, breast cancer, AI applications

Procedia PDF Downloads 165
2796 How Autonomous Vehicles Transform Urban Policies and Cities

Authors: Adrián P. Gómez Mañas

Abstract:

Autonomous vehicles have already transformed urban policies and cities. This is the main assumption of our research, which aims to understand how representations of the possible arrival of autonomous vehicles are already transforming priorities and actions in transport and, more broadly, in urban policies. This research is done within the framework of a Ph.D. thesis directed by Professor Xavier Desjardins at Sorbonne University in Paris. Our hypotheses are: (i) the perspectives, representations, and imaginaries surrounding autonomous vehicles already affect the stakeholders of urban policies; (ii) the discourses on the opportunities or threats of autonomous vehicles reflect the current strategies of the stakeholders; each stakeholder tries to adopt a discourse on autonomous vehicles that requires as little change as possible to their current tactics and strategies. The objective is to eventually compare three different cases: Paris, the United Arab Emirates, and Bogota. We chose these territories because their contexts are very different, but they all have important interests in mobility and innovation, and they have all started to reflect on the subject of self-driving mobility. The main methodology is to interview actors in the metropolitan areas (local officials, leading urban and transport planners, influential experts, and private companies). This work is supplemented with conferences, official documents, press articles, and websites. The objective is to understand: 1) what they know about autonomous vehicles and where their knowledge comes from; 2) what they expect from autonomous vehicles; 3) how their ideas about autonomous vehicles are transforming their actions and strategies in managing daily mobility, investing in transport, designing public spaces, and urban planning. We present the research and some preliminary results; we show that autonomous vehicles are often viewed by public authorities as a lever to reach something else. We also show that the discourses are strongly influenced by the local context (political, geographical, economic, etc.), creating an interesting balance between global and local influences. We analyze the differences and similarities between the three cases and try to understand their causes.

Keywords: autonomous vehicles, self-driving mobility, urban planning, urban mobility, transport, public policies

Procedia PDF Downloads 176
2795 Central African Republic Government Recruitment Agency Based on Identity Management and Public Key Encryption

Authors: Koyangbo Guere Monguia Michel Alex Emmanuel

Abstract:

In e-government, and especially in recruitment, much research has been conducted to build trustworthy and reliable online application systems capable of processing user or job applicant files. In this research (Government Recruitment Agency), cloud computing, identity management, and public key encryption have been used to manage domains, to provide an access control and authorization mechanism, and to secure data exchange between entities, giving a reliable procedure for processing files.
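
A minimal sketch of the public-key piece, using Python's cryptography package, is shown below; the key size, OAEP parameters, and payload are illustrative assumptions rather than the system's actual configuration. In practice, an applicant file of arbitrary size would be encrypted with a symmetric key that is itself wrapped with RSA-OAEP as shown, since RSA-2048 with OAEP can encrypt only about 190 bytes directly.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Agency key pair; applicants encrypt submissions with the public key so
# only the agency's private key can recover them.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Hypothetical short payload standing in for (a digest of) an applicant file.
document = b"applicant-1042: degree certificate hash 9f3a..."
ciphertext = public_key.encrypt(document, oaep)
assert private_key.decrypt(ciphertext, oaep) == document
```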

Keywords: cloud computing network, identity management systems, public key encryption, access control and authorization

Procedia PDF Downloads 343
2794 Design and Implementation of an Image Based System to Enhance the Security of ATM

Authors: Seyed Nima Tayarani Bathaie

Abstract:

In this paper, an image-acquisition system was designed and implemented through the optimization of object detection algorithms using Haar features. The optimized algorithm performs face detection and eye detection separately; cascading the two yields a clear image of the user. This feature brings higher security by preventing fraud: services are given to the user only on condition that a clear image of his or her face has already been captured, which excludes inappropriate persons. In order to expedite processing and eliminate unnecessary computation, the input image is compressed, a motion detection function is included in the program, and the detection window size is confined.
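
The face-then-eye cascade can be sketched with OpenCV's stock Haar classifiers, as below. The paper optimizes its own detectors and adds compression, motion gating, and window confinement, so this is only a baseline of the cascading idea, with illustrative parameter values.

```python
import cv2

# Stock Haar cascades shipped with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def clear_face_visible(frame):
    """Accept the frame only if a face containing two detected eyes is
    found: the gating condition before ATM service would be granted."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5, minSize=(80, 80))
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]      # confine the eye search window
        eyes = eye_cascade.detectMultiScale(roi, minNeighbors=5)
        if len(eyes) >= 2:
            return True
    return False
```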

Keywords: face detection algorithm, Haar features, security of ATM

Procedia PDF Downloads 399
2793 Grey Prediction of Atmospheric Pollutants in Shanghai Based on GM(1,1) Model Group

Authors: Diqin Qi, Jiaming Li, Siman Li

Abstract:

Based on the use of the three-point smoothing method to selectively pre-process the original data series, this paper establishes a group of grey GM(1,1) models to predict the concentration ranges of four major air pollutants in Shanghai from 2023 to 2024. The results indicate that PM₁₀, SO₂, and NO₂ remain within the national Grade I standards, while the concentration of PM₂.₅ has decreased but still sits within the national Grade II standards. Based on the forecast results, recommendations are provided for the Shanghai municipal government's air pollution prevention and control efforts.
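
For readers unfamiliar with the machinery, here is a compact Python sketch of one GM(1,1) group member with three-point pre-smoothing. The (1/4, 1/2, 1/4) interior weights are the usual smoothing scheme assumed here, and the pollutant series is invented for illustration, not the Shanghai data.

```python
import numpy as np

def smooth3(x):
    """Three-point smoothing of the raw series before model fitting."""
    y = x.astype(float).copy()
    y[1:-1] = 0.25 * x[:-2] + 0.5 * x[1:-1] + 0.25 * x[2:]
    y[0], y[-1] = 0.75 * x[0] + 0.25 * x[1], 0.25 * x[-2] + 0.75 * x[-1]
    return y

def gm11_forecast(x0, steps=2):
    """GM(1,1): accumulate (AGO), fit dx1/dt + a*x1 = b by least squares,
    solve the whitened equation, then difference back (IAGO)."""
    x1 = np.cumsum(x0)
    z = 0.5 * (x1[1:] + x1[:-1])                      # background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat)[-steps:]                   # out-of-sample forecasts

pm25 = np.array([48.0, 42.0, 39.0, 36.0, 32.0, 30.0, 27.0])  # illustrative
print(gm11_forecast(smooth3(pm25), steps=2))
```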

Keywords: atmospheric pollutant prediction, grey GM(1,1) model group, three-point smoothing method

Procedia PDF Downloads 23
2792 Effect of High-Pressure and Thermal Treatments on Quality Markers of Strawberry Nectars

Authors: Karen Louise Lacey, Dario Javier Pavon Vargas, Massimiliano Rinaldi, Luca Cattani, Sara Rainieri

Abstract:

The effects of high-pressure processing (HPP) and thermal treatment (TT) on quality markers of strawberry nectar (12 °Brix, pH 3.3) were studied before and after treatment. Both TT and HPP ensured a 3-log inactivation of aerobic bacteria. No significant difference was detected in terms of pH or °Brix. TT samples were less red (less positive a*) than all HPP-treated samples, while all treated samples were less red than the control. Apparent viscosity was significantly increased by all the HPP treatments: at a shear rate of 10 1/s, the control measured 79.04±7.94 mPa·s, while the 600 MPa / 20 min treatment measured 327.10±1.64 mPa·s. This work suggests that HPP treatments may preserve the quality markers of strawberry nectar better than thermal treatment.

Keywords: HPP, strawberry nectar, colour, viscosity

Procedia PDF Downloads 110
2791 Consumer Load Profile Determination with Entropy-Based K-Means Algorithm

Authors: Ioannis P. Panapakidis, Marios N. Moschakis

Abstract:

With the continuous increase of smart meter installations across the globe, the need to process the resulting load data is evident. Clustering-based load profiling builds on unsupervised machine learning tools to formulate the typical load curves, or load profiles. The most commonly used algorithm in the load profiling literature is K-means. While the algorithm has been successfully tested in a variety of applications, its drawback is its strong dependence on the initialization phase. This paper proposes a novel modified form of K-means that addresses this problem. Simulation results indicate the superiority of the proposed algorithm compared to the standard K-means.
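
The initialization sensitivity, and one standard way of countering it, can be shown in a few lines of numpy. The sketch below contrasts purely random seeding with k-means++-style distance-weighted seeding; note that this is a generic remedy used for illustration, not the entropy-based scheme the paper proposes.

```python
import numpy as np

def kmeans_sse(X, k, init, n_iter=50):
    """Plain Lloyd iterations; returns the within-cluster sum of squares."""
    C = init.astype(float)
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[labels == j].mean(0) if np.any(labels == j) else C[j]
                      for j in range(k)])
    labels = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
    return ((X - C[labels]) ** 2).sum()

def pp_seed(X, k, rng):
    """k-means++-style seeding: each new centroid is sampled with probability
    proportional to squared distance from the centroids chosen so far."""
    C = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min(((X[:, None] - np.array(C)[None]) ** 2).sum(-1), axis=1)
        C.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(C)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, (200, 2)) for m in ((0, 0), (3, 0), (0, 3))])
worst_random = max(kmeans_sse(X, 3, X[rng.choice(len(X), 3)])
                   for _ in range(20))
print(f"worst random-init SSE: {worst_random:.1f}, "
      f"seeded SSE: {kmeans_sse(X, 3, pp_seed(X, 3, rng)):.1f}")
```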

Keywords: clustering, load profiling, load modeling, machine learning, energy efficiency and quality

Procedia PDF Downloads 144
2790 Reasons for Food Losses and Waste in Basic Production of Meat Sector in Poland

Authors: Sylwia Laba, Robert Laba, Krystian Szczepanski, Mikolaj Niedek, Anna Kaminska-Dworznicka

Abstract:

Meat and meat products are considered the food products with the most unfavorable effect on the environment, which requires rational management of these products and of the waste originating throughout the whole chain of manufacture, processing, transport, and trade of meat. From the economic and environmental viewpoints, it is important to limit food losses and waste across the whole meat sector. The basic production link covers obtaining raw meat, i.e., animal breeding, management, and transport of animals to the slaughterhouse. Food is any substance or product intended for human consumption. For the needs of the present studies, it was determined when the raw material is considered food: the moment when the animals are prepared for loading, with the aim of being transported to a slaughterhouse and utilized for food purposes. The aim of the studies was to determine the reasons for loss generation in the basic production of the meat sector in Poland during the years 2017-2018. The studies on food losses and waste in basic production covered two areas: red meat (pork and beef) and poultry meat. The studies were conducted in the period of March-May 2019 across the whole country on a representative sample of 278 farms: 102 in pork production, 55 in beef production, and 121 in poultry meat production. The surveys were carried out using questionnaires by the PAPI (Paper & Pen Personal Interview) method; the pollsters conducted direct questionnaire interviews. The results indicate that in 33% of the visited farms, no losses were recorded during the preparation, loading, and transport of the animals to the slaughterhouse. On the farms where losses were reported, crushing and suffocation occurring during the production of pigs, beef cattle, and poultry were the main causes, constituting ca. 40% of the reported reasons. The stress generated by loading and transport caused 16-17% of the losses, depending on the season of the year. In poultry production, inappropriate conditions of loading and transportation additionally caused 10.7% of losses in 2017 and 11.8% in 2018. Disease was another cause of losses in pork and beef production (7% of the losses). The losses and waste generated during livestock production and in meat processing and trade cannot be managed or recovered; they have to be disposed of. It is, therefore, important to prevent and minimize losses throughout the whole production chain. Appropriate measures can be introduced, connected mainly with proper conditions and methods of animal loading and transport.

Keywords: food losses, food waste, livestock production, meat sector

Procedia PDF Downloads 127
2789 Assessing Supply Chain Performance through Data Mining Techniques: A Case of Automotive Industry

Authors: Emin Gundogar, Burak Erkayman, Nusret Sazak

Abstract:

Providing effective performance management across the whole supply chain is a critical issue and hard to put into practice. Proper evaluation of integrated data can yield accurate information, and analysing supply chain data through OLAP (On-Line Analytical Processing) technologies can provide a multi-angle, consolidated view of operations. In this study, association rules and classification techniques are applied to measure the supply chain performance metrics of an automotive manufacturer in Turkey. The main criteria and important rules are determined, and a comparison of the results of the algorithms is presented.
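
The rule-mining step can be sketched with the mlxtend implementations of Apriori and rule generation. The one-hot delivery table and thresholds below are invented placeholders, since the manufacturer's actual metrics are not public.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical one-hot table: each row is a delivery, each column a binary
# performance flag over which rules are mined.
df = pd.DataFrame({
    "late_supplier":      [1, 1, 0, 1, 0, 1, 0, 0],
    "line_stoppage":      [1, 1, 0, 1, 0, 0, 0, 0],
    "expedited_freight":  [1, 0, 0, 1, 0, 1, 0, 0],
    "quality_reject":     [0, 1, 0, 0, 1, 0, 0, 0],
}, dtype=bool)

# Frequent itemsets above a support floor, then high-confidence rules.
frequent = apriori(df, min_support=0.25, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```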

Keywords: supply chain performance, performance measurement, data mining, automotive

Procedia PDF Downloads 497
2788 Management Information System to Help Managers with Decision Making in an Organization

Authors: Ajayi Oluwasola Felix

Abstract:

Management information systems (MIS) provide information for the managerial activities in an organization. The main purpose of this research is to show that MIS provides the accurate and timely information necessary to facilitate the decision-making process and to enable the organization's planning, control, and operational functions to be carried out effectively. MIS is basically concerned with processing data into information, which is then communicated to the various departments in an organization for appropriate decision-making. MIS is a subset of the overall planning and control activities, covering the application of humans, technologies, and procedures of the organization. The information system is the mechanism that ensures that information is available to the managers in the form they want it and when they need it.

Keywords: Management Information Systems (MIS), information technology, decision-making, MIS in Organizations

Procedia PDF Downloads 538
2787 Listening to the Voices of Teachers Who Are Dyslexic: The Careers, Professional Development, and Strategies Used by Teachers Who Are Dyslexic

Authors: Jane Mullen

Abstract:

Little research has been undertaken on adult dyslexia and the impact it has on those who have professional careers. There are many complexities behind the career decisions people make, but for teachers who are dyslexic, these can be even more complex. Dyslexia particularly impacts written and verbal communication, as well as planning and organisation skills, which are essential skills for a teacher. As these teachers are aware of their areas of weakness, many make the conscious decision not to disclose their disability at work. In England, the reduction to three attempts to pass the compulsory English and Maths tests prior to undertaking teacher training may mean that dyslexics are now excluded from trying to enter the profession. Together with the fact that dyslexic teachers often choose to remain 'hidden', the situation appears to run counter to the inclusive rhetoric that dominates the current educational discourse. This paper is based on in-depth narrative research undertaken with a small group of teachers who are dyslexic in England and firstly explores the strategies and resources that the teachers have found useful. The narratives of the teachers are full of difficulties as well as diversity; consequently, the paper secondly examines how life experiences have impacted the way the teachers see their dyslexia and how it affects them professionally. Using a narrative methodology enables the teachers to tell their 'stories' of how they feel their dyslexia impacts their lives professionally. The first interview centred on a limited number of semi-structured questions about family background, educational experiences, career development, management roles, and professional disclosure. The second interview focused on the complexities of being a teacher who is dyslexic, and to 'unlock' some of their work-based narratives, visual elicitation was used: photographs of work-based strategies, issues, or concerns were sent to the researcher and used as the basis for discussion. The paper concludes by discussing possible reasonable adjustments and professional development that might benefit teachers who are dyslexic.

Keywords: dyslexia, life history, narrative, professional, professional development, strategies, teachers

Procedia PDF Downloads 206
2786 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques

Authors: Stefan K. Behfar

Abstract:

The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
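
As a toy illustration of the first two ingredients, the graph representation and Bernoulli-style probabilistic sampling, here is a networkx sketch; field names mimic Ethereum JSON-RPC transactions, and the data are synthetic. The GCN and distributed-computing layers are beyond the scope of a snippet.

```python
import random
import networkx as nx

def build_tx_graph(transactions):
    """One node per transaction; a directed edge links each address's
    transaction to that address's next transaction in time (a simplified
    reading of the temporal-relationship representation described above)."""
    G = nx.DiGraph()
    last_seen = {}
    for tx in sorted(transactions, key=lambda t: t["timestamp"]):
        G.add_node(tx["hash"], value=tx["value"], ts=tx["timestamp"])
        for addr in (tx["from"], tx["to"]):
            if addr in last_seen:
                G.add_edge(last_seen[addr], tx["hash"], address=addr)
            last_seen[addr] = tx["hash"]
    return G

def sample_subgraph(G, p=0.1, seed=42):
    """Bernoulli node sampling: keep each transaction with probability p and
    analyse the induced subgraph, trading exactness for tractability."""
    rng = random.Random(seed)
    kept = [n for n in G if rng.random() < p]
    return G.subgraph(kept).copy()

# Hypothetical toy transactions (field names mirror Ethereum JSON-RPC output).
txs = [{"hash": f"0x{i:04x}", "from": f"0xa{i % 3}", "to": f"0xb{i % 5}",
        "value": i * 0.1, "timestamp": 1_600_000_000 + 15 * i}
       for i in range(200)]
G = build_tx_graph(txs)
S = sample_subgraph(G, p=0.2)
print(G.number_of_nodes(), G.number_of_edges(), "->",
      S.number_of_nodes(), S.number_of_edges())
```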

Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing

Procedia PDF Downloads 56
2785 Fractional Residue Number System

Authors: Parisa Khoshvaght, Mehdi Hosseinzadeh

Abstract:

During the past few years, the Residue Number System (RNS) has been receiving considerable interest due to its parallel and fault-tolerant properties. This system is a useful tool for Digital Signal Processing (DSP), since it supports parallel, carry-free, high-speed, and low-power arithmetic. One of the drawbacks of the Residue Number System is handling fractional numbers; the corresponding circuits are very hard to realize in conventional CMOS technology. In this paper, we propose a method in which the number of transistors is significantly reduced and the related delay is greatly diminished. As a first step, we apply the method to the problem of a single fractional digit; the proposition can then be extended to generalize the idea. Another advantage of this method is its independence from the choice of moduli.
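
The integer core that RNS builds on can be stated in a few lines of Python; the fractional extension, which is the paper's contribution, is not reproduced here, and the moduli set is an arbitrary pairwise-coprime choice.

```python
from math import prod

MODULI = (7, 11, 13, 16)          # pairwise coprime; dynamic range 16016

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_op(a, b, op):
    """Digit-parallel, carry-free arithmetic: each residue channel works
    independently, which is what makes RNS attractive for DSP hardware."""
    return tuple(op(ra, rb) % m for ra, rb, m in zip(a, b, MODULI))

def from_rns(r):
    """Chinese Remainder Theorem reconstruction back to an integer."""
    M = prod(MODULI)
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)    # pow(..., -1, m): modular inverse
    return x % M

a, b = 123, 77
s = rns_op(to_rns(a), to_rns(b), lambda u, v: u + v)
p = rns_op(to_rns(a), to_rns(b), lambda u, v: u * v)
assert from_rns(s) == a + b and from_rns(p) == a * b
```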

Keywords: computer arithmetic, residue number system, number system, one-hot, VLSI

Procedia PDF Downloads 484