Search results for: nonuniform signal processing
3583 Coarse-Grained Computational Fluid Dynamics-Discrete Element Method Modelling of the Multiphase Flow in Hydrocyclones
Authors: Li Ji, Kaiwei Chu, Shibo Kuang, Aibing Yu
Abstract:
Hydrocyclones are widely used to classify particles by size in industries such as mineral and chemical processing. The particles to be handled usually have broad size distributions and sometimes density distributions, which must be properly considered, posing challenges for the modelling of hydrocyclones. The combined approach of Computational Fluid Dynamics (CFD) and the Discrete Element Method (DEM) makes it convenient to model particle size/density distributions. However, its direct application to hydrocyclones is computationally prohibitive because billions of particles are involved. In this work, a CFD-DEM model incorporating the concept of the coarse-grained (CG) model is developed to model the solid-fluid flow in a hydrocyclone. The DEM is used to model the motion of discrete particles by applying Newton's laws of motion. Here, a particle assembly containing a certain number of particles with the same properties is treated as one CG particle. The CFD is used to model the liquid flow by numerically solving the locally-averaged Navier-Stokes equations, facilitated by the Volume of Fluid (VOF) model to capture the air core. The results are analyzed in terms of fluid and solid flow structures, and particle-fluid, particle-particle and particle-wall interaction forces. Furthermore, the calculated separation performance is compared with measurements. The results obtained from the present study indicate that this approach can offer an alternative way to examine the flow and performance of hydrocyclones.
Keywords: computational fluid dynamics, discrete element method, hydrocyclone, multiphase flow
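To make the DEM side of the coupling concrete, below is a minimal sketch of one time step for coarse-grained particles under Newton's laws, assuming a simple linear drag coupling to the local fluid velocity; the function name, drag law, and all parameter values are illustrative, not the authors' implementation.

```python
# Minimal sketch of a DEM update step for coarse-grained (CG) particles,
# assuming a linear drag coupling to the local fluid velocity. All names
# and parameter values are illustrative, not the authors' code.
import numpy as np

def dem_step(pos, vel, fluid_vel, dt, mass, drag_coeff,
             g=np.array([0.0, 0.0, -9.81])):
    """Advance CG particle positions/velocities one step via Newton's laws."""
    f_drag = drag_coeff * (fluid_vel - vel)   # particle-fluid interaction force
    accel = f_drag / mass + g                 # contact forces omitted for brevity
    vel_new = vel + accel * dt                # explicit velocity update
    pos_new = pos + vel_new * dt              # position update
    return pos_new, vel_new

# Example: 1000 CG particles, each standing in for many real particles
n = 1000
pos = np.random.rand(n, 3) * 0.1                   # m
vel = np.zeros((n, 3))
fluid_vel = np.tile([0.0, 0.0, 0.5], (n, 1))       # m/s, from the CFD solver
pos, vel = dem_step(pos, vel, fluid_vel, dt=1e-4, mass=1e-6, drag_coeff=1e-4)
```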
Procedia PDF Downloads 405
3582 Exploring Language Attrition Through Processing: The Case of Mising Language in Assam
Authors: Chumki Payun, Bidisha Som
Abstract:
The Mising language, spoken by the Mising community in Assam, belongs to the Tibeto-Burman family of languages. It is one of the smaller languages of the region and is facing endangerment due to the dominance of larger languages, like Assamese. The language is spoken in close in-group scenarios and is gradually losing ground to the dominant languages, partly also due to an education setup in which schools use only dominant languages. While there are a number of factors behind the current status of the language, and those can be studied using sociolinguistic tools, the current work aims to contribute to the understanding of language attrition through language processing, in order to establish whether the effect of second language dominance goes beyond mere 'usage' patterns and has an impact on cognitive strategies. When bilingualism spreads widely in a society and results in a language shift, speakers often perform better in their second language (L2) than in their first language (L1) across a variety of task settings, in both comprehension and production tasks. This phenomenon was investigated in Mising-Assamese bilinguals, using a picture naming task, in the two districts of Jorhat and Tinsukia in Assam, where the relative dominance of L2 differs slightly. This explorative study aimed to investigate whether L2 dominance is visible in their performance and whether the pattern differs between the two places, thus pointing to the degree of language loss in this case. The findings would have implications for native language education, as education in one's mother tongue can help reverse the effect of language attrition, helping preserve the traditional knowledge system. The hypothesis was that, due to the dominance of the L2, subjects' performance in the task would be better in Assamese than in Mising. The experiment: Mising-Assamese bilingual participants (age range 21-31; N = 20 from each district) had to perform a picture naming task in which participants were shown pictures of familiar objects and asked to name them in four scenarios: (a) only in Mising; (b) only in Assamese; (c) a cued mixed block: an auditory cue determines the language in which to name the object; and (d) a non-cued mixed block: participants are not given any specific language cues but are instructed to name the pictures in whichever language they feel most comfortable. The experiment was designed and executed using E-Prime 3.0; responses were collected with a Chronos response box and recorded with an audio recorder. Preliminary analysis reveals the dominance of L2 over L1. The paper will present a comparison of response latency, error analysis, and switch cost in L1 and L2 and explain these from the perspective of language attrition.
Keywords: bilingualism, language attrition, language processing, Mising language
Procedia PDF Downloads 19
3581 Application of Compressed Sensing Method for Compression of Quantum Data
Authors: M. Kowalski, M. Życzkowski, M. Karol
Abstract:
Current quantum key distribution (QKD) systems offer low bit rates of up to single MHz. Compared to conventional optical fiber links with multi-GHz bit rates, the parameters of recent QKD systems are significantly lower. In this article, we present the concept of applying the Compressed Sensing method to the compression of quantum information. The compression methodology, the signal reconstruction method, and initial results on improving the throughput of a quantum information link are presented.
Keywords: quantum key distribution systems, fiber optic system, compressed sensing
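For illustration, a minimal compressed-sensing demo is sketched below, recovering a sparse signal from a reduced number of random measurements; the article does not name a reconstruction algorithm, so orthogonal matching pursuit is assumed here as one common choice, and all sizes are illustrative.

```python
# Illustrative compressed-sensing demo: recover a sparse signal from few
# random measurements with orthogonal matching pursuit (OMP).
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                    # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse signal

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x                                      # compressed measurements (4x fewer)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k)
omp.fit(A, y)
x_hat = omp.coef_
print("relative reconstruction error:",
      np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```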
Procedia PDF Downloads 690
3580 Enhanced Near-Infrared Upconversion Emission Based Lateral Flow Immunoassay for Background-Free Detection of Avian Influenza Viruses
Authors: Jaeyoung Kim, Heeju Lee, Huijin Jung, Heesoo Pyo, Seungki Kim, Joonseok Lee
Abstract:
Avian influenza viruses (AIV) are the primary cause of highly contagious respiratory diseases caused by type A influenza viruses of the Orthomyxoviridae family. AIV are categorized on the basis of surface glycoproteins such as hemagglutinin and neuraminidase. Certain H5 and H7 subtypes of AIV have evolved into highly pathogenic avian influenza (HPAI) viruses, which have caused considerable economic loss to the poultry industry and led to severe public health crises. Several commercial kits have been developed for on-site detection of AIV. However, the sensitivity of these methods is too low to detect low virus concentrations in clinical samples and opaque stool samples. Here, we introduce a background-free near-infrared (NIR)-to-NIR upconversion nanoparticle-based lateral flow immunoassay (NNLFA) platform that yields a sensor detecting AIV within 20 minutes. Ca²⁺ ions in the shell were used as a heterogeneous dopant to enhance the NIR-to-NIR upconversion photoluminescence (PL) emission without inducing significant changes in the morphology and size of the UCNPs. In a mixture of opaque stool samples and gold nanoparticles (GNPs), which are components of commercial AIV LFA, the background signal of the stool samples masks the absorption peak of the GNPs. However, UCNPs dispersed in the stool samples still show strong emission centered at 800 nm when excited at 980 nm, which enables the NNLFA platform to detect a 10-times lower viral load than a commercial GNP-based AIV LFA. The detection limits of the NNLFA for low pathogenic avian influenza (LPAI) H5N2 and HPAI H5N6 viruses were 10² EID₅₀/mL and 10³.⁵ EID₅₀/mL, respectively. Moreover, when opaque brown-colored samples were used as the target analytes, a strong NIR emission signal from the test line in the NNLFA confirmed the presence of AIV, whereas the commercial AIV LFA detected AIV only with difficulty. Therefore, we propose that this rapid and background-free NNLFA platform has the potential to detect AIV in the field, which could effectively prevent the spread of these viruses at an early stage.
Keywords: avian influenza viruses, lateral flow immunoassay, on-site detection, upconversion nanoparticles
Procedia PDF Downloads 163
3579 Successful Rehabilitation of Recalcitrant Knee Pain Due to Anterior Cruciate Ligament Injury Masked by Extensive Skin Graft: A Case Report
Authors: Geum Yeon Sim, Tyler Pigott, Julio Vasquez
Abstract:
A 38-year-old obese female with no apparent past medical history presented with left knee pain. Six months earlier, she had sustained a left knee dislocation in a motor vehicle accident that was managed with a skin graft over the left lower extremity, without any reconstructive surgery. She developed persistent pain and stiffness in her left knee that worsened with walking and stair climbing. Examination revealed a healed, extensive skin graft over the left lower extremity, including the left knee. Palpation showed moderate tenderness along the superior border of the patella, exquisite tenderness over the MCL, and mild tenderness over the tibial tuberosity. Sensation, reflexes, and strength in her lower extremities were normal. Active and passive range of motion of her left knee was limited during flexion. Instability was noted on the valgus stress test of the left knee. Left knee magnetic resonance imaging showed a high-grade (grade 2-3) injury of the proximal superficial fibers of the MCL and diffuse thickening and signal abnormality of the cruciate ligaments, as well as edema-like subchondral marrow signal change in the anterolateral aspect of the lateral femoral condyle weight-bearing surface. There was also notable extensive scarring and edema of the skin, subcutaneous soft tissues, and musculature surrounding the knee. The patient was managed with left knee immobilization for five months, which was complicated by limited knee flexion. Physical therapy consisting of quadriceps, hamstring, and gastrocnemius stretching and strengthening, range of motion exercises, scar/soft tissue mobilization, and gait training was given, with marked improvement in pain and range of motion. The patient experienced a further reduction in pain as well as an improvement in function with home exercises consisting of continued strengthening and stretching.
Keywords: ligamentous injury, trauma, rehabilitation, knee pain
Procedia PDF Downloads 106
3578 A Gradient Orientation Based Efficient Linear Interpolation Method
Authors: S. Khan, A. Khan, Abdul R. Soomrani, Raja F. Zafar, A. Waqas, G. Akbar
Abstract:
This paper proposes a low-complexity image interpolation method. Image interpolation is used to convert a low-dimension video/image to a high-dimension video/image. The objective of a good interpolation method is to upscale an image in such a way that it provides better edge preservation at very low complexity, so that real-time processing of video frames becomes possible. However, low-complexity methods tend to provide real-time interpolation at the cost of blurring, jagging and other artifacts due to errors in slope calculation. Non-linear methods, on the other hand, provide better edge preservation, but at the cost of high complexity, and hence they are very far from real-time interpolation. The proposed method is a linear method that uses gradient orientation for slope calculation, unlike conventional linear methods that use the contrast of nearby pixels. Prewitt edge detection is applied to separate uniform regions from edges. Simple line averaging is applied to unknown uniform regions, whereas unknown edge pixels are interpolated after calculating slopes from the gradient orientations of neighboring known edge pixels. As a post-processing step, a bilateral filter is applied to the interpolated edge regions in order to enhance the interpolated edges.
Keywords: edge detection, gradient orientation, image upscaling, linear interpolation, slope tracing
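A minimal sketch of the two-branch idea is given below: a Prewitt edge map splits the image into uniform and edge regions; uniform pixels in the new rows get simple line averaging, while edge pixels get a directional average stepped along the local edge tangent. The threshold, the {-1, 0, 1} slope quantization and the sign conventions are illustrative simplifications of the paper's method.

```python
# Sketch of gradient-orientation-guided 2x row upscaling (one axis shown).
import numpy as np
from scipy.ndimage import prewitt

def upscale_rows_2x(img, edge_thresh=30.0):
    """Insert one interpolated row between every pair of known rows."""
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros((2 * h - 1, w))
    out[::2] = img                              # keep known rows
    gy = prewitt(img, axis=0)                   # gradient across rows
    gx = prewitt(img, axis=1)                   # gradient across columns
    mag = np.hypot(gx, gy)
    for i in range(h - 1):
        for j in range(w):
            if mag[i, j] < edge_thresh:         # uniform region: line average
                out[2 * i + 1, j] = 0.5 * (img[i, j] + img[i + 1, j])
            else:
                # edge region: step along the edge tangent t with g . t = 0,
                # i.e. a column step of about -gy/gx per row
                d = int(np.clip(np.round(-gy[i, j] / (gx[i, j] + 1e-9)), -1, 1))
                ja = np.clip(j - d, 0, w - 1)   # neighbour in the row above
                jb = np.clip(j + d, 0, w - 1)   # mirrored neighbour below
                out[2 * i + 1, j] = 0.5 * (img[i, ja] + img[i + 1, jb])
    # a bilateral filter over out[1::2] would follow as the post-processing step
    return out
```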
Procedia PDF Downloads 258
3577 Performance Improvement of Long-Reach Optical Access Systems Using Hybrid Optical Amplifiers
Authors: Shreyas Srinivas Rangan, Jurgis Porins
Abstract:
Internet traffic has increased exponentially due to the high data rates demanded by users, and the constantly growing metro and access networks have focused attention on improving the maximum transmit distance of long-reach optical networks. One common method to improve the maximum transmit distance of long-reach optical networks at the component level is the use of broadband optical amplifiers. The Erbium Doped Fiber Amplifier (EDFA) provides high amplification with a low noise figure, but due to its characteristics, its operation is limited to the C-band and L-band. In contrast, the Raman amplifier exhibits a wide amplification spectrum, and negative noise figure values can be achieved. To obtain such results, high-powered pump sources are required. Operating Raman amplifiers with such high-powered optical sources may cause fire hazards and may damage the optical system. In this paper, we implement a hybrid optical amplifier configuration. EDFA and Raman amplifiers are used in this hybrid setup to combine the advantages of both and improve the reach of the system. Using this setup, we analyze the maximum transmit distance of the network by obtaining a correlation diagram between the length of the single-mode fiber (SMF) and the Bit Error Rate (BER). This hybrid amplifier configuration is implemented in a Wavelength Division Multiplexing (WDM) system with a BER of 10⁻⁹ using the NRZ modulation format, and the gain uniformity, the signal-to-noise ratio (SNR), the efficiency of the pumping source, and the optical signal gain efficiency of the amplifier are studied experimentally in a mathematical modelling environment. Numerical simulations were implemented in RSoft OptSim simulation software based on the nonlinear Schrödinger equation, using the split-step method, the Fourier transform, and the Monte Carlo method for estimating BER.
Keywords: Raman amplifier, erbium doped fibre amplifier, bit error rate, hybrid optical amplifiers
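As a pointer to the propagation scheme named above, below is a minimal split-step Fourier solver for the scalar nonlinear Schrödinger equation. Fiber parameters are typical SMF values but illustrative; loss, amplifier noise and the Monte Carlo BER count are omitted, and the dispersion sign follows one common engineering convention.

```python
# Minimal split-step Fourier method (SSFM) sketch for the scalar NLSE.
import numpy as np

def ssfm(A, dt, length, dz, beta2=-21e-27, gamma=1.3e-3):
    """Propagate complex envelope A(t) through `length` metres of fiber."""
    w = 2 * np.pi * np.fft.fftfreq(A.size, d=dt)     # angular frequency grid
    half_disp = np.exp(0.25j * beta2 * w**2 * dz)    # half-step dispersion
    for _ in range(int(length / dz)):
        A = np.fft.ifft(np.fft.fft(A) * half_disp)   # D/2
        A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)  # full nonlinear step N
        A = np.fft.ifft(np.fft.fft(A) * half_disp)   # D/2 (symmetric split)
    return A

# Example: a single ~1 mW NRZ-like rectangular pulse over 50 km of SMF
dt = 1e-12                                           # 1 ps time grid
t = (np.arange(4096) - 2048) * dt
A0 = np.where(np.abs(t) < 50e-12, np.sqrt(1e-3), 0.0).astype(complex)
A_out = ssfm(A0, dt, length=50e3, dz=100.0)
print("output peak power (mW):", 1e3 * np.abs(A_out).max()**2)
```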
Procedia PDF Downloads 68
3576 Trust: The Enabler of Knowledge-Sharing Culture in an Informal Setting
Authors: Emmanuel Ukpe, S. M. F. D. Syed Mustapha
Abstract:
Trust in an organization has been perceived as one of the key factors behind knowledge sharing, mainly in an unstructured work environment. In an informal working environment, instilling trust among individuals is a challenge, and even more so in a virtual environment. This study contributes a framework for building trust in an unstructured organization for performing knowledge sharing in a virtual environment. An artifact called KAPE (Knowledge Acquisition, Processing, and Exchange) was developed for knowledge sharing in informal organizations, with the framework incorporated. It is applied to cassava farmers to facilitate knowledge sharing using a web-based platform. A survey was conducted; data were collected from 382 farmers from 21 farm communities. Multiple regression, Cronbach's alpha reliability test, Tukey's Honestly Significant Difference (HSD) analysis, one-way Analysis of Variance (ANOVA), and all trust acceptable measures (TAM) were used to test the hypothesis and determine noteworthy relationships. The results show a significant difference in knowledge sharing between farmers who score high on the trust-acceptance factors in the model (M = 3.66, SD = .93) and those who score low on those factors (M = 2.08, SD = .28), (t(48) = 5.69, p = .00). Furthermore, applying Cognitive Expectancy Theory, farmers with cognitive consonance show a higher level of trust in and satisfaction with knowledge and information from KAPE, compared with those with a low level of cognitive dissonance. These results imply that the adopted trust model KAPE positively improved knowledge-sharing activities in an informal environment amongst rural farmers.
Keywords: trust, knowledge, sharing, knowledge acquisition, processing and exchange, KAPE
Procedia PDF Downloads 119
3575 Modern Agriculture and Industrialization Nexus in the Nigerian Context
Authors: Ese Urhie, Olabisi Popoola, Obindah Gershon, Olabanji Ewetan
Abstract:
Modern agriculture involves the use of improved tools and equipment (instead of crude and ineffective tools), like tractors, hand-operated planters, hand-operated fertilizer drills and combined harvesters, which increase agricultural productivity. Farmers in Nigeria still have huge potential to enhance their productivity. The study argues that the increase in agricultural output due to increased productivity, orchestrated by modern agriculture, will promote forward linkages and opportunities in the processing sub-sector: both the manufacturing of machines and the processing of raw materials. Depending on existing incentives, foreign investment could be attracted to augment local investment in the sector. The availability of raw materials in large quantities, at competitive prices, will attract investment in other industries. In addition, potential for backward linkages will also be created. In a nutshell, adopting the unbalanced growth theory in favour of the agricultural sector could engender industrialization in a country with untapped potential. The paper highlights the numerous potentials of modern agriculture that are yet to be tapped in Nigeria and also provides a theoretical analysis of how the realization of such potentials could promote industrialization in the country. The study adopts Lewis's structural-change model and Hirschman's theory of unbalanced growth in the design of the analytical framework. The framework will be useful in empirical studies that will guide policy formulation.
Keywords: modern agriculture, industrialization, structural change model, unbalanced growth
Procedia PDF Downloads 300
3574 Hot Deformation Behavior and Recrystallization of Inconel 718 Superalloy under Double Cone Compression
Authors: Wang Jianguo, Ding Xiao, Liu Dong, Wang Haiping, Yang Yanhui, Hu Yang
Abstract:
The hot deformation behavior of Inconel 718 alloy was studied by uniaxial compression tests at deformation temperatures of 940-1040 °C and strain rates of 0.001-10 s⁻¹. The double cone compression (DCC) tests develop strains ranging from 30% to 79%, including all intermediate values, at different temperatures (960-1040 °C). The DCC tests were simulated with finite element software, which showed the strain and strain-rate distributions. The results show that the peak stress level of the alloy decreased with increasing deformation temperature and decreasing strain rate, which could be characterized by a Zener-Hollomon parameter in the hyperbolic-sine equation. A characterization method for the hot processing window, comprising recrystallization volume fraction and average grain size, is proposed for double cone compression tests of uniform coarse-grain, mixed-grain and uniform fine-grain double-conical specimens on hydraulic and screw presses. The results show that uniform microstructures can be obtained by low temperature with high deformation followed by high temperature with small deformation on the hydraulic press, and by low temperature, medium deformation and multiple passes on the screw press. The two methods were applied to an industrial forging process, and forgings with uniform microstructure were obtained successfully.
Keywords: Inconel 718 superalloy, hot processing windows, double cone compression, uniform microstructure
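For reference, the hyperbolic-sine relation the abstract alludes to is the standard Zener-Hollomon form below, where ε̇ is the strain rate, Q the deformation activation energy, R the gas constant, T the absolute temperature, σ the (peak) flow stress, and A, α and n are fitted material constants whose values the abstract does not report:

```latex
Z = \dot{\varepsilon}\,\exp\!\left(\frac{Q}{RT}\right) = A\left[\sinh(\alpha\sigma)\right]^{n}
```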
Procedia PDF Downloads 217
3573 Synthesis and Characterization of Carboxymethyl Cellulose-Chitosan Based Composite Hydrogels for Biomedical and Non-Biomedical Applications
Abstract:
Hydrogels have attracted much academic and industrial attention due to their unique properties and potential biomedical and non-biomedical applications. Extensions of their applications have been limited by the synthesis of hydrogels from toxic materials and by complex, irreproducible processing techniques. In order to promote environmental sustainability, hydrogel efficiency, and wider application, this study focused on the synthesis of composite hydrogel matrices based on carboxymethyl cellulose (CMC) and chitosan (CSN) natural polymers, using an edible, non-toxic crosslinker, citric acid (CA), and a simple, low-energy processing method. Composite hydrogels were developed by chemical crosslinking. The results demonstrated that CMC:2CSN:CA exhibited good performance properties and super-absorbency of 21× its original weight. This makes it promising for biomedical applications such as chronic wound healing and regeneration, next-generation skin substitutes, in situ bone regeneration and cell delivery. On the other hand, CMC:CSN:CA exhibited a durable, well-structured internal network with minimal swelling degrees and water absorbency, excellent gel fraction, and infrared reflectance. These properties make it a suitable composite hydrogel matrix for a warming effect and for controlled and efficient release of loaded materials. The CMC:2CSN:CA and CMC:CSN:CA composite hydrogels developed also exhibited excellent chemical, morphological, and thermal properties.
Keywords: citric acid, fumaric acid, tartaric acid, zinc nitrate hexahydrate
Procedia PDF Downloads 149
3572 Double Functionalization of Magnetic Colloids with Electroactive Molecules and Antibody for Platelet Detection and Separation
Authors: Feixiong Chen, Naoufel Haddour, Marie Frenea-Robin, Yves Mérieux, Yann Chevolot, Virginie Monnier
Abstract:
Neonatal thrombopenia occurs when the mother generates antibodies against her baby's platelet antigens. It is particularly critical for newborns because it can cause coagulation troubles leading to intracranial hemorrhage. In this case, diagnosis must be made quickly so that platelet transfusion can follow immediately after birth. Before transfusion, platelet antigens must be tested carefully to avoid rejection. The majority of thrombopenias (95%) are caused by antibodies directed against Human Platelet Antigen 1a (HPA-1a) or 5b (HPA-5b). The common method for platelet antigen detection is the polymerase chain reaction, which allows identification of the gene sequence. However, it is expensive, time-consuming and requires a significant blood volume, which is not suitable for newborns. We propose to develop a point-of-care device based on magnetic colloids doubly functionalized with 1) antibodies specific to platelet antigens and 2) highly sensitive electroactive molecules, to be detected by an electrochemical microsensor. These magnetic colloids will be used first to isolate platelets from other blood components, then to specifically capture platelets bearing HPA-1a and HPA-5b antigens, and finally to attract them close to the sensor's working electrode for an improved electrochemical signal. The expected advantages are an assay time below 20 min starting from a blood volume smaller than 100 µL. Our functionalization procedure, based on amine dendrimers and NHS-ester modification of the initial carboxyl colloids, will be presented. Functionalization efficiency was evaluated by colorimetric titration of surface chemical groups, zeta potential measurements, infrared spectroscopy, fluorescence scanning and cyclic voltammetry. Our results showed that electroactive molecules and antibodies can be immobilized successfully onto magnetic colloids. Application of a magnetic field to the working electrode increased the detected electrochemical signal. The magnetic colloids were able to capture specific purified antigens extracted from platelets.
Keywords: magnetic nanoparticles, electroactive molecules, antibody, platelet
Procedia PDF Downloads 269
3571 The Effects of Blanching, Boiling and Steaming on Ascorbic Acid Content, Total Phenolic Content, and Colour in Cauliflowers (Brassica oleracea var. Botrytis)
Authors: Huei Lin Lee, Wee Sim Choo
Abstract:
The effects of blanching, boiling and steaming on the ascorbic acid content, total phenolic content and colour of cauliflower (Brassica oleracea var. Botrytis) were investigated. It was found that blanching was the best thermal process to apply to cauliflower compared to boiling and steaming. Blanching and steaming retained most of the ascorbic acid content (AAC) compared to boiling. As for the total phenolic content (TPC), blanching retained a higher TPC in cauliflower than boiling or steaming. There were no significant differences between the TPC of boiled and steamed cauliflowers. As for the colour measurement, there were no significant differences in the colour of the cauliflower at different lead times (from processing to the point of consumption, at 30-minute intervals up to 3 hours), but there were slight variations in the L*, a*, and b* values among the thermally processed cauliflowers (blanched, boiled and steamed). The cauliflowers in this study were found to give a desirable white colour (L* value in the range of 77-83) in all three thermal processes (blanching, boiling and steaming). There was no significant effect of lead time (30-minute intervals up to 3 hours) on raw or any of the three thermally processed (blanched, boiled and steamed) cauliflowers.
Keywords: ascorbic acid, cauliflower, colour, phenolics
Procedia PDF Downloads 312
3570 The Effect of Development of Two-Phase Flow Regimes on the Stability of Gas Lift Systems
Authors: Khalid. M. O. Elmabrok, M. L. Burby, G. G. Nasr
Abstract:
Flow instability during gas lift operation is caused by three major phenomena: the density wave oscillation, the casing heading pressure, and flow perturbation within the two-phase flow region. This paper focuses on the causes and effects of flow instability during gas lift operation and suggests ways to control it in order to maximise productivity. A laboratory-scale two-phase flow system was designed and built to study the effects of flow perturbation. The apparatus comprises a 2 m long, 66 mm ID transparent PVC pipe with an air injection point situated 0.1 m above the base of the pipe; this is the point where stabilised bubbles were clearly visible after injection. Air is injected into the water-filled transparent pipe at different flow rates and pressures. The behavior of the different bubble sizes generated within the two-phase region was captured using a digital camera, and the images were analysed using an advanced image processing package. It was observed that the average maximum bubble size increased with the length of the vertical pipe column, from 29.72 to 47 mm. Increasing the air injection pressure from 0.5 to 3 bar increased the bubble sizes from 29.72 mm to 44.17 mm, which then decreased when the pressure reached 4 bar. It was observed that at a higher bubble velocity of 6.7 m/s, larger-diameter bubbles coalesce and burst due to high agitation and collision with each other. This collapse of the bubbles causes a pressure drop and reverse flow within the two-phase flow and is the main cause of the flow instability phenomena.
Keywords: gas lift instability, bubbles forming, bubbles collapsing, image processing
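As a sketch of how bubble sizes can be measured from recorded frames of this kind, the snippet below uses Otsu thresholding and contour detection with OpenCV and reports each bubble's equivalent circular diameter; the paper names only "the advanced image processing package", so the library choice, the frame path and the pixel-to-mm scale factor are assumptions.

```python
# Bubble size measurement sketch: threshold a back-lit frame, find bubble
# contours, and convert each contour area to an equivalent circular diameter.
import cv2
import numpy as np

def bubble_diameters_mm(frame_path: str, mm_per_px: float = 0.1) -> list[float]:
    gray = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # bubbles appear dark against the back-lit water column -> inverse Otsu
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    diameters = []
    for c in contours:
        area_px = cv2.contourArea(c)
        if area_px < 20:                       # reject specks of noise
            continue
        d_px = np.sqrt(4 * area_px / np.pi)    # equivalent circular diameter
        diameters.append(d_px * mm_per_px)
    return diameters

print(max(bubble_diameters_mm("frame_0001.png"), default=0.0))  # max bubble, mm
```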
Procedia PDF Downloads 419
3569 Progress in Combining Image Captioning and Visual Question Answering Tasks
Authors: Prathiksha Kamath, Pratibha Jamkhandi, Prateek Ghanti, Priyanshu Gupta, M. Lakshmi Neelima
Abstract:
The combination of image captioning and Visual Question Answering (VQA) has emerged as a new and exciting research area. The image captioning task involves generating a textual description that summarizes the content of an image. VQA aims to answer a natural language question about the image. Both tasks involve computer vision and natural language processing (NLP) and require a deep understanding of the content of the image, the semantic relationships within it, and the ability to generate a response in natural language. There has been remarkable growth in both tasks with the rapid advancement of deep learning. In this paper, we present a comprehensive review of recent progress in combining image captioning and visual question answering. We first discuss image captioning and VQA individually and then the various ways in which the two can be integrated. We also analyze the challenges associated with these tasks and ways to overcome them. We finally discuss the various datasets and evaluation metrics used in these tasks. This paper concludes with the need for generating captions based on context, and captions that are able to answer the most likely questions about the image, so as to aid the VQA task. Overall, this review highlights the significant progress made in combining image captioning and VQA, as well as the ongoing challenges and opportunities for further research in this exciting and rapidly evolving field, which has the potential to improve the performance of real-world applications such as autonomous vehicles, robotics, and image search.
Keywords: image captioning, visual question answering, deep learning, natural language processing
Procedia PDF Downloads 71
3568 Referencing Anna: Findings From Eye-Tracking During Dutch Pronoun Resolution
Authors: Robin Devillers, Chantal van Dijk
Abstract:
Children face ambiguities in everyday language use. Ambiguity in pronoun resolution can be particularly challenging for children, whereas adults can rapidly identify the antecedent of a mentioned pronoun. Two main factors underlie this process, namely the accessibility of the referent and the syntactic cues of the pronoun. Within 200 ms, adults have integrated the accessibility and syntactic constraints, relieving cognitive effort by considering contextual cues. As children are still developing their cognitive capacity, they are not yet able to simultaneously assess and integrate accessibility, contextual cues and syntactic information. As such, they fail to identify the correct referent and possibly fixate more on the competitor in comparison to adults. In this study, Dutch while-clauses were used to investigate the interpretation of pronouns by children. The aim is to a) examine the extent to which 7-10-year-old children are able to utilise discourse and syntactic information during online and offline sentence processing and b) analyse the contribution of individual factors, including age, working memory, condition and vocabulary. Adult and child participants are presented with filler items and while-clauses, the latter following a particular structure: 'Anna and Sophie are sitting in the library. While Anna is reading a book, she is taking a sip of water.' This sentence illustrates the ambiguous situation, as it is unclear whether 'she' refers to Anna or Sophie. In the unambiguous situation, either Anna or Sophie is substituted by a boy, such as 'Peter'; the pronoun in the second sentence then unambiguously refers to one of the characters due to the syntactic constraints of the pronoun. Children's and adults' responses were measured by means of a visual world paradigm. This paradigm consisted of two characters, of which one was the referent (the target) and the other the competitor. A sentence was presented and followed by a question, which required the participant to choose which character was the referent. The paradigm thus yields an online (fixations) and an offline (accuracy) score. The findings will be analysed using Generalised Additive Mixed Models, which allow for a thorough estimation of the individual variables. These findings will contribute to the scientific literature in several ways: firstly, while-clauses have not been studied much, and their processing has not yet been characterised. Moreover, online pronoun resolution has not been investigated much in either children or adults, so this study contributes to the pronoun-resolution literature for both groups. Lastly, pronoun resolution has not yet been studied in Dutch, and as such, this study adds to the literature.
Keywords: pronouns, online language processing, Dutch, eye-tracking, first language acquisition, language development
Procedia PDF Downloads 97
3567 Explaining the Steps of Designing and Calculating the Content Validity Ratio Index of the Screening Checklist of Preschool Students (5 to 7 Years Old) Exposed to Learning Difficulties
Authors: Sajed Yaghoubnezhad, Sedygheh Rezai
Abstract:
Background and Aim: Since, at present in Iran, students with learning disabilities are identified only after entering school, using the approach based on the gap between IQ and academic achievement, the purpose of this study is to design and calculate the content validity of a screening checklist for preschool children (5 to 7 years old) exposed to learning difficulties. Methods: This research is a fundamental study and, in terms of data collection method, quantitative research with a descriptive approach. In order to design this checklist, after reviewing the research background and theoretical foundations, cognitive abilities (visual processing, auditory processing, phonological awareness, executive functions, visual-spatial working memory and fine motor skills) were considered the basic variables of school learning. The basic items and worksheets of the screening checklist for preschool students aged 5 to 7 with learning difficulties were compiled based on the abilities mentioned and provided to specialists in order to calculate the content validity ratio (CVR) index. Results: Based on the results, the CVR index of the background information checklist is 0.9, and the CVR index of the performance checklist for preschool children (5 to 7 years) is 0.78. Overall, the CVR index of the checklist is 0.84. The results of this study provide good evidence for the validity of the screening checklist for preschool children (5 to 7 years old) exposed to learning difficulties.
Keywords: checklist, screening, preschoolers, learning difficulties
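For context, the abstract reports CVR values without restating the formula; the standard Lawshe definition, assumed here, is given below, where n_e is the number of panelists rating an item "essential" and N is the total number of panelists. For example, if 9 of 10 panelists rate an item essential, CVR = (9 − 5)/5 = 0.8.

```latex
\mathrm{CVR} = \frac{n_e - \frac{N}{2}}{\frac{N}{2}}
```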
Procedia PDF Downloads 100
3566 The Implementation of an E-Government System in Developing Countries: A Case of Taita Taveta County, Kenya
Authors: Tabitha Mberi, Tirus Wanyoike, Joseph Sevilla
Abstract:
The use of Information and Communication Technology (ICT) in government is gradually becoming a major requirement for transforming the delivery of services to stakeholders by improving the quality of service and efficiency. In Kenya, the devolution of government from local authorities to county governments has resulted in many counties adopting online revenue collection systems that can be easily accessed by their stakeholders. Strathmore Research and Consortium Centre (SRCC) implemented a revenue collection system in Taita Taveta, a county in coastal Kenya. It consisted of two integrated systems: an online system dubbed "CountyPro" for processing county services such as business permit applications, general billing, property rates payments and any other revenue streams from the county, and a Point of Sale (PoS) system used by the county revenue collectors to charge for market fees and vehicle parking fees. This study assesses the successes of and challenges in the adoption of the integrated system. Qualitative and quantitative data collection methods were used, with the researcher using focus groups, interviews, and questionnaires to collect data from various users of the system. An analysis was carried out and revealed that 87% of the county revenue officers who are situated in county offices describe the system as efficient and as having made their work easier in terms of processing transactions for customers.
Keywords: e-government, counties, information technology, online system, point of sale
Procedia PDF Downloads 246
3565 Machine Learning Strategies for Data Extraction from Unstructured Documents in Financial Services
Authors: Delphine Vendryes, Dushyanth Sekhar, Baojia Tong, Matthew Theisen, Chester Curme
Abstract:
Much of the data that inform the decisions of governments, corporations and individuals are harvested from unstructured documents. Data extraction is defined here as a process that turns non-machine-readable information into a machine-readable format that can be stored, for instance, in a database. In financial services, introducing more automation in data extraction pipelines is a major challenge. Information sought by financial data consumers is often buried within vast bodies of unstructured documents, which have historically required thorough manual extraction. Automated solutions provide faster access to non-machine-readable datasets, in a context where untimely information quickly becomes irrelevant. Data quality standards cannot be compromised, so automation requires high data integrity. This multifaceted task is broken down into smaller steps: ingestion, table parsing (detection and structure recognition), text analysis (entity detection and disambiguation), schema-based record extraction, user feedback incorporation. Selected intermediary steps are phrased as machine learning problems. Solutions leveraging cutting-edge approaches from the fields of computer vision (e.g. table detection) and natural language processing (e.g. entity detection and disambiguation) are proposed.
Keywords: computer vision, entity recognition, finance, information retrieval, machine learning, natural language processing
Procedia PDF Downloads 109
3564 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark
Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos
Abstract:
This paper presents an investigation of the performance impacts of varying five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: The algorithm's performance depends on multiple factors, and knowing beforehand the effects of each factor becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: To explain the functional relationship between the factors and performance, and to develop linear predictor models for time and cost. Methods: The solid statistical principles of Design of Experiments (DoE) are applied, particularly the randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measurement of each factor's impact. Results: Our findings include prediction models and show some non-intuitive results: the small influence of cores, the neutrality of memory and disks with respect to total execution time, and the non-significant impact of input data scale on costs, although it notably impacts execution time.
Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark
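A minimal sketch of the kind of distributed Naïve Bayes text-classification pipeline measured here is shown below; the column names, input path, split and numFeatures are illustrative assumptions, not the paper's corpus or cluster settings.

```python
# Sketch of a distributed Naive Bayes text-classification pipeline on Spark.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF, IDF
from pyspark.ml.classification import NaiveBayes

spark = SparkSession.builder.appName("lang-classification").getOrCreate()
df = spark.read.parquet("corpus.parquet")        # assumed columns: text, label

pipeline = Pipeline(stages=[
    Tokenizer(inputCol="text", outputCol="tokens"),
    HashingTF(inputCol="tokens", outputCol="tf", numFeatures=1 << 18),
    IDF(inputCol="tf", outputCol="features"),
    NaiveBayes(featuresCol="features", labelCol="label", smoothing=1.0),
])
train, test = df.randomSplit([0.8, 0.2], seed=42)
model = pipeline.fit(train)
accuracy = model.transform(test).where("prediction = label").count() / test.count()
print(f"test accuracy: {accuracy:.3f}")
```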
Procedia PDF Downloads 118
3563 Utilizing Temporal and Frequency Features in Fault Detection of Electric Motor Bearings with Advanced Methods
Authors: Mohammad Arabi
Abstract:
The development of advanced technologies in the field of signal processing and vibration analysis has enabled more accurate analysis and fault detection in electrical systems. This research investigates the application of temporal and frequency features in detecting faults in electric motor bearings, aiming to enhance fault detection accuracy and prevent unexpected failures. The use of methods such as deep learning algorithms and neural networks in this process can yield better results. The main objective of this research is to evaluate the efficiency and accuracy of methods based on temporal and frequency features in identifying faults in electric motor bearings to prevent sudden breakdowns and operational issues. Additionally, the feasibility of using techniques such as machine learning and optimization algorithms to improve the fault detection process is also considered. This research employed an experimental method and random sampling. Vibration signals were collected from electric motors under normal and faulty conditions. After standardizing the data, temporal and frequency features were extracted. These features were then analyzed using statistical methods such as analysis of variance (ANOVA) and t-tests, as well as machine learning algorithms like artificial neural networks and support vector machines (SVM). The results showed that using temporal and frequency features significantly improves the accuracy of fault detection in electric motor bearings. ANOVA indicated significant differences between normal and faulty signals. Additionally, t-tests confirmed statistically significant differences between the features extracted from normal and faulty signals. Machine learning algorithms such as neural networks and SVM also significantly increased detection accuracy, demonstrating high effectiveness in timely and accurate fault detection. This study demonstrates that using temporal and frequency features combined with machine learning algorithms can serve as an effective tool for detecting faults in electric motor bearings. This approach not only enhances fault detection accuracy but also simplifies and streamlines the detection process. However, challenges such as data standardization and the cost of implementing advanced monitoring systems must also be considered. Utilizing temporal and frequency features in fault detection of electric motor bearings, along with advanced machine learning methods, offers an effective solution for preventing failures and ensuring the operational health of electric motors. Given the promising results of this research, it is recommended that this technology be more widely adopted in industrial maintenance processes.
Keywords: electric motor, fault detection, frequency features, temporal features
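A sketch of the kind of temporal and frequency features described above, computed with NumPy/SciPy, is given below; the band edges, sampling rate and feature list are illustrative, since the paper does not publish its exact feature set.

```python
# Temporal features (RMS, kurtosis, crest factor, peak-to-peak) plus
# coarse-band spectral energies from a Welch spectrum of a vibration signal.
import numpy as np
from scipy.stats import kurtosis
from scipy.signal import welch

def bearing_features(x, fs=12_000):
    feats = {
        "rms": np.sqrt(np.mean(x**2)),
        "kurtosis": kurtosis(x),                  # impulsiveness of faults
        "crest": np.max(np.abs(x)) / np.sqrt(np.mean(x**2)),
        "p2p": np.ptp(x),
    }
    f, pxx = welch(x, fs=fs, nperseg=2048)
    for lo, hi in [(0, 1_000), (1_000, 4_000), (4_000, fs // 2)]:
        band = (f >= lo) & (f < hi)
        feats[f"band_{lo}_{hi}"] = np.trapz(pxx[band], f[band])
    return feats

# Example on a synthetic signal: a 3 kHz fault tone buried in noise
fs = 12_000
t = np.arange(0, 1, 1 / fs)
x = 0.3 * np.sin(2 * np.pi * 3_000 * t) + np.random.default_rng(0).normal(0, 1, t.size)
print(bearing_features(x, fs))
```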
Procedia PDF Downloads 45
3562 Theta-Phase Gamma-Amplitude Coupling as a Neurophysiological Marker in Neuroleptic-Naive Schizophrenia
Authors: Jun Won Kim
Abstract:
Objective: Theta-phase gamma-amplitude coupling (TGC) was used as a novel evidence-based tool to reflect the dysfunctional cortico-thalamic interaction in patients with schizophrenia. However, to our best knowledge, no studies have reported the diagnostic utility of TGC in the resting-state electroencephalogram (EEG) of neuroleptic-naive patients with schizophrenia compared to healthy controls. Thus, the purpose of this EEG study was to understand the underlying mechanisms in patients with schizophrenia by comparing resting-state TGC between the two groups and to evaluate the diagnostic utility of TGC. Method: The subjects included 90 patients with schizophrenia and 90 healthy controls. All patients were diagnosed with schizophrenia according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) by two independent psychiatrists using semi-structured clinical interviews. Because patients were either drug-naïve (first episode) or had not been taking psychoactive drugs for one month before the study, we could exclude the influence of medications. Six frequency bands were defined for the spectral analyses: delta (1-4 Hz), theta (4-8 Hz), slow alpha (8-10 Hz), fast alpha (10-13.5 Hz), beta (13.5-30 Hz), and gamma (30-80 Hz). The spectral power of the EEG data was calculated with a fast Fourier transform using the 'spectrogram.m' function of the signal processing toolbox in Matlab. An analysis of covariance (ANCOVA) was performed to compare the TGC results between the groups, adjusted using a Bonferroni correction (P < 0.05/19 = 0.0026). Receiver operating characteristic (ROC) analysis was conducted to examine the ability of the TGC data to discriminate a schizophrenia diagnosis. Results: The patients with schizophrenia showed a significant increase in resting-state TGC at all electrodes. The delta, theta, slow alpha, fast alpha, and beta powers showed low accuracies of 62.2%, 58.4%, 56.9%, 60.9%, and 59.0%, respectively, in discriminating the patients with schizophrenia from the healthy controls. The ROC analysis performed on the TGC data generated the most accurate result among the EEG measures, displaying an overall classification accuracy of 92.5%. Conclusion: As TGC includes phase, which contains information about neuronal interactions from the EEG recording, TGC is expected to be useful for understanding the mechanisms of the dysfunctional cortico-thalamic interaction in patients with schizophrenia. The resting-state TGC value was increased in the patients with schizophrenia compared to that in the healthy controls and had a higher discriminating ability than the other parameters. These findings may be related to the compensatory hyper-arousal patterns of the dysfunctional default-mode network (DMN) in schizophrenia. Further research exploring the association between TGC and medical or psychiatric conditions that may confound EEG signals will help clarify the potential utility of TGC.
Keywords: quantitative electroencephalography (QEEG), theta-phase gamma-amplitude coupling (TGC), schizophrenia, diagnostic utility
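Since the abstract does not spell out its TGC estimator, the sketch below uses one common phase-amplitude coupling measure (the Tort et al. modulation index) as an assumed stand-in; the band edges and bin count are likewise illustrative.

```python
# Theta-phase/gamma-amplitude coupling via the Tort et al. modulation index:
# bandpass both bands, take theta phase and gamma envelope with the Hilbert
# transform, bin gamma amplitude by theta phase, and measure the KL
# divergence of that distribution from uniform.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def tgc_modulation_index(eeg, fs, n_bins=18):
    theta_phase = np.angle(hilbert(bandpass(eeg, 4, 8, fs)))    # theta phase
    gamma_amp = np.abs(hilbert(bandpass(eeg, 30, 80, fs)))      # gamma envelope
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.digitize(theta_phase, bins) - 1
    mean_amp = np.array([gamma_amp[idx == k].mean() for k in range(n_bins)])
    p = mean_amp / mean_amp.sum()               # phase-amplitude distribution
    return (np.log(n_bins) + np.sum(p * np.log(p))) / np.log(n_bins)

# Synthetic example: gamma amplitude locked to the theta cycle
fs = 250
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
eeg = theta + (1 + theta) * 0.3 * np.sin(2 * np.pi * 45 * t)
print("MI:", tgc_modulation_index(eeg, fs))
```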
Procedia PDF Downloads 140
3561 Acoustic Emission for Investigation of Processes Occurring at Hydrogenation of Metallic Titanium
Authors: Anatoly A. Kuznetsov, Pavel G. Berezhko, Sergey M. Kunavin, Eugeny V. Zhilkin, Maxim V. Tsarev, Vyacheslav V. Yaroshenko, Valery V. Mokrushin, Olga Y. Yunchina, Sergey A. Mityashin
Abstract:
Acoustic emission is the short-time propagation of elastic waves generated as a result of a quick energy release from sources localized inside a material. In particular, the acoustic emission phenomenon lies in the generation of acoustic waves resulting from the reconstruction of a material's internal structures. This phenomenon is observed in various physicochemical transformations, in particular those accompanying the hydrogenation of metals or intermetallic compounds, which makes it possible to study the parameters of these transformations by recording and analyzing the acoustic signals. It is known that in the interaction of metals or intermetallics with hydrogen, the most intense acoustic signals are generated by the cracking or crumbling of an initially compact powder sample as a result of the change in the material's crystal structure under hydrogenation. This work is dedicated to the study of the changes occurring in metallic titanium samples during their interaction with hydrogen, as followed by acoustic emission signals. In this work, the subjects of investigation were specimens of metallic titanium in two initial forms: titanium sponge and fine titanium powder made from this sponge. The kinetics of the interaction of these materials with hydrogen, the acoustic emission signals accompanying the hydrogenation processes, and the structure of the materials before and after hydrogenation were investigated. It was determined that in both cases the interaction of metallic titanium and hydrogen is accompanied by acoustic emission signals of high amplitude, generated on reaching a certain value of the atomic ratio [H]/[Ti] in the solid phase, because of metal cracking at the macro level. The typical sizes of the cracks are comparable with the particle sizes of the hydrogenated specimens. The reason for cracking is internal stress initiated in a sample by the increasing volume of the solid phase, resulting from changes in the material's crystal lattice under hydrogenation. When titanium powder is used, the atomic ratio [H]/[Ti] in the solid phase corresponding to the maximum amplitude of the acoustic emission signal is, as a rule, higher than when titanium sponge is used.
Keywords: acoustic emission signal, cracking, hydrogenation, titanium specimen
Procedia PDF Downloads 385
3560 Clinical Validation of an Automated Natural Language Processing Algorithm for Finding COVID-19 Symptoms and Complications in Patient Notes
Authors: Karolina Wieczorek, Sophie Wiliams
Abstract:
Introduction: Patient data are often collected in Electronic Health Record (EHR) systems for purposes such as providing care as well as reporting data. This information can be re-used to validate data models in clinical trials or in epidemiological studies. Manual validation of automated tools is vital to pick up errors in processing and to provide confidence in the output. Mentioning a disease in a discharge letter does not necessarily mean that a patient suffers from this disease; many letters discuss a diagnostic process, different tests, or whether a patient has a certain disease. The COVID-19 dataset in this study used natural language processing (NLP), an automated algorithm which extracts information related to COVID-19 symptoms, complications, and medications prescribed within the hospital. Free-text clinical patient notes are rich sources of information which contain patient data not captured in a structured form, hence the use of named entity recognition (NER) to capture additional information. Methods: Patient data (discharge summary letters) were exported and screened by an algorithm to pick up relevant terms related to COVID-19. A list of 124 Systematized Nomenclature of Medicine (SNOMED) Clinical Terms was provided in Excel with corresponding IDs. Two independent medical student researchers were given this dictionary of SNOMED terms to refer to when screening the notes. They worked on two separate datasets, called "A" and "B", respectively. Notes were screened to check whether the correct term had been picked up by the algorithm and to ensure that negated terms were not picked up. Results: Implementation in the hospital began on March 31, 2020, and the first EHR-derived extract was generated for use in an audit study on June 04, 2020. The dataset has contributed to large, priority clinical trials (including the International Severe Acute Respiratory and Emerging Infection Consortium (ISARIC), by bulk upload to REDCap research databases) and to local research and audit studies. Successful sharing of EHR-extracted datasets requires communicating the provenance and quality of the data, including its completeness and accuracy. The results of the validation of the algorithm were the following: precision (0.907), recall (0.416), and F-score (0.570). The percentage enhancement with NLP-extracted terms compared to regular data extraction alone was low (0.3%) for relatively well-documented data such as previous medical history, but higher (16.6%, 29.53%, 30.3%, 45.1%) for complications, presenting illness, chronic procedures, and acute procedures, respectively. Conclusions: This automated NLP algorithm is shown to be useful in facilitating patient data analysis and has the potential to be used in larger-scale clinical trials to assess potential study exclusion criteria for participants in the development of vaccines.
Keywords: automated, algorithm, NLP, COVID-19
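As a quick sanity check on the reported figures: the F-score is the harmonic mean of precision and recall, and the paper's numbers are internally consistent.

```python
# The F1 score is the harmonic mean of precision and recall.
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

print(f"{f1(0.907, 0.416):.3f}")  # 0.570, matching the reported F-score
```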
Procedia PDF Downloads 101
3559 Combined Synchrotron Radiography and Diffraction for in Situ Study of Reactive Infiltration of Aluminum into Iron Porous Preform
Authors: S. Djaziri, F. Sket, A. Hynowska, S. Milenkovic
Abstract:
The use of Fe-Al-based intermetallics as an alternative to Cr/Ni-based stainless steels is very promising for industrial applications that use parts made from critical raw materials under extreme conditions. However, the development of advanced Fe-Al-based intermetallics with appropriate mechanical properties presents several challenges involving appropriate processing and microstructure control. A processing strategy is being developed that aims at producing a net-shape porous Fe-based preform that is infiltrated with molten Al or an Al alloy. In the present work, porous Fe-based preforms produced by two different methods (selective laser melting (SLM) and the Kochanek process (KE)) are studied during infiltration with molten aluminum. With the objective of elucidating the mechanisms underlying the formation of Fe-Al intermetallic phases during infiltration, an in-house furnace has been designed for in situ observation of infiltration at synchrotron facilities, combining x-ray radiography (XR) and x-ray diffraction (XRD) techniques. The feasibility of this approach has been demonstrated, and information about the propagation of the melt flow front has been obtained. In addition, reactive infiltration has been achieved, in which a bi-phased intermetallic layer was identified forming between the solid Fe and liquid Al. In particular, a tongue-like Fe₂Al₅ phase adhering to the Fe and a needle-like Fe₄Al₁₃ phase adhering to the Al were observed. The growth of the intermetallic compound was found to depend on the temperature gradient along the preform as well as on the reaction time, which will be discussed in view of the different results obtained.
Keywords: combined synchrotron radiography and diffraction, Fe-Al intermetallic compounds, in-situ molten Al infiltration, porous solid Fe preforms
Procedia PDF Downloads 225
3558 Reverse Logistics Network Optimization for E-Commerce
Authors: Albert W. K. Tan
Abstract:
This research consolidates a comprehensive array of publications from peer-reviewed journals, case studies, and seminar reports focused on reverse logistics and network design. By synthesizing this secondary knowledge, our objective is to identify and articulate key decision factors crucial to reverse logistics network design for e-commerce. Through this exploration, we aim to present a refined mathematical model that offers valuable insights for companies seeking to optimize their reverse logistics operations. The primary goal of this research endeavor is to develop a comprehensive framework tailored to advising organizations and companies on crafting effective networks for their reverse logistics operations, thereby facilitating the achievement of their organizational goals. This involves a thorough examination of various network configurations, weighing their advantages and disadvantages to ensure alignment with specific business objectives. The key objectives of this research include: (i) Identifying pivotal factors pertinent to network design decisions within the realm of reverse logistics across diverse supply chains. (ii) Formulating a structured framework designed to offer informed recommendations for sound network design decisions applicable to relevant industries and scenarios. (iii) Propose a mathematical model to optimize its reverse logistics network. A conceptual framework for designing a reverse logistics network has been developed through a combination of insights from the literature review and information gathered from company websites. This framework encompasses four key stages in the selection of reverse logistics operations modes: (1) Collection, (2) Sorting and testing, (3) Processing, and (4) Storage. Key factors to consider in reverse logistics network design: I) Centralized vs. decentralized processing: Centralized processing, a long-standing practice in reverse logistics, has recently gained greater attention from manufacturing companies. In this system, all products within the reverse logistics pipeline are brought to a central facility for sorting, processing, and subsequent shipment to their next destinations. Centralization offers the advantage of efficiently managing the reverse logistics flow, potentially leading to increased revenues from returned items. Moreover, it aids in determining the most appropriate reverse channel for handling returns. On the contrary, a decentralized system is more suitable when products are returned directly from consumers to retailers. In this scenario, individual sales outlets serve as gatekeepers for processing returns. Considerations encompass the product lifecycle, product value and cost, return volume, and the geographic distribution of returns. II) In-house vs. third-party logistics providers: The decision between insourcing and outsourcing in reverse logistics network design is pivotal. In insourcing, a company handles the entire reverse logistics process, including material reuse. In contrast, outsourcing involves third-party providers taking on various aspects of reverse logistics. Companies may choose outsourcing due to resource constraints or lack of expertise, with the extent of outsourcing varying based on factors such as personnel skills and cost considerations. Based on the conceptual framework, the authors have constructed a mathematical model that optimizes reverse logistics network design decisions. 
The model considers key factors identified in the framework, such as transportation costs, facility capacities, and lead times. The authors employed mixed-integer linear programming (MILP) to find optimal solutions that minimize costs while meeting organizational objectives.
Keywords: reverse logistics, supply chain management, optimization, e-commerce
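To make the kind of MILP the abstract describes concrete, the sketch below formulates a minimal reverse logistics facility location problem in Python with PuLP: binary variables decide which return-processing facilities to open, and continuous variables route return flows from customer zones, minimizing fixed plus transportation costs under capacity constraints. All site names, costs, capacities, and return volumes are illustrative assumptions, not data from the paper; a fuller model would add the framework's other stages (sorting, processing, storage) and lead-time constraints.

```python
import pulp

# Hypothetical candidate facilities and customer return zones
facilities = ["F1", "F2", "F3"]
zones = ["Z1", "Z2", "Z3", "Z4"]

fixed_cost = {"F1": 500, "F2": 650, "F3": 420}          # cost of opening a facility
capacity = {"F1": 300, "F2": 400, "F3": 250}            # units a facility can process
returns = {"Z1": 120, "Z2": 90, "Z3": 150, "Z4": 110}   # expected return volume per zone
ship_cost = {                                            # per-unit transport cost (zone, facility)
    ("Z1", "F1"): 2.0, ("Z1", "F2"): 3.5, ("Z1", "F3"): 4.0,
    ("Z2", "F1"): 3.0, ("Z2", "F2"): 2.0, ("Z2", "F3"): 3.0,
    ("Z3", "F1"): 4.0, ("Z3", "F2"): 2.5, ("Z3", "F3"): 2.0,
    ("Z4", "F1"): 3.5, ("Z4", "F2"): 3.0, ("Z4", "F3"): 2.5,
}

prob = pulp.LpProblem("reverse_logistics_network", pulp.LpMinimize)
open_f = pulp.LpVariable.dicts("open", facilities, cat="Binary")
flow = pulp.LpVariable.dicts("flow", [(z, f) for z in zones for f in facilities], lowBound=0)

# Objective: fixed facility-opening costs plus transportation costs
prob += (pulp.lpSum(fixed_cost[f] * open_f[f] for f in facilities)
         + pulp.lpSum(ship_cost[z, f] * flow[z, f] for z in zones for f in facilities))

# Every zone's returns must be collected somewhere
for z in zones:
    prob += pulp.lpSum(flow[z, f] for f in facilities) == returns[z]

# Flow may enter a facility only if it is open, and only up to its capacity
for f in facilities:
    prob += pulp.lpSum(flow[z, f] for z in zones) <= capacity[f] * open_f[f]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("Open facilities:", [f for f in facilities if open_f[f].value() == 1])
print("Total cost:", pulp.value(prob.objective))
```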
Procedia PDF Downloads 38
3557 Context Detection in Spreadsheets Based on Automatically Inferred Table Schema
Authors: Alexander Wachtel, Michael T. Franzen, Walter F. Tichy
Abstract:
Programming requires years of training. With natural language and end user development methods, however, programming could become available to everyone: end users could program their own devices and extend the functionality of existing systems without any knowledge of programming languages. In this paper, we describe an Interactive Spreadsheet Processing Module (ISPM), a natural language interface to spreadsheets that allows users to address ranges within the spreadsheet based on an inferred table schema. Using the ISPM, end users are able to search for values in the schema of the table and to address the data in spreadsheets implicitly. Furthermore, it enables them to select and sort the spreadsheet data using natural language. ISPM uses a machine learning technique to automatically infer areas within a spreadsheet, including different kinds of headers and data ranges. Since ranges can be identified from natural language queries, end users can query the data using natural language. During the evaluation, 12 undergraduate students were asked to perform operations (sum, sort, group, and select) using both the ISPM and Excel without the ISPM interface, and task completion times were compared across the two systems. Only for the selection task did users take less time in Excel than in ISPM, since they directly selected the cells using the mouse. These results suggest that natural language interfaces for end user software engineering can help overcome the present bottleneck of professional developers.
Keywords: natural language processing, natural language interfaces, human computer interaction, end user development, dialog systems, data recognition, spreadsheet
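The following toy sketch (not the authors' ISPM, whose implementation the abstract does not detail) illustrates the core idea: infer a simple table schema from a spreadsheet-like grid, then resolve a natural-language query such as "sum price" against the inferred header names. All function names and the header heuristic are assumptions for illustration.

```python
from typing import List

def _is_number(cell: str) -> bool:
    try:
        float(cell)
        return True
    except ValueError:
        return False

def infer_header(grid: List[List[str]]) -> int:
    """Guess the header row: the first row whose cells are all non-numeric."""
    for i, row in enumerate(grid):
        if row and all(not _is_number(c) for c in row):
            return i
    return 0

def answer(grid: List[List[str]], query: str) -> float:
    """Handle queries of the form 'sum <column name>'."""
    h = infer_header(grid)
    headers = [c.lower() for c in grid[h]]
    op, _, col = query.lower().partition(" ")
    idx = headers.index(col)                       # match query word to inferred schema
    values = [float(r[idx]) for r in grid[h + 1:] if _is_number(r[idx])]
    if op == "sum":
        return sum(values)
    raise ValueError(f"unsupported operation: {op}")

grid = [["item", "price"], ["pen", "2.5"], ["book", "7.0"]]
print(answer(grid, "sum price"))   # -> 9.5
```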
Procedia PDF Downloads 310
3556 Studying the Effect of Reducing Thermal Processing over the Bioactive Composition of Non-Centrifugal Cane Sugar: Towards Natural Products with High Therapeutic Value
Authors: Laura Rueda-Gensini, Jader Rodríguez, Juan C. Cruz, Carolina Munoz-Camargo
Abstract:
There is an emerging interest in botanicals and plant extracts for medicinal practices due to their widely reported health benefits. A large variety of phytochemicals found in plants have been correlated with antioxidant, immunomodulatory, and analgesic properties, which makes plant-derived products promising candidates for modulating the progression and treatment of numerous diseases. Non-centrifugal cane sugar (NCS), in particular, is known for its high antioxidant and nutritional value, but compositional variability due to changing environmental and processing conditions has considerably limited its use in the nutraceutical and biomedical fields. This work is therefore aimed at assessing the effect of thermal exposure during NCS production on its bioactive composition and, in turn, its therapeutic value. Accordingly, two modified dehydration methods are proposed: (i) vacuum-aided evaporation, which reduces the temperatures necessary to dehydrate the sample, and (ii) refractance window evaporation, which reduces thermal exposure time. The biochemical composition of NCS produced under these two methods was compared to traditionally produced NCS by estimating total polyphenolic and protein content with Folin-Ciocalteu and Bradford assays, and by identifying the major phenolic compounds in each sample via HPLC-coupled mass spectrometry. The antioxidant activities of the samples were also compared, as measured by their scavenging of ABTS and DPPH radicals. Results show that the two modified production methods enhance polyphenolic and protein yield in the resulting NCS samples compared to the traditional production method. In particular, reducing the employed temperatures with vacuum-aided evaporation proved superior at preserving polyphenolic compounds, as evidenced in both total and individual polyphenol concentrations. However, antioxidant activities were not significantly different between methods. Although additional studies should be performed to determine whether the observed compositional differences affect other therapeutic activities (e.g., anti-inflammatory, analgesic, and immunoprotective), these results suggest that reducing thermal exposure holds great promise for the production of natural products with enhanced nutritional value.
Keywords: non-centrifugal cane sugar, polyphenolic compounds, thermal processing, antioxidant activity
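For reference, radical-scavenging assays such as ABTS and DPPH conventionally report activity as percent inhibition computed from absorbance readings. The abstract does not state the exact calculation used, but the standard form is:

\[
\text{Scavenging activity}\ (\%) = \frac{A_{\text{control}} - A_{\text{sample}}}{A_{\text{control}}} \times 100
\]

where \(A_{\text{control}}\) is the absorbance of the radical solution without extract and \(A_{\text{sample}}\) is the absorbance after reaction with the sample.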
Procedia PDF Downloads 90
3555 Dynamic Simulation of a Hybrid Wind Farm with Wind Turbines and Distributed Compressed Air Energy Storage System
Authors: Eronini Iheanyi Umez-Eronini
Abstract:
Most studies and existing implementations of compressed air energy storage (CAES) coupled with a wind farm to overcome the intermittency and variability of wind power are based on bulk or centralized CAES plants. A dynamic model of a hybrid wind farm with wind turbines and distributed CAES, consisting of air storage tanks and compressor and expander trains at each wind turbine station, is developed and simulated in MATLAB. An ad hoc supervisory controller, in which the wind turbines are simply operated under classical power-optimizing region control while power production by the expanders and air storage by the compressors are scheduled, including modulation of the compressor power levels within a control range, is used to regulate overall farm power production to track a minute-scale (3-minute sampling period) TSO absolute power reference signal over an eight-hour period. Simulation results for real wind data input, with a simple wake field model applied to a hybrid plant composed of ten 5-MW wind turbines in a row and ten compatibly sized and configured diabatic CAES stations, show that the plant controller is able to track the power demand signal within an error band on the order of the electrical power rating of a single expander. This performance suggests that much improved results should be anticipated when the global D-CAES control is combined with power regulation for the individual wind turbines using available approaches for wind farm active power control. For a standalone-power-plant fuel-to-electricity efficiency estimate of up to 60%, the round-trip electrical storage efficiency computed for the distributed CAES, wherein heat generated by the running compressors is utilized in the preheat stage of the running high-pressure expanders while fuel is introduced and combusted before the low-pressure expanders, was comparable to reported round-trip electrical storage efficiencies for bulk adiabatic CAES.
Keywords: hybrid wind farm, distributed CAES, diabatic CAES, active power control, dynamic modeling and simulation
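The paper's model is in MATLAB and its controller is not specified beyond the description above; the sketch below is an assumed, simplified rendering in Python of the supervisory dispatch idea it describes: at each sample, the CAES trains make up the gap between the TSO power reference and the wind farm's production, within the expander and compressor ratings and the air storage state.

```python
def dispatch(p_ref_mw: float, p_wind_mw: float,
             exp_rating_mw: float, comp_rating_mw: float,
             soc: float) -> tuple[float, float]:
    """Return (expander_power, compressor_power) in MW for one sample.

    soc is the air-storage state of charge in [0, 1]: expanders need stored
    air (soc > 0) and compressors need storage headroom (soc < 1).
    """
    gap = p_ref_mw - p_wind_mw
    if gap > 0 and soc > 0.0:
        # Wind shortfall: expand stored air, capped at the expander rating.
        return (min(gap, exp_rating_mw), 0.0)
    if gap < 0 and soc < 1.0:
        # Wind surplus: absorb it by compressing air, capped at the compressor rating.
        return (0.0, min(-gap, comp_rating_mw))
    return (0.0, 0.0)

# Example: reference 40 MW, wind producing 34 MW, half-full storage
print(dispatch(40.0, 34.0, exp_rating_mw=5.0, comp_rating_mw=5.0, soc=0.5))
# -> (5.0, 0.0): the expanders supply their full 5 MW rating toward the 6 MW gap,
# consistent with tracking error on the order of one expander's rating
```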
Procedia PDF Downloads 82
3554 The Relation between Cognitive Fluency and Utterance Fluency in Second Language Spoken Fluency: Studying Fluency through a Psycholinguistic Lens
Authors: Tannistha Dasgupta
Abstract:
This study explores the aspects of second language (L2) spoken fluency that are related to L2 linguistic knowledge and processing skill. It draws on Levelt's 'blueprint' of the L2 speaker, which discusses the cognitive issues underlying the act of speaking. However, L2 speaking assessments have largely neglected the underlying mechanisms involved in language production; emphasis is instead placed on the relationship between subjective ratings of L2 speech samples and objectively measured aspects of fluency. Hence, in this study, the relation between L2 linguistic knowledge and processing skill, i.e., Cognitive Fluency (CF), and objectively measurable aspects of L2 spoken fluency, i.e., Utterance Fluency (UF), is examined. The participants of the study are L2 learners of English studying at the high school level in Hyderabad, India. 50 participants with an intermediate level of proficiency in English performed several lexical retrieval tasks and attention-shifting tasks to measure CF, and 8 oral tasks to measure UF. Each aspect of UF (speed, pause, and repair) was measured against the CF scores to find out which aspects of UF are reliable indicators of CF. Quantitative analysis of the data shows that, among the three aspects of UF, speed is the best predictor of CF, while pause is only weakly related to CF. The study suggests that including the speed aspect of UF could make L2 fluency assessment more reliable, valid, and objective. Thus, incorporating the assessment of psycholinguistic mechanisms into L2 spoken fluency testing could result in fairer evaluation.
Keywords: attention-shifting, cognitive fluency, lexical retrieval, utterance fluency
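The abstract does not list the specific UF measures used; the sketch below computes standard speed and pause measures from fluency research (speech rate, articulation rate, pause frequency, mean pause duration) as one plausible operationalization, with all names and the example figures assumed for illustration.

```python
def fluency_measures(n_syllables: int, total_time_s: float,
                     pauses_s: list[float]) -> dict[str, float]:
    """Standard utterance-fluency measures for one speech sample."""
    phonation_time = total_time_s - sum(pauses_s)  # time spent actually speaking
    return {
        # Speed aspect: syllables per second over the whole sample and over speech only
        "speech_rate": n_syllables / total_time_s,
        "articulation_rate": n_syllables / phonation_time,
        # Pause aspect: how often and how long the speaker pauses
        "pauses_per_minute": 60.0 * len(pauses_s) / total_time_s,
        "mean_pause_s": sum(pauses_s) / len(pauses_s) if pauses_s else 0.0,
    }

# Example: a 60-second sample with 180 syllables and four silent pauses
print(fluency_measures(n_syllables=180, total_time_s=60.0,
                       pauses_s=[0.8, 1.2, 0.5, 0.9]))
```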
Procedia PDF Downloads 710