Search results for: hearing aid output speech
687 Utilizing Laser Cutting Method in Men's Custom-Made Casualwear
Authors: M A. Habit, S. A. Syed-Sahil, A. Bahari
Abstract:
Laser cutting is a manufacturing process that uses a laser to cut materials. It ensures extreme accuracy and a clean cut; CO2 lasers dominate this application due to their good beam quality combined with high output power. Because the machines work at a small scale and are limited in the sizes of material they can cut, the method is more appropriate for custom-made products. The same laser cutting machine is also capable of cutting fine materials such as fine silk, cotton, leather, polyester, etc. A lack of exploration and knowledge, along with unawareness of this technology, has kept many designers from using laser cutting in their collections. The objectives of this study are: 1) to identify the potential of the laser cutting technique in custom-made garments for men's casual wear; 2) to experiment with the laser cutting technique in custom-made garments; 3) to offer guidelines and a formula for men's custom-made casualwear designs with aesthetic value. To achieve these objectives, the research was conducted using mixed methods: interviews with two (2) local experts in the apparel manufacturing industries, telephone interviews with five (5) local emerging fashion designers, and questionnaires distributed to one hundred (100) respondents around the Klang Valley to gauge their understanding and awareness of laser cutting technology. The experiment was conducted using natural and man-made fibers. In conclusion, all of the objectives were achieved in producing custom-made men's casualwear, and the production of these attires will help to educate and enhance innovation in fine technology. Therefore, there will be a good linkage and collaboration between design experts and manufacturing companies.
Keywords: custom-made, fashion, laser cut, men's wear
Procedia PDF Downloads 444
686 Maximizing the Aerodynamic Performance of Wind and Water Turbines by Utilizing Advanced Flow Control Techniques
Authors: Edwin Javier Cortes, Surupa Shaw
Abstract:
In recent years, there has been a growing emphasis on enhancing the efficiency and performance of wind and water turbines to meet the increasing demand for sustainable energy sources. One promising approach is the utilization of advanced flow control techniques to optimize aerodynamic performance. This paper explores the application of advanced flow control techniques in both wind and water turbines, aiming to maximize their efficiency and output. By manipulating the flow of air or water around the turbine blades, these techniques offer the potential to improve energy capture, reduce drag, and minimize turbulence-induced losses. The paper will review various flow control strategies, including passive and active techniques such as vortex generators, boundary layer suction, and plasma actuators. It will examine their effectiveness in optimizing turbine performance under different operating conditions and environmental factors. Furthermore, the paper will discuss the challenges and opportunities associated with implementing these techniques in practical turbine designs. It will consider factors such as cost-effectiveness, reliability, and scalability, as well as the potential impact on overall turbine efficiency and lifecycle. Through a comprehensive analysis of existing research and case studies, this paper aims to provide insights into the potential benefits and limitations of advanced flow control techniques for wind and water turbines. It will also highlight areas for future research and development, with the ultimate goal of advancing the state-of-the-art in turbine technology and accelerating the transition towards a more sustainable energy future.Keywords: flow control, efficiency, passive control, active control
Procedia PDF Downloads 72
685 Optimal Image Representation for Linear Canonical Transform Multiplexing
Authors: Navdeep Goel, Salvador Gabarda
Abstract:
Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means to perform transmission or storage of visual data in the most economical way. This paper explains how images can be encoded to be transmitted in a multiplexing time-frequency domain channel. Multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4x4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Using fewer than 4x4 coefficients per block spares a significant amount of transmitted information, but some information is lost. Different approximations for image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares + gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR) and peak signal-to-noise ratio (PSNR) in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients are later encoded and handled to generate chirps at a target rate of about two chirps per 4x4 pixel block and then submitted to a transmission multiplexing operation in the time-frequency domain.
Keywords: chirp signals, image multiplexing, image transformation, linear canonical transform, polynomial approximation
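A minimal sketch of the block-wise approximation described above: each 4x4 block is fitted by least squares with a truncated 2-D polynomial basis and the reconstruction quality is reported as PSNR. The quadratic basis, the number of retained coefficients and the helper names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def block_basis():
    # 2-D quadratic polynomial basis evaluated on a 4x4 grid (assumed basis, 6 terms).
    x, y = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
    x, y = x.ravel(), y.ravel()
    return np.stack([np.ones(16), x, y, x * y, x**2, y**2], axis=1)  # 16x6

def approximate_block(block, basis):
    # Least-squares fit: 6 coefficients represent the 16 pixel values.
    coeffs, *_ = np.linalg.lstsq(basis, block.ravel().astype(float), rcond=None)
    return coeffs, (basis @ coeffs).reshape(4, 4)

def psnr(original, approximated, peak=255.0):
    mse = np.mean((original.astype(float) - approximated) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak**2 / mse)

# Example on a random 8-bit image whose size is a multiple of 4.
img = np.random.randint(0, 256, (64, 64))
rec = np.zeros_like(img, dtype=float)
basis = block_basis()
for i in range(0, img.shape[0], 4):
    for j in range(0, img.shape[1], 4):
        _, rec[i:i+4, j:j+4] = approximate_block(img[i:i+4, j:j+4], basis)
print("PSNR (dB):", psnr(img, rec))
```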
Procedia PDF Downloads 414
684 The Impact of Technology on Physics Development
Authors: Fady Gaml Malk Mossad
Abstract:
These days, distance education that makes use of internet technology is used widely all over the world to overcome geographical and time-based problems in education. Graphics, animation and other auxiliary visual resources help students to understand topics easily. In particular, some theoretical courses that are quite hard to understand, such as physics and chemistry, require visual material for students to grasp the subjects clearly. In this study, physics applications for a physics laboratory course were developed. All facilities of web-based educational technology were used so that students avoid making mistakes in laboratory work and learn physics subjects better. Android is a mobile operating system (OS) based on the Linux kernel and currently developed by Google. With a user interface based on direct manipulation, Android is designed mainly for touchscreen mobile devices such as smartphones and tablet computers, with specialized user interfaces for televisions (Android TV), cars (Android Auto) and wrist watches (Android Wear). Nowadays, almost everyone uses a smartphone; it has become a must-have item because it offers many benefits, including, of course, benefits for education such as lesson summaries. However, this article is not about lesson summaries; it is about an Android-based physics practical. We therefore explain our concept of a physics practical based on Android, and as an outcome we hope that many students will enjoy studying physics and will always remember physical phenomena through the Android-based practical.
Keywords: physics education, laboratory, web-based education, distance education, android, smartphone, physics practical
Procedia PDF Downloads 15
683 The Hague Abduction Convention and the Egyptian Position: Strategizing for a Law Reform
Authors: Abdalla Ahmed Abdrabou Emam Eldeib
Abstract:
For more than a century, the Hague Conference has tackled issues in the most challenging areas of private international law, including family law. Its actions in the realm of international child abduction have been remarkable in two ways during the last two decades. First, on October 25, 1980, the Hague Convention on the Civil Aspects of International Child Abduction (the Convention) was promulgated as an unusually inventive and powerful tool. Second, the Convention is rapidly becoming more prominent in the development of international child law. By that time, overseas travel had grown more convenient, and more couples were marrying or travelling across national lines. At the same time, parental separation and divorce had increased, leading to an increase in international child custody battles. The convention the drafters produced avoids legal quagmires and addresses extra-legal issues well. It literally restores the child to its place of habitual residence by establishing that the child was unlawfully abducted from that place or, alternatively, was wrongfully kept abroad after an allowed visit. Legal custody of a child of a contested parent is often followed by the child's abduction or unlawful relocation to another country by the non-custodial parent or other persons. If a child's custodial parent lives outside of Egypt, the child may be kidnapped and brought to Egypt. It is natural to ask which laws should apply and which legal norms should be followed when hearing individual cases. This study comprehensively evaluates the relevant Hague Child Abduction Convention, the current situation in Egypt, and which law is applicable to child custody. In addition, this research details and focuses on the position of cross-border parental child abductions in Egypt. Moreover, it examines Islamic law in comparison with the Hague Convention on child custody in detail, describes how Islamic countries in general, and Egypt in particular, treat this matter, and reviews the criticism directed at Egypt regarding the application and implementation of child custody rulings. The present research supports this approach by using non-doctrinal techniques, including surveys, interviews, and dialogues. An important objective of this research is to examine the factors that contribute to parental child abduction. Family court attorneys and other interested parties serve as the target audience from whom data is collected. A survey questionnaire was developed and sent to the target population in order to collect data for future empirical testing to validate the identified critical factors in parental child abduction. The main findings of this study concern overcoming the reservations of many Muslim countries to joining the Hague Convention with regard to child custody, and clarifying the practical problems of implementation in cases where a child is abducted by one parent and taken outside the country. Finally, this study provides suggestions for reforming the current Egyptian family law to make it an effective and efficient dispute resolution mechanism, and considers the possibility of joining the Hague Convention.
Keywords: Egyptian family law, Hague child abduction convention, child custody, cross-border parental child abductions in Egypt
Procedia PDF Downloads 70
682 Quantum Statistical Machine Learning and Quantum Time Series
Authors: Omar Alzeley, Sergey Utev
Abstract:
Minimizing a constrained multivariate function is fundamental to machine learning, and such algorithms are at the core of data mining and data visualization techniques. The decision function that maps input points to output points is based on the result of this optimization, which is central to learning theory. One approach to complex systems, in which the dynamics of the system are inferred from a statistical analysis of the fluctuations in time of some associated observable, is time series analysis. The purpose of this paper is a mathematical transition from the autoregressive model of classical time series to the matrix formalization of quantum theory. Firstly, we propose a quantum time series (QTS) model. Although the Hamiltonian technique has become an established tool to detect deterministic chaos, other approaches have emerged, and the quantum probabilistic technique is used to motivate the construction of our QTS model. The QTS model resembles the quantum dynamic model that has been applied to financial data. Secondly, various statistical methods, including machine learning algorithms such as the Kalman filter, are applied to estimate and analyze the unknown parameters of the model. Finally, simulation techniques such as Markov chain Monte Carlo are used to support our investigations. The proposed model has been examined using real and simulated data. We establish the relation between quantum statistical machine learning and quantum time series via random matrix theory. It is interesting to note that the primary focus of applying QTS in the field of quantum chaos was to find a model that explains chaotic behaviour. Perhaps this model will reveal further insight into quantum chaos.
Keywords: machine learning, simulation techniques, quantum probability, tensor product, time series
Procedia PDF Downloads 469
681 Fault Tolerant Control of the Dynamical Systems Based on Internal Structure Systems
Authors: Seyed Mohammad Hashemi, Shahrokh Barati
Abstract:
The problem of fault-tolerant control (FTC) by the accommodation method is studied in this paper. A fault can occur in any system component, such as the actuators, the sensors or the internal structure of the system, and leads to loss of performance and instability. When a fault occurs, the purpose of fault-tolerant control is to design a strategy that keeps the control loop stable and preserves system performance as much as possible without shutting down the system. Here, the fault detection and isolation (FDI) part of the system is evaluated with regard to actuator faults. A fault detection and isolation system for a multi-input multi-output (MIMO) plant is designed using an unknown input observer: the system is divided into several subsystems in which the effects of the other inputs are treated as disturbances in the given state equations. With this observer design method, the effect of these disturbances is weakened, and a fault is detected only on the specific input. Simulation results for this approach confirm the capability of the fault detection and isolation design. After fault detection and isolation, the controller must be redesigned with a suitable modification. In this regard, after applying unknown input observer theory and obtaining and evaluating the residual signal, the PID controller parameters are redesigned iteratively. Stability of the closed-loop system is proved in the presence of this method. Also, in order to soften the volatility caused by variations of the PID controller parameters, a sigma modification is used as an acceptable solution. Finally, simulation results on the popular three-tank example confirm the accuracy of the performance.
Keywords: fault tolerant control, fault detection and isolation, actuator fault, unknown input observer
Procedia PDF Downloads 456
680 Effects of Rumen Protozoa and Nitrate on Fermentation and Methane Production
Authors: S. H. Nguyen, L. Li, R. S. Hegarty
Abstract:
Two experiments were conducted assessing the effects of presence or absence of rumen protozoa and dietary nitrate addition on rumen fermentation characteristics and methane production in Brahman heifers. The first experiment assessed changes in rumen fermentation pattern and in-vitro methane production post-refaunation and the second experiment investigated whether addition of nitrate to the incubation would give rise to methane mitigation additional to that contributed by defaunation. Ten Brahman heifers were progressively adapted to a diet containing coconut oil distillate 4.5% (COD) for 18 d and then all heifers were defaunated using sodium 1-(2-sulfonatooxyethoxy) dodecane (Empicol). After 15 d, the heifers were given a second dose of Empicol. Fifteen days after the second dosing, all heifers were allocated to defaunated or refaunated groups by stratified randomisation. On d 48, an oral dose of rumen fluid collected from unrelated faunated cattle was used to inoculate 5 heifers and form a refaunated group so that the effects of re-establishment of protozoa on fermentation characteristics could be investigated. Samples of rumen fluid collected from each animal using oesophageal intubation before feeding on d 48, 55, 62 and 69 were incubated for 23h in-vitro (experiment 1). On day 82, 2% of NO3 (as NaNO3) was included in in-vitro incubations (experiment 2) to test for additivity of NO3 and absence of protozoa effects on fermentation and methane production. It was concluded that increasing protozoal numbers were associated with increased methane production, with methane production rate significantly higher from refaunated heifers than from defaunated heifers 7, 14 and 21 d after refaunation. Concentration and proportions of major VFA, however, were not affected by protozoal treatments. There is scope for further reducing methane output through combining defaunation and dietary nitrate as the addition of nitrate in the defaunated heifers resulted in 86% reduction in methane production in-vitro.Keywords: defaunation, nitrate, fermentation, methane production
Procedia PDF Downloads 559
679 Optimum Performance of the Gas Turbine Power Plant Using Adaptive Neuro-Fuzzy Inference System and Statistical Analysis
Authors: Thamir K. Ibrahim, M. M. Rahman, Marwah Noori Mohammed
Abstract:
This study deals with modeling and performance enhancement of a gas-turbine combined cycle power plant. Providing clean and safe energy is one of the greatest challenges in meeting the requirements of a green environment. These requirements have eroded the long-standing dominance of the steam turbine (ST) in world power generation, and the gas turbine (GT) is replacing it. Therefore, it is necessary to predict the characteristics of the GT system and optimize its operating strategy by developing a simulation system. An integrated model and simulation code for evaluating the performance of gas turbine power plants was developed using MATLAB. The performance code for heavy-duty GT and CCGT power plants was validated against the real Baiji GT and MARAFIQ CCGT plants, and the results were satisfactory. A new correlation technique was considered for all types of simulation data, whose coefficient of determination (R2) was calculated as 0.9825. Some recently published correlations were checked against the Baiji GT plant, and an error analysis was applied. GT performance was judged by particular parameters selected from the simulation model, and an Adaptive Neuro-Fuzzy Inference System (ANFIS), an advanced optimization technique, was also utilized. The best thermal efficiency and power output attained were about 56% and 345 MW, respectively. Thus, the operating conditions and ambient temperature strongly influence the overall performance of the GT. The optimum efficiency and power are found at higher turbine inlet temperatures. It can be concluded that the developed models are powerful tools for estimating the overall performance of GT plants.
Keywords: gas turbine, optimization, ANFIS, performance, operating conditions
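For reference, the coefficient of determination quoted above (R2 = 0.9825) can be reproduced from paired measured and simulated values as in the following sketch; the arrays shown are placeholders, not plant data.

```python
import numpy as np

def r_squared(measured, predicted):
    # R^2 = 1 - SS_res / SS_tot, the usual coefficient of determination.
    measured, predicted = np.asarray(measured, float), np.asarray(predicted, float)
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Placeholder values standing in for measured vs. simulated plant output (MW).
measured = [310.0, 322.5, 335.0, 341.0, 345.0]
predicted = [308.9, 323.1, 334.2, 342.0, 344.6]
print(f"R^2 = {r_squared(measured, predicted):.4f}")
```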
Procedia PDF Downloads 426
678 Urban Neighborhood Center Location Evaluating Method Based On UNA the GIS Spatial Analysis Tools: Kerman's Neighborhood in Tehran Case
Authors: Sepideh Jabbari Behnam, Shadabeh Gashtasbi Iraei, Elnaz Mohsenin, MohammadAli Aghajani
Abstract:
Urban neighborhoods, as important urban forming cells, play a key role in creating urban texture and an integrated form. Nowadays, most neighborhood divisions are based on urban management systems without considering social issues and the other aspects of urban life. This can cause problems such as inappropriate services for city dwellers and the loss of local identity. In this regard, for the regeneration of such neighborhoods, it is essential to locate neighborhood centers with appropriate access and services for all residents. The main objective of this article is to locate neighborhood centers in a way that accounts simultaneously for most issues relating to physical features (such as the form of the access network and texture permeability) and other qualities such as land uses, densities and social and economic features. This paper uses methods of spatial analysis to survey the spatial structure and space syntax of urban textures together with urban network analysis systems. This is done with the GIS toolbox named UNA (Urban Network Analysis) and its five functions (Reach, Betweenness, Gravity, Closeness and Straightness). These functions were written according to space syntax theory and provide the corresponding outputs. The paper locates and evaluates the optimal position of neighborhood centers in order to create local centers, which is done by weighting each of these functions and taking spatial features into account.
Keywords: evaluate optimal location, local centers, location of neighborhood centers, spatial analysis, urban network
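A minimal sketch of the weighting step mentioned above: candidate locations are scored by a weighted sum of the five UNA measures (Reach, Betweenness, Gravity, Closeness, Straightness). The weights, the normalisation and the sample values are illustrative assumptions, not the values used in the Kerman case study.

```python
import numpy as np

# Rows: candidate neighborhood-center locations; columns: UNA measures in the
# order Reach, Betweenness, Gravity, Closeness, Straightness (placeholder values).
una_scores = np.array([
    [120, 0.31, 45.0, 0.020, 0.74],
    [ 95, 0.44, 52.0, 0.025, 0.69],
    [140, 0.28, 39.0, 0.018, 0.81],
])
weights = np.array([0.25, 0.20, 0.20, 0.20, 0.15])  # assumed relative importance

# Min-max normalise each measure so the weighted sum is scale-free.
norm = (una_scores - una_scores.min(axis=0)) / np.ptp(una_scores, axis=0)
composite = norm @ weights
print("Composite scores:", composite.round(3))
print("Best candidate location:", int(np.argmax(composite)))
```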
Procedia PDF Downloads 464
677 Research on Characteristics and Inventory Planning Counter-Measure of Mature Industrial Zones in the Background of China's New Normal
Authors: Dong Chen, Han Song, Tingting Wei
Abstract:
Industrial zones have made significant contributions to the economic development of Chinese urban areas for decades. In the background of China's New Normal, numbers of mature industrial zones are stepping into a new stage of inventory development instead of increment development. The aim of this study is to discover new characteristics and problems and corresponding inventory planning guidance of mature industrial zones. A case of Yangzhou Hi-Tech Industrial Development Zone is reported in this study. Based on a historical analysis and data analysis of land-use, it is found that land-use of the zone is near saturation and signs of land updating have begun to appear. It is observed that the zone is facing problems including disorder of land development, low economic productivity and single function. Through the data of economic output, tax contribution, industrial category, industry life cycle and environmental influence, a comprehensive assessment based on two dimensions, economic benefits and industrial matchup, is made upon every parcel in the zone. According to the assessment, the zone is divided into spatial units of the update with specific planning guidance. It comes to a conclusion as four directions of inventory planning guidance in mature industrial zones: moving industries with poor economic benefit and negative environmental influence, adding urban function and new industrial function to the zone, optimizing the function of important space, and restricting the mass layout of the real estate industry to provide space for industrial upgrading.Keywords: China's new normal, mature industrial zones, land-use, inventory planning
Procedia PDF Downloads 453
676 Pragmatic Development of Chinese Sentence Final Particles via Computer-Mediated Communication
Authors: Qiong Li
Abstract:
This study investigated in which condition computer-mediated communication (CMC) could promote pragmatic development. The focal feature included four Chinese sentence final particles (SFPs), a, ya, ba, and ne. They occur frequently in Chinese, and function as mitigators to soften the tone of speech. However, L2 acquisition of SFPs is difficult, suggesting the necessity of additional exposure to or explicit instruction on Chinese SFPs. This study follows this line and aims to explore two research questions: (1) Is CMC combined with data-driven instruction more effective than CMC alone in promoting L2 Chinese learners’ SFP use? (2) How does L2 Chinese learners’ SFP use change over time, as compared to the production of native Chinese speakers? The study involved 19 intermediate-level learners of Chinese enrolled at a private American university. They were randomly assigned to two groups: (1) the control group (N = 10), which was exposed to SFPs through CMC alone, (2) the treatment group (N = 9), which was exposed to SFPs via CMC and data-driven instruction. Learners interacted with native speakers on given topics through text-based CMC over Skype. Both groups went through six 30-minute CMC sessions on a weekly basis, with a one-week interval after the first two CMC sessions and a two-week interval after the second two CMC sessions (nine weeks in total). The treatment group additionally received a data-driven instruction after the first two sessions. Data analysis focused on three indices: token frequency, type frequency, and acceptability of SFP use. Token frequency was operationalized as the raw occurrence of SFPs per clause. Type frequency was the range of SFPs. Acceptability was rated by two native speakers using a rating rubric. The results showed that the treatment group made noticeable progress over time on the three indices. The production of SFPs approximated the native-like level. In contrast, the control group only slightly improved on token frequency. Only certain SFPs (a and ya) reached the native-like use. Potential explanations for the group differences were discussed in two aspects: the property of Chinese SFPs and the role of CMC and data-driven instruction. Though CMC provided the learners with opportunities to notice and observe SFP use, as a feature with low saliency, SFPs were not easily noticed in input. Data-driven instruction in the treatment group directed the learners’ attention to these particles, which facilitated the development.Keywords: computer-mediated communication, data-driven instruction, pragmatic development, second language Chinese, sentence final particles
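A small sketch of the token- and type-frequency indices described above, counting the four sentence final particles per clause in segmented transcript data; the toy clauses and the segmentation are assumptions for illustration only.

```python
SFPS = {"a", "ya", "ba", "ne"}

def sfp_indices(clauses):
    # clauses: list of clauses, each a list of romanised tokens.
    tokens = [t for clause in clauses for t in clause if t in SFPS]
    token_freq = len(tokens) / len(clauses)   # raw SFP occurrences per clause
    type_freq = len(set(tokens))              # range of different SFPs used
    return token_freq, type_freq

# Toy, pre-segmented CMC turns (assumed data).
clauses = [["hao", "a"], ["women", "zou", "ba"], ["ni", "ne"], ["dui"]]
print(sfp_indices(clauses))   # -> (0.75, 3)
```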
Procedia PDF Downloads 418
675 Role of Basic Health Units in Provision of Primary Health Services in District Swabi
Authors: Naila Awan, Shahrukh Inam
Abstract:
This study was conducted to highlight the role of basic health units in district Swabi, which provide primary health services to the people of the district, which has four tehsils. Tehsil Swabi was selected purposively for the study, and three villages were purposively selected from it. A total of 110 respondents were randomly selected for interview, i.e., 27 from Botakaa, 39 from Gulatee, and 44 from Darra Cham, using a proportional allocation sampling technique. A pretested and well-designed interview schedule was used to collect data as per the objectives, and a Chi-square test was applied to find an association between the quality of medicines and health improvement. The output of the test shows that the government was doing its best and providing enough facilities to individuals at the healthcare units, and the people were utilizing them. These resources were easily accessible to the people of the community. Medicines provided by the government were of good quality and quantity. School health sessions and community health sessions (SHS/CHS) were also conducted to deliver useful information and raise awareness regarding health problems and diseases. The staff of the BHU were present during working hours and performed their duties, and the respondents seemed satisfied with their behavior and service. However, there were no emergency resources available at the BHU after the working hours of the medical staff. It is recommended that the government provide an appropriate quantity and quality of medicines to the basic health units so that these healthcare units do not face any shortages of medicines at the end of the month. In addition, laboratory and blood testing facilities need to be provided in the basic health units, and the infrastructure should be made suitable, satisfactory, and more functional.
Keywords: community health session, basic health units, outpatient department, tuberculosis
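The association test mentioned above can be reproduced with a standard chi-square test of independence on the cross-tabulated responses, for example with SciPy; the 2x2 counts below are placeholders, not the survey data.

```python
from scipy.stats import chi2_contingency

# Rows: perceived medicine quality (good, poor); columns: health improvement (yes, no).
# Placeholder counts standing in for the 110 survey responses.
table = [[52, 14],
         [18, 26]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Association between medicine quality and health improvement is significant.")
```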
Procedia PDF Downloads 84
674 Prosodic Realization of Focus in the Public Speeches Delivered by Spanish Learners of English and English Native Speakers
Authors: Raúl Jiménez Vilches
Abstract:
Native (L1) speakers can mark prosodically one part of an utterance and make it more relevant as opposed to the rest of the constituents. Conversely, non-native (L2) speakers encounter problems when it comes to marking prosodically information structure in English. In fact, the L2 speaker’s choice for the prosodic realization of focus is not so clear and often obscures the intended pragmatic meaning and the communicative value in general. This paper reports some of the findings obtained in an L2 prosodic training course for Spanish learners of English within the context of public speaking. More specifically, it analyses the effects of the course experiment in relation to the non-native production of the tonic syllable to mark focus and compares it with the public speeches delivered by native English speakers. The whole experimental training was executed throughout eighteen input sessions (1,440 minutes total time) and all the sessions took place in the classroom. In particular, the first part of the course provided explicit instruction on the recognition and production of the tonic syllable and how the tonic syllable is used to express focus. The non-native and native oral presentations were acoustically analyzed using Praat software for speech analysis (7,356 words in total). The investigation adopted mixed and embedded methodologies. Quantitative information is needed when measuring acoustically the phonetic realization of focus. Qualitative data such as questionnaires, interviews, and observations were also used to interpret the quantitative data. The embedded experiment design was implemented through the analysis of the public speeches before and after the intervention. Results indicate that, even after the L2 prosodic training course, Spanish learners of English still show some major inconsistencies in marking focus effectively. Although there was occasional improvement regarding the choice for location and word classes, Spanish learners were, in general, far from achieving similar results to the ones obtained by the English native speakers in the two types of focus. The prosodic realization of focus seems to be one of the hardest areas of the English prosodic system to be mastered by Spanish learners. A funded research project is in the process of moving the present classroom-based experiment to an online environment (mobile app) and determining whether there is a more effective focus usage through CAPT (Computer-Assisted Pronunciation) tools.Keywords: focus, prosody, public speaking, Spanish learners of English
Procedia PDF Downloads 101
673 Setting Uncertainty Conditions Using Singular Values for Repetitive Control in State Feedback
Authors: Muhammad A. Alsubaie, Mubarak K. H. Alhajri, Tarek S. Altowaim
Abstract:
A repetitive controller designed to accommodate periodic disturbances via state feedback is discussed. Periodic disturbances can be represented by a time delay model in a positive feedback loop acting on system output. A direct use of the small gain theorem solves the periodic disturbances problem via 1) isolating the delay model, 2) finding the overall system representation around the delay model and 3) designing a feedback controller that assures overall system stability and tracking error convergence. This paper addresses uncertainty conditions for the repetitive controller designed in state feedback in either past error feedforward or current error feedback using singular values. The uncertainty investigation is based on the overall system found and the stability condition associated with it; depending on the scheme used, to set an upper/lower limit weighting parameter. This creates a region that should not be exceeded in selecting the weighting parameter which in turns assures performance improvement against system uncertainty. Repetitive control problem can be described in lifted form. This allows the usage of singular values principle in setting the range for the weighting parameter selection. The Simulation results obtained show a tracking error convergence against dynamic system perturbation if the weighting parameter chosen is within the range obtained. Simulation results also show the advantage of weighting parameter usage compared to the case where it is omitted.Keywords: model mismatch, repetitive control, singular values, state feedback
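The stability reasoning above rests on the small gain theorem: the largest singular value of the interconnection seen by the uncertainty must stay below one. A minimal numerical check of that condition is sketched below; the matrix M is a placeholder for the overall system representation around the delay model, not the controller design itself.

```python
import numpy as np

def small_gain_ok(M, margin=1.0):
    # Small gain condition: the maximum singular value (spectral norm) of the
    # interconnection seen by the unit-norm-bounded uncertainty must be < margin.
    sigma_max = np.linalg.svd(M, compute_uv=False)[0]
    return sigma_max, sigma_max < margin

# Placeholder lifted-system matrix (e.g. weighted sensitivity over one period).
M = np.array([[0.40, 0.10],
              [0.05, 0.60]])
sigma_max, ok = small_gain_ok(M)
print(f"max singular value = {sigma_max:.3f}, small gain satisfied: {ok}")
```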
Procedia PDF Downloads 156
672 An Improved Total Variation Regularization Method for Denoising Magnetocardiography
Authors: Yanping Liao, Congcong He, Ruigang Zhao
Abstract:
The application of magnetocardiography signals to detect cardiac electrical function is a new technology developed in recent years. The magnetocardiography signal is detected with Superconducting Quantum Interference Devices (SQUID) and has considerable advantages over electrocardiography (ECG). It is difficult to extract Magnetocardiography (MCG) signal which is buried in the noise, which is a critical issue to be resolved in cardiac monitoring system and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is proposed to denoise MCG signal. The approach transforms the denoising problem into a minimization optimization problem and the Majorization-minimization algorithm is applied to iteratively solve the minimization problem. However, traditional TV regularization method tends to cause step effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising MCG signal is proposed to improve the denoising precision. The improvement of this method is mainly divided into three parts. First, high-order TV is applied to reduce the step effect, and the corresponding second derivative matrix is used to substitute the first order. Then, the positions of the non-zero elements in the second order derivative matrix are determined based on the peak positions that are detected by the detection window. Finally, adaptive constraint parameters are defined to eliminate noises and preserve signal peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.Keywords: constraint parameters, derivative matrix, magnetocardiography, regular term, total variation
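A minimal sketch of first-order total variation denoising by majorization-minimization, in the spirit of the baseline method the paper improves upon; it omits the higher-order terms, the peak-detection window and the adaptive constraint parameters that constitute the paper's contribution, and the test signal is a stand-in, not MCG data.

```python
import numpy as np

def tv_denoise_mm(y, lam=2.0, n_iter=50, eps=1e-8):
    """First-order TV denoising via majorization-minimization:
    minimize 0.5*||y - x||^2 + lam*||D x||_1, with D the first-difference matrix."""
    N = len(y)
    D = np.diff(np.eye(N), axis=0)            # (N-1) x N first-difference matrix
    x = y.astype(float).copy()
    for _ in range(n_iter):
        w = 1.0 / (np.abs(D @ x) + eps)       # MM weights from the current iterate
        A = np.eye(N) + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, y)             # minimizer of the quadratic majorizer
    return x

# Noisy piecewise-constant test signal standing in for an MCG trace.
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50), np.zeros(50)])
noisy = clean + 0.3 * rng.standard_normal(clean.size)
denoised = tv_denoise_mm(noisy, lam=2.0)
print("input RMSE :", np.sqrt(np.mean((noisy - clean) ** 2)).round(3))
print("output RMSE:", np.sqrt(np.mean((denoised - clean) ** 2)).round(3))
```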
Procedia PDF Downloads 153
671 Query Task Modulator: A Computerized Experimentation System to Study Media-Multitasking Behavior
Authors: Premjit K. Sanjram, Gagan Jakhotiya, Apoorv Goyal, Shanu Shukla
Abstract:
In psychological research, laboratory experiments often face the trade-off issue between experimental control and mundane realism. With the advent of Immersive Virtual Environment Technology (IVET), this issue seems to be at bay. However there is a growing challenge within the IVET itself to design and develop system or software that captures the psychological phenomenon of everyday lives. One such phenomena that is of growing interest is ‘media-multitasking’ To aid laboratory researches in media-multitasking this paper introduces Query Task Modulator (QTM), a computerized experimentation system to study media-multitasking behavior in a controlled laboratory environment. The system provides a computerized platform in conducting an experiment for experimenters to study media-multitasking in which participants will be involved in a query task. The system has Instant Messaging, E-mail, and Voice Call features. The answers to queries are provided on the left hand side information panel where participants have to search for it and feed the information in the respective communication media blocks as fast as possible. On the whole the system will collect multitasking behavioral data. To analyze performance there is a separate output table that records the reaction times and responses of the participants individually. Information panel and all the media blocks will appear on a single window in order to ensure multi-modality feature in media-multitasking and equal emphasis on all the tasks (thus avoiding prioritization to a particular task). The paper discusses the development of QTM in the light of current techniques of studying media-multitasking.Keywords: experimentation system, human performance, media-multitasking, query-task
Procedia PDF Downloads 557
670 Adversarial Attacks and Defenses on Deep Neural Networks
Authors: Jonathan Sohn
Abstract:
Deep neural networks (DNNs) have shown state-of-the-art performance for many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks have been studied in the context of deep neural networks, which aim to alter the results of deep neural networks by modifying the inputs slightly. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we focus on studying the adversarial attacks and defenses on DNNs for image classification. There are two types of adversarial attacks studied which are fast gradient sign method (FGSM) attack and projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate the input images into different categories. The adversarial attack slightly alters the image to move over the decision boundary, causing the DNN to misclassify the image. FGSM attack obtains the gradient with respect to the image and updates the image once based on the gradients to cross the decision boundary. PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also another type of attack called the target attack. This adversarial attack is designed to make the machine classify an image to a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training. Specifically, instead of training the neural network with clean examples, we can explicitly let the neural network learn from the adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by FGSM and PGD attacks, respectively. If we utilize FGSM training as a defense method, the classification accuracy greatly improves from 39.50% to 92.31% for FGSM attacks and from 34.01% to 75.63% for PGD attacks. To further improve the classification accuracy under adversarial attacks, we can also use a stronger PGD training method. PGD training improves the accuracy by 2.7% under FGSM attacks and 18.4% under PGD attacks over FGSM training. It is worth mentioning that both FGSM and PGD training do not affect the accuracy of clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective way to defend against such attacks. PGD attacks and defence are overall significantly more effective than FGSM methods.Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning
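A minimal PyTorch sketch of the FGSM attack and the corresponding adversarial-training step described above; the model, data loader and epsilon value are placeholders, and PGD would simply repeat the perturbation step several times with projection back onto the epsilon-ball.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.1):
    # One-step attack: move each pixel by epsilon in the direction of the loss gradient.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.1):
    # Defense: train on adversarial examples instead of (or alongside) clean ones.
    adv = fgsm_attack(model, images, labels, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```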
Procedia PDF Downloads 196
669 A Novel Hybrid Deep Learning Architecture for Predicting Acute Kidney Injury Using Patient Record Data and Ultrasound Kidney Images
Authors: Sophia Shi
Abstract:
Acute kidney injury (AKI) is the sudden onset of kidney damage in which the kidneys cannot filter waste from the blood, requiring emergency hospitalization. AKI patient mortality rate is high in the ICU and is virtually impossible for doctors to predict because it is so unexpected. Currently, there is no hybrid model predicting AKI that takes advantage of two types of data. De-identified patient data from the MIMIC-III database and de-identified kidney images and corresponding patient records from the Beijing Hospital of the Ministry of Health were collected. Using data features including serum creatinine among others, two numeric models using MIMIC and Beijing Hospital data were built, and with the hospital ultrasounds, an image-only model was built. Convolutional neural networks (CNN) were used, VGG and Resnet for numeric data and Resnet for image data, and they were combined into a hybrid model by concatenating feature maps of both types of models to create a new input. This input enters another CNN block and then two fully connected layers, ending in a binary output after running through Softmax and additional code. The hybrid model successfully predicted AKI and the highest AUROC of the model was 0.953, achieving an accuracy of 90% and F1-score of 0.91. This model can be implemented into urgent clinical settings such as the ICU and aid doctors by assessing the risk of AKI shortly after the patient’s admission to the ICU, so that doctors can take preventative measures and diminish mortality risks and severe kidney damage.Keywords: Acute kidney injury, Convolutional neural network, Hybrid deep learning, Patient record data, ResNet, Ultrasound kidney images, VGG
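A minimal PyTorch sketch of the fusion idea described above: feature maps from an image branch and a record-data branch are concatenated and passed through further layers to a binary output. The layer sizes and the stand-in branches are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class HybridAKIModel(nn.Module):
    def __init__(self, image_branch, record_branch, img_dim=512, rec_dim=64):
        super().__init__()
        self.image_branch = image_branch    # e.g. a CNN trunk ending in img_dim features
        self.record_branch = record_branch  # e.g. an MLP over lab values ending in rec_dim features
        self.head = nn.Sequential(
            nn.Linear(img_dim + rec_dim, 128), nn.ReLU(),
            nn.Linear(128, 2),              # two classes: AKI / no AKI
        )

    def forward(self, image, record):
        fused = torch.cat([self.image_branch(image), self.record_branch(record)], dim=1)
        return self.head(fused)             # logits; apply softmax for probabilities

# Example with stand-in branches (placeholders for the ResNet/VGG trunks).
img_branch = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 512), nn.ReLU())
rec_branch = nn.Sequential(nn.Linear(20, 64), nn.ReLU())
model = HybridAKIModel(img_branch, rec_branch)
logits = model(torch.randn(4, 3, 64, 64), torch.randn(4, 20))
print(logits.shape)  # torch.Size([4, 2])
```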
Procedia PDF Downloads 133
668 Use of Two-Dimensional Hydraulics Modeling for Design of Erosion Remedy
Authors: Ayoub. El Bourtali, Abdessamed.Najine, Amrou Moussa. Benmoussa
Abstract:
One of the main goals of river engineering is river training, which is defined as controlling and predicting the behavior of a river. It is taking effective measurements to eliminate all related risks and thus improve the river system. In some rivers, the riverbed continues to erode and degrade; therefore, equilibrium will never be reached. Generally, river geometric characteristics and riverbed erosion analysis are some of the most complex but critical topics in river engineering and sediment hydraulics; riverbank erosion is the second answering process in hydrodynamics, which has a major impact on the ecological chain and socio-economic process. This study aims to integrate the new computer technology that can analyze erosion and hydraulic problems through computer simulation and modeling. Choosing the right model remains a difficult and sensitive job for field engineers. This paper makes use of the 5.0.4 version of the HEC-RAS model. The river section is adopted according to the gauged station and the proximity of the adjustment. In this work, we will demonstrate how 2D hydraulic modeling helped clarify the design and cover visuals to set up depth and velocities at riverbanks and throughout advanced structures. The hydrologic engineering center's-river analysis system (HEC-RAS) 2D model was used to create a hydraulic study of the erosion model. The geometric data were generated from the 12.5-meter x 12.5-meter resolution digital elevation model. In addition to showing eroded or overturned river sections, the model output also shows patterns of riverbank changes, which can help us reduce problems caused by erosion.Keywords: 2D hydraulics model, erosion, floodplain, hydrodynamic, HEC-RAS, riverbed erosion, river morphology, resolution digital data, sediment
Procedia PDF Downloads 191
667 Using Corpora in Semantic Studies of English Adjectives
Authors: Oxana Lukoshus
Abstract:
The methods of corpus linguistics, a well-established field of research, are being increasingly applied in cognitive linguistics. Corpora data are especially useful for different quantitative studies of grammatical and other aspects of language. The main objective of this paper is to demonstrate how present-day corpora can be applied in semantic studies in general and in semantic studies of adjectives in particular. Polysemantic adjectives have been the subject of numerous studies. But most of them have been carried out on dictionaries. Undoubtedly, dictionaries are viewed as one of the basic data sources, but only at the initial steps of a research. The author usually starts with the analysis of the lexicographic data after which s/he comes up with a hypothesis. In the research conducted three polysemantic synonyms true, loyal, faithful have been analyzed in terms of differences and similarities in their semantic structure. A corpus-based approach in the study of the above-mentioned adjectives involves the following. After the analysis of the dictionary data there was the reference to the following corpora to study the distributional patterns of the words under study – the British National Corpus (BNC) and the Corpus of Contemporary American English (COCA). These corpora are continually updated and contain thousands of examples of the words under research which make them a useful and convenient data source. For the purpose of this study there were no special needs regarding genre, mode or time of the texts included in the corpora. Out of the range of possibilities offered by corpus-analysis software (e.g. word lists, statistics of word frequencies, etc.), the most useful tool for the semantic analysis was the extracting a list of co-occurrence for the given search words. Searching by lemmas, e.g. true, true to, and grouping the results by lemmas have proved to be the most efficient corpora feature for the adjectives under the study. Following the search process, the corpora provided a list of co-occurrences, which were then to be analyzed and classified. Not every co-occurrence was relevant for the analysis. For example, the phrases like An enormous sense of responsibility to protect the minds and hearts of the faithful from incursions by the state was perceived to be the basic duty of the church leaders or ‘True,’ said Phoebe, ‘but I'd probably get to be a Union Official immediately were left out as in the first example the faithful is a substantivized adjective and in the second example true is used alone with no other parts of speech. The subsequent analysis of the corpora data gave the grounds for the distribution groups of the adjectives under the study which were then investigated with the help of a semantic experiment. To sum it up, the corpora-based approach has proved to be a powerful, reliable and convenient tool to get the data for the further semantic study.Keywords: corpora, corpus-based approach, polysemantic adjectives, semantic studies
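A small sketch of the co-occurrence extraction step described above: for each adjective lemma, the words appearing in a fixed window around it are counted so that the resulting lists can be grouped and classified. The window size and the toy sentence are assumptions; the actual study used the BNC and COCA query interfaces.

```python
from collections import Counter

ADJECTIVES = {"true", "loyal", "faithful"}

def collocates(tokens, window=2):
    # Count words occurring within +/- window positions of each target adjective.
    counts = {adj: Counter() for adj in ADJECTIVES}
    for i, tok in enumerate(tokens):
        if tok in ADJECTIVES:
            neighbours = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
            counts[tok].update(neighbours)
    return counts

text = "he remained loyal to his friends and stayed true to his word like a faithful servant"
print(collocates(text.lower().split()))
```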
Procedia PDF Downloads 315
666 Hand Movements and the Effect of Using Smart Teaching Aids: Quality of Writing Styles Outcomes of Pupils with Dysgraphia
Authors: Sadeq Al Yaari, Muhammad Alkhunayn, Sajedah Al Yaari, Adham Al Yaari, Ayman Al Yaari, Montaha Al Yaari, Ayah Al Yaari, Fatehi Eissa
Abstract:
Dysgraphia is a neurological disorder of written expression that impairs writing ability and fine motor skills, resulting primarily in problems relating not only to handwriting but also to writing coherence and cohesion. We investigate the properties of smart writing technology to highlight some unique effects it has on the academic performance of pupils with dysgraphia. In Amis, pupils with dysgraphia struggle to express their ideas in writing when only ordinary writing aids, the default strategy, are available. The Amis data suggest a possible connection between the available writing aids and pupils' writing improvement, and therefore between those aids and the expression and comprehension of texts. A group of thirteen dysgraphic pupils was placed in a regular primary school classroom, with twenty-one pupils recruited to the study as a control group. To ensure validity, reliability and accountability, both groups studied writing courses for two semesters, of which the first was equipped with smart writing aids while the second took place in an ordinary classroom. Two pre-tests were undertaken at the beginning of the first two semesters, and two post-tests were administered at the end of both semesters. The tests examined pupils' ability to write coherent, cohesive and expressive texts. The dysgraphic group, which received the treatment of a writing course in classes with smart technology in the first semester, produced significantly greater increases in written expression than in an ordinary classroom, and their performance was better than that of the control group in the second semester. The current study concludes that using smart teaching aids is a must, both for teaching and for learning, where dysgraphia is concerned. Furthermore, it demonstrates that for young pupils with dysgraphia, expressive tasks are more challenging than coherence and cohesion tasks. The study therefore supports the literature suggesting a role for smart educational aids in writing, and indicates that smart writing techniques may be an efficient addition to regular educational practices, notably in special educational institutions and speech-language therapy facilities. However, further research is needed on prompting adults with dysgraphia more often than older adults without dysgraphia in order to get them to complete the other productive and/or written-skills tasks.
Keywords: smart technology, writing aids, pupils with dysgraphia, hand movements
Procedia PDF Downloads 40
665 Coherent All-Fiber and Polarization Maintaining Source for CO2 Range-Resolved Differential Absorption Lidar
Authors: Erwan Negre, Ewan J. O'Connor, Juha Toivonen
Abstract:
The need for CO2 monitoring technologies grows simultaneously with the worldwide concerns regarding environmental challenges. To that purpose, we developed a compact coherent all-fiber ranged-resolved Differential Absorption Lidar (RR-DIAL). It has been designed along a tunable 2x1fiber optic switch set to a frequency of 1 Hz between two Distributed FeedBack (DFB) lasers emitting in the continuous-wave mode at 1571.41 nm (absorption line of CO2) and 1571.25 nm (CO2 absorption-free line), with linewidth and tuning range of respectively 1 MHz and 3 nm over operating wavelength. A three stages amplification through Erbium and Erbium-Ytterbium doped fibers coupled to a Radio Frequency (RF) driven Acousto-Optic Modulator (AOM) generates 100 ns pulses at a repetition rate from 10 to 30 kHz with a peak power up to 2.5 kW and a spatial resolution of 15 m, allowing fast and highly resolved CO2 profiles. The same afocal collection system is used for the output of the laser source and the backscattered light which is then directed to a circulator before being mixed with the local oscillator for heterodyne detection. Packaged in an easily transportable box which also includes a server and a Field Programmable Gate Array (FPGA) card for on-line data processing and storing, our setup allows an effective and quick deployment for versatile in-situ analysis, whether it be vertical atmospheric monitoring, large field mapping or sequestration site continuous oversight. Setup operation and results from initial field measurements will be discussed.Keywords: CO2 profiles, coherent DIAL, in-situ atmospheric sensing, near infrared fiber source
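For context, the range-resolved CO2 number density in a DIAL retrieval follows from the ratio of on-line and off-line backscatter returns between adjacent range gates. A sketch of that standard relation is given below; the symbols and sample values are illustrative assumptions, not measurements from the instrument described.

```python
import numpy as np

def dial_number_density(p_on, p_off, delta_r, delta_sigma):
    """Standard DIAL retrieval: n(R) = ln[(P_off(R+dR) P_on(R)) / (P_on(R+dR) P_off(R))]
    / (2 * delta_sigma * delta_r), evaluated between adjacent range gates."""
    p_on, p_off = np.asarray(p_on, float), np.asarray(p_off, float)
    ratio = (p_off[1:] * p_on[:-1]) / (p_on[1:] * p_off[:-1])
    return np.log(ratio) / (2.0 * delta_sigma * delta_r)

# Illustrative returns over four 15 m gates and an assumed differential cross-section.
p_on = [1.00, 0.895, 0.801, 0.717]   # on-line (1571.41 nm) backscatter, arbitrary units
p_off = [1.00, 0.900, 0.810, 0.730]  # off-line (1571.25 nm) backscatter
n = dial_number_density(p_on, p_off, delta_r=15.0, delta_sigma=5e-27)  # m, m^2 (assumed)
print(n)   # molecules per m^3 in each range bin
```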
Procedia PDF Downloads 128
664 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation
Authors: Somayeh Komeylian
Abstract:
The direction of arrival (DoA) estimation is the crucial aspect of the radar technologies for detecting and dividing several signal sources. In this scenario, the antenna array output modeling involves numerous parameters including noise samples, signal waveform, signal directions, signal number, and signal to noise ratio (SNR), and thereby the methods of the DoA estimation rely heavily on the generalization characteristic for establishing a large number of the training data sets. Hence, we have analogously represented the two different optimization models of the DoA estimation; (1) the implementation of the decision directed acyclic graph (DDAG) for the multiclass least-squares support vector machine (LS-SVM), and (2) the optimization method of the deep neural network (DNN) radial basis function (RBF). We have rigorously verified that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs for the three classes. However, the accuracy and robustness of the DoA estimation are still highly sensitive to technological imperfections of the antenna arrays such as non-ideal array design and manufacture, array implementation, mutual coupling effect, and background radiation and thereby the method may fail in representing high precision for the DoA estimation. Therefore, this work has a further contribution on developing the DNN-RBF model for the DoA estimation for overcoming the limitations of the non-parametric and data-driven methods in terms of array imperfection and generalization. The numerical results of implementing the DNN-RBF model have confirmed the better performance of the DoA estimation compared with the LS-SVM algorithm. Consequently, we have analogously evaluated the performance of utilizing the two aforementioned optimization methods for the DoA estimation using the concept of the mean squared error (MSE).Keywords: DoA estimation, Adaptive antenna array, Deep Neural Network, LS-SVM optimization model, Radial basis function, and MSE
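As a small illustration of the evaluation criterion named above, the mean squared error between true and estimated arrival angles, together with the Gaussian radial basis function underlying the DNN-RBF model, can be written as follows; the angle values and the RBF width are placeholders.

```python
import numpy as np

def mse(true_doa_deg, est_doa_deg):
    # Mean squared error between true and estimated directions of arrival.
    true_doa_deg, est_doa_deg = np.asarray(true_doa_deg, float), np.asarray(est_doa_deg, float)
    return np.mean((true_doa_deg - est_doa_deg) ** 2)

def rbf(x, center, width=1.0):
    # Gaussian radial basis function used as the hidden-unit activation.
    return np.exp(-np.sum((np.asarray(x) - np.asarray(center)) ** 2) / (2.0 * width**2))

true_doa = [-20.0, 0.0, 35.0]     # degrees, placeholder sources
est_doa = [-19.2, 0.6, 34.1]
print("MSE (deg^2):", mse(true_doa, est_doa))
print("RBF response:", rbf([0.2, 0.1], [0.0, 0.0], width=0.5))
```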
Procedia PDF Downloads 101
663 Embodied Communication - Examining Multimodal Actions in a Digital Primary School Project
Authors: Anne Öman
Abstract:
Today in Sweden and in other countries, a variety of digital artefacts, such as laptops, tablets, interactive whiteboards, are being used at all school levels. From an educational perspective, digital artefacts challenge traditional teaching because they provide a range of modes for expression and communication and are not limited to the traditional medium of paper. Digital technologies offer new opportunities for representations and physical interactions with objects, which put forward the role of the body in interaction and learning. From a multimodal perspective the emphasis is on the use of multiple semiotic resources for meaning- making and the study presented here has examined the differential use of semiotic resources by pupils interacting in a digitally designed task in a primary school context. The instances analyzed in this paper come from a case study where the learning task was to create an advertising film in a film-software. The study in focus involves the analysis of a single case with the emphasis on the examination of the classroom setting. The research design used in this paper was based on a micro ethnographic perspective and the empirical material was collected through video recordings of small-group work in order to explore pupils’ communication within the group activity. The designed task described here allowed students to build, share, collaborate upon and publish the redesigned products. The analysis illustrates the variety of communicative modes such as body position, gestures, visualizations, speech and the interaction between these modes and the representations made by the pupils. The findings pointed out the importance of embodied communication during the small- group processes from a learning perspective as well as a pedagogical understanding of pupils’ representations, which were similar from a cultural literacy perspective. These findings open up for discussions with further implications for the school practice concerning the small- group processes as well as the redesigned products. Wider, the findings could point out how multimodal interactions shape the learning experience in the meaning-making processes taking into account that language in a globalized society is more than reading and writing skills.Keywords: communicative learning, interactive learning environments, pedagogical issues, primary school education
Procedia PDF Downloads 410
662 Implications of Fulani Herders/Farmers Conflict on the Socio-Economic Development of Nigeria (2000-2018)
Authors: Larry E. Udu, Joseph N. Edeh
Abstract:
Unarguably, land is an indispensable factor of production and has been instrumental to numerous conflicts between crop farmers and herders in Nigeria. These conflicts pose a grave challenge to life and property, to food security and ultimately to the sustainable socio-economic development of the nation. The paper examines the causes of the Fulani herders/farmers conflicts, particularly in the Middle Belt; the number of occurrences and the extent of damage; and their socio-economic implications. A content analytical approach was adopted as the methodology, with data drawn extensively from secondary sources. Findings reveal that the major causes of the conflict are attributable to violation of traditions and laws, trespass, and cultural factors. Consequently, the number of attacks and the level of fatalities, coupled with the displacement of farmers and the destruction of private and public facilities, impacted negatively on farmers' output, with attendant socio-economic implications for the sustainable livelihood of the people and the nation at large. For instance, Mercy Corps (a global humanitarian organization), in its research covering 2013-2016, asserts that a loss of $14 billion was incurred within 3 years; that if the conflict were resolved, the average affected household could see income increase by at least 64 percent, and potentially 210 percent or higher; and that states affected by the conflicts lost an average of 47 percent of taxes/IGR. The paper therefore recommends strict adherence to grazing laws, platforms for dialogue built on compromise where necessary, and encouragement of cattle farmers to build ranches for their cattle according to international standards.
Keywords: conflict, farmers, herders, Nigeria, socio-economic implications
Procedia PDF Downloads 209661 Knowledge and Attitude Towards Strabismus Among Adult Residents in Woreta Town, Northwest Ethiopia: A Community-Based Study
Authors: Henok Biruk Alemayehu, Kalkidan Berhane Tsegaye, Fozia Seid Ali, Nebiyat Feleke Adimassu, Getasew Alemu Mersha
Abstract:
Background: Strabismus is a visual disorder in which the eyes are misaligned and point in different directions. Untreated strabismus can lead to amblyopia, loss of binocular vision, and social stigma due to its appearance. Since knowledge is assumed to be pertinent for early screening and prevention of strabismus, the main objective of this study was to assess knowledge and attitudes toward strabismus in Woreta town, Northwest Ethiopia. Providing data in this area is important for planning health policies. Methods: A community-based cross-sectional study was conducted in Woreta town from April to May 2020. The sample size was determined using a single population proportion formula, taking a 50% proportion of good knowledge, a 95% confidence level, a 5% margin of error, and a 10% non-response rate. Accordingly, the final computed sample size was 424. All four kebeles were included in the study. There were 42,595 people in total, with 39,684 adults and 9,229 households. A sampling fraction ‘k’ was obtained by dividing the number of households by the calculated sample size of 424. Systematic random sampling with proportional allocation was used to select the participating households with a sampling fraction (k) of 21, i.e., every 21st household was approached for inclusion in the study. One individual was selected randomly, using the lottery method, from each household with more than one adult to obtain the final sample. The data were collected through face-to-face interviews with a pretested, semi-structured questionnaire that was translated from English to Amharic and back to English to maintain its consistency. Data were entered using EpiData version 3.1, then processed and analyzed with SPSS version 20. Descriptive and analytical statistics were employed to summarize the data. A p-value of less than 0.05 was used to declare statistical significance. Result: A total of 401 individuals aged over 18 years participated, for a response rate of 94.5%. Of those who responded, 56.6% were males. Of all the participants, 36.9% were illiterate. The proportion of people with poor knowledge of strabismus was 45.1%. It was shown that 53.9% of the respondents had a favorable attitude. Older age, a higher educational level, a history of eye examination, and a family history of strabismus were significantly associated with good knowledge of strabismus. A higher educational level, older age, and having heard about strabismus were significantly associated with a favorable attitude toward strabismus. Conclusion and recommendation: The proportions of good knowledge and favorable attitude towards strabismus were lower than previously reported in Gondar City, Northwest Ethiopia. There is a need to provide health education and promotion campaigns on strabismus to the community: what strabismus is, its possible treatments, and the need to bring children to the eye care center for early diagnosis and treatment. The study advocates for prospective research employing qualitative study designs and suggests exploring studies that investigate cause-effect relationships. Keywords: strabismus, knowledge, attitude, Woreta
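As a minimal sketch of the sample-size calculation reported in the Methods section, the snippet below reproduces the single population proportion formula with the stated assumptions (50% proportion, 95% confidence level, 5% margin of error, 10% non-response rate) and derives the sampling interval from the 9,229 households; the rounding steps are an assumption about how the reported figures of 424 and 21 were obtained.

```python
import math

# Single population proportion formula: n0 = Z^2 * p * (1 - p) / d^2
z = 1.96             # Z value for a 95% confidence level
p = 0.50             # assumed proportion of good knowledge
d = 0.05             # margin of error
non_response = 0.10  # non-response rate adjustment

n0 = math.ceil((z ** 2) * p * (1 - p) / d ** 2)  # 384.16 -> 385
n = math.ceil(n0 * (1 + non_response))           # 423.5  -> 424

households = 9229
k = households // n                              # systematic sampling interval

print(n0, n, k)  # 385 424 21
```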
Procedia PDF Downloads 63660 The Role of Dialogue in Shared Leadership and Team Innovative Behavior Relationship
Authors: Ander Pomposo
Abstract:
Purpose: The aim of this study was to investigate the impact that dialogue has on the relationship between shared leadership and innovative behavior, and the importance of dialogue in innovation. This study aims to contribute to the literature by providing theorists and researchers with a better understanding of how to move forward in the study of moderator variables in the relationship between shared leadership and team outcomes such as innovation. Methodology: A systematic review of the literature, a method originally adopted from the medical sciences but also used in management and leadership studies, was conducted to synthesize research in a systematic, transparent, and reproducible manner. A final sample of 48 empirical studies was scientifically synthesized. Findings: Shared leadership offers a better solution to team management challenges and goes beyond the classical, hierarchical, or vertical leadership models based on the individual leader approach. One of the outcomes that emerges from shared leadership is team innovative behavior. To intensify the relationship between shared leadership and team innovative behavior, and to understand when it is more effective, the moderating effects of other variables in this relationship should be examined. This synthesis of the empirical studies revealed that dialogue is a moderator variable that has an impact on the relationship between shared leadership and team innovative behavior when leadership is understood as a relational process. Dialogue is an activity between at least two speech partners trying to fulfill a collective goal and is a way of living open to people and ideas through interaction. Dialogue is productive when team members engage relationally with one another. When this happens, participants are more likely to take responsibility for the tasks they are involved in and for the relationships they have with others. In this relational engagement, participants are likely to establish high-quality connections with a high degree of generativity. This study suggests that organizations should facilitate dialogue among team members under shared leadership, which has a positive impact on innovation and offers a more adaptive framework for the leadership needed in teams working on complex tasks. These results underscore the need for more research on the role that dialogue plays in contributing to important organizational outcomes such as innovation. Case studies describing both best practices and obstacles of dialogue in team innovative behavior are necessary to gain more detailed insight into the field. It will be interesting to see how all these fields of research evolve and are implemented in dialogue practices in organizations that use team-based structures to deal with uncertainty, fast-changing environments, globalization, and increasingly complex work. Keywords: dialogue, innovation, leadership, shared leadership, team innovative behavior
Procedia PDF Downloads 183659 Query in Grammatical Forms and Corpus Error Analysis
Authors: Katerina Florou
Abstract:
Two decades after the term "learner corpora" was coined for collections of texts created by foreign or second language learners across various language contexts, and some years following the suggestion to incorporate "focusing on form" within a Task-Based Learning framework, this study aims to explore how learner corpora, whether annotated with errors or not, can facilitate a focus on form in an educational setting. It argues that analyzing linguistic form serves the purpose of enabling students to delve into language and gain an understanding of different facets of the foreign language. This same objective applies when analyzing learner corpora marked with errors or in their raw state, but in that scenario, the emphasis lies on identifying incorrect forms. Teachers should aim to address errors or gaps in the students' second language knowledge while they engage in a task. Building on this recommendation, we compared the written output of two student groups: the first group (G1) carried out the focusing-on-form phase by studying a specific aspect of the Italian language, namely the past participle, through examples from native speakers and grammar rules; the second group (G2) focused on form by scrutinizing their own errors and comparing them with analogous examples from a native speaker corpus. In order to test our hypothesis, we created four learner corpora. The initial two were generated during the task phase, with one representing each group of students, while the remaining two were produced as a follow-up activity at the end of the lesson. The results of the first comparison indicated that students' exposure to their own errors can enhance their grasp of a grammatical element. The study is in its second stage, and more results are to be announced. Keywords: corpus interlanguage analysis, task-based learning, Italian as a foreign language, learner corpora
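To make the group comparison concrete, here is a minimal, hypothetical sketch of the kind of error query involved: counting incorrect past-participle forms in error-annotated learner texts for the two groups. The <err type="pp"> tag format, the helper function, and the Italian example sentences are illustrative assumptions, not the annotation scheme or data used in the study.

```python
import re

def past_participle_error_rate(texts):
    """Return (errors, tokens, rate) for error-annotated learner texts."""
    errors = sum(len(re.findall(r'<err type="pp">.*?</err>', t)) for t in texts)
    tokens = sum(len(re.sub(r'</?err[^>]*>', '', t).split()) for t in texts)
    return errors, tokens, (errors / tokens if tokens else 0.0)

# Toy error-annotated output for the two groups (illustrative only).
g1_texts = ['Ho <err type="pp">prenduto</err> il treno ieri.']
g2_texts = ['Abbiamo <err type="pp">aprito</err> la finestra.',
            'Ho visto un bel film.']

for name, texts in (("G1", g1_texts), ("G2", g2_texts)):
    e, n, r = past_participle_error_rate(texts)
    print(f"{name}: {e} past-participle errors / {n} tokens = {r:.2%}")
```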
Procedia PDF Downloads 54658 Relationship between Wave Velocities and Geo-Pressures in Shallow Libyan Carbonate Reservoir
Authors: Tarek Sabri Duzan
Abstract:
Knowledge of the magnitude of geo-pressures (pore, fracture, and overburden pressures) is vital, especially during drilling, completions, stimulation, and enhanced oil recovery. Many problems, such as lost circulation, could have been avoided if techniques for calculating geo-pressures had been employed in well planning, mud-weight planning, and casing design. In this paper, we focused on the relationships between geo-pressures and wave velocities (P-wave (Vp) and S-wave (Vs)) in a shallow Libyan carbonate reservoir in the western part of the Sirte Basin (Dahra F-Area). The data used in this report were collected from four recently drilled wells scattered throughout the reservoir of interest, as shown in Figure 1. The data used in this work are bulk density, Formation Multi-Tester (FMT) results, and acoustic wave velocities. Furthermore, the Eaton method is the most commonly used equation worldwide; therefore, it has been used to calculate the fracture pressure for all wells, using the dynamic Poisson's ratio calculated from acoustic wave velocities, FMT results for the pore pressure, and the overburden pressure estimated from bulk density. Upon data analysis, it was found that there is a linear relationship between geo-pressures (pore, fracture, and overburden pressures) and the wave velocity ratio (Vp/Vs). However, the relationship was not clear in the high-pressure area, as shown in Figure 10. Therefore, it is recommended to apply the resulting relationship, together with the new seismic data for the shallow carbonate reservoir, to predict geo-pressures for future oil operations. More data can be collected from the high-pressure zone to investigate this area further. Keywords: bulk density, Formation Multi-Tester (FMT) results, acoustic wave, shallow carbonate reservoir, D/J field velocities
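As a minimal sketch of the workflow described above, the snippet below computes the dynamic Poisson's ratio from Vp and Vs and applies the standard form of the Eaton equation for fracture pressure; the velocity and pressure inputs are illustrative assumptions, not measurements from the Dahra F-Area wells.

```python
def dynamic_poisson_ratio(vp, vs):
    """Dynamic Poisson's ratio from compressional (vp) and shear (vs) velocities."""
    return (vp ** 2 - 2.0 * vs ** 2) / (2.0 * (vp ** 2 - vs ** 2))

def eaton_fracture_pressure(overburden, pore, poisson):
    """Eaton fracture pressure; overburden and pore pressure in the same units."""
    return (poisson / (1.0 - poisson)) * (overburden - pore) + pore

# Illustrative values for a shallow carbonate interval (assumptions).
vp, vs = 4200.0, 2300.0   # m/s
pp, ob = 2800.0, 5600.0   # psi, pore and overburden pressures

nu = dynamic_poisson_ratio(vp, vs)
pf = eaton_fracture_pressure(ob, pp, nu)
print(f"Vp/Vs = {vp / vs:.2f}, Poisson's ratio = {nu:.3f}, fracture pressure = {pf:.0f} psi")
```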
Procedia PDF Downloads 287