Search results for: injection point
4459 Sensitivity Assessment of Spectral Salinity Indices over Desert Sabkha of Western UAE
Authors: Rubab Ammad, Abdelgadir Abuelgasim
Abstract:
The UAE lies in one of the most arid regions of the world and is thus home to geologic features common to such climatic conditions, including vast open deserts, sand dunes, saline soils, inland Sabkha and coastal Sabkha. Sabkha are characteristic salt flats formed in arid environments by the deposition and precipitation of salt and silt over sand surfaces, caused by a low-lying water table and rates of evaporation exceeding rates of precipitation. The study area, which comprises western UAE, is heavily concentrated with inland Sabkha. Remote sensing is conventionally used to study the soil salinity of agriculturally degraded lands but not so broadly for Sabkha. The focus of this study was to identify these highly saline Sabkha areas in remotely sensed data using salinity indices. The salinity indices in the literature were designed for agricultural soils and have rarely used the spectral response of the short-wave infrared (SWIR1 and SWIR2) parts of the electromagnetic spectrum. Using Landsat 8 OLI data and field ground truthing, this study formulated indices utilizing the NIR-SWIR parts of the spectrum and compared the results with existing salinity indices. Most indices show a reasonably good relationship between salinity and spectral index up to a certain salinity value, after which the reflectance reaches a saturation point; this saturation point varies with the index. The findings, however, suggest that incorporating near-infrared and short-wave infrared into a salinity index has the potential to maintain a positive relationship between salinity and reflectance up to a higher salinity value than the other indices. Keywords: Sabkha, salinity index, saline soils, Landsat 8, SWIR1, SWIR2, UAE desert
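A minimal sketch of the band arithmetic involved, assuming Landsat 8 OLI surface-reflectance arrays for Band 5 (NIR), Band 6 (SWIR1) and Band 7 (SWIR2); the index forms and the field EC values are illustrative placeholders, not the study's actual formulation:

```python
import numpy as np

def normalized_difference(band_a, band_b):
    """Generic normalized-difference index (A - B) / (A + B), NaN where the sum is zero."""
    a = band_a.astype(np.float64)
    b = band_b.astype(np.float64)
    denom = a + b
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(denom != 0, (a - b) / denom, np.nan)

# Stand-in reflectance rasters for Landsat 8 OLI bands (hypothetical values).
rng = np.random.default_rng(0)
nir   = rng.uniform(0.2, 0.6, size=(100, 100))   # Band 5
swir1 = rng.uniform(0.2, 0.7, size=(100, 100))   # Band 6
swir2 = rng.uniform(0.1, 0.6, size=(100, 100))   # Band 7

# Candidate NIR-SWIR indices of the kind the abstract argues for (illustrative forms only).
ndsi_nir_swir1 = normalized_difference(swir1, nir)
ndsi_nir_swir2 = normalized_difference(swir2, nir)

# Relating index values at ground-truth pixels to measured salinity (EC) is then a simple
# regression; the EC values below are synthetic placeholders, not field data.
rows = rng.integers(0, 100, size=30)
cols = rng.integers(0, 100, size=30)
ec_dS_per_m = rng.uniform(5, 200, size=30)
slope, intercept = np.polyfit(ndsi_nir_swir1[rows, cols], ec_dS_per_m, deg=1)
print(f"Illustrative linear fit: EC ~ {slope:.1f} * index + {intercept:.1f}")
```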
Procedia PDF Downloads 216
4458 The Psychology of Cross-Cultural Communication: A Socio-Linguistics Perspective
Authors: Tangyie Evani, Edmond Biloa, Emmanuel Nforbi, Lem Lilian Atanga, Kom Beatrice
Abstract:
The dynamics of languages in contact necessitates a close study of how its users negotiate meanings from shared values in the process of cross-cultural communication. A transverse analysis of the situation demonstrates the existence of complex efforts on connecting cultural knowledge to cross-linguistic competencies within a widening range of communicative exchanges. This paper sets to examine the psychology of cross-cultural communication in a multi-linguistic setting like Cameroon where many local and international languages are in close contact. The paper equally analyses the pertinence of existing macro sociological concepts as fundamental knowledge traits in literal and idiomatic cross semantic mapping. From this point, the article presents a path model of connecting sociolinguistics to the increasing adoption of a widening range of communicative genre piloted by the on-going globalisation trends with its high-speed information technology machinery. By applying a cross cultural analysis frame, the paper will be contributing to a better understanding of the fundamental changes in the nature and goals of cross-cultural knowledge in pragmatics of communication and cultural acceptability’s. It emphasises on the point that, in an era of increasing global interchange, a comprehensive inclusive global culture through bridging gaps in cross-cultural communication would have significant potentials to contribute to achieving global social development goals, if inadequacies in language constructs are adjusted to create avenues that intertwine with sociocultural beliefs, ensuring that meaningful and context bound sociolinguistic values are observed within the global arena of communication.Keywords: cross-cultural communication, customary language, literalisms, primary meaning, subclasses, transubstantiation
Procedia PDF Downloads 286
4457 Knee Pain Reduction: Holistic vs. Traditional
Authors: Renee Moten
Abstract:
Introduction: Knee pain becomes chronic because the therapy used focuses only on the symptoms of knee pain and not its causes. Preventing knee injuries is not in the toolbox of the traditional practitioner. This research was done to show that we must reduce the inflammation (holistically), reduce the swelling, and regain flexibility before considering any type of exercise. Performing the correct exercise stops the bowing of the knee, corrects the walking gait, and starts to relieve knee, hip, back, and shoulder pain. Method: The holistic method used to heal knees is called the Knee Pain Recipe. It is a six-step system that uses only alternative medicine methods to reduce and relieve pain and restore knee joint mobility. The system is low cost, with no hospital bills, no physical therapy, and no painkillers that can damage the kidneys and liver. This method has been tested on 200 women with knee, back, hip, and shoulder pain. Results: All 200 women reduced their knee pain by 50%, some by as much as 90%. Learning about ankle and foot flexibility, along with understanding the kinetic chain, helped improve the walking gait, which takes pressure off the knee, hip, and back. The Knee Pain Recipe also helped reduce the need for cortisone injections, stem cell procedures, painkillers, and surgeries. It was also noted that when the women's knees were too far gone, the Knee Pain Recipe helped prepare them for knee replacement surgery. Conclusion: It is believed that the Knee Pain Recipe, when performed by men and women from around the world, will give them a holistic alternative to drugs, injections, and surgeries. Keywords: knee, surgery, healing, holistic
Procedia PDF Downloads 76
4456 Flame Volume Prediction and Validation for Lean Blowout of Gas Turbine Combustor
Authors: Ejaz Ahmed, Huang Yong
Abstract:
The operation of aero engines has a critical importance in the vicinity of lean blowout (LBO) limits. Lefebvre’s model of LBO based on empirical correlation has been extended to flame volume concept by the authors. The flame volume takes into account the effects of geometric configuration, the complex spatial interaction of mixing, turbulence, heat transfer and combustion processes inside the gas turbine combustion chamber. For these reasons, flame volume based LBO predictions are more accurate. Although LBO prediction accuracy has improved, it poses a challenge associated with Vf estimation in real gas turbine combustors. This work extends the approach of flame volume prediction previously based on fuel iterative approximation with cold flow simulations to reactive flow simulations. Flame volume for 11 combustor configurations has been simulated and validated against experimental data. To make prediction methodology robust as required in the preliminary design stage, reactive flow simulations were carried out with the combination of probability density function (PDF) and discrete phase model (DPM) in FLUENT 15.0. The criterion for flame identification was defined. Two important parameters i.e. critical injection diameter (Dp,crit) and critical temperature (Tcrit) were identified, and their influence on reactive flow simulation was studied for Vf estimation. Obtained results exhibit ±15% error in Vf estimation with experimental data.Keywords: CFD, combustion, gas turbine combustor, lean blowout
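A minimal post-processing sketch of the flame-volume idea: with cell-centre temperatures and cell volumes exported from the reacting-flow solution, Vf is the summed volume of cells meeting the flame-identification criterion. The threshold value and the synthetic cell data are assumptions for illustration, not the authors' exact criterion:

```python
import numpy as np

def flame_volume(cell_temperature_K, cell_volume_m3, t_crit_K):
    """Sum the volumes of all cells whose temperature exceeds the flame-identification threshold."""
    temperature = np.asarray(cell_temperature_K, dtype=float)
    volume = np.asarray(cell_volume_m3, dtype=float)
    return float(volume[temperature >= t_crit_K].sum())

# Synthetic stand-in for exported CFD cell data (temperature field and per-cell volume).
rng = np.random.default_rng(1)
temperatures = rng.uniform(600.0, 2200.0, size=50_000)   # K
volumes = rng.uniform(1e-9, 5e-9, size=50_000)           # m^3

T_CRIT = 1500.0   # assumed flame-identification temperature, for illustration only
vf = flame_volume(temperatures, volumes, T_CRIT)
print(f"Estimated flame volume Vf = {vf * 1e6:.1f} cm^3")

# Sensitivity of Vf to the chosen threshold, mirroring the parameter study described above.
for t_crit in (1300.0, 1500.0, 1700.0):
    print(t_crit, flame_volume(temperatures, volumes, t_crit))
```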
Procedia PDF Downloads 269
4455 Material and Parameter Analysis of the PolyJet Process for Mold Making Using Design of Experiments
Authors: A. Kampker, K. Kreisköther, C. Reinders
Abstract:
Since additive manufacturing technologies constantly advance, the use of this technology in mold making seems reasonable. Many manufacturers of additive manufacturing machines, however, do not offer any suggestions on how to parameterize the machine to achieve optimal results for mold making. The purpose of this research is to determine the interdependencies of different materials and parameters within the PolyJet process by using design of experiments (DoE), to additively manufacture molds, e.g. for thermoforming and injection molding applications. Therefore, the general requirements of thermoforming molds, such as heat resistance, surface quality and hardness, have been identified. Then, different materials and parameters of the PolyJet process, such as the orientation of the printed part, the layer thickness, the printing mode (matte or glossy), the distance between printed parts and the scaling of parts, have been examined. The multifactorial analysis covers the following properties of the printed samples: Tensile strength, tensile modulus, bending strength, elongation at break, surface quality, heat deflection temperature and surface hardness. The key objective of this research is that by joining the results from the DoE with the requirements of the mold making, optimal and tailored molds can be additively manufactured with the PolyJet process. These additively manufactured molds can then be used in prototyping processes, in process testing and in small to medium batch production.Keywords: additive manufacturing, design of experiments, mold making, PolyJet, 3D-Printing
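A minimal sketch of the kind of two-level full-factorial analysis such a DoE study relies on: generate the design matrix, then estimate main effects and an equivalent least-squares fit. The three factors and the tensile-strength responses are illustrative placeholders, not the study's data:

```python
import itertools
import numpy as np

# Two-level coding (-1/+1) for three illustrative PolyJet factors.
factors = ["orientation", "glossy_finish", "layer_thickness"]
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))), dtype=float)

# Placeholder responses (e.g., tensile strength in MPa) for the 8 runs; in a real study
# these come from measured specimens.
response = np.array([48.0, 51.5, 47.2, 50.8, 52.1, 55.0, 51.0, 54.3])

# Main effect of a factor = mean response at +1 minus mean response at -1.
for j, name in enumerate(factors):
    high = response[design[:, j] == 1].mean()
    low = response[design[:, j] == -1].mean()
    print(f"Main effect of {name}: {high - low:+.2f} MPa")

# Equivalent regression view: response = b0 + sum(b_j * x_j), fitted by least squares.
X = np.column_stack([np.ones(len(design)), design])
coeffs, *_ = np.linalg.lstsq(X, response, rcond=None)
print("Regression coefficients (intercept first):", np.round(coeffs, 3))
```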
Procedia PDF Downloads 256
4454 Comparison of Cervical Length Using Transvaginal Ultrasonography and Bishop Score to Predict Successful Induction
Authors: Lubena Achmad, Herman Kristanto, Julian Dewantiningrum
Abstract:
Background: The Bishop score is a standard method used to predict the success of induction. This examination tends to be subjective, with high inter- and intraobserver variability, so it is presumed to have low predictive value for the outcome of labor induction. Cervical length measurement using transvaginal ultrasound is considered a more objective way to assess the cervix; it is not a complicated procedure and is less invasive than digital vaginal examination. Objective: To compare transvaginal ultrasound and the Bishop score in predicting successful induction. Methods: This was a prospective cohort study. One hundred and twenty women with singleton pregnancies undergoing induction of labor at 37-42 weeks who met the inclusion and exclusion criteria were enrolled. Cervical assessment by both transvaginal ultrasound and Bishop score was conducted prior to induction. Successful labor induction was defined as reaching the active phase ≤ 12 hours after induction. To determine the best cut-off points for cervical length and Bishop score, receiver operating characteristic (ROC) curves were plotted. Logistic regression analysis was used to determine which factors best predicted induction success. Results: Age, premature rupture of the membranes, Bishop score, cervical length, and funneling were significant predictors of successful induction. The ROC curves showed that the best cut-off point for predicting successful induction was 25.45 mm for cervical length and 3 for the Bishop score. Logistic regression showed that only premature rupture of the membranes and cervical length ≤ 25.45 mm significantly predicted the success of labor induction. After excluding premature rupture of the membranes as the indication for induction, cervical length less than 25.3 mm was a better predictor of successful induction. Conclusion: Compared to the Bishop score, cervical length measured by transvaginal ultrasound was a better predictor of successful induction. Keywords: Bishop score, cervical length, induction, successful induction, transvaginal sonography
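A minimal sketch, on synthetic data, of the ROC cut-off (Youden index) and multivariable logistic-regression steps described above, using scikit-learn; the cohort, coefficients, and thresholds are illustrative and do not reproduce the study's values (25.45 mm, Bishop score 3):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(42)
n = 120
# Synthetic cohort: shorter cervix and higher Bishop score loosely favour successful induction.
cervical_length = rng.normal(28, 6, n)          # mm
bishop = rng.integers(0, 9, n)                  # 0-8
logit = -0.25 * (cervical_length - 28) + 0.4 * (bishop - 4)
success = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# ROC analysis for cervical length (shorter = more likely to succeed, so score = -length).
fpr, tpr, thresholds = roc_curve(success, -cervical_length)
youden_j = tpr - fpr
best = np.argmax(youden_j[1:]) + 1              # skip the artificial first threshold
print(f"AUC = {roc_auc_score(success, -cervical_length):.2f}, "
      f"best cervical-length cut-off = {-thresholds[best]:.1f} mm")

# Multivariable logistic regression analogous to the analysis described above.
X = np.column_stack([cervical_length, bishop])
model = LogisticRegression(max_iter=1000).fit(X, success)
print("Coefficients (cervical length, Bishop score):", np.round(model.coef_[0], 3))
```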
Procedia PDF Downloads 326
4453 Bioinformatics Approach to Support Genetic Research in Autism in Mali
Authors: M. Kouyate, M. Sangare, S. Samake, S. Keita, H. G. Kim, D. H. Geschwind
Abstract:
Background & Objectives: Human genetic studies can be expensive, even unaffordable, in developing countries, partly due to sequencing costs. Our aim is to pilot the use of bioinformatics tools to guide scientifically valid, locally relevant, and economically sound autism genetic research in Mali. Methods: The NCBI, HGMD, and LSDB databases were used to identify hot-spot point mutations. Phenotype, transmission pattern, theoretical protein expression in the brain, and the impact of the mutation on the 3D structure of the protein were used to prioritize the selected autism genes. We used the protein database, Modeller, and Clustal W. Results: We found Mef2c (Gly27Ala/Leu38Gln), Pten (Thr131Ile), Prodh (Leu289Met), Nme1 (Ser120Gly), and Dhcr7 (Pro227Thr/Glu224Lys). These mutations were associated with the endonucleases BseRI, NspI, PfrJS2IV, BspGI, BsaBI, and SpoDI, respectively. The Gly27Ala/Leu38Gln mutations impacted the 3D structure of the Mef2c protein. Mef2c protein sequences across species showed a high percentage of similarity, with a highly conserved MADS domain. Discussion: Mef2c, Pten, Prodh, Nme1, and Dhcr7 gene mutation frequencies in the Malian population will be very informative. PCR coupled with restriction enzyme digestion can be used to screen for the targeted gene mutations, with Sanger sequencing used for confirmation only. This will considerably cut the sequencing cost of gene-by-gene mutation screening. Knowledge of the 3D structure and of the potential impact of the mutations on the Mef2c protein informed the protein family assignment and the altered function (e.g., Leu38Gln). Conclusion & Future Work: Bioinformatics will positively impact autism research in Mali. Our approach can be applied to other neuropsychiatric disorders. Keywords: bioinformatics, endonucleases, autism, Sanger sequencing, point mutations
Procedia PDF Downloads 84
4452 Experimental Parameters’ Effects on the Electrical Discharge Machining Performances (µEDM)
Authors: Asmae Tafraouti, Yasmina Layouni, Pascal Kleimann
Abstract:
The growing market for Microsystems (MST) and Micro-Electromechanical Systems (MEMS) is driving the search for manufacturing techniques that can serve as alternatives to microelectronics-based technologies, which are generally expensive and time-consuming. Hot-embossing and micro-injection molding of thermoplastics appear to be industrially viable processes. However, both require the use of master models, usually made of hard materials such as steel. These master models cannot be fabricated using standard microelectronics processes, so other micromachining processes are used, such as laser machining or micro-electrical discharge machining (µEDM). In this work, µEDM has been used. The principle of µEDM is based on the use of a thin cylindrical micro-tool that erodes the workpiece surface. The two electrodes are immersed in a dielectric and separated by a distance of a few micrometers (the gap). When an electrical voltage is applied between the two electrodes, electrical discharges are generated, which machine the material. In order to produce master models with high resolution and smooth surfaces, it is necessary to control the discharge mechanism well. However, several problems are encountered, such as the randomness of the electrical discharge process, the fluctuation of the discharge energy, the inversion of the electrodes' polarity, and the wear of the micro-tool. The effect of different parameters, such as the applied voltage, the working capacitor, the micro-tool diameter, and the initial gap, has been studied. This analysis helps to improve the machining performance, namely the workpiece surface condition and the lateral crater gap. Keywords: craters, electrical discharges, micro-electrical discharge machining (µEDM), microsystems
Procedia PDF Downloads 97
4451 An Exponential Field Path Planning Method for Mobile Robots Integrated with Visual Perception
Authors: Magdy Roman, Mostafa Shoeib, Mostafa Rostom
Abstract:
Global vision, whether provided by overhead fixed cameras, on-board aerial vehicle cameras, or satellite images, can provide detailed information on the environment around mobile robots. In this paper, an intelligent vision-based method of path planning and obstacle avoidance for mobile robots is presented. The method integrates visual perception with a newly proposed field-based path-planning method to overcome common path-planning problems such as local minima, unreachable destinations, and unnecessarily lengthy paths around obstacles. The method proposes an exponential angle deviation field around each obstacle that affects the orientation of a nearby robot. As the robot heads toward the goal point, obstacles are classified into right and left groups, and a deviation angle is exponentially added to or subtracted from the orientation of the robot. The exponential field parameters are chosen based on the Lyapunov stability criterion to guarantee robot convergence to the destination. The proposed method uses the obstacles' shapes and locations, extracted from the global vision system, in a collision prediction mechanism that decides whether to activate or deactivate each obstacle's field. In addition, a search mechanism is developed to find a suitable exit or entrance in case the robot or the goal point is trapped among obstacles. The proposed algorithm is validated both in simulation and through experiments. The algorithm shows effectiveness in obstacle avoidance and destination convergence, overcoming common path-planning problems found in classical methods. Keywords: path planning, collision avoidance, convergence, computer vision, mobile robots
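A minimal sketch of one heading update under an exponential deviation field of the kind described: obstacles within an influence radius are classified as left or right of the robot-goal line and add or subtract an exponentially decaying deviation angle. The gains and radius are illustrative placeholders, not the Lyapunov-derived parameters:

```python
import numpy as np

def heading_command(robot, goal, obstacles, k_angle=0.8, k_decay=1.5, influence_radius=2.0):
    """One steering update: head toward the goal, plus an exponential deviation per nearby obstacle.

    Obstacles to the left of the robot-goal line push the heading right and vice versa;
    each deviation decays exponentially with the distance to the obstacle.
    """
    robot, goal = np.asarray(robot, float), np.asarray(goal, float)
    to_goal = goal - robot
    heading = np.arctan2(to_goal[1], to_goal[0])

    for obs in np.asarray(obstacles, float):
        to_obs = obs - robot
        dist = np.linalg.norm(to_obs)
        if dist > influence_radius:
            continue  # obstacle field deactivated when no collision is predicted
        # Sign of the 2D cross product classifies the obstacle as left (+) or right (-).
        side = np.sign(to_goal[0] * to_obs[1] - to_goal[1] * to_obs[0])
        deviation = k_angle * np.exp(-k_decay * dist)
        heading -= side * deviation   # steer away from the obstacle's side
    return heading

theta = heading_command(robot=(0.0, 0.0), goal=(5.0, 0.0),
                        obstacles=[(2.0, 0.5), (3.5, -0.8), (4.0, 3.0)])
print(f"Commanded heading: {np.degrees(theta):.1f} degrees")
```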
Procedia PDF Downloads 197
4450 Learning from Dendrites: Improving the Point Neuron Model
Authors: Alexander Vandesompele, Joni Dambre
Abstract:
The diversity in dendritic arborization, as first illustrated by Santiago Ramon y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Regardless, in simulations of neural networks, dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky-integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron an increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality observed in another study. Simulations of the spiking neurons are performed using the Bindsnet framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is determined not only by the weight of the synapse but also by the activity of other synapses. This is a form of short-term plasticity in which synapses are potentiated or depressed by the preceding activity of neighbouring synapses. It is a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable representing the synaptic relation. This variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning. We use Spike-Time-Dependent Plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same thing through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network, which causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron; the other is a LIF neuron with dendritic relationships. The five input neurons are allowed to fire in a particular order. The membrane potentials are then reset, and the five input neurons are fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, its membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response differs between the two sequences. Hence, the dendritic mechanism improves the neuron's capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. It is possible to learn synaptic strengths with STDP to make a neuron more sensitive to its input. Similarly, it is possible to learn dendritic relationships with STDP to make the neuron more sensitive to spatiotemporal input sequences.
Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order. Keywords: dendritic computation, spiking neural networks, point neuron model
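A minimal NumPy sketch of the mechanism described, outside the Bindsnet framework: a plain LIF membrane versus one whose post-synaptic impact is modulated by a pairwise synaptic-relation matrix driven by the recent activity of neighbouring synapses. The parameters, the form of the modulation, and the random relation matrix are illustrative assumptions:

```python
import numpy as np

def run_lif(spike_train, weights, relation=None, tau_m=20.0, tau_r=30.0, dt=1.0):
    """Integrate a LIF membrane; relation[i, j] scales synapse j's impact by synapse i's recent activity.

    spike_train: array of shape (timesteps, n_synapses) with 0/1 input spikes.
    relation:    optional (n_synapses, n_synapses) matrix of pairwise synaptic relations.
    Returns the membrane potential trace (no threshold/reset, to keep the comparison simple).
    """
    n_steps, n_syn = spike_train.shape
    v = 0.0
    trace = np.zeros(n_syn)          # decaying record of recent presynaptic activity
    v_trace = np.empty(n_steps)
    for t in range(n_steps):
        spikes = spike_train[t]
        if relation is None:
            psp = weights @ spikes
        else:
            # Each active synapse j is potentiated/depressed by the recent activity of
            # its neighbours i, via the (random or learned) relation matrix.
            modulation = 1.0 + relation.T @ trace
            psp = (weights * modulation) @ spikes
        v += dt * (-v / tau_m) + psp
        trace += -dt * trace / tau_r + spikes
        v_trace[t] = v
    return v_trace

rng = np.random.default_rng(3)
n_inputs, n_steps = 5, 60
weights = np.full(n_inputs, 1.0)
relation = rng.uniform(-0.3, 0.3, size=(n_inputs, n_inputs))
np.fill_diagonal(relation, 0.0)

# Five input neurons firing in order 0..4, then the same pattern reversed.
forward = np.zeros((n_steps, n_inputs))
reverse = np.zeros((n_steps, n_inputs))
for k in range(n_inputs):
    forward[5 + 10 * k, k] = 1
    reverse[5 + 10 * k, n_inputs - 1 - k] = 1

for name, pattern in (("forward", forward), ("reverse", reverse)):
    plain = run_lif(pattern, weights).max()
    dendritic = run_lif(pattern, weights, relation).max()
    print(f"{name:7s}  plain LIF peak = {plain:.2f}   dendritic LIF peak = {dendritic:.2f}")
```

With equal weights, the plain neuron responds identically to the forward and reversed sequences, while the relation matrix makes the two responses differ, which is the discrimination effect described above.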
Procedia PDF Downloads 134
4449 Predicting Match Outcomes in Team Sport via Machine Learning: Evidence from National Basketball Association
Authors: Jacky Liu
Abstract:
This paper develops a team sports outcome prediction system with potential for wide-ranging applications across various disciplines. Despite significant advancements in predictive analytics, existing studies on sports outcome prediction have considerable limitations, including insufficient feature engineering and underutilization of advanced machine learning techniques, among others. To address these issues, we extend the Sports Cross Industry Standard Process for Data Mining (SRP-CRISP-DM) framework and propose a unique, comprehensive predictive system, using National Basketball Association (NBA) data as an example to test this extended framework. Our approach follows a holistic methodology in feature engineering, employing both time series and non-time series data, as well as conducting exploratory data analysis and feature selection. Furthermore, we contribute to the discourse on target variable choice in team sports outcome prediction, asserting that point spread prediction yields higher profits than game-winner prediction. Using machine learning algorithms, particularly XGBoost, results in a significant improvement in the predictive accuracy of team sports outcomes. Applied to point spread betting strategies, the system offers an astounding annual return of approximately 900% on an initial investment of $100. Our findings not only contribute to the academic literature but also have critical practical implications for sports betting. Our study advances the understanding of team sports outcome prediction, a burgeoning area in complex system prediction, and paves the way for potential profitability and more informed decision making in sports betting markets. Keywords: machine learning, team sports, game outcome prediction, sports betting, profits simulation
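A minimal sketch of the point-spread modelling and flat-stake profit simulation described, using xgboost's XGBRegressor on synthetic features; the features, the bookmaker line, and the backtest are illustrative placeholders and do not reproduce the reported 900% return:

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_games, n_features = 2000, 12
# Placeholder engineered features (rolling team form, rest days, rating differentials, ...).
X = rng.normal(size=(n_games, n_features))
true_spread = X[:, 0] * 4 + X[:, 1] * 2 + rng.normal(scale=6, size=n_games)
bookmaker_line = true_spread + rng.normal(scale=3, size=n_games)   # synthetic closing line

X_train, X_test, y_train, y_test, _, line_test = train_test_split(
    X, true_spread, bookmaker_line, test_size=0.25, random_state=0)

model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)
pred = model.predict(X_test)

# Naive flat-stake backtest: bet the side of the line the model favours, at standard -110 odds.
stake, payout = 100.0, 100.0 / 1.1
back_favoured_side = pred > line_test
won = np.where(back_favoured_side, y_test > line_test, y_test < line_test)
profit = np.where(won, payout, -stake).sum()
print(f"Hit rate: {won.mean():.3f},  simulated profit over {len(y_test)} bets: ${profit:,.0f}")
```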
Procedia PDF Downloads 104
4448 Implementation of Fuzzy Version of Block Backward Differentiation Formulas for Solving Fuzzy Differential Equations
Authors: Z. B. Ibrahim, N. Ismail, K. I. Othman
Abstract:
Fuzzy Differential Equations (FDEs) play an important role in modelling many real-life phenomena. FDEs are used to model the behaviour of problems subject to the uncertain, vague, or imprecise information that constantly arises in mathematical models in various branches of science and engineering. These uncertainties have to be taken into account in order to obtain a more realistic model, and for many such models it is difficult, and sometimes impossible, to obtain analytic solutions. Thus, many authors have attempted to extend or modify existing numerical methods developed for solving Ordinary Differential Equations (ODEs) into fuzzy versions suited to solving FDEs. In this paper, we propose a fuzzy version of a three-point block method based on Block Backward Differentiation Formulas (FBBDF) for the numerical solution of first-order FDEs. The three-point block FBBDF method is implemented with a uniform step size and produces three new approximations simultaneously at each integration step using the same back values. The Newton iteration of the FBBDF is formulated, and the implementation is based on the predictor and corrector formulas in PECE mode. For greater efficiency of the block method, the coefficients of the FBBDF are stored at the start of the program. The proposed FBBDF is validated through numerical results on some standard problems from the literature, and comparisons are made with the existing fuzzy versions of the modified Simpson and Euler methods in terms of the accuracy of the approximated solutions. The numerical results show that the FBBDF method performs better in terms of accuracy than the Euler method when solving FDEs. Keywords: block, backward differentiation formulas, first order, fuzzy differential equations
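The sketch below illustrates the ingredients named here - an α-cut representation of a fuzzy initial value, a predictor, and an implicit backward-differentiation corrector applied in PECE mode - using the classical two-step BDF on a simple linear test problem rather than the authors' three-point block formulas, which produce three approximations per step:

```python
import numpy as np

def f(t, y):
    return 0.5 * y                     # crisp right-hand side of the test problem y' = 0.5*y

def bdf2_pece(y0, t_end, h):
    """Two-step BDF advanced in PECE (predict-evaluate-correct-evaluate) mode.

    Classical BDF2 is used here only to illustrate the BDF/PECE idea; it is not the
    three-point block formula of the paper.
    """
    n = int(round(t_end / h))
    t = np.linspace(0.0, n * h, n + 1)
    y = np.empty(n + 1)
    y[0] = y0
    # One Heun (RK2) step to generate the second starting value.
    k1 = f(t[0], y[0])
    k2 = f(t[0] + h, y[0] + h * k1)
    y[1] = y[0] + 0.5 * h * (k1 + k2)
    for i in range(1, n):
        f_i, f_im1 = f(t[i], y[i]), f(t[i - 1], y[i - 1])
        y_pred = y[i] + h * (1.5 * f_i - 0.5 * f_im1)                 # P: Adams-Bashforth 2 predictor
        f_pred = f(t[i + 1], y_pred)                                  # E: evaluate at the prediction
        y[i + 1] = (4 * y[i] - y[i - 1]) / 3 + (2 * h / 3) * f_pred   # C: BDF2 corrector
        _ = f(t[i + 1], y[i + 1])                                     # E: final evaluation of the cycle
    return t, y

# Fuzzy initial value y(0) = "about 1": alpha-cuts [0.9 + 0.1*a, 1.1 - 0.1*a], integrated endpoint-wise.
h, t_end = 0.05, 1.0
for alpha in (0.0, 0.5, 1.0):
    lower0, upper0 = 0.9 + 0.1 * alpha, 1.1 - 0.1 * alpha
    _, y_lo = bdf2_pece(lower0, t_end, h)
    t, y_up = bdf2_pece(upper0, t_end, h)
    exact_lo, exact_up = lower0 * np.exp(0.5 * t_end), upper0 * np.exp(0.5 * t_end)
    print(f"alpha={alpha:.1f}: y(1) in [{y_lo[-1]:.5f}, {y_up[-1]:.5f}] "
          f"(exact [{exact_lo:.5f}, {exact_up:.5f}])")
```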
Procedia PDF Downloads 322
4447 A Novel Method for Face Detection
Authors: H. Abas Nejad, A. R. Teymoori
Abstract:
Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised learning based facial expression recognition methods. This is due to the fact that supervised methods cannot accommodate all appearance variability across the faces with respect to race, pose, lighting, facial biases, etc. in the limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting neutral state at an early stage, thereby bypassing those frames from emotion classification would save the computational power. In this work, we propose a light-weight neutral vs. emotion classification engine, which acts as a preprocessor to the traditional supervised emotion classification approaches. It dynamically learns neutral appearance at Key Emotion (KE) points using a textural statistical model, constructed by a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motions by accounting for affine distortions based on a textural statistical model. Robustness to dynamic shift of KE points is achieved by evaluating the similarities on a subset of neighborhood patches around each KE point using the prior information regarding the directionality of specific facial action units acting on the respective KE point. The proposed method, as a result, improves ER accuracy and simultaneously reduces the computational complexity of ER system, as validated on multiple databases.Keywords: neutral vs. emotion classification, Constrained Local Model, procrustes analysis, Local Binary Pattern Histogram, statistical model
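A minimal sketch of the textural comparison step, assuming a uniform-LBP histogram model built from neutral frames and a chi-square distance test at one key-emotion point; the patches, threshold, and distance measure are illustrative simplifications of the statistical model described above:

```python
import numpy as np
from skimage.feature import local_binary_pattern

P, R = 8, 1          # standard uniform-LBP configuration
N_BINS = P + 2

def lbp_histogram(patch):
    """Uniform LBP histogram of a grayscale patch, normalised to sum to 1."""
    codes = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=N_BINS, range=(0, N_BINS), density=True)
    return hist

def chi_square(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

rng = np.random.default_rng(5)
# Reference "neutral" model at one key-emotion point: mean histogram over several neutral frames.
neutral_patches = [rng.integers(90, 130, size=(24, 24)).astype(np.uint8) for _ in range(10)]
neutral_model = np.mean([lbp_histogram(p) for p in neutral_patches], axis=0)

# An incoming patch is labelled neutral if its distance to the model is below a threshold.
THRESHOLD = 0.15     # illustrative value; in practice tuned per user and per key-emotion point
incoming = rng.integers(60, 200, size=(24, 24)).astype(np.uint8)   # stand-in for a live frame patch
dist = chi_square(lbp_histogram(incoming), neutral_model)
print(f"chi-square distance = {dist:.3f} -> "
      f"{'neutral (skip)' if dist < THRESHOLD else 'pass to emotion classifier'}")
```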
Procedia PDF Downloads 341
4446 A Framework for Teaching the Intracranial Pressure Measurement through an Experimental Model
Authors: Christina Klippel, Lucia Pezzi, Silvio Neto, Rafael Bertani, Priscila Mendes, Flavio Machado, Aline Szeliga, Maria Cosendey, Adilson Mariz, Raquel Santos, Lys Bendett, Pedro Velasco, Thalita Rolleigh, Bruna Bellote, Daria Coelho, Bruna Martins, Julia Almeida, Juliana Cerqueira
Abstract:
This project presents a framework for teaching intracranial pressure monitoring (ICP) concepts using a low-cost experimental model in a neurointensive care education program. Data concerning ICP monitoring contribute to the patient's clinical assessment and may dictate the course of action of a health team (nursing, medical staff) and influence decisions to determine the appropriate intervention. This study aims to present a safe method for teaching ICP monitoring to medical students in a Simulation Center. Methodology: Medical school teachers, along with students from the 4th year, built an experimental model for teaching ICP measurement. The model consists of a mannequin's head with a plastic bag inside simulating the cerebral ventricle and an inserted ventricular catheter connected to the ICP monitoring system. The bag simulating the ventricle can also be changed for others containing bloody or infected simulated cerebrospinal fluid. On the mannequin's ear, there is a blue point indicating the right place to set the "zero point" for accurate pressure reading. The educational program includes four steps: 1st - Students receive a script on ICP measurement for reading before training; 2nd - Students watch a video about the subject created in the Simulation Center demonstrating each step of the ICP monitoring and the proper care, such as: correct positioning of the patient, anatomical structures to establish the zero point for ICP measurement and a secure range of ICP; 3rd - Students train the procedure in the model. Teachers help students during training; 4th - Student assessment based on a checklist form. Feedback and correction of wrong actions. Results: Students expressed interest in learning ICP monitoring. Tests concerning the hit rate are still being performed. ICP's final results and video will be shown at the event. Conclusion: The study of intracranial pressure measurement based on an experimental model consists of an effective and controlled method of learning and research, more appropriate for teaching neurointensive care practices. Assessment based on a checklist form helps teachers keep track of student learning progress. This project offers medical students a safe method to develop intensive neurological monitoring skills for clinical assessment of patients with neurological disorders.Keywords: neurology, intracranial pressure, medical education, simulation
Procedia PDF Downloads 173
4445 Towards a Robust Patch Based Multi-View Stereo Technique for Textureless and Occluded 3D Reconstruction
Authors: Ben Haines, Li Bai
Abstract:
Patch-based reconstruction methods have been, and still are, among the top-performing approaches to 3D reconstruction to date. Their local approach to refining the position and orientation of a patch, free of global minimisation and independent of surface smoothness, makes patch-based methods extremely powerful in recovering fine-grained detail of an object's surface. However, patch-based approaches still fail to faithfully reconstruct textureless or highly occluded surface regions; thus, although they perform well under lab conditions, they deteriorate in industrial or real-world situations. They are also computationally expensive. Current patch-based methods generate point clouds with holes in textureless or occluded regions, which require expensive energy minimisation techniques to fill and interpolate into a high-fidelity reconstruction. Such shortcomings hinder the adoption of these methods in industrial applications, where object surfaces are often highly textureless and the speed of reconstruction is an important factor. This paper presents ongoing work towards a multi-resolution approach to address these problems, utilizing particle swarm optimisation to reconstruct high-fidelity geometry and increasing robustness to textureless features through an adapted approach to the normalised cross correlation. The work also aims to speed up the reconstruction using advances in GPU technologies and to remove the need for costly initialization and expansion. Through the combination of these enhancements, the intention of this work is to create denser patch clouds, even in textureless regions, within a reasonable time. Initial results show the potential of such an approach to construct denser point clouds with accuracy comparable to that of the current top-performing algorithms. Keywords: 3D reconstruction, multiview stereo, particle swarm optimisation, photo consistency
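A minimal sketch of the photo-consistency score at the core of patch refinement: the standard zero-mean normalised cross correlation between a reference patch and its projection in another view (the paper's adapted NCC is not specified here, so this is the textbook form on synthetic patches):

```python
import numpy as np

def normalized_cross_correlation(patch_a, patch_b, eps=1e-8):
    """Zero-mean NCC between two equally sized patches; values near 1.0 indicate photo-consistency."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

rng = np.random.default_rng(11)
reference = rng.uniform(0, 255, size=(11, 11))                        # patch seen in the reference image
consistent = reference * 1.2 + 10 + rng.normal(0, 2, (11, 11))        # same surface under a gain/offset change
textureless = np.full((11, 11), 128.0) + rng.normal(0, 1, (11, 11))   # nearly uniform region

print("NCC, consistent view :", round(normalized_cross_correlation(reference, consistent), 3))
print("NCC, textureless view:", round(normalized_cross_correlation(reference, textureless), 3))
# A particle-swarm refinement would perturb the patch's depth and normal and keep the pose
# that maximises the (robustified) NCC across all views in which the patch is visible.
```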
Procedia PDF Downloads 207
4444 Lie Symmetry of a Nonlinear System Characterizing Endemic Malaria
Authors: Maba Boniface Matadi
Abstract:
This paper analyses a model of endemic malaria from the point of view of the group-theoretic approach. The study identified new independent variables that lead to a transformation of the nonlinear model. Furthermore, the corresponding determining equations were constructed, and new symmetries were found. As a result, the findings of the study demonstrate the integrability of the model and present an invariant solution for the malaria model. Keywords: group theory, Lie symmetry, invariant solutions, malaria
Procedia PDF Downloads 111
4443 Role of NaCl and Temperature in Glycerol Mediated Rapid Growth of Silver Nanostructures
Authors: L. R. Shobin, S. Manivannan
Abstract:
One dimensional silver nanowires and nanoparticles gained more interest in developing transparent conducting films, catalysis, biological and chemical sensors. Silver nanostructures can be synthesized by varying reaction conditions such as the precursor concentration, molar ratio of the surfactant, injection speed of silver ions, etc. in the polyol process. However, the reaction proceeds for greater than 2 hours for the formation of silver nanowires. The introduction of etchant in the medium promotes the growth of silver nanowires from silver nanoparticles along the [100] direction. Rapid growth of silver nanowires is accomplished using the Cl- ions from NaCl and polyvinyl pyrrolidone (PVP) as surfactant. The role of Cl- ion was investigated in the growth of the nanostructured silver. Silver nanoparticles (<100 nm) were harvested from glycerol medium in the absence of Cl- ions. Trace amount of Cl- ions (2.5 mM -NaCl) produced the edge joined nanowires of length upto 2 μm and width ranging from 40 to 65 nm. Formation and rapid growth (within 25 minutes) of long, uniform silver nanowires (upto 5 μm) with good yield were realized in the presence of 5 mM NaCl at 200ºC. The growth of nanostructures was monitored by UV-vis-NIR spectroscopy. Scanning and transmission electron microscopes reveal the morphology of the silver nano harvests. The role of temperature in the reduction of silver ions, growth mechanism for nanoparticles, edge joined and straight nanowires will be discussed.Keywords: silver nanowires, glycerol mediated polyol process, scanning electron microscopy, UV-Vis- NIR spectroscopy, transmission electron microscopy
Procedia PDF Downloads 304
4442 A pH-Activatable Nanoparticle Self-Assembly Triggered by 7-Amino Actinomycin D Demonstrating Superior Tumor Fluorescence Imaging and Anticancer Performance
Authors: Han Xiao
Abstract:
The development of nanomedicines has recently achieved several breakthroughs in the field of cancer treatment; however, the biocompatibility and targeted burst release of these medications remain a limitation, which leads to serious side effects and significantly narrows the scope of their applications. The self-assembly of intermediate filament protein (IFP) peptides was triggered by a hydrophobic cation drug 7-amino actinomycin D (7-AAD) to synthesize pH-activatable nanoparticles (NPs) that could simultaneously locate tumors and produce antitumor effects. The designed IFP peptide included a target peptide (arginine–glycine–aspartate), a negatively charged region, and an α-helix sequence. It also possessed the ability to encapsulate 7-AAD molecules through the formation of hydrogen bonds and hydrophobic interactions by a one-step method. 7-AAD molecules with excellent near-infrared fluorescence properties could be target delivered into tumor cells by NPs and released immediately in the acidic environments of tumors and endosome/lysosomes, ultimately inducing cytotoxicity by arresting the tumor cell cycle with inserted DNA. It is noteworthy that the IFP/7-AAD NPs tail vein injection approach demonstrated not only high tumor-targeted imaging potential, but also strong antitumor therapeutic effects in vivo. The proposed strategy may be used in the delivery of cationic antitumor drugs for precise imaging and cancer therapy.Keywords: 7-amino actinomycin D, intermediate filament protein, nanoparticle, tumor image
Procedia PDF Downloads 139
4441 Achieving Appropriate Use of Antibiotics through Pharmacists’ Intervention at Practice Point: An Indian Study Report
Authors: Parimalakrishnan Sundararjan, Madheswaran Murugan, Dhanya Dharman, Yatindra Kumar, Sudhir Singh Gangwar, Guru Prasad Mohanta
Abstract:
Antibiotic resistance AR is a global issue, India started to redress the issues of antibiotic resistance late and it plans to have: active surveillance of microbial resistance and promote appropriate use of antibiotics. The present study attempted to achieve appropriate use of antibiotics through pharmacists’ intervention at practice point. In a quasi-experimental prospective cohort study, the cases with bacteremia from four hospitals were identified during 2015 and 2016 for intervention. The pharmacists centered intervention: active screening of each prescription and comparing with the selection of antibiotics with susceptibility of the bacteria. Wherever irrationality noticed, it was brought to the notice of the treating physician for making changes. There were two groups: intervention group and control group without intervention. The active screening and intervention in 915 patients has reduced therapeutic regimen time in patients with bacteremia. The intervention group showed the decreased duration of hospital stay 3.4 days from 5.1 days. Further, multivariate modeling of patients who were in control group showed that patients in the intervention group had a significant decrease in both duration of hospital stay and infection-related mortality. Unlike developed countries, pharmacists are not active partners in patient care in India. This unique attempt of pharmacist’ invention was planned in consultation with hospital authorities which proved beneficial in terms of reducing the duration of treatment, hospital stay, and infection-related mortality. This establishes the need for a collaborative decision making among the health workforce in patient care at least for promoting rational use of antibiotics, an attempt to combat resistance.Keywords: antibiotics resistance, intervention, bacteremia, multivariate modeling
Procedia PDF Downloads 183
4440 Aerosol Characterization in a Coastal Urban Area in Rimini, Italy
Authors: Dimitri Bacco, Arianna Trentini, Fabiana Scotto, Flavio Rovere, Daniele Foscoli, Cinzia Para, Paolo Veronesi, Silvia Sandrini, Claudia Zigola, Michela Comandini, Marilena Montalti, Marco Zamagni, Vanes Poluzzi
Abstract:
The Po Valley, in the north of Italy, is one of the most polluted areas in Europe. The air quality of the area is linked not only to anthropic activities but also to its geographical characteristics and stagnant weather conditions, with frequent inversions, especially in the cold season. Even the coastal areas show high values of particulate matter (PM10 and PM2.5), because the area enclosed between the Adriatic Sea and the Apennines does not favor the dispersion of air pollutants. The aim of the present work was to identify the main sources of particulate matter in Rimini, a tourist city in northern Italy. Two sampling campaigns were carried out in 2018, one in winter (60 days) and one in summer (30 days), at 4 sites: an urban background, a city hotspot, a suburban background, and a rural background. The samples were characterized by the concentrations of the ionic components of the particulate and of the main anhydrosugars, in particular levoglucosan, a marker of biomass burning, because one of the most important anthropogenic sources in the area, both in winter and, surprisingly, even in summer, is biomass burning. Furthermore, three sampling points were chosen in order to maximize the contribution of a specific biomass source: a point in a residential area (domestic cooking and domestic heating), a point in the agricultural area (weed fires), and a point in the tourist area (restaurant cooking). At these sites, the analyses were enriched with the quantification of the carbonaceous component (organic and elemental carbon) and with measurement of the particle number concentration and aerosol size distribution (6 - 600 nm). The results showed a very significant impact of biomass combustion due to domestic heating in the winter period, even though many intense peaks were found that are attributable to episodic wood fires. In the summer season, an appreciable signal linked to biomass combustion was also measured, although much less intense than in winter, attributable to domestic cooking activities. Further interesting results were the verification of the total absence of a sea salt contribution in the finer particulate (PM2.5), while in PM10 the contribution becomes appreciable only under particular wind conditions (strong winds from the north or north-east). Finally, it is interesting to note that in a small town like Rimini, in summer, the traffic source seems to be even more relevant than that measured in a much larger city (Bologna), due to tourism. Keywords: aerosol, biomass burning, seacoast, urban area
Procedia PDF Downloads 130
4439 The Biomechanical Analysis of Pelvic Osteotomies Applied for Developmental Dysplasia of the Hip Treatment in Pediatric Patients
Authors: Suvorov Vasyl, Filipchuk Viktor
Abstract:
Developmental Dysplasia of the Hip (DDH) is a frequent pathology in pediatric orthopedist’s practice. Neglected or residual cases of DDH in walking patients are usually treated using pelvic osteotomies. Plastic changes take place in hinge points due to acetabulum reorientation during surgery. Classically described hinge points and a traditional division of pelvic osteotomies on reshaping and reorientation are currently debated. The purpose of this article was to evaluate biomechanical changes during the most commonly used pelvic osteotomies (Salter, Dega, Pemberton) for DDH treatment in pediatric patients. Methods: virtual pelvic models of 2- and 6-years old patients were created, material properties were assigned, pelvic osteotomies were simulated and biomechanical changes were evaluated using finite element analysis (FEA). Results: it was revealed that the patient's age has an impact on pelvic bones and cartilages density (in younger patients the pelvic elements are more pliable - p<0.05). Stress distribution after each of the abovementioned pelvic osteotomy was assessed in 2- and 6-years old patients’ pelvic models; hinge points were evaluated. The new term "restriction point" was introduced, which means a place where restriction of acetabular deformity correction occurs. Pelvic ligaments attachment points were mainly these restriction points. Conclusions: it was found out that there are no purely reshaping and reorientation pelvic osteotomies as previously believed; the pelvic ring acts as a unit in carrying out the applied load. Biomechanical overload of triradiate cartilage during Salter osteotomy in 2-years old patient and in 2- and 6-years old patients during Pemberton osteotomy was revealed; overload of the posterior cortical layer in the greater sciatic notch in 2-years old patient during Dega osteotomy was revealed. Level of Evidence – Level IV, prognostic.Keywords: developmental dysplasia of the hip, pelvic osteotomy, finite element analysis, hinge point, biomechanics
Procedia PDF Downloads 103
4438 Enhancement Production and Development of Hot Dry Rock System by Using Supercritical CO2 as Working Fluid Instead of Water to Advance Indonesia's Geothermal Energy
Authors: Dhara Adhnandya Kumara, Novrizal Novrizal
Abstract:
Hot Dry Rock (HDR) is one of geothermal energy which is abundant in many provinces in Indonesia. Heat exploitation from HDR would need a method which injects fluid to subsurface to crack the rock and sweep the heat. Water is commonly used as the working fluid but known to be less effective in some ways. The new research found out that Supercritical CO2 (SCCO2) can be used to replace water as the working fluid. By studying heat transfer efficiency, pumping power, and characteristics of the returning fluid, we might decide how effective SCCO2 to replace water as working fluid. The method used to study those parameters quantitatively could be obtained from pre-existing researches which observe the returning fluids from the same reservoir with same pumping power. The result shows that SCCO2 works better than water. For cold and hot SCCO2 has lower density difference than water, this results in higher buoyancy in the system that allows the fluid to circulate with lower pumping power. Besides, lower viscosity of SCCO2 impacts in higher flow rate in circulation. The interaction between SCCO2 and minerals in reservoir could induce dehydration of the minerals and enhancement of rock porosity and permeability. While the dissolution and transportation of minerals by SCCO2 are unlikely to occur because of the nature of SCCO2 as poor solvent, and this will reduce the mineral scaling in the system. Under those conditions, using SCCO2 as working fluid for HDR extraction would give great advantages to advance geothermal energy in Indonesia.Keywords: geothermal, supercritical CO2, injection fluid, hot dry rock
Procedia PDF Downloads 218
4437 Three-Dimensional Measurement and Analysis of Facial Nerve Recess
Authors: Kang Shuo-Shuo, Li Jian-Nan, Yang Shiming
Abstract:
Purpose: The three-dimensional anatomical structure of the facial nerve recess and its relationship were measured by high-resolution temporal bone CT to provide imaging reference for cochlear implant operation. Materials and Methods: By analyzing the high-resolution CT of 160 cases (320 pleural ears) of the temporal bone, the following parameters were measured at the axial window niche level: 1. The distance between the facial nerve and chordae tympani nerve d1; 2. Distance between the facial nerve and circular window niche d2; 3. The relative Angle between the facial nerve and the circular window niche a; 4. Distance between the middle point of the face recess and the circular window niche d3; 5. The relative angle between the middle point of the face recess and the circular window niche b. Factors that might influence the anatomy of the facial recess were recorded, including the patient's sex, age, and anatomical variation (e.g., vestibular duct dilation, mastoid gas type, mothoid sinus advancement, jugular bulbar elevation, etc.), and the correlation between these factors and the measured facial recess parameters was analyzed. Result: The mean value of face-drum distance d1 is (3.92 ± 0.26) mm, the mean value of face-niche distance d2 is (5.95 ± 0.62) mm, the mean value of face-niche Angle a is (94.61 ± 9.04) °, and the mean value of fossa - niche distance d3 is (6.46 ± 0.63) mm. The average fossa-niche Angle b was (113.47 ± 7.83) °. Gender, age, and anterior sigmoid sinus were the three factors affecting the width of the opposite recess d1, the Angle of the opposite nerve relative to the circular window niche a, and the Angle of the facial recess relative to the circular window niche b. Conclusion: High-resolution temporal bone CT before cochlear implantation can show the important anatomical relationship of the facial nerve recess, and the measurement results have clinical reference value for the operation of cochlear implantation.Keywords: cochlear implantation, recess of facial nerve, temporal bone CT, three-dimensional measurement
Procedia PDF Downloads 19
4436 Social Responsibility and Environmental Issues Addressed by Businesses in Romania
Authors: Daniela Gradinaru, Iuliana Georgescu, Loredana Hutanu (Toma), Mihai-Bogdan Afrasinei
Abstract:
This article aims to analyze the situation of Romanian companies from an environmental point of view. Environmental issues are addressed very often nowadays, and they reach and affect every domain, including the economical one. Implementing an environmental management system will not only help the companies to comply with laws and regulations, but, above all, will offer them an important competitive advantage.Keywords: environmental management system, environmental reporting, environmental expenses, sustainable development
Procedia PDF Downloads 418
4435 Biodiesel Production from Yellow Oleander Seed Oil
Authors: S. Rashmi, Devashish Das, N. Spoorthi, H. V. Manasa
Abstract:
Energy is essential and plays an important role in the overall development of a nation; the global economy literally runs on energy. The use of fossil fuels for energy is now widely accepted as unsustainable due to depleting resources and the accumulation of greenhouse gases in the environment, so renewable and carbon-neutral biodiesel is necessary for environmental and economic sustainability. Unfortunately, biodiesel produced from oil crops, waste cooking oil, and animal fats is not yet able to replace fossil fuels, which remain the dominant source of primary energy, accounting for 84% of the overall increase in demand. Today, biodiesel has come to mean a very specific chemical modification of natural oils. Objectives: To produce biodiesel from yellow oleander seed oil and to test the yield of biodiesel using different types of catalyst (KOH and NaOH). Methodology: Oil is extracted from dried yellow oleander seeds using a Soxhlet extractor and an oil expeller (bulk). The FFA content of the oil is checked and, depending on the FFA value, either a two-step or a single-step process is followed to produce biodiesel; the two-step process includes esterification and transesterification, while the single-step process includes only transesterification. The properties of the biodiesel are checked, and an engine test is carried out on the biodiesel produced. Results: The biodiesel quality parameters were yield (85% and 90%), flash point (171 °C and 176 °C), fire point (195 °C and 198 °C), and viscosity (4.9991 and 5.21 mm²/s) for the biodiesel from Thevetia peruviana seed oil produced using KOH and NaOH, respectively. Thus, the seed oil of Thevetia peruviana is a viable feedstock for good-quality fuel. The outcomes of our project are a substitute for conventional fuel, a reduced requirement for petro-diesel, and improved performance in terms of emissions. Future prospects: Optimization of biodiesel production using the response surface method. Keywords: yellow oleander seeds, biodiesel, quality parameters, renewable sources
Procedia PDF Downloads 448
4434 Combustion Chamber Sizing for Energy Recovery from Furnace Process Gas: Waste to Energy
Authors: Balram Panjwani, Bernd Wittgens, Jan Erik Olsen, Stein Tore Johansen
Abstract:
The Norwegian ferroalloy industry is a world leader in sustainable production of ferrosilicon, silicon and manganese alloys with the lowest global specific energy consumption. One of the byproducts during the metal reduction process is energy rich off-gas and usually this energy is not harnessed. A novel concept for sustainable energy recovery from ferroalloy off-gas is discussed. The concept is founded on the idea of introducing a combustion chamber in the off-gas section in which energy rich off-gas mainly consisting of CO will be combusted. This will provide an additional degree of freedom for optimizing energy recovery. A well-controlled and high off-gas temperature will assure a significant increase in energy recovery and reduction of emissions to the atmosphere. Design and operation of the combustion chamber depend on many parameters, including the total power capacity of the combustion chamber, sufficient residence time for combusting the complex Poly Aromatic Hydrocarbon (PAH), NOx, as well as converting other potential pollutants. The design criteria for the combustion chamber have been identified and discussed and sizing of the combustion chamber has been carried out considering these design criteria. Computational Fluid Dynamics (CFD) has been utilized extensively for sizing the combustion chamber. The results from our CFD simulations of the flow in the combustion chamber and exploring different off-gas fuel composition are presented. In brief, the paper covers all aspect which impacts the sizing of the combustion chamber, including insulation thickness, choice of insulating material, heat transfer through extended surfaces, multi-staging and secondary air injection.Keywords: CFD, combustion chamber, arc furnace, energy recovery
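A back-of-envelope sketch of the residence-time sizing and CO heat-release estimate that such a design exercise starts from; every number below (off-gas flow, air ratio, temperature, residence time, CO fraction) is an assumed placeholder, not a value from the study:

```python
# Back-of-envelope sizing: the chamber must hold the hot gas for a target residence time.
# All numbers below are illustrative placeholders, not values from the study.

T_OPERATING = 1273.0         # K (~1000 degC), assumed chamber temperature
RESIDENCE_TIME = 2.0         # s, assumed requirement for PAH burnout

off_gas_nm3_per_h = 9_000.0  # assumed off-gas flow at normal conditions (0 degC, 1 atm)
air_per_off_gas = 2.4        # assumed Nm3 of combustion/dilution air per Nm3 of off-gas
total_nm3_per_h = off_gas_nm3_per_h * (1.0 + air_per_off_gas)

# Convert normal-condition flow to actual volumetric flow at chamber temperature (ideal gas).
actual_m3_per_s = total_nm3_per_h / 3600.0 * (T_OPERATING / 273.15)

volume_m3 = actual_m3_per_s * RESIDENCE_TIME
print(f"Actual flue-gas flow : {actual_m3_per_s:.1f} m^3/s at {T_OPERATING:.0f} K")
print(f"Chamber volume for {RESIDENCE_TIME:.0f} s residence time: {volume_m3:.0f} m^3")

# Rough thermal duty from CO combustion (LHV of CO ~ 12.6 MJ/Nm3), assuming 70% CO in the off-gas.
co_fraction = 0.70
duty_MW = off_gas_nm3_per_h * co_fraction * 12.6 / 3600.0
print(f"Approximate thermal power released: {duty_MW:.1f} MW")
```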
Procedia PDF Downloads 322
4433 Fabrication of a Potential Point-of-Care Device for Hemoglobin A1c: A Lateral Flow Immunosensor
Authors: Shu Hwang Ang, Choo Yee Yu, Geik Yong Ang, Yean Yean Chan, Yatimah Binti Alias, And Sook Mei Khor
Abstract:
With the high prevalence of Type 2 diabetes mellitus across the world, the morbidity and mortality associated with Type 2 diabetes have a significant impact on a nation's productivity. Although routine scheduled clinical visits are used to manage Type 2 diabetes, diabetic patients with hectic lifestyles can have low clinical compliance, which often decreases the effectiveness of diabetes management personalized for each patient. Here, we report a point-of-care (POC) device developed to detect glycated hemoglobin (HbA1c, a biomarker for long-term Type 2 diabetes management). The established POC devices certified for clinical use are not only expensive ($8 to $10 per test), they also require skilled practitioners to perform sampling and interpretation. As a paper-based biosensor, the developed HbA1c biosensor utilizes the lateral flow principle to offer a cost-effective (approximately $2 per test) and end-user-friendly alternative for household testing. Requiring as little as 2 µL of finger-pricked blood, the test can be performed at home with just simple dilution and washing steps. With visual interpretation of the number of test lines shown on the biosensor, it can be read as easily as a urine pregnancy test, aided by the intensity scale provided. In summary, the developed HbA1c immunosensor has been shown to have high selectivity towards HbA1c and to be stable, with reasonably good performance in clinical testing. Therefore, our HbA1c immunosensor has high potential to be an effective diabetes management tool that increases patient compliance and thus contains the progression of the disease. Keywords: blood, glycated hemoglobin (HbA1c), lateral flow, type 2 diabetes mellitus
Procedia PDF Downloads 528
4432 Uterine Cervical Cancer; Early Treatment Assessment with T2- And Diffusion-Weighted MRI
Authors: Susanne Fridsten, Kristina Hellman, Anders Sundin, Lennart Blomqvist
Abstract:
Background: Patients diagnosed with locally advanced cervical carcinoma are treated with definitive concomitant chemo-radiotherapy. Treatment failure occurs in 30-50% of patients with very poor prognoses. The treatment is standardized with risk for both over-and undertreatment. Consequently, there is a great need for biomarkers able to predict therapy outcomes to allow for individualized treatment. Aim: To explore the role of T2- and diffusion-weighted magnetic resonance imaging (MRI) for early prediction of therapy outcome and the optimal time point for assessment. Methods: A pilot study including 15 patients with cervical carcinoma stage IIB-IIIB (FIGO 2009) undergoing definitive chemoradiotherapy. All patients underwent MRI four times, at baseline, 3 weeks, 5 weeks, and 12 weeks after treatment started. Tumour size, size change (∆size), visibility on diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC) and change of ADC (∆ADC) at the different time points were recorded. Results: 7/15 patients relapsed during the study period, referred to as "poor prognosis", PP, and the remaining eight patients are referred to "good prognosis", GP. The tumor size was larger at all time points for PP than for GP. The ∆size between any of the four-time points was the same for PP and GP patients. The sensitivity and specificity to predict prognostic group depending on a remaining tumor on DWI were highest at 5 weeks and 83% (5/6) and 63% (5/8), respectively. The combination of tumor size at baseline and remaining tumor on DWI at 5 weeks in ROC analysis reached an area under the curve (AUC) of 0.83. After 12 weeks, no remaining tumor was seen on DWI among patients with GP, as opposed to 2/7 PP patients. Adding ADC to the tumor size measurements did not improve the predictive value at any time point. Conclusion: A large tumor at baseline MRI combined with a remaining tumor on DWI at 5 weeks predicted a poor prognosis.Keywords: chemoradiotherapy, diffusion-weighted imaging, magnetic resonance imaging, uterine cervical carcinoma
Procedia PDF Downloads 145
4431 3D Modeling Approach for Cultural Heritage Structures: The Case of Virgin of Loreto Chapel in Cusco, Peru
Authors: Rony Reátegui, Cesar Chácara, Benjamin Castañeda, Rafael Aguilar
Abstract:
Nowadays, heritage building information modeling (HBIM) is considered an efficient tool to represent and manage information of cultural heritage (CH). The basis of this tool relies on a 3D model generally obtained from a cloud-to-BIM procedure. There are different methods to create an HBIM model that goes from manual modeling based on the point cloud to the automatic detection of shapes and the creation of objects. The selection of these methods depends on the desired level of development (LOD), level of information (LOI), grade of generation (GOG), as well as on the availability of commercial software. This paper presents the 3D modeling of a stone masonry chapel using Recap Pro, Revit, and Dynamo interface following a three-step methodology. The first step consists of the manual modeling of simple structural (e.g., regular walls, columns, floors, wall openings, etc.) and architectural (e.g., cornices, moldings, and other minor details) elements using the point cloud as reference. Then, Dynamo is used for generative modeling of complex structural elements such as vaults, infills, and domes. Finally, semantic information (e.g., materials, typology, state of conservation, etc.) and pathologies are added within the HBIM model as text parameters and generic models families, respectively. The application of this methodology allows the documentation of CH following a relatively simple to apply process that ensures adequate LOD, LOI, and GOG levels. In addition, the easy implementation of the method as well as the fact of using only one BIM software with its respective plugin for the scan-to-BIM modeling process means that this methodology can be adopted by a larger number of users with intermediate knowledge and limited resources since the BIM software used has a free student license.Keywords: cloud-to-BIM, cultural heritage, generative modeling, HBIM, parametric modeling, Revit
Procedia PDF Downloads 147
4430 Project Based Learning in Language Lab: An Analysis in ESP Learning Context
Authors: S. Priya
Abstract:
This paper analyses a project-based learning assignment in an English for Specific Purposes (ESP) context, based on the Communicative English course prescribed in the university syllabus for engineering students, and its learning outcomes from the ESP perspective. The Project Based Learning (PBL) task was conducted in a digital language lab equipped with audio-visual aids to support the team presentations. A total of 48 students from the Mechanical branch were divided into 6 groups of 8 students each, with group members selected by random numbering. Each group was given the task of preparing a PowerPoint presentation on a topic related to their core branch. They had to discuss the issue, choose their topic, and present it in a given format that specified the individual role of each member in the presentation. A brief overview of the project and the outcome of its technical aspects also had to be included, and each group had to highlight the contributions of the chosen innovative technology through its presentation. The presentation had to be submitted on a CD. The variations in the choice of subjects, the use of digital technologies, coordination for the competition, the experience of presenting on stage for the first time, and the challenges of team cohesiveness were some of the criteria observed as part of the learning experience. For many students, going through the stages of planning, preparation, and practice for the presentation was itself a learning outcome, as reported in their feedback forms. The evaluation pattern is divided between individual contribution and group effectiveness, which promotes the quality of the presentation. The evaluated skills are communication skills, group cohesiveness, audience response, technical quality, and the use of technical terms. This paper thus analyses how project-based learning improves communication, life skills, and technical skills in an ESP learning context. Keywords: language lab, ESP context, communicative skills, life skills
Procedia PDF Downloads 240