Search results for: channel error correction
844 A Numerical Investigation of Total Temperature Probes Measurement Performance
Authors: Erdem Meriç
Abstract:
Measuring the total temperature of an air flow accurately is an important requirement in the development phases of many industrial products, including gas turbines and rockets. Thermocouples are very practical devices for measuring temperature in such cases, but in high-speed and high-temperature flows, the temperature of the thermocouple junction may deviate considerably from the real flow total temperature due to the heat transfer mechanisms of convection, conduction, and radiation. To avoid errors in total temperature measurement, special probe designs which are experimentally characterized are used. In this study, a validation case, an experimental characterization of a specific class of total temperature probes, is selected from the literature to develop a numerical conjugate heat transfer analysis methodology for studying the total temperature probe flow field and solid temperature distribution. The validated conjugate heat transfer methodology is used to investigate flow structures inside and around the probe and the effects of probe design parameters, such as the ratio between inlet and outlet hole areas and probe tip geometry, on measurement accuracy. Lastly, a thermal model is constructed to account for errors in total temperature measurement for a specific class of probes in different operating conditions. Outcomes of this work can guide experimentalists in designing a very accurate total temperature probe and quantifying the possible error for their specific case.
Keywords: conjugate heat transfer, recovery factor, thermocouples, total temperature probes
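Since the abstract turns on the recovery factor, a minimal sketch of how that single parameter maps to measurement error may be helpful; the static temperature, Mach number, and recovery factor below are illustrative assumptions, not values from the study.

```python
# Sketch: how a recovery factor r maps to total-temperature measurement error.
# r is probe-specific and is exactly what studies like this one calibrate.

def total_temperature_error(T_static, mach, r, gamma=1.4):
    """Return (T_total, T_indicated, error) for a probe of recovery factor r."""
    T_total = T_static * (1.0 + 0.5 * (gamma - 1.0) * mach**2)  # isentropic total temperature
    T_indicated = T_static + r * (T_total - T_static)           # junction equilibrium temperature
    return T_total, T_indicated, T_total - T_indicated

# Illustrative numbers only: 300 K static air at Mach 0.8 with r = 0.97
T_t, T_i, err = total_temperature_error(300.0, 0.8, 0.97)
print(f"T_total = {T_t:.1f} K, indicated = {T_i:.1f} K, error = {err:.2f} K")
```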
Procedia PDF Downloads 145
843 Layer-by-Layer Modified Ceramic Membranes for Micropollutant Removal
Authors: Jenny Radeva, Anke-Gundula Roth, Christian Goebbert, Robert Niestroj-Pahl, Lars Daehne, Axel Wolfram, Juergen Wiese
Abstract:
Ceramic membranes for water purification combine excellent stability with long-life characteristics and high chemical resistance. Layer-by-Layer coating is a well-known technique for customization and optimization of the filtration properties of membranes but is mostly used on polymeric membranes. Ceramic membranes comprising a metal oxide filtration layer of Al2O3 or TiO2 are charged and therefore highly suitable for polyelectrolyte adsorption. The high stability of the membrane support allows efficient backwashing and chemical cleaning of the membrane. The presented study reports a metal oxide/organic composite membrane with increased rejection of bivalent salts like MgSO4 and of the organic micropollutant diclofenac. A self-built apparatus was used for applying the polyelectrolyte multilayers to the ceramic membrane. The device controls the flow and timing of the polyelectrolytes and washing solutions. As support for the Layer-by-Layer coat, ceramic mono-channel membranes were used with an inner capillary of 8 mm diameter, which is connected to the coating device. The inner wall of the capillary is coated alternately with polycations and polyanions. The filtration experiments were performed with a feed solution of MgSO4 and diclofenac. The salt content of the permeate was detected conductometrically, and diclofenac was measured by UV absorption. The results show retention values of 70% for magnesium sulfate and 60% for diclofenac. Further experimental research studied various parameters of the composite membrane, such as Molecular Weight Cut Off and pore size, zeta potential, and its mechanical and chemical robustness.
Keywords: water purification, polyelectrolytes, membrane modification, layer-by-layer coating, ceramic membranes
Procedia PDF Downloads 251
842 Citizen Journalist: A Case Study of Audience Participation in Mainstream TV News Production in India
Authors: Sindhu Manjesh
Abstract:
This paper examines citizen journalism in India, specifically the inclusion of user-generated content (UGC) by mainstream media, by focusing on the case study of the Citizen Journalist show on CNN-News18, a national television news broadcaster. It studies the production processes involved in Citizen Journalist to find out how professional journalists and citizens interact to put together the show, in order to help readers understand the relationship between journalists and the public in the evolving media landscape of India, the world's largest democracy and a leader in the Global South. Using an in-depth case study approach involving newsroom ethnography, interviews, and an examination of Citizen Journalist content, it studies the implications of audience participation for traditional journalistic routines and values, specifically gatekeeping and objectivity. Citizen Journalist began with much fanfare and promise about including neglected citizen views and voices. Based on the evidence gathered, this study, however, argues that the claims made by CNN-News18 about democratizing news production through Citizen Journalist were overstated. The show made some effort to do this and broadcast many important stories. But overall, in terms of bringing in citizen voices, it did not live up to its initial promise because it was anchored in traditional journalistic norms and roles and the channel's economic imperatives. Professional journalists were, ironically, the producers of 'citizen journalism' in this case. Mainstream media's authority in defining journalistic work (who says what, where, when, why, and how) remains predominant in India. This has implications for democratic participation in India. The example of Citizen Journalist, the model it followed, its partial success, and its many limitations, could well presage outcomes for other news outlets, in India and beyond, which copy its template.
Keywords: citizen journalism, digital journalism, participatory journalism, public sphere
Procedia PDF Downloads 125
841 Development and Verification of the IDOM Shielding Optimization Tool
Authors: Omar Bouhassoun, Cristian Garrido, César Hueso
Abstract:
Radiation shielding design is an optimization problem with multiple constrained objective functions (radiation dose, weight, price, etc.) that depend on several parameters (material, thickness, position, etc.). The classical approach to shielding design consists of a brute-force trial-and-error process guided by previous designer experience. The result is therefore an empirical solution, but not an optimal one, which can degrade the overall performance of the shielding. In order to automate the shielding design procedure, the IDOM Shielding Optimization Tool (ISOT) has been developed. This software combines optimization algorithms with the capabilities to read/write input files, run calculations, and parse output files for different radiation transport codes. In the first stage, the software was set up to adjust the input files for two well-known Monte Carlo codes (MCNP and Serpent) and optimize the result (weight, volume, price, dose rate) using multi-objective genetic algorithms. Its modular implementation nevertheless easily allows the inclusion of more radiation transport codes and optimization algorithms. The work related to the development of ISOT and its verification on a simple 3D multi-layer shielding problem using both MCNP and Serpent will be presented. ISOT looks very promising for achieving optimal solutions to complex shielding problems.
Keywords: optimization, shielding, nuclear, genetic algorithm
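To make the optimization loop concrete, here is a minimal genetic-algorithm sketch over layer thicknesses. It is not ISOT: a bare exponential attenuation stands in for a Monte Carlo transport run, the attenuation coefficients and densities are invented, and a weighted sum replaces ISOT's multi-objective search.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed two-layer shield: attenuation coefficients mu (1/cm), densities rho (g/cm^3)
mu = np.array([0.50, 1.20])    # illustrative values, not nuclear data
rho = np.array([2.3, 11.3])
D0, AREA = 100.0, 100.0        # unshielded dose rate and shield area (arbitrary units)

def fitness(thickness):
    dose = D0 * np.exp(-np.sum(mu * thickness))  # simple exponential attenuation
    weight = AREA * np.sum(rho * thickness)
    return dose + 0.05 * weight                  # weighted-sum scalarization of the two objectives

def ga(pop_size=60, n_gen=200, t_max=30.0):
    pop = rng.uniform(0.0, t_max, size=(pop_size, 2))
    for _ in range(n_gen):
        scores = np.apply_along_axis(fitness, 1, pop)
        # binary tournament selection
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        parents = pop[np.where(scores[idx[:, 0]] < scores[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # uniform crossover followed by sparse Gaussian mutation
        mask = rng.random(parents.shape) < 0.5
        children = np.where(mask, parents, parents[::-1])
        children = children + rng.normal(0.0, 0.5, children.shape) * (rng.random(children.shape) < 0.2)
        pop = np.clip(children, 0.0, t_max)
    return pop[np.argmin(np.apply_along_axis(fitness, 1, pop))]

print("best layer thicknesses (cm):", ga())
```

In ISOT the fitness call would instead write an MCNP or Serpent input deck, run the code, and parse the dose rate from its output.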
Procedia PDF Downloads 113
840 Design Thinking and Requirements Engineering in Application Development: Case Studies in Brazil
Authors: V. Prodocimo, A. Malucelli, S. Reinehr
Abstract:
Organizations, driven by business digitization, have in software the main core of value generation and the main channel of communication with their clients. Software, as well as responding to momentary market needs, spans an extensive product family, ranging from mobile applications to multilateral platforms. Thus, the software specification needs to represent solutions focused on consumer problems and market needs. However, requirements engineering, whose approach is strongly linked to technology, becomes deficient and ineffective when the problem is not well defined or when an innovative solution is sought, and thus needs a complementary approach. Research has cited the combination of design thinking and requirements engineering, often casting design thinking as a support technique for the elicitation step; however, little is known about the real benefits and challenges that this combination can bring. From the point of view of the development process, there is little empirical evidence of how design thinking interacts with requirements engineering. Given this scenario, this paper aims to understand how design thinking practices are applied in each of the requirements engineering stages in software projects. To elucidate these interactions, qualitative and exploratory research was carried out through the application of the case study method in IT organizations in Brazil that develop software projects. The results indicate that design thinking has aided requirements engineering, both in projects that adopt agile methods and in those that adopt the waterfall process, bringing a complementary mindset that seeks to build the best software solution design for business problems. It was also possible to conclude that organizations choose to use design thinking not based on a specific software family (e.g., mobile or desktop applications), but given the characteristics of the software projects, such as the vague nature of the problem, complex problems, and/or the need for innovative solutions.
Keywords: software engineering, requirements engineering, design thinking, innovative solutions
Procedia PDF Downloads 129
839 Surface Pressure Distributions for a Forebody Using Pressure Sensitive Paint
Authors: Yi-Xuan Huang, Kung-Ming Chung, Ping-Han Chung
Abstract:
Pressure sensitive paint (PSP), which relies on the oxygen quenching of a luminescent molecule, is an optical technique used on wind-tunnel models. A full-field pressure pattern with low aerodynamic interference can be obtained, and it is becoming an alternative to pressure measurements using pressure taps. In this study, a polymer-ceramic PSP was used, with toluene as a solvent. The porous particle and polymer were silica gel (SiO₂) and RTV-118 (3 g:7 g), respectively. The compound was sprayed onto the model surface using a spray gun. The absorption and emission spectra for Ru(dpp) as a luminophore were 441-467 nm and 597 nm, respectively. A Revox SLG-55 light source with a short-pass filter (550 nm) and a 14-bit CCD camera with a long-pass (600 nm) filter were used to illuminate the PSP and to capture images. This study determines surface pressure patterns for a forebody of an AGARD B model in a compressible flow. Since no experimental data for the surface pressure distributions are available, numerical simulation is conducted using ANSYS Fluent. The lift and drag coefficients are calculated and compared with data in the open literature. The experiments were conducted in a transonic wind tunnel at the Aerospace Science and Research Center, National Cheng Kung University. The freestream Mach number was 0.83, and the angle of attack ranged from -4 to 8 degrees. Deviation between PSP and numerical simulation is within 5%. However, the effect of the setup of the light source should be taken into account to address the relative error.
Keywords: pressure sensitive paint, forebody, surface pressure, compressible flow
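Intensity images from a PSP of this kind are conventionally converted to pressure through a Stern–Volmer calibration; the sketch below shows that inversion. The coefficients A and B and the reference pressure are placeholders, not the calibration of the paint used in this study.

```python
import numpy as np

# Stern–Volmer calibration: I_ref / I = A + B * (P / P_ref).
# A and B are temperature-dependent coefficients from a calibration chamber;
# the values below are assumed for illustration.
A, B = 0.18, 0.82
P_REF = 101.325  # kPa, wind-off reference pressure

def pressure_from_intensity(I_ref, I):
    """Invert the Stern–Volmer relation pixel by pixel."""
    return P_REF * (I_ref / I - A) / B

I_ref = np.full((4, 4), 1000.0)       # wind-off image (counts)
I_on = I_ref / (A + B * 0.9)          # synthetic wind-on image at P = 0.9 * P_ref
print(pressure_from_intensity(I_ref, I_on))  # ~91.2 kPa everywhere
```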
Procedia PDF Downloads 130
838 Deep Learning Application for Object Image Recognition and Robot Automatic Grasping
Authors: Shiuh-Jer Huang, Chen-Zon Yan, C. K. Huang, Chun-Chien Ting
Abstract:
Since vision systems are intensely required for autonomous operation in industrial environments, image recognition has become an important research topic. Here, a deep learning algorithm is employed in an image system to recognize industrial objects, integrated with a 7A6 Series Manipulator for automatic object gripping tasks. A PC and a Graphics Processing Unit (GPU) were chosen to construct the 3D vision recognition system. A depth camera (Intel RealSense SR300) is employed to extract the image for object recognition and coordinate derivation. The YOLOv2 scheme is adopted in a convolutional neural network (CNN) structure for object classification and center point prediction. Additionally, an image processing strategy is used to find the object contour for calculating the object orientation angle. Then, the specified object location and orientation information are sent to the robot controller. Finally, a six-axis manipulator can grasp the specific object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 has been successfully employed to detect the object location and category with confidence near 0.9 and 3D position error less than 0.4 mm. This is useful for future intelligent robotic applications in Industry 4.0 environments.
Keywords: deep learning, image processing, convolutional neural network, YOLOv2, 7A6 series manipulator
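The orientation step described above (contour extraction followed by an angle estimate) can be sketched with standard OpenCV calls; the binary mask below is synthetic, and a minimum-area rectangle stands in for whatever angle definition the authors actually used.

```python
import cv2
import numpy as np

def object_pose_from_mask(mask):
    """Given a binary mask of the detected object (e.g., the region inside the
    YOLO box), return the centroid and an orientation angle from a
    minimum-area rectangle fitted to the largest contour."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)        # largest contour = object outline
    (cx, cy), (w, h), angle = cv2.minAreaRect(c)  # angle of the fitted rectangle
    if w < h:                                     # normalize so the angle follows the long axis
        angle += 90.0                             # (minAreaRect angle conventions vary by version)
    return (cx, cy), angle

mask = np.zeros((200, 200), np.uint8)
cv2.rectangle(mask, (60, 80), (160, 110), 255, -1)  # synthetic elongated object
centroid, ang = object_pose_from_mask(mask)
print("centroid:", centroid, "orientation (deg):", ang)
```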
Procedia PDF Downloads 256
837 Poly (Diphenylamine-4-Sulfonic Acid) Modified Glassy Carbon Electrode for Voltammetric Determination of Gallic Acid in Honey and Peanut Samples
Authors: Zelalem Bitew, Adane Kassa, Beyene Misgan
Abstract:
In this study, a sensitive and selective voltammetric method based on a poly(diphenylamine-4-sulfonic acid) modified glassy carbon electrode (poly(DPASA)/GCE) was developed for the determination of gallic acid. The appearance of an irreversible oxidative peak for gallic acid at both the bare GCE and the poly(DPASA)/GCE, with about a three-fold current enhancement and a much reduced potential at the poly(DPASA)/GCE, showed the catalytic property of the modifier towards the oxidation of gallic acid. Under optimized conditions, the adsorptive stripping square wave voltammetric peak current response of the poly(DPASA)/GCE showed linear dependence on gallic acid concentration in the range 5.00 × 10⁻⁷ − 3.00 × 10⁻⁴ mol L⁻¹ with a limit of detection of 4.35 × 10⁻⁹. Spike recovery results of 94.62-99.63, 95.00-99.80, and 97.25-103.20% for gallic acid in honey, raw peanut, and commercial peanut butter samples, respectively, interference recovery results with less than 4.11% error in the presence of uric acid and ascorbic acid, and a lower LOD and relatively wider dynamic range than most previously reported methods validated the potential applicability of the method based on poly(DPASA)/GCE for the determination of gallic acid in real samples, including honey and peanut samples.
Keywords: gallic acid, diphenyl amine sulfonic acid, adsorptive anodic stripping square wave voltammetry, honey, peanut
Procedia PDF Downloads 82
836 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks
Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone
Abstract:
Seizures are the main factor affecting the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made by continuous Electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is performed manually by epileptologists, and this process is usually very long and error prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using a knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an Artificial Neural Network classifier, trained by applying the multilayer perceptron algorithm, and on a software application, called Training Builder, that has been developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% during tests on data of a single patient retrieved from a publicly available EEG dataset.
Keywords: artificial neural network, data mining, electroencephalogram, epilepsy, feature extraction, seizure detection, signal processing
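A minimal version of the pipeline the abstract describes (sliding windows, spectral features, multilayer perceptron) could look like the following; the sampling rate, window length, feature set, and the synthetic "EEG" are all assumptions standing in for the Training Builder output.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

FS, WIN = 256, 256 * 2  # assumed sampling rate (Hz) and 2 s window

def windows(signal, labels, win=WIN, step=WIN // 2):
    """Sliding-window segmentation with simple time- and frequency-domain features."""
    X, y = [], []
    for start in range(0, len(signal) - win, step):
        seg = signal[start:start + win]
        psd = np.abs(np.fft.rfft(seg)) ** 2  # spectral power features
        X.append(np.concatenate(([seg.mean(), seg.std()], psd[:40])))
        y.append(int(labels[start:start + win].mean() > 0.5))
    return np.array(X), np.array(y)

# Synthetic stand-in for one EEG channel and its seizure annotation
rng = np.random.default_rng(1)
eeg = rng.normal(size=FS * 600)
ann = np.zeros_like(eeg)
ann[FS * 200:FS * 260] = 1
eeg[FS * 200:FS * 260] += 3 * np.sin(2 * np.pi * 5 * np.arange(FS * 60) / FS)  # crude ictal rhythm

X, y = windows(eeg, ann)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```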
Procedia PDF Downloads 191
835 The Effect of the Side-Weir Crest Height to Scour in Clay-Sand Mixed Sediments
Authors: F. A. Saracoglu Varol, H. Agaccıoglu
Abstract:
Experimental studies to investigate the depth of scour were conducted at a side-weir intersection located on the 180° curved flume in the Hydraulic Laboratory of Yıldız Technical University, Istanbul, Turkey. Side weirs were located at the middle of the straight part of the main channel. Three different lengths (25, 40, and 50 cm) and three different weir crest heights (7, 10, and 12 cm) of the side weir were placed at the side weir station. There is no scour when the material is only kaolin. Therefore, the cohesive bed was prepared by properly mixing clay material (kaolin) with 31% sand in all experiments. Following a 24 h consolidation time, in order to observe the effect of flow intensity on the scour depth, experiments were carried out for five different upstream Froude numbers in the range of 0.33-0.81. As a result of this study, the relation between scour depth and upstream flow intensity as a function of time has been established. The longitudinal velocities decreased along the side weir towards the downstream due to overflow over the side weirs. At the beginning, the scour depth increases rapidly with time and then asymptotically approaches constant values in all experiments for all side weir dimensions, as in non-cohesive sediment. Thus, the scour depth reached equilibrium conditions. Time to equilibrium depends on the approach flow intensity and the dimensions of the side weirs. For different heights of the weir crest, dimensionless scour depths increased with increasing upstream Froude number. Equilibrium scour depths formed with the 7 cm side-weir crest height were higher than those with the 12 cm side-weir crest height. This means that when the side-weir crest height increased, equilibrium scour depths decreased. Although the upstream side of the scour hole is almost vertical, the downstream side of the hole is inclined.
Keywords: clay-sand mixed sediments, scour, side weir, hydraulic structures
Procedia PDF Downloads 312
834 Early Warning System of Financial Distress Based On Credit Cycle Index
Authors: Bi-Huei Tsai
Abstract:
Previous studies on financial distress prediction choose the conventional failing/non-failing dichotomy; however, the extent of distress differs substantially among different financial distress events. To solve this problem, 'non-distressed', 'slightly distressed', and 'reorganization and bankruptcy' are used in our article to approximate the continuum of corporate financial health. This paper explains different financial distress events using a two-stage method. First, this investigation adopts firm-specific financial ratios, corporate governance, and market factors to measure the probability of various financial distress events based on multinomial logit models. Specifically, bootstrapping simulation is performed to examine the difference in estimated misclassification cost (EMC). Second, this work further applies macroeconomic factors to establish a credit cycle index and determines the distressed cut-off indicator of the two-stage models using this index. Two different models, one-stage and two-stage prediction models, are developed to forecast financial distress, and the results acquired from the different models are compared with each other and with the collected data. The findings show that the two-stage model incorporating financial ratios, corporate governance, and market factors has the lowest misclassification error rate. The two-stage model is more accurate than the one-stage model, as its distressed cut-off indicators are adjusted according to the macroeconomic-based credit cycle index.
Keywords: multinomial logit model, corporate governance, company failure, reorganization, bankruptcy
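The first stage, a three-state multinomial logit over firm features, can be sketched as below; the features, coefficients, and simulated labels are synthetic placeholders, not the paper's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 600
# Illustrative firm features: a financial ratio, a governance score, a market factor
X = rng.normal(size=(n, 3))
# Assumed coefficient matrix (features x classes); class 0 is the reference
B = np.array([[0.0, -1.2, -2.0],
              [0.0, -0.8, -1.5],
              [0.0, -0.5, -1.0]])
p = np.exp(X @ B)
p /= p.sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=pi) for pi in p])  # 0 = non-distressed, 1 = slight, 2 = bankruptcy

# lbfgs LogisticRegression fits a multinomial logit for multiclass targets
model = LogisticRegression(max_iter=1000).fit(X, y)
print(np.round(model.predict_proba(X[:5]), 3))   # per-firm probabilities of the three states
```

In the paper, the predicted class probabilities would then feed the EMC comparison, with the distressed cut-off shifted by the credit cycle index in the two-stage variant.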
Procedia PDF Downloads 380
833 Assessing the Spatial Distribution of Urban Parks Using Remote Sensing and Geographic Information Systems Techniques
Authors: Hira Jabbar, Tanzeel-Ur Rehman
Abstract:
Urban parks and open spaces play a significant role in improving the physical and mental health of citizens, strengthening societies, and making cities more attractive places to live and work. As the world's cities continue to grow, continuing to value green space in cities is vital but also a challenge, particularly in developing countries where there is pressure on space, resources, and development. Offering equal opportunity of accessibility to parks is one of the important issues of park distribution. The distribution of parks should allow all inhabitants close proximity to their residence. Remote sensing (RS) and Geographic Information Systems (GIS) can provide decision makers with enormous opportunities to improve the planning and management of park facilities. This study exhibits the capability of GIS and RS techniques to provide baseline knowledge about the distribution of parks and the level of accessibility, and to help identify potential areas for such facilities. For this purpose, Landsat OLI imagery for the year 2016 was acquired from the USGS Earth Explorer. Preprocessing models were applied using Erdas Imagine 2014v for atmospheric correction, and an NDVI model was developed and applied to quantify the land use/land cover classes, including built-up, barren land, water, and vegetation. Parks amongst the total public green spaces were selected based on their signature in the remote sensing image and their distribution. Percentages of total green and park green were calculated for each town of Lahore City, and the results were then checked against the recommended standards. The ANGSt model was applied to calculate accessibility from parks. Service area analysis was performed using the Network Analyst tool. The serviceability of these parks has been evaluated by employing statistical indices like service area, service population, and park area per capita. Findings of the study may help town planners understand the distribution of parks, the demand for new parks, and potential areas which are deprived of parks. The purpose of the present study is to provide the necessary information to planners, policy makers, and scientific researchers in the process of decision making for the management and improvement of urban parks.
Keywords: accessible natural green space standards (ANGSt), geographic information systems (GIS), remote sensing (RS), United States geological survey (USGS)
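The NDVI step is the most mechanical part of this workflow; a minimal raster version is sketched below. The Landsat 8 OLI band assignment (red = band 4, NIR = band 5) is standard, but the class thresholds and the synthetic arrays are assumptions.

```python
import numpy as np

# With real data, read Landsat 8 OLI band 4 (red) and band 5 (NIR), e.g. via
# rasterio.open("...B4.TIF").read(1); synthetic reflectance arrays stand in here.
rng = np.random.default_rng(0)
red = rng.uniform(0.02, 0.30, size=(100, 100)).astype("float32")
nir = rng.uniform(0.05, 0.60, size=(100, 100)).astype("float32")

denom = nir + red
ndvi = np.where(denom > 0, (nir - red) / denom, 0.0)  # guard against divide-by-zero

# Illustrative thresholds only; real class boundaries are scene-dependent:
# < -0.05 water, -0.05..0.05 built-up, 0.05..0.2 barren, > 0.2 vegetation
classes = np.digitize(ndvi, bins=[-0.05, 0.05, 0.2])
print("vegetation fraction:", (classes == 3).mean())
```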
Procedia PDF Downloads 345
832 Theta-Phase Gamma-Amplitude Coupling as a Neurophysiological Marker in Neuroleptic-Naive Schizophrenia
Authors: Jun Won Kim
Abstract:
Objective: Theta-phase gamma-amplitude coupling (TGC) was used as a novel evidence-based tool to reflect the dysfunctional cortico-thalamic interaction in patients with schizophrenia. However, to the best of our knowledge, no studies have reported the diagnostic utility of TGC in the resting-state electroencephalogram (EEG) of neuroleptic-naive patients with schizophrenia compared to healthy controls. Thus, the purpose of this EEG study was to understand the underlying mechanisms in patients with schizophrenia by comparing the resting TGC between the two groups and to evaluate the diagnostic utility of TGC. Method: The subjects included 90 patients with schizophrenia and 90 healthy controls. All patients were diagnosed with schizophrenia according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) by two independent psychiatrists using semi-structured clinical interviews. Because patients were either drug-naïve (first episode) or had not been taking psychoactive drugs for one month before the study, we could exclude the influence of medications. Six frequency bands were defined for the spectral analyses: delta (1-4 Hz), theta (4-8 Hz), slow alpha (8-10 Hz), fast alpha (10-13.5 Hz), beta (13.5-30 Hz), and gamma (30-80 Hz). The spectral power of the EEG data was calculated with the fast Fourier transform using the 'spectrogram.m' function of the signal processing toolbox in Matlab. An analysis of covariance (ANCOVA) was performed to compare the TGC results between the groups, adjusted using a Bonferroni correction (P < 0.05/19 = 0.0026). Receiver operating characteristic (ROC) analysis was conducted to examine the discriminating ability of the TGC data for schizophrenia diagnosis. Results: The patients with schizophrenia showed a significant increase in the resting-state TGC at all electrodes. The delta, theta, slow alpha, fast alpha, and beta powers showed low accuracies of 62.2%, 58.4%, 56.9%, 60.9%, and 59.0%, respectively, in discriminating the patients with schizophrenia from the healthy controls. The ROC analysis performed on the TGC data generated the most accurate result among the EEG measures, displaying an overall classification accuracy of 92.5%. Conclusion: As TGC includes phase, which contains information about neuronal interactions from the EEG recording, TGC is expected to be useful for understanding the mechanisms of the dysfunctional cortico-thalamic interaction in patients with schizophrenia. The resting-state TGC value was increased in the patients with schizophrenia compared to that in the healthy controls and had a higher discriminating ability than the other parameters. These findings may be related to the compensatory hyper-arousal patterns of the dysfunctional default-mode network (DMN) in schizophrenia. Further research exploring the association between TGC and medical or psychiatric conditions that may confound EEG signals will help clarify the potential utility of TGC.
Keywords: quantitative electroencephalography (QEEG), theta-phase gamma-amplitude coupling (TGC), schizophrenia, diagnostic utility
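For readers unfamiliar with the measure, a common way to compute a theta-gamma coupling index is the mean-vector-length approach (Canolty-style): weight the gamma amplitude envelope by the instantaneous theta phase. The sketch below uses that approach with an assumed sampling rate and a synthetic signal; the paper does not specify which coupling estimator was used, so treat this as one plausible instance.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 256  # assumed sampling rate (Hz)

def bandpass(x, lo, hi, fs=FS, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def theta_gamma_coupling(eeg):
    """Mean-vector-length modulation index: gamma amplitude weighted by theta phase."""
    theta_phase = np.angle(hilbert(bandpass(eeg, 4, 8)))
    gamma_amp = np.abs(hilbert(bandpass(eeg, 30, 80)))
    return np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase)))

# Synthetic signal whose gamma bursts ride the theta peaks -> nonzero TGC
t = np.arange(FS * 30) / FS
theta = np.sin(2 * np.pi * 6 * t)
eeg = theta + (1 + theta) * 0.3 * np.sin(2 * np.pi * 40 * t) + 0.1 * np.random.randn(t.size)
print("TGC index:", theta_gamma_coupling(eeg))
```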
Procedia PDF Downloads 146
831 A Transformer-Based Approach for Multi-Human 3D Pose Estimation Using Color and Depth Images
Authors: Qiang Wang, Hongyang Yu
Abstract:
Multi-human 3D pose estimation is a challenging task in computer vision, which aims to recover the 3D joint locations of multiple people from multi-view images. In contrast to traditional methods, which typically use only color (RGB) images as input, our approach utilizes both the color and depth (D) information contained in RGB-D images. We also employ a transformer-based model as the backbone of our approach, which is able to capture long-range dependencies and has been shown to perform well on various sequence modeling tasks. Our method is trained and tested on the Carnegie Mellon University (CMU) Panoptic dataset, which contains a diverse set of indoor and outdoor scenes with multiple people in varying poses and clothing. We evaluate the performance of our model on the standard 3D pose estimation metric of mean per-joint position error (MPJPE). Our results show that the transformer-based approach outperforms traditional methods and achieves competitive results on the CMU Panoptic dataset. We also perform an ablation study to understand the impact of different design choices on the overall performance of the model. In summary, our work demonstrates the effectiveness of using a transformer-based approach with RGB-D images for multi-human 3D pose estimation and has potential applications in real-world scenarios such as human-computer interaction, robotics, and augmented reality.
Keywords: multi-human 3D pose estimation, RGB-D images, transformer, 3D joint locations
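For reference, the MPJPE metric the authors evaluate reduces to a few lines; the array shapes and the synthetic poses below are illustrative.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error in the ground-truth units (usually mm).
    pred, gt: arrays of shape (n_people, n_joints, 3)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

gt = np.random.rand(2, 15, 3) * 1000                   # two people, 15 joints, mm
pred = gt + np.random.normal(scale=20, size=gt.shape)  # predictions with ~20 mm joint noise
print(f"MPJPE: {mpjpe(pred, gt):.1f} mm")              # roughly 32 mm for this noise level
```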
Procedia PDF Downloads 85
830 Low-Complex, High-Fidelity Two-Grades Cyclo-Olefin Copolymer (COC) Based Thermal Bonding Technique for Sealing a Thermoplastic Microfluidic Biosensor
Authors: Jorge Prada, Christina Cordes, Carsten Harms, Walter Lang
Abstract:
The development of microfluidic-based biosensors over recent years has shown increasing use of thermoplastic polymers as the constitutive material. Their low-cost production, high replication fidelity, biocompatibility, and optical-mechanical properties are sought after for the implementation of disposable albeit functional lab-on-chip solutions. Among the range of thermoplastic materials in use, Cyclo-Olefin Copolymer (COC) stands out due to its optical transparency, which makes it a frequent choice as a manufacturing material for fluorescence-based biosensors. Moreover, several processing techniques for completing a closed COC microfluidic biosensor have been discussed in the literature. The reported techniques differ, however, in their implementation, and therefore potentially add more or less complexity when used in a mass production process. This work introduces and reports results on the application of a purely thermal bonding process between COC substrates, which were produced by hot embossing, and COC foils containing screen-printed circuits. The proposed procedure takes advantage of the transition temperature difference between two COC grade foils to accomplish the sealing of the microfluidic channels. Patterned heat injection to the COC foil through the COC substrate is applied, resulting in consistent channel geometry uniformity. Measurements of bond strength and bursting pressure are shown, suggesting that this purely thermal bonding process yields a technique which can be easily adapted into the thermoplastic microfluidic chip production workflow, while enabling a low-cost as well as high-quality COC biosensor manufacturing process.
Keywords: biosensor, cyclo-olefin copolymer, hot embossing, thermal bonding, thermoplastics
Procedia PDF Downloads 240
829 NOx Prediction by Quasi-Dimensional Combustion Model of Hydrogen Enriched Compressed Natural Gas Engine
Authors: Anas Rao, Hao Duan, Fanhua Ma
Abstract:
Dependency on fossil fuels can be reduced by using hydrogen enriched compressed natural gas (HCNG) in transportation vehicles. However, the NOx emissions of HCNG engines are significantly higher, and this has turned out to be their major drawback. Therefore, the study of the NOx emissions of HCNG engines is a very important area of research. In this context, experiments have been performed at different hydrogen percentages, ignition timings, air-fuel ratios, manifold absolute pressures, loads, and engine speeds. Afterwards, simulation was accomplished with a quasi-dimensional combustion model of the HCNG engine. In order to investigate NOx emissions, the NO mechanism was coupled to the quasi-dimensional combustion model. Three NOx mechanisms, the thermal NOx, prompt NOx, and N2O mechanisms, were used to predict NOx emissions. For validation purposes, the NO curve was transformed into NO packets based on a temperature difference of 100 K for the lean-burn and 60 K for the stoichiometric condition, while the width of a packet was taken as the ratio of the crank duration of the packet to the total burn duration. The combustion chamber of the engine was divided into three zones, with each zone equal to the product of the summation of NO packets and space. In order to check the accuracy of the model, the percentage error of NOx emission was evaluated; it lies in the range of ±6% and ±10% for the lean-burn and stoichiometric conditions, respectively. Finally, the percentage contribution of each NO formation mechanism was evaluated.
Keywords: quasi-dimensional combustion, thermal NO, prompt NO, NO packet
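Of the three mechanisms, the thermal (Zeldovich) route is the one whose steep temperature dependence motivates the NO-packet discretization. A toy rate evaluation is sketched below; the rate constant is the commonly cited Heywood value and the species concentrations are invented, so treat everything here as an assumption rather than the paper's model.

```python
import numpy as np

def thermal_no_rate(T, O_conc, N2_conc):
    """Initial thermal-NO formation rate d[NO]/dt ~ 2 * k1 * [O][N2],
    i.e. the rate-limiting first Zeldovich reaction with reverse reactions neglected.
    k1 = 1.8e14 * exp(-38370 / T) cm^3/(mol*s), per Heywood (1988) -- an assumption."""
    k1 = 1.8e14 * np.exp(-38370.0 / T)
    return 2.0 * k1 * O_conc * N2_conc  # mol/(cm^3*s)

# Illustrative burnt-gas packet conditions (concentrations assumed, mol/cm^3):
for T in (2000.0, 2200.0, 2400.0):
    print(f"T = {T:.0f} K -> d[NO]/dt = {thermal_no_rate(T, 1e-9, 8e-6):.3e}")
```

The roughly order-of-magnitude jump in rate per 200 K is why packets are binned by temperature rather than treated uniformly.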
Procedia PDF Downloads 253
828 Chat-Based Online Counseling for Enhancing Wellness of Undergraduates with Emotional Crisis Tendency
Authors: Arunya Tuicomepee
Abstract:
During the past two decades, there have been increasing numbers of studies on online counseling, especially among adolescents who are familiar with the online world. This can be explained by the fact that this channel enables easier access for the young, who may not be ready for face-to-face services, possibly due to uneasiness about revealing their personal problems to a stranger, the feeling that their problems are shameful, or the need to protect their image. In particular, teenagers prone to suicide or despair, who tend to keep things to themselves or isolate themselves from society, usually prefer types of services that require no face-to-face encounter and allow anonymity, such as online services. This study aimed to examine the effectiveness of chat-based online counseling for enhancing the wellness of undergraduates with emotional crisis tendency. An experimental pretest-posttest control group design was employed. Participants were 47 undergraduates (10 males and 37 females) with high emotional crisis tendency. They were randomly assigned to an experimental group (24 students) and a control group (23 students). Participants in the experimental group received 60-minute, 4-session individual chat-based online counseling led by a counselor. Those in the control group received no counseling session. Instruments were the Emotional Crisis Scale and Wellness Scales. Two-way mixed-design multivariate analysis of variance was used for data analysis. Findings revealed that the posttest scores on wellness of those in the experimental group were higher than the scores of those in the control group. The posttest scores on emotional crisis tendency of those in the experimental group were lower than the scores of those in the control group. Hence, this study suggests that chat-based online counseling services can become a helping source that more adolescents will recognize and turn to in the future, and one that will receive more attention.
Keywords: chat-based online counseling, emotional crisis, undergraduate student, wellness
Procedia PDF Downloads 246
827 Computational Screening of Secretory Proteins with Brain-Specific Expression in Glioblastoma Multiforme
Authors: Sumera, Sanila Amber, Fatima Javed Mirza, Amjad Ali, Saadia Zahid
Abstract:
Glioblastoma multiforme (GBM) is a widespread and fatal primary brain tumor with an increased risk of relapse in spite of aggressive treatment. The current procedures for GBM diagnosis include invasive procedures, i.e., resection or biopsy, to acquire tumor mass. Implementing minimally invasive tests as a potential diagnostic technique and biofluid-based monitoring of GBM places the emphasis on discovering biomarkers in CSF and blood. Therefore, we performed a comprehensive in silico analysis to identify potential circulating biomarkers for GBM. Initially, six gene and protein databases were utilized to mine brain-specific proteins. The resulting proteins were filtered using a pipeline of five tools to predict the secretory proteins. Subsequently, the expression profile of the secreted proteins was verified in brain and blood using two databases. Additional verification of the resulting proteins was done using the Plasma Proteome Database (PPD) to confirm their presence in blood. The final set of proteins was searched in the literature for their relationship with GBM, keeping a special emphasis on the secretome proteome. 2145 proteins were first mined as brain-specific, out of which 69 proteins were identified as secretory in nature. Verification of the expression profile in brain and blood eliminated 58 of the 69 proteins, providing a list of 11 proteins. Further verification of these 11 proteins eliminated 2 more, giving a final set of nine secretory proteins, i.e., OPCML, NPTX1, LGI1, CNTN2, LY6H, SLIT1, CREG2, GDF1, and SERPINI1. Out of these 9 proteins, 7 were found to be linked to GBM, whereas 2 proteins have not been investigated in GBM so far. We propose that these secretory proteins can serve as potential circulating biomarker signatures of GBM and will facilitate the development of minimally invasive diagnostic methods and novel therapeutic interventions for GBM.
Keywords: glioblastoma multiforme, secretory proteins, brain secretome, biomarkers
Procedia PDF Downloads 155
826 Bayesian Borrowing Methods for Count Data: Analysis of Incontinence Episodes in Patients with Overactive Bladder
Authors: Akalu Banbeta, Emmanuel Lesaffre, Reynaldo Martina, Joost Van Rosmalen
Abstract:
Including data from previous studies (historical data) in the analysis of a current study may reduce the sample size requirement and/or increase the power of the analysis. The most common example is incorporating historical control data in the analysis of a current clinical trial. However, this only applies when the historical control data are similar enough to the current control data. Recently, several Bayesian approaches for incorporating historical data have been proposed, such as the meta-analytic-predictive (MAP) prior and the modified power prior (MPP), both for single control as well as for multiple historical control arms. Here, we examine the performance of the MAP and the MPP approaches for the analysis of (over-dispersed) count data. To this end, we propose a computational method for the MPP approach for the Poisson and the negative binomial models. We conducted an extensive simulation study to assess the performance of the Bayesian approaches. Additionally, we illustrate our approaches on an overactive bladder data set. For similar data across the control arms, the MPP approach outperformed the MAP approach with respect to statistical power. When the means across the control arms are different, the MPP yielded a slightly inflated type I error (TIE) rate, whereas the MAP did not. In contrast, when the dispersion parameters are different, the MAP gave an inflated TIE rate, whereas the MPP did not. We conclude that the MPP approach is more promising than the MAP approach for incorporating historical count data.
Keywords: count data, meta-analytic prior, negative binomial, Poisson
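For intuition about how a power prior discounts historical counts, here is a minimal conjugate sketch for the Poisson case with a fixed weight a0. Note this is the plain (conditional) power prior, not the authors' modified power prior, and the counts and hyperparameters are invented.

```python
import numpy as np
from scipy.stats import gamma

# Historical and current control counts (illustrative incontinence-episode data)
y_hist = np.array([4, 6, 5, 7, 3, 5])
y_curr = np.array([5, 4, 6, 5, 4])
a, b = 0.5, 0.5  # vague Gamma(a, b) initial prior on the Poisson mean
a0 = 0.6         # fixed power-prior weight: 0 ignores, 1 fully pools the historical data

# Raising the Poisson likelihood of the historical data to the power a0
# keeps Gamma conjugacy, so the posterior is available in closed form:
post = gamma(a + a0 * y_hist.sum() + y_curr.sum(),
             scale=1.0 / (b + a0 * len(y_hist) + len(y_curr)))
print("posterior mean rate:", post.mean())
print("95% credible interval:", post.ppf([0.025, 0.975]))
```

The MPP additionally normalizes the power prior and typically treats a0 as unknown, which is where the computational method proposed in the paper comes in.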
Procedia PDF Downloads 123
825 Global Navigation Satellite System and Precise Point Positioning as Remote Sensing Tools for Monitoring Tropospheric Water Vapor
Authors: Panupong Makvichian
Abstract:
The Global Navigation Satellite System (GNSS) is nowadays a common technology that improves navigation functions in our lives. Additionally, GNSS is also being employed as an accurate atmospheric sensor these days. Meteorology is a practical application of GNSS that goes unnoticed in the background of people's lives. GNSS Precise Point Positioning (PPP) is a positioning method that requires data from a single dual-frequency receiver and precise information about satellite positions and satellite clocks. In addition, careful attention to mitigating various error sources is required. All the above data are combined in a sophisticated mathematical algorithm. This research demonstrates how GNSS and the PPP method are capable of providing high-precision estimates, such as 3D positions or zenith tropospheric delays (ZTDs). ZTDs combined with pressure and temperature information allow us to estimate the water vapor in the atmosphere as precipitable water vapor (PWV). If the process is replicated for a network of GNSS sensors, we can create thematic maps that allow water content information to be extracted at any location within the network area. All of the above are possible thanks to advances in GNSS data processing. Therefore, we are able to use GNSS data for climatic trend analysis and the acquisition of further knowledge about atmospheric water content.
Keywords: GNSS, precise point positioning, zenith tropospheric delays, precipitable water vapor
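The ZTD-to-PWV conversion the abstract describes is commonly done with a Saastamoinen hydrostatic delay and the Bevis et al. (1992) constants; the sketch below assumes exactly those, so the coefficients should be treated as cited conventions rather than this study's calibration.

```python
import numpy as np

def pwv_from_ztd(ztd_m, pressure_hpa, temp_surface_k, lat_rad, height_m):
    """Convert a PPP-estimated zenith total delay (m) to precipitable water vapor (mm)."""
    # Saastamoinen zenith hydrostatic delay from surface pressure
    zhd = 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * np.cos(2.0 * lat_rad) - 0.28e-6 * height_m)
    zwd = ztd_m - zhd                       # wet delay = total - hydrostatic
    tm = 70.2 + 0.72 * temp_surface_k      # Bevis weighted mean temperature model
    kappa = (3.776e5 / tm + 16.52) * 1e-2  # refractivity constants, K/hPa -> K/Pa
    pi_factor = 1.0e6 / (1000.0 * 461.5 * kappa)  # dimensionless, ~0.15
    return pi_factor * zwd * 1000.0

print(pwv_from_ztd(ztd_m=2.45, pressure_hpa=1013.0, temp_surface_k=293.0,
                   lat_rad=np.deg2rad(40.0), height_m=100.0))  # ~23 mm PWV
```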
Procedia PDF Downloads 202
824 The Effect of Hybrid SPD Process on Mechanical Properties, Drawability, and Plastic Anisotropy of DC03 Steel
Authors: Karolina Kowalczyk-Skoczylas
Abstract:
The hybrid SPD process called DRECE (Dual Rolls Equal Channel Extrusion) combines the concepts of the ECAP method and CONFORM extrusion, and is intended for processing sheet-metal workpieces. The material in the form of a metal strip is subjected to plastic deformation by passing through the shaping tool at a given angle α. Importantly, in this process the dimensions of the metal strip do not change after the pass is completed. Subsequent DRECE passes allow for increasing the effective strain in the tested material. The method has a significant effect on the microstructure and mechanical properties of the strip. The experimental tests were conducted on the unconventional DRECE device at VŠB Ostrava, the Czech Republic. The DC03 steel strips were processed in several passes, up to six. Then, both Erichsen cupping tests and static tensile tests were performed to evaluate the effect of the DRECE process on the drawability, plastic anisotropy, and mechanical properties of the investigated steel. Both yield strength and ultimate tensile strength increase significantly after consecutive passes. Drawability decreases slightly after the first and second passes. Then it stabilizes at a reasonably high level, which means that the steel retains drawability useful for technological processes. It was found that the material is characterized by normal anisotropy. In the microstructure, an intensification of the development of microshear bands and their mutual intersection is observed, which leads to the fragmentation of the grains into smaller volumes and, consequently, to the formation of an ultrafine-grained structure. The project was co-financed by the European Union within the programme "The European Funds for Śląsk (Silesia) 2021-2027".
Keywords: SPD process, low carbon steel, mechanical properties, plastic deformation, microstructure evolution
Procedia PDF Downloads 26
823 Wrong Site Surgery Should Not Occur In This Day And Age!
Authors: C. Kuoh, C. Lucas, T. Lopes, I. Mechie, J. Yoong, W. Yoong
Abstract:
For all surgeons, there is one preventable but still frequently occurring complication: wrong site surgery. It can have potentially catastrophic, irreversible, or even fatal consequences for patients. With the exponential development of microsurgery and the use of advanced technological tools, operating on the wrong side, anatomical part, or even person is seen as the most visible and destructive of all surgical errors, and perhaps the error most dreaded by clinicians, as it threatens their licenses and arouses feelings of guilt. Despite the implementation of the WHO surgical safety checklist more than a decade ago, the incidence of wrong site surgeries remains relatively high, leading to tremendous physical and psychological repercussions for the clinicians involved, as well as a financial burden for the healthcare institution. In this presentation, the authors explore various factors which can lead to wrong site surgery, a combination of environmental and human factors, and evaluate their impact on patients, practitioners, their families, and the medical industry. Major contributing factors to these 'never events' include deviations from checklists, excessive workload, and poor communication. Two real-life cases are discussed, and systems that can be implemented to prevent these errors are highlighted alongside lessons learnt from other industries. The authors suggest that reinforcing speaking up, implementing professional medical training, and greater patient involvement can potentially improve safety in surgeries and electrosurgeries.
Keywords: wrong side surgery, never events, checklist, workload, communication
Procedia PDF Downloads 186
822 A Case Study for User Rating Prediction on Automobile Recommendation System Using MapReduce
Authors: Jiao Sun, Li Pan, Shijun Liu
Abstract:
Recommender systems have been widely used in contemporary industry, and plenty of work has been done in this field to help users identify items of interest. The Collaborative Filtering (CF, for short) algorithm is an important technology in recommender systems. However, less work has been done on automobile recommendation systems, despite the sharp increase in the number of automobiles. What's more, computational speed is a major weakness of collaborative filtering technology. Therefore, using the MapReduce framework to optimize the CF algorithm is a vital solution to this performance problem. In this paper, we present a recommendation of users' comments on industrial automobiles with various properties, based on real-world industrial datasets of user-automobile comment data, and provide recommendations for automobile providers to help them predict users' comments on automobiles with new properties. Firstly, we solve the sparseness of the matrix using prior construction of the score matrix. Secondly, we solve the data normalization problem by removing dimensional effects from the raw automobile data, where the different dimensions of automobile properties introduce great error into the calculation of CF. Finally, we use the MapReduce framework to optimize the CF algorithm, and the computational speed has been improved considerably. UV decomposition, used in this paper, is a common matrix factorization technique in CF algorithms that does not require calculating the interpolation weights of neighbors, which is more convenient in industry.
Keywords: collaborative filtering, recommendation, data normalization, MapReduce
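A single-machine UV decomposition, before any MapReduce distribution, can be sketched with plain stochastic gradient descent; the rank, learning rate, regularization, and toy rating matrix below are illustrative choices, not the paper's configuration.

```python
import numpy as np

def uv_factorize(R, k=8, steps=200, lr=0.01, reg=0.05):
    """Plain SGD UV decomposition of a user x item matrix R (NaN = missing).
    Returns U, V such that R_hat = U @ V approximates the observed entries."""
    n_users, n_items = R.shape
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(k, n_items))
    obs = np.argwhere(~np.isnan(R))
    for _ in range(steps):
        for u, i in obs:
            e = R[u, i] - U[u] @ V[:, i]     # prediction error on one observed rating
            u_old = U[u].copy()
            U[u] += lr * (e * V[:, i] - reg * U[u])
            V[:, i] += lr * (e * u_old - reg * V[:, i])
    return U, V

# Toy user x automobile comment-score matrix (NaN = not rated)
R = np.array([[5, 4, np.nan, 1],
              [4, np.nan, 4, 1],
              [1, 2, 1, np.nan],
              [np.nan, 1, 2, 5]], dtype=float)
U, V = uv_factorize(R)
print(np.round(U @ V, 2))  # predicted scores fill in the missing entries
```

In the MapReduce setting, the gradient updates over the observed ratings are what get partitioned across mappers and combined in reducers.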
Procedia PDF Downloads 220
821 Estimation of the Road Traffic Emissions and Dispersion in the Developing Countries Conditions
Authors: Hicham Gourgue, Ahmed Aharoune, Ahmed Ihlal
Abstract:
We present in this work our model of road traffic emissions (line sources) and the dispersion of these emissions, named DISPOLSPEM (Dispersion of Poly Sources and Pollutants Emission Model). In its emission part, this model was designed to keep the bottom-up and top-down approaches consistent. It also allows the generation of emission inventories from reduced input parameters, adapted to the existing conditions in Morocco and other developing countries. While several simplifications are made, the full performance of the model is kept. A further important advantage of the model is that it allows calculation of the uncertainty of the emission rate with respect to each of the input parameters. In the dispersion part of the model, an improved line source model has been developed, implemented, and tested against a reference solution. It provides an improvement in accuracy over previous formulas of the line source Gaussian plume model, without being too demanding in terms of computational resources. In the case study presented here, the biggest errors were associated with the ends of line source sections; these errors will cancel out across adjacent sections of line sources during the simulation of a road network. In cases where the wind is parallel to the source line, the use of the combination of discretized source and analytical line source formulas reduces the error remarkably. Because this combination is applied only for a small number of wind directions, it should not excessively increase the calculation time.
Keywords: air pollution, dispersion, emissions, line sources, road traffic, urban transport
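The discretized-source idea mentioned above (approximating a road by a row of point sources, each dispersed with a Gaussian plume) can be sketched directly; the Briggs-style dispersion coefficients, source strength, and geometry below are assumptions for illustration, not DISPOLSPEM's actual formulas.

```python
import numpy as np

def point_source_conc(q, u, x, y, z, h):
    """Gaussian plume from a point source of strength q (g/s) at height h,
    with wind speed u along +x. Briggs rural class-D sigmas are assumed."""
    sy = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)
    sz = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)
    return (q / (2.0 * np.pi * u * sy * sz)
            * np.exp(-0.5 * (y / sy) ** 2)
            * (np.exp(-0.5 * ((z - h) / sz) ** 2) + np.exp(-0.5 * ((z + h) / sz) ** 2)))

def line_source_conc(q_per_m, u, receptor, road_from, road_to, n=200):
    """Discretized line source: split the road into n point sources and sum."""
    pts = np.linspace(road_from, road_to, n)  # (n, 2) road coordinates
    seg = np.linalg.norm(np.array(road_to) - np.array(road_from)) / (n - 1)
    c = 0.0
    for px, py in pts:
        x, y = receptor[0] - px, receptor[1] - py  # wind assumed along +x
        if x > 0:  # only upwind segments contribute
            c += point_source_conc(q_per_m * seg, u, x, y, receptor[2], h=0.5)
    return c

# Receptor 50 m downwind of a 200 m crosswind road emitting 0.01 g/(m*s)
print(line_source_conc(0.01, u=3.0, receptor=(50.0, 0.0, 1.5),
                       road_from=(0.0, -100.0), road_to=(0.0, 100.0)))
```

An analytical line-source formula integrates the crosswind Gaussian in closed form instead of summing; the paper's contribution is an improved combination of the two.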
Procedia PDF Downloads 450
820 Development of Vacuum Planar Membrane Dehumidifier for Air-Conditioning
Authors: Chun-Han Li, Tien-Fu Yang, Chen-Yu Chen, Wei-Mon Yan
Abstract:
The conventional dehumidification method in air-conditioning systems mostly utilizes a cooling coil to remove the moisture in the air by cooling the supply air down below its dew point temperature. During the process, the supply air needs to be reheated to meet the set indoor condition, which consumes a considerable amount of energy and affects the coefficient of performance of the system. If the processes of dehumidification and cooling are separated and operated independently, the indoor conditions can be controlled more efficiently. Therefore, decoupling the dehumidification and cooling processes in heating, ventilation, and air conditioning systems, for example with membrane dehumidification processes, is one of the key technologies for the next generation. The membrane dehumidification method has the advantages of low cost, low energy consumption, etc. It utilizes the pore size and hydrophilicity of the membrane to transfer water vapor by a mass transfer effect. The moisture in the supply air is removed by the potential energy and driving force across the membrane. The process saves the latent load used to condense water, which makes for more efficient energy use because it does not involve a heat transfer effect. In this work, performance measurements, including the permeability and selectivity of water vapor and air, were conducted with composite and commercial membranes. According to the measured data, we can choose a suitable dehumidification membrane for designing the flow channel length and components of the planar dehumidifier. A vacuum membrane dehumidification system was set up to examine the effects of temperature, humidity, vacuum pressure, flow rate, coefficient of performance, and other parameters on the dehumidification efficiency. The results showed that the commercial Nafion membrane has better water vapor permeability and selectivity, and is suitable for separating water vapor from air. Nafion membranes thus have promising potential for the dehumidification process.
Keywords: vacuum membrane dehumidification, planar membrane dehumidifier, water vapour and air permeability, air conditioning
Procedia PDF Downloads 149
819 Oral Lichen Planus a Manifestation of Grinspan's Syndrome or a Lichenoid Reaction to Medication
Authors: Sahar Iqrar, Malik Adeel Anwar, Zain Akram, Maria Noor
Abstract:
Introduction: Oral lichen planus is a chronic inflammatory condition of unknown etiology. Oral lichen planus may be associated with several other diseases. Grinspan's Syndrome is characterized by a triad of oral lichen planus, hypertension, and diabetes mellitus. Other associations reported in the literature are with chronic liver disease and with dyslipidemia. The nature of these associations is still not fully understood. Material and methods: The study was conducted in the Department of Oral Medicine, Fatima Memorial Hospital College of Medicine and Dentistry, Lahore, Pakistan. A total of n=89 clinically diagnosed patients with oral lichen planus of both genders and all age groups were recruited, and detailed histories were recorded on the designed proformas. Results: A total of n=89 patients were included, with a male to female ratio of 3:8 (24 males and 65 females). The mean age was 48.8 ± 13.8 years, with an age range of 10-74 years. Among these patients suffering from oral lichen planus, 41.6% (n=37) had a positive history of hypertension, and 59.5% (n=22) of these patients were taking various medications for their condition. Diabetes mellitus was found in 24.7% (n=22) of patients, with 72.7% (n=16) of these patients using hypoglycemic drugs (oral or injectable) to control their blood glucose levels. Of these n=89 lichen planus patients, 21.3% had both hypertension and diabetes mellitus, fulfilling the criteria for Grinspan's Syndrome. Of this Grinspan's Syndrome pool, 94.7% (n=19) were taking a drug for at least one of the two conditions. Conclusion: As noted from the medical histories of the patients, most of them were using hypoglycemic drugs for diabetes mellitus and beta blockers, diuretics, and calcium channel blockers for hypertension. These drugs are known for lichenoid reactions. Therefore, it should be ruled out at the histopathological, immunological, and molecular level whether these patients are suffering from lichen planus or a lichenoid drug reaction, to truly declare them patients with Grinspan's Syndrome.
Keywords: diabetes mellitus, grinspan's syndrome, lichenoid drug reaction, oral lichen planus
Procedia PDF Downloads 245
818 The Hair Growth Effects of Undariopsis peterseniana
Authors: Jung-Il Kang, Jeon Eon Park, Yu-Jin Moon, Young-Seok Ahn, Eun-Sook Yoo, Hee-Kyoung Kang
Abstract:
This study was conducted to evaluate the effect of Undariopsis peterseniana, a seaweed native to Jeju Island, Korea, on the growth of hair. Dermal papilla cells (DPCs) are known to regulate the hair growth cycle and the length of the hair follicle through interactions with epithelial cells. When immortalized vibrissa DPCs were treated with the U. peterseniana extract, the extract significantly increased the proliferation of DPCs. The effect of the U. peterseniana extract on the growth of vibrissa follicles was also examined: the extract significantly increased the hair-fiber lengths of the vibrissa follicles. Hair loss is partly caused by dihydrotestosterone (DHT) binding to the androgen receptor in hair follicles, and the inhibition of 5α-reductase activity can prevent hair loss through the decrease of DHT levels. The U. peterseniana extract inhibited 5α-reductase activity. Minoxidil, a potent hair-growth agent, can induce proliferation in NIH3T3 fibroblasts by opening KATP channels. We thus examined the proliferative effects of the U. peterseniana extract in NIH3T3 fibroblasts. The extract significantly increased the proliferation of NIH3T3 fibroblasts. Tetraethylammonium chloride (TEA), a K+ channel blocker, inhibited U. peterseniana-induced proliferation in NIH3T3 fibroblasts. These results suggest that U. peterseniana could have the potential to treat alopecia through the proliferation of DPCs, the inhibition of 5α-reductase activity, and the opening of KATP channels. [Acknowledgement] This research was supported by The Leading Human Resource Training Program of Regional Neo industry through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (2016H1D5A1908786).
Keywords: hair growth, Undariopsis peterseniana, vibrissa follicles, dermal papilla cells, 5α-reductase, KATP channels
Procedia PDF Downloads 301
817 A Non-Destructive Estimation Method for Internal Time in Perilla Leaf Using Hyperspectral Data
Authors: Shogo Nagano, Yusuke Tanigaki, Hirokazu Fukuda
Abstract:
Vegetables harvested early in the morning or late in the afternoon are valued in plant production, and so the time of harvest is important. The biological functions known as circadian clocks have a significant effect on this harvest timing. The purpose of this study was to non-destructively estimate the circadian clock and thereby construct a method for determining a suitable harvest time. We took eight samples of green busil (Perilla frutescens var. crispa) every 4 hours, six times over 1 day, and analyzed all samples at the same time. A hyperspectral camera was used to collect spectrum intensities at 141 different wavelengths (350-1050 nm). Calculation of correlations between the spectrum intensity of each wavelength and harvest time suggested the suitability of the hyperspectral camera for non-destructive estimation. However, even the most strongly correlated wavelength had only a weak correlation, so we used machine learning to raise the accuracy of the estimation and constructed a machine learning model to estimate the internal time of the circadian clock. Artificial neural networks (ANNs) were used for machine learning because they are an effective analysis method for large amounts of data. Using the estimation model resulted in an error between estimated and real times of 3 min. The estimations were made in less than 2 hours. Thus, we successfully demonstrated this method of non-destructively estimating internal time.
Keywords: artificial neural network (ANN), circadian clock, green busil, hyperspectral camera, non-destructive evaluation
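One plausible shape for such a model is an ANN regressor that predicts the internal time as a (sin, cos) pair so the 24 h cycle wraps correctly; the network size, the synthetic spectra, and the circular encoding below are assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples, n_bands = 48, 141
hours = np.tile(np.arange(0, 24, 4), 8)  # sampling every 4 h, as in the study

# Synthetic stand-in for hyperspectral intensities: a few bands oscillate with the clock
X = rng.normal(size=(n_samples, n_bands))
X[:, :10] += np.sin(2 * np.pi * hours / 24)[:, None]

# Encode circular time as (sin, cos) so 23:59 and 00:01 are neighbors
y = np.column_stack([np.sin(2 * np.pi * hours / 24), np.cos(2 * np.pi * hours / 24)])

Xs = StandardScaler().fit_transform(X)
reg = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0).fit(Xs, y)
pred = reg.predict(Xs)
est_hours = (np.arctan2(pred[:, 0], pred[:, 1]) % (2 * np.pi)) * 24 / (2 * np.pi)
print("mean abs error (h):", np.abs((est_hours - hours + 12) % 24 - 12).mean())
```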
Procedia PDF Downloads 301
816 Online Language Tandem: Focusing on Intercultural Communication Competence and Non-Verbal Cues
Authors: Amira Benabdelkader
Abstract:
Communication is the channel by which humankind creates and maintains relationships with others, expresses itself, exchanges information, learns and teaches, etc. The context of communication plays a distinctive role in deciding which language is to be used. The term context mainly refers to the interlocutors, their cultures, languages, relationship, physical surroundings (the communication setting), the type of information to be transmitted, the topic, etc. Cultures, on the one hand, impose on humans certain behaviours, attitudes, gestures, and beliefs. On the other hand, the focus on language is inevitable, as it is, with its verbal and non-verbal components, a key tool in and for communication. Moreover, each language has its particularities in how people voice, address, and express their thoughts, feelings, and beliefs. Being in the same setting with people from different cultures and languages and having conversations with them calls upon intercultural communicative competence. This competence promotes the success of such conversations. Additionally, it can manifest in several ways during interactions, to the extent that no one can predict when and how the interlocutors will use it. The only thing that can probably be confirmed is that the setting and culture will in one way or another intervene, and often shape the flow of the communication, if not the whole communication. Therefore, this paper looks at the intercultural communicative competence of language learners when introducing their cultures to each other in an online language tandem (henceforth OLT), using their second and/or foreign language with L1 speakers. The participants of this study are Algerian (L2: French, FL: English) and British (L1: English, L2/FL: French). In other words, this paper provides a qualitative analysis of the OLT experiment by emphasizing how language learners can overcome cultural differences in an intercultural setting while communicating online using Skype (video conversations) with people from different countries, cultures, and L1s. The non-verbal cues have the lion's share of the analysis, with a focus on how they have been used to maintain this intercultural communication or hinder it through the misinterpretation of gestures, head movements, grimaces, etc.
Keywords: intercultural communicative competence, non-verbal cues, online language tandem, Skype
Procedia PDF Downloads 284
815 Flame Volume Prediction and Validation for Lean Blowout of Gas Turbine Combustor
Authors: Ejaz Ahmed, Huang Yong
Abstract:
The operation of aero engines is of critical importance in the vicinity of lean blowout (LBO) limits. Lefebvre's model of LBO, based on empirical correlation, has been extended to the flame volume (Vf) concept by the authors. The flame volume takes into account the effects of geometric configuration and the complex spatial interaction of mixing, turbulence, heat transfer, and combustion processes inside the gas turbine combustion chamber. For these reasons, flame volume based LBO predictions are more accurate. Although LBO prediction accuracy has improved, the approach poses a challenge associated with Vf estimation in real gas turbine combustors. This work extends the approach of flame volume prediction, previously based on fuel iterative approximation with cold flow simulations, to reactive flow simulations. The flame volume for 11 combustor configurations has been simulated and validated against experimental data. To make the prediction methodology robust, as required in the preliminary design stage, reactive flow simulations were carried out with the combination of the probability density function (PDF) and discrete phase model (DPM) in FLUENT 15.0. A criterion for flame identification was defined. Two important parameters, i.e., the critical injection diameter (Dp,crit) and the critical temperature (Tcrit), were identified, and their influence on the reactive flow simulation was studied for Vf estimation. The obtained results exhibit ±15% error in Vf estimation relative to experimental data.
Keywords: CFD, combustion, gas turbine combustor, lean blowout
Procedia PDF Downloads 269