Search results for: Average Run Length.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2256

156 Economics of Open and Distance Education in the University of Ibadan, Nigeria

Authors: Babatunde Kasim Oladele

Abstract:

One of the major objectives of the Nigerian national policy on education is the provision of equal educational opportunities to all citizens at different levels of education. With regard to higher education, an aspect of the policy encourages distance learning to be organized and delivered by tertiary institutions in Nigeria. This study, therefore, determines how much of the government's resources are committed, how the resources are utilized, and what alternative sources of funding are available for this system of education. The study investigated the trends in recurrent costs between 2004/2005 and 2013/2014 at the University of Ibadan Distance Learning Centre (DLC). A descriptive survey research design was employed, with a questionnaire as the instrument for data collection. The population of the study comprised 280 current distance learning students, 70 academic staff, and 50 administrative staff; 354 questionnaires were correctly filled and returned. Data collected were coded and analyzed using frequencies, ratios, averages, and percentages to answer the research questions. The study revealed that salaries and allowances of academic and non-academic staff represent the most important variable influencing the cost of education; about 55% of resources were allocated to this item alone. The study also indicates that costs rise every year with increases in enrolment, representing a situation of diseconomies of scale. The study recommends that universities that operate distance learning programmes should explore other internally generated revenue options to boost their income. The University of Ibadan, being the premier university in Nigeria, should be given foreign aid and home support, both financial and material, to enable the institution to run a formidable distance education programme that would measure up in planning and implementation with those of developed nations.

Keywords: Open education, distance education, University of Ibadan, cost of education, Nigeria.

155 Holistic Approach to Assess the Potential of Using Traditional and Advanced Insulation Materials for Energy Retrofit of Office Buildings

Authors: Marco Picco, Mahmood Alam

Abstract:

Improving the energy performance of existing buildings can be challenging, particularly when facades cannot be modified and the only available option is internal insulation. In such cases, the choice of the most suitable material becomes increasingly complex: in addition to thermal transmittance and capital cost, the designer needs to account for the impact of the intervention on the internal spaces, in particular the loss of usable space due to the additional layers of material installed. This paper explores this issue by analyzing a case study of an average office building that needs refurbishment to meet the energy efficiency limits imposed by current regulations. The building is simulated through dynamic performance simulation under three different climate conditions in order to evaluate its energy needs. The use of Vacuum Insulated Panels (VIPs) as an option for energy refurbishment is compared to traditional insulation materials (XPS, mineral wool). For each scenario, energy consumption is calculated and, in combination with the expected capital costs, used to perform a financial feasibility analysis. A holistic approach is proposed that takes into account the impact of the intervention on internal space by quantifying the value of the lost usable space and including it in the financial feasibility analysis. The proposed approach highlights how taking different drivers into account leads to the choice of different insulation materials, showing how accounting for the economic value of space can make VIPs an attractive solution for energy retrofitting under various climate conditions.
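
For illustration, the sketch below shows a payback calculation of the kind this approach implies, in which the value of lost usable floor space is charged against each insulation option; all figures are invented assumptions, not values from the study.

    # All figures are illustrative assumptions, not values from the study.
    def simple_payback(capital_cost, annual_saving, lost_area_m2, space_value_per_m2):
        """Payback period in years, charging the value of lost usable
        floor space to the capital cost of the insulation measure."""
        effective_cost = capital_cost + lost_area_m2 * space_value_per_m2
        return effective_cost / annual_saving

    # Hypothetical options: (capital cost EUR, annual energy saving EUR,
    #                        usable floor area lost in m2).
    options = {
        "XPS":          (12000, 1800, 14.0),   # thick, cheap layer
        "Mineral wool": (10000, 1750, 16.0),
        "VIP":          (30000, 1850,  3.5),   # thin, expensive layer
    }

    for value_per_m2 in (0, 2000, 6000):       # assumed value of space, EUR/m2
        best = min(options, key=lambda k: simple_payback(*options[k], value_per_m2))
        print(f"space value {value_per_m2:>4} EUR/m2 -> best option: {best}")

With the space term set to zero, the cheapest traditional insulation wins; as the assumed value per square metre rises, the thin VIP layer becomes the best option, which is the effect the abstract describes.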

Keywords: Vacuum insulated panels, building performance simulation, payback period, building energy retrofit.

154 Lamb Wave Wireless Communication in Healthy Plates Using Coherent Demodulation

Authors: Rudy Bahouth, Farouk Benmeddour, Emmanuel Moulin, Jamal Assaad

Abstract:

Guided ultrasonic waves are used in Non-Destructive Testing and Structural Health Monitoring for inspection and damage detection. Recently, wireless data transmission using ultrasonic waves in solid metallic channels has gained popularity in industrial applications such as nuclear, aerospace, and smart vehicles. The idea is to find a good substitute for electromagnetic waves, since these are highly attenuated near metallic components due to Faraday shielding. The proposed solution is to use ultrasonic guided waves, such as Lamb waves, as an information carrier, owing to their capability of propagating over long distances; in addition, valuable information about the health of the structure can be extracted simultaneously. In this work, the reliable frequency bandwidth for communication is first extracted experimentally from dispersion curves. Then, an experimental platform for wireless communication using Lamb waves is described and built. A coherent demodulation algorithm used in telecommunications is then tested for Amplitude Shift Keying, On-Off Keying, and Binary Phase Shift Keying modulation techniques. Signal processing parameters such as threshold choice, number of cycles per bit, and bit rate are optimized. Experimental results are compared based on the average bit error percentage. The results show high sensitivity to threshold selection for the Amplitude Shift Keying and On-Off Keying techniques, resulting in a bit rate decrease. The Binary Phase Shift Keying technique shows the highest stability and data rate among all tested modulation techniques.
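
A minimal sketch of a coherent Binary Phase Shift Keying receiver chain of the kind tested above is given below; the carrier frequency, bit rate, and noise level are illustrative assumptions rather than the experimental values.

    import numpy as np

    fs, fc, bit_rate = 1e6, 100e3, 10e3           # sample rate, carrier, bit rate (Hz)
    spb = int(fs / bit_rate)                      # samples per bit (10 carrier cycles)
    rng = np.random.default_rng(0)

    bits = rng.integers(0, 2, 64)
    symbols = 2 * bits - 1                        # 0/1 -> -1/+1 phase states
    t = np.arange(bits.size * spb) / fs
    tx = np.repeat(symbols, spb) * np.cos(2 * np.pi * fc * t)

    rx = tx + 0.5 * rng.standard_normal(tx.size)  # channel noise (illustrative)

    # Coherent demodulation: mix with a synchronous local carrier,
    # integrate over each bit period, then threshold at zero.
    mixed = rx * np.cos(2 * np.pi * fc * t)
    decisions = mixed.reshape(-1, spb).sum(axis=1) > 0
    ber = np.mean(decisions != bits.astype(bool)) * 100
    print(f"bit error percentage: {ber:.1f}%")

The zero threshold is what gives BPSK its stability relative to ASK and OOK, whose decision thresholds must be tuned to the received amplitude.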

Keywords: Lamb Wave Communication, wireless communication, coherent demodulation, bit error percentage.

153 Emotion Detection in Twitter Messages Using Combination of Long Short-Term Memory and Convolutional Deep Neural Networks

Authors: B. Golchin, N. Riahi

Abstract:

Recognizing sentiments and emotions in social media texts has attracted considerable attention in recent years. Sentiment and emotion analysis aims to recognize conceptual information such as the opinions, feelings, attitudes, and emotions of people towards products, services, organizations, people, topics, events, and features expressed in written text, which indicates the breadth of the problem space. In the real world, businesses and organizations are always looking for tools to gather the opinions, emotions, and inclinations of people about their products, services, or related events. This article uses the Twitter social network, one of the most popular social networks with about 420 million active users, to extract data. On this network, users share information and opinions about personal issues, politics, products, events, etc., and the availability of its data makes it suitable for classifying emotional states. In this study, supervised learning and deep neural network algorithms are used to classify the emotional states of Twitter users. Deep learning methods are advantageous here because the large amount of available data increases the learning capacity of the model. Tweets collected on various topics are classified into four classes using a combination of two bidirectional Long Short-Term Memory (BiLSTM) networks and a convolutional network. The results of this study, with an average accuracy of 93%, show that the proposed framework performs well and improves accuracy compared to previous work.
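
A minimal Keras sketch of such a combined BiLSTM-plus-convolutional classifier is given below; the vocabulary size, sequence length, and layer widths are assumptions, not the authors' exact configuration.

    import tensorflow as tf
    from tensorflow.keras import layers

    vocab_size, seq_len, n_classes = 20000, 50, 4      # assumed sizes

    model = tf.keras.Sequential([
        layers.Input(shape=(seq_len,)),
        layers.Embedding(vocab_size, 128),
        layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
        layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
        layers.Conv1D(128, 5, activation="relu"),      # convolution over BiLSTM states
        layers.GlobalMaxPooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"), # four emotion classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(token_ids, emotion_labels, ...) once tweets are tokenized and padded.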

Keywords: emotion classification, sentiment analysis, social networks, deep neural networks

152 Game-Theory-Based Downlink Spectrum Allocation in Two-Tier Networks

Authors: Yu Zhang, Ye Tian, Fang Ye, Yixuan Kang

Abstract:

The capacity of conventional cellular networks has reached its upper bound, and this can be addressed by introducing femtocells, which are low-cost and easy to deploy. Spectrum interference becomes more critical as value-added multimedia services grow rapidly in two-tier cellular networks, and spectrum allocation is one of the effective methods of interference mitigation. This paper proposes a game-theory-based OFDMA downlink spectrum allocation scheme aimed at reducing co-channel interference in two-tier femtocell networks. The framework is formulated as a non-cooperative game, wherein the femto base stations are players and the available frequency channels are strategies. The scheme takes full account of competitive behavior and fairness among stations, and the utility function essentially reflects interference from the standpoint of the channels. This work focuses on co-channel interference and puts forward a negative-logarithm interference function of the distance weight ratio, aimed at suppressing co-channel interference within the same network layer. This scenario is well suited to actual network deployment, and the system possesses high robustness. Under the proposed mechanism, interference exists only when players employ the same channel for data communication, and spectrum allocation is implemented in a distributed fashion. Numerical results show that the signal to interference and noise ratio can be markedly improved through the proposed spectrum allocation scheme and that users' downlink quality of service can be satisfied. Moreover, as the simulation results show, the average spectrum efficiency of the cellular network can be significantly improved.
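
A minimal sketch of this kind of game is given below: co-channel interference is penalised with a negative logarithm of a distance weight ratio, and each station updates its channel by best response in a distributed fashion. The exact functional form and all parameters are assumptions for illustration, not the paper's equations.

    import math
    import random

    def utility(fbs, channel, choices, positions):
        """Payoff of femto base station `fbs` choosing `channel`,
        given the channel choices of all other stations."""
        penalty = 0.0
        for other, ch in choices.items():
            if other == fbs or ch != channel:
                continue                       # interference only on co-channels
            d = math.dist(positions[fbs], positions[other])
            d_ref = 100.0                      # reference distance (assumed)
            penalty += -math.log(min(d / d_ref, 0.999))  # closer -> bigger penalty
        return -penalty

    # Best-response dynamics: each station, in turn, switches to the channel
    # that maximises its own utility, using only locally observable distances.
    random.seed(1)
    stations, channels = range(6), range(3)
    positions = {s: (random.uniform(0, 300), random.uniform(0, 300)) for s in stations}
    choices = {s: random.choice(list(channels)) for s in stations}

    for _ in range(20):                        # iterate towards (near) equilibrium
        for s in stations:
            choices[s] = max(channels, key=lambda c: utility(s, c, choices, positions))
    print(choices)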

Keywords: Femtocell networks, game theory, interference mitigation, spectrum allocation.

151 Development and Validation of Employee Trust Scale: Factor Structure, Reliability and Validity

Authors: Chua Bee Seok, Getrude Cosmas, Jasmine Adela Mutang, Shazia Iqbal Hashmi

Abstract:

The aim of this study was to determine the factor structure and psychometric properties (i.e., reliability and convergent validity) of the Employee Trust Scale, an instrument newly created by the researchers. The Employee Trust Scale initially contained 82 items measuring employees' trust toward their supervisors. A sample of 818 employees (343 females, 449 males) was selected randomly from public and private sector organizations in Kota Kinabalu, Sabah, Malaysia. Their ages ranged from 19 to 67 years, with a mean of 34.55 years, and their average tenure with their current employer was 11.2 years (s.d. = 7.5 years). The respondents completed the Employee Trust Scale as well as Mishra's managerial trust questionnaire. Exploratory factor analysis of employees' trust toward their supervisors extracted three factors, labeled 'trustworthiness' (32 items), 'position status' (11 items), and 'relationship' (6 items), which together accounted for 62.49% of the total variance. The trustworthiness factor was re-categorized into three sub-factors: competency (11 items), benevolence (8 items), and integrity (13 items). All factors and sub-factors of the scale demonstrated high reliability, with Cronbach's alpha values above .85. The convergent validity of the scale was supported by the expected pattern of positive and significant correlations between the scores of all factors and sub-factors and the score on the managerial trust questionnaire, which measures the same construct. Convergent validity was further supported by the significant and positive inter-correlations between the factors and sub-factors of the scale. The results suggest that the Employee Trust Scale is a reliable and valid measure; however, further studies on other sample groups are needed to validate the scale further.
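
For reference, a minimal sketch of the internal-consistency statistic reported above (Cronbach's alpha) is given below, applied to synthetic item responses rather than the study's data.

    import numpy as np

    def cronbach_alpha(items):
        """items: 2-D array, rows = respondents, columns = scale items."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(818, 1))                     # shared trust factor
    responses = latent + 0.6 * rng.normal(size=(818, 11))  # e.g. 11 competency items
    print(f"alpha = {cronbach_alpha(responses):.2f}")      # high alpha for one-factor items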

Keywords: Employees trust scale, position status, psychometric properties, relationship, trustworthiness.

150 Development of a Basic Robot System for Medical and Nursing Care for Patients with Glaucoma

Authors: Naoto Suzuki

Abstract:

Medical methods to completely cure glaucoma are yet to be developed, so ophthalmologists manage patients mainly to delay disease progression. Patients with glaucoma are mainly elderly individuals, and equipment in their homes that can provide medical treatment and care can relieve their families of caregiving duties. To allow elderly people with glaucoma to live by themselves as much as possible, we developed a support robot with five functions: elderly care, ophthalmological examination, trip assistance in the neighborhood, medical treatment, and data referral to a hospital. The medical and nursing care robot should approach from within the visual field that the patients can still see, at a speed suitable for their eyesight; a robot approaching from the part of the visual field they cannot see would be dangerous. We experimentally developed a robot that brings a white cane to elderly people with glaucoma. The base of the robot is a carriage (a Megarover 1.1) with two infrared sensors; the robot moves along a white line on the floor using these sensors, and it has a special arm that uses no electricity and can scoop up a block attached to the white cane. We also developed a direction detector comprising a charge-coupled device camera (SVR41RescueHD; Sun Mechatronics), goggles (MG-277MLF; Midori Anzen Co. Ltd.), and biconvex lenses with a focal length of 25 mm (Edmund Co.). Some young people were photographed wearing the direction detector. Image processing was performed using Scilab 6.1.0 and Image Processing and Computer Vision Toolbox 4.1.2. To measure the subjects' line of vision, we calculated the iris's center of gravity using five processes: reduction, trimming, binarization or gray scale, edge extraction, and Hough transform. Comparing binarization with gray scale processing, binarization performed better. For edge extraction, we compared five methods: Sobel, Prewitt, Laplacian of Gaussian, fast Fourier transform, and Canny, of which the Canny method was optimal. Finally, we performed the Hough transform to search for the principal coordinates along the iris's edge and found that it could locate the center point of the iris.
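
A minimal sketch of the iris-locating pipeline, rebuilt with OpenCV rather than Scilab, is given below; the file name, region of interest, and all thresholds are assumptions.

    import cv2
    import numpy as np

    # Hypothetical captured frame from the goggle-mounted CCD camera.
    img = cv2.imread("eye_frame.png", cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, None, fx=0.5, fy=0.5)          # reduction
    img = img[40:200, 60:260]                            # trimming (assumed ROI)

    _, binary = cv2.threshold(img, 70, 255, cv2.THRESH_BINARY)   # binarization
    edges = cv2.Canny(binary, 50, 150)                   # edge extraction (Canny)

    # Circular Hough transform. HoughCircles repeats a Canny step internally
    # (param1 is its upper threshold), so the binarized image is passed to it,
    # while `edges` can be displayed to inspect the edge-extraction stage.
    circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                               param1=150, param2=20, minRadius=15, maxRadius=60)
    if circles is not None:
        x, y, r = np.round(circles[0, 0]).astype(int)
        print(f"iris centre at ({x}, {y}), radius {r} px")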

Keywords: Glaucoma, support robot, elderly people, Hough transform, direction detector, line of vision.

149 Physicochemical Characteristics and Usage Possibilities of Elbasan Thermal Water

Authors: Elvin Çomo, Edlira Tako, Albana Hasimi, Rrapo Ormeni, Olger Gjuzi, Mirela Ndrita

Abstract:

In Albania, only low-enthalpy geothermal springs and wells are known; the temperatures of some of them are almost at the upper limit of low enthalpy, reaching over 60 °C. These resources can be used to improve the country's energy balance, as well as for profitable economic purposes. The region of Elbasan has the greatest geothermal energy potential in Albania, and its basin is one of the best-known and most used thermal spring areas in the country. The area contains a number of sources arranged in a chain, in the sector between Llixha and Hidraj, and constitutes a thermo-mineral basin with stable discharge and high temperature. The Elbasan springs, with a current average thermo-mineral water flow of 12-18 l/s and a temperature of 55-65 °C, have specific reserves of 39.6 GJ/m² and an installable capacity of 2,760 kW. For the assessment of physicochemical parameters and heavy metals, water samples were taken at 5 monitoring stations throughout 2022, and the levels of basic parameters were analyzed using ISO, EU, and APHA standard methods. This study presents the current state of the physicochemical parameters of this thermal basin, the evaluation of these parameters for curative activities and industrial processes, and the integrated utilization of geothermal energy. Thermo-mineral waters can be used for heating homes in the surrounding area or further away, depending on the flow from the source or geothermal well. There is awareness among Albanian investors, medical researchers, and the community of the high economic and therapeutic efficiency of the integrated use of geothermal energy in the region and of the development of the tourism sector. An analysis of the negative environmental impact of thermal water use is also provided.

Keywords: Geothermal energy, Llixha, physicochemical parameters, thermal water.

148 Fractal Analysis of 16S rRNA Gene Sequences in Archaea Thermophiles

Authors: T. Holden, G. Tremberger, Jr, E. Cheung, R. Subramaniam, R. Sullivan, N. Gadura, P. Schneider, P. Marchese, A. Flamholz, T. Cheung, D. Lieberman

Abstract:

A nucleotide sequence can be expressed as a numerical sequence when each nucleotide is assigned its proton number. The resulting numerical gene sequence can be investigated for its fractal dimension in terms of evolution and chemical properties for comparative studies. We have investigated such nucleotide fluctuation in the 16S rRNA gene of archaeal thermophiles. The studied archaea were Archaeoglobus fulgidus, Methanothermobacter thermautotrophicus, Methanocaldococcus jannaschii, Pyrococcus horikoshii, and Thermoplasma acidophilum. These five archaea-euryarchaeota thermophiles have fractal dimension values ranging from 1.93 to 1.97; computer simulation shows that random sequences would average about 2, with a standard deviation of about 0.015. The fractal dimension was found to correlate negatively with the thermophile's optimal growth temperature, with an R² value of 0.90 (N = 5). The inclusion of two archaea-crenarchaeota thermophiles reduces the R² value to 0.66 (N = 7), and the further inclusion of two bacterial thermophiles reduces it to 0.50 (N = 9). The fractal dimension correlates positively with the sequence GC content, with an R² value of 0.89 for the five archaea-euryarchaeota thermophiles (and 0.74 for the entire set of N = 9), although computer simulation shows little correlation. The highest (positive) correlation was found between the fractal dimension and the di-nucleotide Shannon entropy. However, Shannon entropy and sequence GC content were observed to correlate with optimal growth temperature, with R² values of 0.8 (negative) and 0.88 (positive), respectively, for the entire set of 9 thermophiles; thus these correlations lack species specificity. Together with another correlation study of bacterial radiation dosage against the fractal dimension of the RecA repair gene sequence, it is postulated that fractal dimension analysis is a sensitive tool for studying the relationship between genotype and phenotype among closely related sequences.
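
A minimal sketch of two of the sequence measures used above is given below: the proton-number encoding and the di-nucleotide Shannon entropy. The proton numbers follow from the bases' molecular formulas (A = 70, T = 66, G = 78, C = 58); the example sequence is arbitrary.

    import math
    from collections import Counter

    PROTONS = {"A": 70, "T": 66, "G": 78, "C": 58}

    def proton_series(seq):
        """Numerical sequence: each nucleotide replaced by its proton number."""
        return [PROTONS[b] for b in seq]

    def dinucleotide_entropy(seq):
        """Shannon entropy (bits) over the 16 possible di-nucleotides."""
        pairs = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
        n = sum(pairs.values())
        return -sum((c / n) * math.log2(c / n) for c in pairs.values())

    seq = "ATGGCGTACGTTAGCCGGATCGATCGGCTA"
    print(proton_series(seq)[:8])
    print(f"GC content: {100 * sum(b in 'GC' for b in seq) / len(seq):.1f}%")
    print(f"di-nucleotide entropy: {dinucleotide_entropy(seq):.3f} bits (max 4)")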

Keywords: Fractal dimension, archaea thermophiles, Shannon entropy, GC content

147 Study of Equilibrium and Mass Transfer of Co-Extraction of Different Mineral Acids with Iron(III) from Aqueous Solution by Tri-n-Butyl Phosphate Using Liquid Membrane

Authors: Diptendu Das, Vikas Kumar Rahi, V. A. Juvekar, R. Bhattacharya

Abstract:

Extraction of Fe(III) from aqueous solution using Tri-n-butyl Phosphate (TBP) as carrier requires a highly acidic medium (>6 N), as this favours formation of the chelating complex FeCl3·TBP. Conversely, stripping of iron(III) from the loaded organic solvent requires a neutral or alkaline medium to dissociate the same complex. It is observed that TBP co-extracts acids along with the metal, which reverses the driving force of extraction, so that iron(III) is re-extracted from the strip phase back into the feed phase during Liquid Emulsion Membrane (LEM) pertraction. Therefore, the rates of extraction of different mineral acids (HCl, HNO3, H2SO4) using TBP, with and without the presence of the metal Fe(III), were examined, revealing that acid extraction is enhanced in the presence of the metal. Mass transfer coefficients of both acid and metal extraction were determined using a Bulk Liquid Membrane (BLM); the average coefficients were obtained by fitting the derived model equation to the experimentally obtained data. The mass transfer coefficients of mineral acid extraction fall in the order k(HNO3) = 3.3×10⁻⁶ m/s > k(HCl) = 6.05×10⁻⁷ m/s > k(H2SO4) = 1.85×10⁻⁷ m/s. The distribution equilibria of the above acids between aqueous feed solutions and solutions of TBP in organic solvents have also been investigated. The stoichiometry of acid extraction reveals the formation of the complexes TBP·2HCl, HNO3·2TBP, and TBP·H2SO4. Moreover, extraction of iron(III) by TBP from HCl aqueous solution forms the complex FeCl3·TBP·2HCl, while in HNO3 medium the complex 3FeCl3·TBP·2HNO3 is formed.

Keywords: Bulk Liquid Membrane (BLM) Transport, Iron(III) extraction, Tri-n-butyl Phosphate, Mass Transfer coefficient.

146 Changing Geomorphosites in a Changing Lake: How Environmental Changes in Urmia Lake Have Been Driving Vanishing or Creating of Geomorphosites

Authors: D. Mokhtari

Abstract:

Any variation in the environmental characteristics of geomorphosites can destabilise their geotouristic values anywhere on the planet. Lake Urmia, with an area of approximately 5,500 km² and a catchment area of 51,876 km², has for various reasons seen a sharp decline over time, especially in the last fifty years, shrinking by about 93% in the two recent decades. These variations are not only driving significant changes in the morphology and ecology of the present lake landscape; at the same time they are shaping newly formed morphologies, causing some valuable geomorphosites to vanish or to shrink into smaller geomorphosites of significant scientific and cultural value. This paper analyses and discusses the features and evolution of several representative coastal and island geomorphosites. For this purpose, a total of 23 geomorphosites were studied in two data series (1963 and 2015), and the respective data were compared and analysed. The results showed that the total loss in geomorphosite area over half a century amounted to more than 90% of the valuable geomorphosites. Moreover, the comparison between the mean yearly coastal area lost over the entire period and the yearly average calculated for the shorter period (1998-2014) clearly indicates a pattern of acceleration; this acceleration in the rate of lake-area reduction was seen in most of the southern half of the lake. In the region, the general fall in water level is not only causing the loss of a significant water resource, with major impacts on regional ecosystems, but is also driving the most marked recent (last century) changes in the geotouristic landscapes. In fact, the disappearance of geomorphosites means the loss of a tourism phenomenon. In this context attention must be paid to the question of conservation. The action needed to safeguard geomorphosites includes: 1) preventive action, 2) corrective action, and 3) sharing knowledge.

Keywords: Changing lake, environmental changes, geomorphosite, northwest of Iran, Urmia lake.

145 A Comparative Study of Indoor Radon Concentrations between Dwellings and Workplaces in the Ko Samui District, Surat Thani Province, Southern Thailand

Authors: Kanokkan Titipornpun, Tripob Bhongsuwan, Jan Gimsa

Abstract:

The Ko Samui district of Surat Thani province is located in an area with high amounts of equivalent uranium in the ground surface, which is the source of radon. Our research in the Ko Samui district aimed at comparing indoor radon concentrations between dwellings and workplaces. Measurements were carried out in 46 dwellings and 127 workplaces using CR-39 alpha-track detectors in closed cups. A total of 173 detectors were distributed in 7 sub-districts; they were placed in the bedrooms of dwellings and the workrooms of workplaces and exposed to airborne radon for 90 days. After exposure, the alpha tracks were made visible by chemical etching and manually counted under an optical microscope, the track densities being taken as proportional to the radon concentration levels. We found that the radon concentrations were well described by a log-normal distribution, with most concentrations (37%) in the range between 16 and 30 Bq·m⁻³. The radon concentrations varied from a minimum of 11 Bq·m⁻³, found in a workplace, to a maximum of 305 Bq·m⁻³, found in a dwelling. Only in four samples (3%) were the indoor radon concentrations higher than the reference level recommended by the WHO (100 Bq·m⁻³). The overall geometric mean in the surveyed area was 32.6±1.65 Bq·m⁻³, lower than the worldwide average (39 Bq·m⁻³). The statistical comparison of geometric mean indoor radon concentrations showed that the geometric mean in dwellings (46.0±1.55 Bq·m⁻³) was significantly higher than in workplaces (28.8±1.58 Bq·m⁻³) at the 0.05 level. Moreover, the majority of the bedrooms in dwellings had a closed atmosphere, resulting in poorer ventilation than in most of the workplaces, which had air flow through open doors and windows in the daytime. We consider this the main reason for the higher geometric mean indoor radon concentration in dwellings compared to workplaces.
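
A minimal sketch of the summary statistics used above, the geometric mean and geometric standard deviation of log-normally distributed readings, is given below with synthetic values in Bq·m⁻³.

    import numpy as np

    # Synthetic CR-39 readings in Bq/m3 (log-normal-like spread).
    readings = np.array([11, 18, 22, 25, 27, 30, 34, 41, 52, 64, 88, 120, 305],
                        dtype=float)
    log_c = np.log(readings)
    gm  = np.exp(log_c.mean())               # geometric mean
    gsd = np.exp(log_c.std(ddof=1))          # geometric standard deviation
    print(f"GM = {gm:.1f} Bq/m3, GSD = {gsd:.2f}")
    print(f"above WHO reference (100 Bq/m3): {int((readings > 100).sum())} of {readings.size}")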

Keywords: CR-39 detector, indoor radon, radon in dwelling, radon in workplace.

144 Systematic Analysis of Dynamic Association of Health Outcomes with Computer Usage for Office Staff

Authors: Xiaoshu Lu, Esa-Pekka Takala, Risto Toivonen

Abstract:

This paper systematically investigates the time-dependent health outcomes for office staff during computer work using a mathematical model developed for the purpose. The model describes time-dependent health outcomes in multiple body regions associated with computer usage; the association is explicitly represented by a dose-response relationship parametrized by body-region parameters. Using the developed model, we perform extensive investigations of the health outcomes, both statically and dynamically. We compare the at-risk body regions and provide several severity rankings of the discomfort rate changes with respect to computer-related workload for the study population. Application of the model reveals a wide range of findings; such a broad spectrum of investigations in a single report is lacking in the literature. Based on the model analysis, the highest average severity levels of discomfort are found in the neck, shoulders, eyes, shoulder joint/upper arm, upper back, lower back, and head. The biggest weekly changes in discomfort rates are in the eyes, neck, head, shoulders, shoulder joint/upper arm, and upper back. The fastest-growing discomfort rate is found in the neck, followed by the shoulders, eyes, head, shoulder joint/upper arm, and upper back. Most of our findings are consistent with the literature, which demonstrates that the developed model and its results are applicable and valuable and can be utilized to assess the correlation between the amount of computer-related workload and health risk.

Keywords: Computer-related workload, health outcomes, dynamic association, dose-response relationship, systematic analysis.

143 Clustering for Detection of Population Groups at Risk from Anticholinergic Medication

Authors: Amirali Shirazibeheshti, Tarik Radwan, Alireza Ettefaghian, Farbod Khanizadeh, George Wilson, Cristina Luca

Abstract:

Anticholinergic medication has been associated with adverse events such as falls, delirium, and cognitive impairment in older patients. To quantify this risk, anticholinergic burden scores have been developed. In this work, a risk model based on clustering was deployed in a healthcare management system to group patients into multiple risk groups according to the anticholinergic burden scores of the medicines prescribed to them, in order to facilitate clinical decision-making. Anticholinergic burden scores of drugs were extracted from the literature, which categorizes the risk on a scale of 1 to 3. Given the patients' prescription data in the healthcare database, a weighted anticholinergic risk score was derived per patient based on the prescription of multiple anticholinergic drugs. The study was conducted on 300,000 records of patients currently registered with a major regional UK-based healthcare provider. The weighted risk scores were used as inputs to an unsupervised learning algorithm (mean-shift clustering) that groups patients into clusters representing different levels of anticholinergic risk. This work evaluates the association between the average risk score and measures of socioeconomic status (index of multiple deprivation) and health (index of health and disability). The clustering identifies a group of 15 patients at the highest risk from multiple anticholinergic medications. Our findings show that this group of patients is located within more deprived areas of London compared to the other risk groups. Furthermore, the prescription of anticholinergic medicines is more skewed towards female than male patients, suggesting that females are more at risk from this kind of multiple medication. The risk may be monitored and controlled in a healthcare management system equipped with tools implementing appropriate artificial intelligence techniques.
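
A minimal sketch of the scoring-and-clustering step is given below; the burden values follow the 1-to-3 scale mentioned above, but the specific drugs, scores, and prescriptions are invented for illustration.

    import numpy as np
    from sklearn.cluster import MeanShift

    # Illustrative burden scores on the 1-3 scale described in the abstract.
    ACB = {"amitriptyline": 3, "olanzapine": 3, "ranitidine": 1, "codeine": 1}

    patients = [
        ["ranitidine"],
        ["codeine", "ranitidine"],
        ["amitriptyline"],
        ["amitriptyline", "olanzapine", "codeine"],   # highest-risk profile
    ]
    # Weighted score: sum of burden scores over all prescribed medicines.
    scores = np.array([[sum(ACB[d] for d in rx)] for rx in patients], dtype=float)

    labels = MeanShift().fit_predict(scores)          # unsupervised risk groups
    for rx, s, lab in zip(patients, scores, labels):
        print(f"score {s[0]:.0f} -> risk cluster {lab}: {rx}")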

Keywords: Anticholinergic medication, socioeconomic status, deprivation, clustering, risk analysis.

142 Evaluation of the Analytic for Hemodynamic Instability as a Prediction Tool for Early Identification of Patient Deterioration

Authors: Bryce Benson, Sooin Lee, Ashwin Belle

Abstract:

Unrecognized or delayed identification of patient deterioration is a key cause of in-hospital adverse events. Clinicians rely on vital sign monitoring to recognize patient deterioration. However, due to ever-increasing nursing workloads and the manual effort required, vital signs tend to be measured and recorded intermittently and inconsistently, causing large gaps during patient monitoring. Additionally, during deterioration, the body's autonomic nervous system activates compensatory mechanisms, making the vital signs lagging indicators of the underlying hemodynamic decline. This study analyzes the predictive efficacy of the Analytic for Hemodynamic Instability (AHI) system, an automated tool designed to help clinicians identify deteriorating patients early. The lead time analysis in this retrospective observational study assesses how far in advance AHI predicted deterioration before an episode of hemodynamic instability (HI) became evident through vital signs. Results indicate that of the 362 episodes of HI in this study, 308 (85%) were correctly predicted by the AHI system, with a median lead time of 57 minutes and an average of 4 hours (240.5 minutes). Of the 54 episodes not predicted in advance, AHI detected 45 while the episode of HI was ongoing; of the 9 undetected episodes, 5 were missed due to missing or noisy input ECG data during the episode. In total, AHI was able to either predict or detect 98.9% of all episodes of HI in this study. These results suggest that AHI could provide an additional 'pair of eyes' on patients, continuously filling the monitoring gaps and consequently giving the patient care team the ability to be far more proactive in patient monitoring and adverse event management.
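
A minimal sketch of the lead-time calculation is given below: for each episode, the lead time is the episode onset minus the time of the first preceding alert. All timestamps are invented.

    import numpy as np

    # One row per HI episode: minute of the first AHI alert and the minute at
    # which instability became evident in the vital signs (invented values).
    alert_min   = np.array([0,  35, 190, 400])
    episode_min = np.array([57, 300, 250, 640])

    lead = episode_min - alert_min           # positive lead = predicted in advance
    predicted = lead > 0
    print(f"predicted in advance: {100 * predicted.mean():.0f}% of episodes")
    print(f"median lead time: {np.median(lead[predicted]):.0f} min, "
          f"mean: {lead[predicted].mean():.0f} min")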

Keywords: Clinical deterioration prediction, decision support system, early warning system, hemodynamic status, physiologic monitoring.

141 Nondestructive Electrochemical Testing Method for Prestressed Concrete Structures

Authors: Tomoko Fukuyama, Osamu Senbu

Abstract:

Prestressed concrete is widely used in infrastructure such as roads and bridges. However, poor grout filling and corrosion of the prestressing (PC) steel are currently major issues for prestressed concrete structures. One obstacle to nondestructive corrosion detection is the plastic pipe that covers the PC steel: its insulative property makes nondestructive diagnosis difficult, so a practical technology to detect these defects is necessary for the maintenance of infrastructure. The goal of this research is the development of an electrochemical technique that enables internal defects to be detected nondestructively from the surface of prestressed concrete; ideally, measurements should be conducted from the surface of structural members. In the present experiment, a prestressed concrete member was simplified as a layered specimen to simulate the current path between an input and an output electrode on the member surface. Specimens layered from mortar and the constituent materials of prestressed concrete (steel, polyethylene, stainless steel, or galvanized steel plates) were subjected to alternating current impedance measurements. The magnitude of the applied electric field was 0.01 V or 1 V, and the frequency range was from 10⁶ Hz to 10⁻² Hz. The frequency spectra of impedance, which reflect charge reactions activated by the electric field, were measured to clarify the effects of the material configurations and properties. In the civil engineering field, the Nyquist diagram is popular for analyzing impedance, and the shape of the plot is a good way to grasp electrical relaxation. However, it is not well suited to showing the influence of the measurement frequency, which is the reciprocal of the reaction time; hence the Bode diagram is also applied to describe the charge reactions in the present paper. From the experimental results, the alternating current impedance method appears applicable to measurements on insulative materials and, eventually, to prestressed concrete diagnosis. At the same time, the frequency spectra of impedance reveal differences in material configuration, because charge mobility reflects the variety of substances and the measurement frequency of the electric field determines the migration length of the charges under its influence. However, the method could not distinguish differences in material thickness, which suggests the difficulty of identifying the volume of an air void or the extent of a corrosion-product layer with this technique.
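
A minimal sketch of the Bode representation discussed above is given below; the "measured" spectrum is simulated from a simple resistor-capacitor network standing in for a layered specimen, with illustrative parameter values.

    import numpy as np
    import matplotlib.pyplot as plt

    f = np.logspace(6, -2, 200)              # 1e6 Hz down to 1e-2 Hz, as in the test
    w = 2 * np.pi * f
    R_s, R_p, C = 100.0, 10e3, 1e-6          # illustrative circuit parameters
    Z = R_s + R_p / (1 + 1j * w * R_p * C)   # series R plus parallel R-C element

    fig, (ax_mag, ax_ph) = plt.subplots(2, 1, sharex=True)
    ax_mag.loglog(f, np.abs(Z))              # magnitude vs. frequency
    ax_mag.set_ylabel("|Z| (ohm)")
    ax_ph.semilogx(f, np.degrees(np.angle(Z)))   # phase shift vs. frequency
    ax_ph.set_ylabel("phase (deg)")
    ax_ph.set_xlabel("frequency (Hz)")
    plt.show()

Unlike the Nyquist plot, both panels keep frequency as the x-axis, which is what makes the relaxation time scales directly readable.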

Keywords: Prestressed concrete, electric charge, impedance, phase shift.

140 Lexical Based Method for Opinion Detection on Tripadvisor Collection

Authors: Faiza Belbachir, Thibault Schienhinski

Abstract:

The massive development of online social networks allows users to post and share their opinions on various topics. With this huge volume of opinions, it is interesting to extract and interpret this information for different domains, e.g., product and service benchmarking, politics, and recommender systems. This is why opinion detection is one of the most important research tasks. It consists of differentiating between opinion data and factual data, and the difficulty lies in determining an approach that returns opinionated documents. Generally, two kinds of approaches are used for opinion detection: lexicon-based approaches and machine learning approaches. In lexicon-based approaches, a dictionary of sentiment words is used, with each word associated with a weight; the opinion score of a document is derived from the occurrence of words from this dictionary. In machine learning approaches, a classifier is usually trained on a set of annotated documents containing sentiment, with features such as word n-grams, part-of-speech tags, and logical forms. The majority of these works rely on the document text alone to determine an opinion score, without considering whether the text is actually trustworthy. It is therefore interesting to exploit other information to improve opinion detection. In our work, we develop a new way of computing the opinion score by introducing the notion of a trust score: we identify opinionated documents, but also assess whether these opinions are really trustworthy in relation to the topic. To this end, we use the SentiWordNet lexicon to calculate opinion scores, and we compute various user features (number of comments, number of useful comments, average usefulness of reviews) to obtain trust scores. We then combine the opinion score and the trust score into a final score. We applied our method to detect trusted opinions in the TripAdvisor collection, and our experimental results show that combining the opinion score with the trust score improves opinion detection.
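
A minimal sketch of the scoring idea is given below: a SentiWordNet-based opinion score and a user trust score combined into a final score. The combination rule (a product) and the user features shown are assumptions for illustration, not the paper's exact formulas.

    # Requires the NLTK corpora 'punkt', 'wordnet' and 'sentiwordnet'
    # (via nltk.download) before first use.
    from nltk import word_tokenize
    from nltk.corpus import sentiwordnet as swn

    def opinion_score(text):
        """Average subjectivity (pos + neg) of each word's first SentiWordNet sense."""
        scores = []
        for tok in word_tokenize(text.lower()):
            senses = list(swn.senti_synsets(tok))
            if senses:
                scores.append(senses[0].pos_score() + senses[0].neg_score())
        return sum(scores) / len(scores) if scores else 0.0

    def trust_score(n_comments, n_useful):
        """Fraction of a user's comments marked useful (one assumed user feature)."""
        return n_useful / n_comments if n_comments else 0.0

    review = "The hotel was wonderful but the breakfast was awful."
    op = opinion_score(review)
    tr = trust_score(n_comments=40, n_useful=28)
    print(f"opinion={op:.2f}  trust={tr:.2f}  final={op * tr:.2f}")  # product rule (assumed)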

Keywords: Tripadvisor, Opinion detection, SentiWordNet, trust score.

139 Insights into Smoothies with High Levels of Fibre and Polyphenols: Factors Influencing Chemical, Rheological and Sensory Properties

Authors: Dongxiao Sun-Waterhouse, Shiji Nair, Reginald Wibisono, Sandhya S. Wadhwa, Carl Massarotto, Duncan I. Hedderley, Jing Zhou, Sara R. Jaeger, Virginia Corrigan

Abstract:

Attempts to add fibre and polyphenols (PPs) to popular beverages present challenges related to the properties of the finished products, such as smoothies. Consumer acceptability, viscosity, and phenolic composition of smoothies containing high levels of fruit fibre (2.5-7.5 g per 300 mL serve) and PPs (250-750 mg per 300 mL serve) were examined. The changes in total extractable PPs, vitamin C content, and colour of selected smoothies over a storage stability trial (4°C, 14 days) were compared. A set of acidic aqueous model beverages was prepared to further examine the effect of two different heat treatments on the stability and extractability of PPs. Results show that overall consumer acceptability of high-fibre, high-PP smoothies was low, with average hedonic scores ranging from 3.9 to 6.4 (on a 1-9 scale). Flavour, texture, and overall acceptability decreased as fibre and polyphenol contents increased, with fibre content exerting the stronger effect. Higher fibre content resulted in greater viscosity, with elevated PP content increasing viscosity only slightly. The presence of fibre also aided the stability and extractability of PPs after heating. A reduction in extractable PPs, vitamin C content, and colour intensity of the smoothies was observed after the 14-day storage period at 4°C. The two heat treatments (75°C for 45 min or 85°C for 1 min) normally used for beverage production did not cause a significant reduction in total extracted PPs. It is clear that high levels of added fibre and PPs greatly influence the consumer appeal of smoothies, suggesting the need to develop novel formulation and processing methods if a satisfactory functional beverage incorporating these ingredients is to be developed.

Keywords: Apple fibre, apple and blackcurrant polyphenols, consumer acceptability, functional foods, stability.

138 Automated Method Time Measurement System for Redesigning Dynamic Facility Layout

Authors: Salam Alzubaidi, G. Fantoni, F. Failli, M. Frosolini

Abstract:

The dynamic facility layout problem is a critical issue in the competitive industrial market; solving it requires robust design and effective simulation systems. Sustainable simulation requires reliable and accurate input data, so this paper describes an automated system, integrated into the real environment, that measures the duration of material handling operations, collects the data in real time, and determines the variances between the actual and estimated time schedules of the operations, in order to update the simulation software and redesign the facility layout periodically. The automated method-time measurement system collects the real data using Radio Frequency Identification (RFID) and Internet of Things (IoT) technologies: an RFID antenna reader and RFID tags enable the system to identify the location of objects and gather timing data. The gathered durations are processed by calculating the moving average duration of the material handling operations and choosing the shortest material handling path, after which the simulation software is updated to redesign the facility layout to match the shortest actual operation schedule. Periodic simulation in real time is more sustainable and reliable than a simulation system relying on the analysis of historical data. The case study for this methodology was carried out in cooperation with a workshop team producing mechanical parts. Although there are some technical limitations, this methodology is promising and can be significantly useful in redesigning the manufacturing layout.
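
A minimal sketch of the timing step is given below: pick/drop reads from RFID antennas are paired per tag to yield operation durations, which are smoothed with a moving average before the simulation is updated. Station names, tag IDs, and times are invented.

    from collections import deque

    class OperationTimer:
        def __init__(self, window=5):
            self.durations = deque(maxlen=window)   # sliding window of durations
            self.start_times = {}

        def tag_seen(self, tag_id, station, t):
            """Called whenever an RFID antenna reads a tag."""
            if station == "pick":
                self.start_times[tag_id] = t
            elif station == "drop" and tag_id in self.start_times:
                self.durations.append(t - self.start_times.pop(tag_id))

        def moving_average(self):
            return sum(self.durations) / len(self.durations) if self.durations else None

    timer = OperationTimer()
    for tag, station, t in [("T1", "pick", 0), ("T1", "drop", 42),
                            ("T2", "pick", 50), ("T2", "drop", 97)]:
        timer.tag_seen(tag, station, t)
    print(f"moving average duration: {timer.moving_average():.1f} s")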

Keywords: Dynamic facility layout problem, internet of things, method time measurement, radio frequency identification, simulation.

137 Qualitative Parametric Comparison of Load Balancing Algorithms in Parallel and Distributed Computing Environment

Authors: Amit Chhabra, Gurvinder Singh, Sandeep Singh Waraich, Bhavneet Sidhu, Gaurav Kumar

Abstract:

Decreases in hardware costs and advances in computer networking technologies have led to increased interest in the use of large-scale parallel and distributed computing systems. One of the biggest issues in such systems is the development of effective techniques/algorithms for distributing the processes/load of a parallel program over multiple hosts to achieve goals such as minimizing execution time, minimizing communication delays, maximizing resource utilization, and maximizing throughput. Substantive research using queuing analysis, assuming job arrivals follow a Poisson pattern, has shown that in a multi-host system the probability of one host being idle while another host has multiple jobs queued can be very high. Such imbalances in system load suggest that performance can be improved either by transferring jobs from currently heavily loaded hosts to lightly loaded ones or by distributing the load evenly/fairly among the hosts. The algorithms that achieve these goals are known as load balancing algorithms, and they fall into two basic categories: static and dynamic. Whereas static load balancing (SLB) algorithms make task-to-processor assignment decisions at compile time, based on average estimated process execution times and communication delays, dynamic load balancing (DLB) algorithms adapt to changing situations and make decisions at run time. The objective of this work is to identify qualitative parameters for the comparison of these algorithms. In the future, this work can be extended by developing an experimental environment to study load balancing algorithms quantitatively, based on the comparative parameters.
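
A minimal sketch contrasting the two categories is given below: a static round-robin assignment fixed in advance versus a dynamic least-loaded policy that reacts to queue state at run time. Hosts and job runtimes are invented.

    import itertools

    HOSTS = ["h0", "h1", "h2"]

    def static_assign(jobs):
        """Round-robin placement decided 'at compile time' from estimates alone."""
        rr = itertools.cycle(HOSTS)
        return {job: next(rr) for job in jobs}

    def dynamic_assign(jobs):
        """Each job goes to the currently least-loaded host (run-time decision)."""
        load = {h: 0.0 for h in HOSTS}
        placement = {}
        for job, runtime in jobs.items():
            h = min(load, key=load.get)
            placement[job] = h
            load[h] += runtime            # queue grows with the job's runtime
        return placement, load

    jobs = {"j1": 9.0, "j2": 1.0, "j3": 1.0, "j4": 8.5, "j5": 0.5}
    placement, load = dynamic_assign(jobs)
    print("static :", static_assign(jobs))
    print("dynamic:", placement, "-> loads", load)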

Keywords: SLB, DLB, Host, Algorithm and Load.

136 Plasma Arc Burner for Pulverized Coal Combustion

Authors: Gela Gelashvili, David Gelenidze, Sulkhan Nanobashvili, Irakli Nanobashvili, George Tavkhelidze, Tsiuri Sitchinava

Abstract:

We present the development of a new, highly efficient plasma arc combustion system for pulverized coal. As is well known, coal is one of the main energy carriers by means of which electric and heat energy is produced in thermal power stations, but the quality of the extracted coal is decreasing rapidly. Difficulties therefore arise with its ignition and complete combustion, and thermo-chemical preparation of the pulverized coal becomes necessary. Usually, other organic fuels (mazut fuel oil or natural gas) are added to low-quality coal for this purpose, with the fraction of additional fuel in the 35-40% range. This dramatically decreases the economic efficiency of such systems, while emissions of noxious substances into the environment increase. Because of all this, plasma combustion systems for pulverized coal, equipped with non-transferred plasma arc torches, are being intensively developed worldwide. They allow practically complete combustion of pulverized coal (without organic additives) in boilers and increase energy and financial efficiency, while emissions of noxious substances decrease dramatically. However, non-transferred plasma torches have numerous drawbacks, e.g. complicated construction, short service life (especially at high power), instability of the plasma arc and, most importantly, up to 30% energy loss due to anode cooling. For these reasons, new plasma technologies free of these shortcomings are under intense development. In our proposed system, the pulverized coal-air mixture passes through the plasma arc region, where the arc burns between two carbon electrodes directly in the pulverized coal muffle burner. Consumption of the carbon electrodes is low, and no cooling system is needed; the main advantage of this method, however, is that the radiation of the plasma arc acts directly on the coal-air mixture, which accelerates the thermo-chemical preparation of the coal for burning. To ensure the stability of the plasma arc under such difficult conditions, we have developed a power source that maintains a fixed current during fluctuations in arc resistance, automatically compensating through voltage changes, and that allows regulation of the plasma arc length over a wide range. Our combustion system, in which the plasma arc acts directly on the pulverized coal-air mixture, is simple. This should allow a significant improvement in pulverized coal combustion (especially for low-quality coal) and in its economic efficiency. Preliminary experiments demonstrated the successful functioning of the system.

Keywords: Coal combustion, plasma arc, plasma torches, pulverized coal.

135 Analysis of Surface Hardness, Surface Roughness, and Near Surface Microstructure of AISI 4140 Steel Worked with Turn-Assisted Deep Cold Rolling Process

Authors: P. R. Prabhu, S. M. Kulkarni, S. S. Sharma, K. Jagannath, Achutha Kini U.

Abstract:

In the present study, response surface methodology has been used to optimize the turn-assisted deep cold rolling process for AISI 4140 steel. A regression model is developed to predict surface hardness and surface roughness using response surface methodology and a central composite design. In the predictive model, deep cold rolling force, ball diameter, initial roughness of the workpiece, and number of tool passes are the model variables. The rolling force and ball diameter are the significant factors for surface hardness, while ball diameter and number of tool passes are significant for surface roughness. The predicted surface hardness and surface roughness values, together with the subsequent verification experiments under the optimal operating conditions, confirmed the validity of the model. The absolute average errors between the experimental and predicted values at the optimal combination of parameter settings are 0.16% for surface hardness and 1.58% for surface roughness. Using the optimal processing parameters, the surface hardness improved from 225 to 306 HV, an increase in near-surface hardness of about 36%, and the surface roughness improved from 4.84 µm to 0.252 µm, a decrease of about 95%. The depth of compression is found to be more than 300 µm from the microstructure analysis, in agreement with the results of the microhardness measurements. A Taylor Hobson Talysurf tester, a micro Vickers hardness tester, optical microscopy, and an X-ray diffractometer were used to characterize the modified surface layer.
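
A minimal sketch of the response-surface fitting step is given below: a second-order model fitted by least squares to central composite design runs. For brevity only two coded factors are used and the design points and hardness values are invented; the study itself used four variables.

    import numpy as np

    # Two coded factors for brevity: x1 = rolling force, x2 = ball diameter.
    X_design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],              # factorial points
                         [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414],  # axial points
                         [0, 0], [0, 0], [0, 0]])                         # centre runs
    y = np.array([238, 281, 252, 296, 231, 299, 249, 262, 291, 288, 290.])  # hardness (HV)

    def quad_terms(x):
        """Second-order model terms: 1, x1, x2, x1*x2, x1^2, x2^2."""
        x1, x2 = x[:, 0], x[:, 1]
        return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

    beta, *_ = np.linalg.lstsq(quad_terms(X_design), y, rcond=None)
    print("model: y = {:.1f} + {:.1f}x1 + {:.1f}x2 + {:.1f}x1x2 + {:.1f}x1^2 + {:.1f}x2^2"
          .format(*beta))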

Keywords: Surface hardness, response surface methodology, microstructure, central composite design, deep cold rolling, surface roughness.

134 Oil-Water Two-Phase Flow Characteristics in Horizontal Pipeline – A Comprehensive CFD Study

Authors: Anand B. Desamala, Ashok Kumar Dasamahapatra, Tapas K. Mandal

Abstract:

In the present work, a detailed analysis of the flow characteristics of a pair of immiscible liquids through a horizontal pipeline is simulated using ANSYS FLUENT 6.2. Moderately viscous oil and water (viscosity ratio = 107, density ratio = 0.89, interfacial tension = 0.024 N/m) are taken as the system fluids. The Volume of Fluid (VOF) method is employed, assuming unsteady flow, an immiscible liquid pair, constant liquid properties, and co-axial flow. Meshing was done using GAMBIT, with a quadrilateral mesh type chosen to account for the surface tension effect more accurately. From the grid independence study, 47,037 mesh elements were selected for the entire geometry. The simulation successfully predicts slug, stratified wavy, stratified mixed, and annular flow, but not dispersion of oil in water or dispersion of water in oil. Simulation results are validated against literature data for horizontal flow, with good conformity observed. Subsequently, we simulated the hydrodynamics (viz., velocity profile, area-averaged pressure across a cross-section, and volume fraction profile along the radius) of stratified wavy and annular flow at different phase velocities. The simulation results show that in annular flow the total pressure of the mixture decreases with increasing oil velocity, because the pipe cross-section is completely wetted by water. The simulated oil volume fraction is maximal at the centre in core annular flow, whereas in stratified flow the maximum appears at the upper side of the pipeline. These results are in accord with the actual flow configurations. Our findings could be useful in designing pipelines for the transportation of crude oil.

Keywords: CFD, Horizontal pipeline, Oil-water flow, VOF technique.

133 Biogas from Cover Crops and Field Residues: Effects on Soil, Water, Climate and Ecological Footprint

Authors: Manfred Szerencsits, Christine Weinberger, Maximilian Kuderna, Franz Feichtinger, Eva Erhart, Stephan Maier

Abstract:

Cover or catch crops have beneficial effects on soil, water, erosion control, etc. If harvested, they also provide feedstock for biogas without competing for arable land in regions where only one main crop can be produced per year. On average, gross energy yields of approx. 1,300 m³ methane (CH4) ha⁻¹ can be expected from 4.5 tonnes (t) of cover crop dry matter (DM) in Austria. Considering the total energy invested, from cultivation to compression for biofuel use, a net energy yield of about 1,000 m³ CH4 ha⁻¹ remains. Similar energy yields can be achieved with the straw of grain maize or corn cob mix (CCM). In comparison to catch crops remaining on the field as green manure, or to complete fallow between main crops, the effects on soil, water, and climate can be improved if cover crops are harvested without soil compaction and digestate is returned to the field in an amount equivalent to the cover crop removal. In this way, the risk of nitrate leaching can be reduced by approx. 25% in comparison to full fallow, and the risk of nitrous oxide emissions by up to 50% in comparison to cover crops used as green manure. The effects on humus content and erosion are similar to or better than those of cover crops used as green manure when the same amount of biomass is produced; with higher biomass production the positive effects increase, even when the cover crops are harvested and only the digestate is brought back to the fields. The ecological footprint of arable farming can be reduced by approx. 50%, considering the substitution of natural gas with CH4 produced from cover crops.

Keywords: Biogas, cover crops, catch crops, land use competition, sustainable agriculture.

132 Effect of a Gravel Bed Flocculator on the Efficiency of a Low Cost Water Treatment Plants

Authors: Alaa Hussein Wadi

Abstract:

The principal objective of a water treatment plant is to produce water that satisfies a set of drinking water quality standards at a reasonable price to the consumers. The gravel-bed flocculator provides a simple and inexpensive design for flocculation in small water treatment plants (less than 5,000 m³/day capacity). The packed bed of gravel provides ideal conditions for the formation of compact, settleable flocs because of the continuous recontact provided by the sinuous flow of water through the interstices formed by the gravel. The field data obtained from the operation of the water supply treatment unit cover the physical, chemical, and biological qualities of the raw and settled water. The experiments were carried out with the aim of assessing the efficiency of the gravel filter in removing turbidity and pathogenic bacteria from the raw water. The water treatment plant, which was constructed for the treatment of river water, was in principle a rapid sand filter. The results show that the average turbidity of the settled water was 4.83 NTU, with a standard deviation of 2.893 NTU, indicating a removal efficiency of about 67.8% for the sedimentation tank (gravel filter). The pH values fluctuated between 7.75 and 8.15, indicating the alkaline nature of the raw water of the river Shatt Al-Hilla, as expected; raw water pH is depressed slightly following alum coagulation, and the pH of the settled water ranged from 7.75 to a maximum of 8.05. The bacteriological tests carried out on the water samples were the total coliform test, the E. coli test, and the plate count test, each following the procedure outlined in the Standard Methods for the Examination of Water and Wastewater (APHA, AWWA, and WPCF, 1985). The gravel filter exhibited low performance in removing the bacterial load: the percentage bacterial removal was maximum for the total plate count (19%) and minimum for total coliform (16.82%).
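
For reference, the efficiency figure quoted above follows from the standard removal formula; in the sketch below, the raw turbidity is an assumed input chosen to be consistent with the reported settled value and efficiency.

    # Raw turbidity assumed; chosen so the numbers match the reported figures.
    raw_ntu, settled_ntu = 15.0, 4.83
    removal = (raw_ntu - settled_ntu) / raw_ntu * 100
    print(f"turbidity removal efficiency: {removal:.1f}%")   # -> 67.8%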

Keywords: Gravel bed flocculator, turbidity, total coliform.

131 Adapting Tools for Text Monitoring and for Scenario Analysis Related to the Field of Social Disasters

Authors: Svetlana Cojocaru, Mircea Petic, Inga Titchiev

Abstract:

Humanity is confronted more and more often with different social disasters, which in turn can generate new accidents and catastrophes. To mitigate their consequences, it is important to obtain the earliest possible signals about events that are occurring or may occur, and to prepare the corresponding scenarios that could be applied. Our research is focused on solving two problems in this domain: identifying signals that an accident has occurred or may occur, and mitigating some consequences of disasters. To solve the first problem, methods of selecting and processing texts from the Internet are developed; information in Romanian is of special interest to us. Obtaining the mentioned tools involves several steps, divided into a preparatory stage and a processing stage. During the first stage, we manually collected over 724 news articles, constituting more than 150 thousand words, and classified them into 10 categories of social disasters. Using this information, a controlled vocabulary of more than 300 keywords was elaborated, which will help in the classification and identification of texts related to the field of social disasters. To solve the second problem, the formalism of Petri nets has been used, addressing the problem of evacuating inhabitants in useful time. Analysis methods such as the reachability or coverability tree and the invariant technique are used to determine dynamic properties of the modeled systems. To perform a case study of the properties of the evacuation system extended by adding time, the PIPE analysis modules Generalized Stochastic Petri Nets (GSPN) Analysis, Simulation, State Space Analysis, and Invariant Analysis were used. These modules helped us obtain the average number of persons situated in the rooms and other quantitative properties and characteristics related to the system's dynamics.
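
A minimal sketch of the kind of Petri net state evolution that such tools analyse is given below: a two-room evacuation net in which tokens represent occupants and transitions move them towards the exit. Places, arc weights, and the initial marking are invented.

    import random

    places = {"room_A": 3, "corridor": 0, "exit": 0}   # initial marking (tokens)
    transitions = {
        "leave_A":    ({"room_A": 1},   {"corridor": 1}),   # (input arcs, output arcs)
        "reach_exit": ({"corridor": 1}, {"exit": 1}),
    }

    def enabled(t):
        ins, _ = transitions[t]
        return all(places[p] >= w for p, w in ins.items())

    def fire(t):
        ins, outs = transitions[t]
        for p, w in ins.items():
            places[p] -= w
        for p, w in outs.items():
            places[p] += w

    random.seed(0)
    while any(enabled(t) for t in transitions):        # one random firing run
        fire(random.choice([t for t in transitions if enabled(t)]))
    print(places)                                      # all tokens reach 'exit'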

Keywords: Lexicon of disasters, modelling, Petri nets, text annotation, social disasters.

130 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can be easily attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the choice of loss functions and optimizers. The CNNs used in this paper are AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach and compare generalization power. It is important to note that the same subset of LivDet is used across all training and testing for each model; this way, we can compare the performance, in terms of generalization, on unseen data across all the models. The best CNN (AlexNet) with the appropriate loss function and optimizer achieves a performance gain of more than 3% over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports each model's parameter count and mean average error rate, in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet is less complex than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and run very fast with high anti-spoofing performance; for our deployed version on smartphones, additional processing steps, such as quantization and pruning, were applied to the final model.
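
The abstract publishes no code; a minimal PyTorch sketch of the optimizer sweep, with a toy stand-in model, learning rate, and random data in place of the actual AlexNet/VGGNet/ResNet setups and LivDet images:

# Illustrative optimizer sweep for a fixed CNN and loss function; the
# tiny model, lr, and random data are placeholders, not the paper's
# actual configuration.
import torch
import torch.nn as nn

def make_model():                     # stand-in for AlexNet/VGGNet/ResNet
    return nn.Sequential(nn.Conv2d(1, 8, 3), nn.ReLU(),
                         nn.Flatten(), nn.LazyLinear(2))

optimizers = {
    "Adam": torch.optim.Adam, "SGD": torch.optim.SGD,
    "RMSProp": torch.optim.RMSprop, "Adadelta": torch.optim.Adadelta,
    "Adagrad": torch.optim.Adagrad, "Nadam": torch.optim.NAdam,
}
criterion = nn.CrossEntropyLoss()     # live (0) vs. spoof (1)

x = torch.randn(16, 1, 28, 28)        # dummy fingerprint batch
y = torch.randint(0, 2, (16,))

for name, opt_cls in optimizers.items():
    model = make_model()
    opt = opt_cls(model.parameters(), lr=1e-3)
    opt.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    opt.step()
    print(f"{name}: loss after first step = {loss.item():.3f}")

Swapping criterion for another loss (e.g., nn.MultiMarginLoss as a multi-class hinge) repeats the same sweep along the loss-function axis.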

Keywords: Anti-spoofing, CNN, fingerprint recognition, loss function, optimizer.

129 Students’ Level of Participation, Critical Thinking, Types of Action and Influencing Factors in Online Forum Environment

Authors: N. I. Bazid, I. N. Umar

Abstract:

Due to the advancement of Internet technology, online learning is widely used in higher education institutions. Online learning offers several means of communication, including the online forum. Through an online forum, students and instructors are able to discuss and share their knowledge and expertise without needing to attend ordinary face-to-face classroom sessions. The purposes of this study are to analyze the students' levels of participation and critical thinking, their types of action, and the factors influencing their participation in an online forum. A total of 41 postgraduate students undertaking a course in educational technology at a public university in Malaysia were involved in this study. In this course, the students participated in a weekly online forum as part of the course requirements. Based on the log data file extracted from the online forum, the students' types of action (viewing, adding, updating, and deleting posts) and their levels of participation (passive, moderate, or active) were identified. In addition, the messages posted in the forum were analyzed to gauge the students' level of critical thinking, while the factors that might influence their online forum participation were measured using a 24-item questionnaire. Based on the log data, a total of 105 posts were sent by the participants. The findings show that (i) the majority of the students are moderate participants, with an average of two to three posts per person, and (ii) viewing posts is the most frequent type of action (85.1%), followed by adding posts (9.7%). Furthermore, based on the posts made, the most frequent type of critical thinking observed was justification (50 inputs, or 19.0%), followed by linking ideas and interpretation (47 inputs, or 18.0%), and novelty (38 inputs, or 14.4%). The findings indicate that an online forum allows for social interaction and can be used to measure students' critical thinking skills; to achieve this, monitoring students' activities in the online forum is recommended.
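
A minimal sketch of how participation levels can be derived from such a log file; the cut-offs used here (0 posts = passive, 1-3 = moderate, 4+ = active) are assumptions, since the study's exact thresholds are not given in the abstract:

# Illustrative classification of forum participants from LMS log rows;
# the thresholds below are assumed, not the study's actual cut-offs.
from collections import Counter

log = [                      # (student_id, action) rows from the log file
    ("s01", "add"), ("s01", "view"), ("s02", "view"),
    ("s03", "add"), ("s03", "add"), ("s03", "add"), ("s03", "add"),
]

posts = Counter(sid for sid, action in log if action == "add")

def level(n_posts):
    if n_posts == 0:
        return "passive"
    return "moderate" if n_posts <= 3 else "active"

for sid in sorted({s for s, _ in log}):
    print(sid, level(posts[sid]))   # s01 moderate, s02 passive, s03 active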

Keywords: Critical thinking, learning management system, level of online participation, online forum.

128 Lung Cancer Detection and Multi Level Classification Using Discrete Wavelet Transform Approach

Authors: V. Veeraprathap, G. S. Harish, G. Narendra Kumar

Abstract:

Uncontrolled growth of abnormal cells in the lung in the form of a tumor can be either benign (non-cancerous) or malignant (cancerous). Patients with Lung Cancer (LC) have an average life expectancy of five years; early diagnosis, detection, and prediction widen the treatment options beyond risky invasive surgery and increase the survival rate. Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) are commonly used for early detection of cancer. A Gaussian filter together with a median filter is used for smoothing and noise removal, and Histogram Equalization (HE) is used for image enhancement. The lung cavities are extracted, the background outside the two cavities is completely removed, and the right and left lungs are segmented separately. Region properties (area, perimeter, diameter, centroid, and eccentricity) are measured on the segmented tumor image, while texture is characterized by Gray-Level Co-occurrence Matrix (GLCM) functions; feature extraction over the Region of Interest (ROI) provides the input to the classifiers. Two levels of classification are employed: K-Nearest Neighbor (KNN) determines whether the patient's condition is normal or abnormal, while an Artificial Neural Network (ANN) identifies the cancer stage. The Discrete Wavelet Transform (DWT) algorithm is used for the main feature extraction, yielding the best efficiency. The developed technique shows encouraging results for real-time information and online detection in future research.
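
A minimal sketch of the DWT + GLCM feature-extraction step described above, assuming PyWavelets and scikit-image (graycomatrix/graycoprops as named in recent scikit-image releases; older releases spell them greycomatrix/greycoprops). The image is synthetic and the pipeline is illustrative, not the authors' exact implementation:

# Illustrative DWT + GLCM feature extraction for a lung CT slice;
# the input here is a random stand-in image.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

img = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in slice

# 1) One-level 2-D DWT: approximation + detail sub-bands.
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")

# 2) GLCM texture features on the approximation sub-band.
quantized = np.clip(cA, 0, 255).astype(np.uint8)
glcm = graycomatrix(quantized, distances=[1], angles=[0], levels=256)
features = [graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "homogeneity", "energy", "correlation")]
print(features)  # feature vector fed to the KNN / ANN classifiers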

Keywords: Artificial Neural Networks (ANN), Discrete Wavelet Transform (DWT), Gray-Level Co-occurrence Matrix (GLCM), K-Nearest Neighbor (KNN), Region of Interest (ROI).

127 Hydro-Geochemistry of Qare-Sou Catchment and Gorgan Gulf, Iran: Examining Spatial and Temporal Distribution of Major Ions and Determining the River’s Hydro-Chemical Type

Authors: Milad Kurdi, Hadi Farhadian, Teymour Eslamkish

Abstract:

This study examined the hydro-geochemistry of the Qare-Sou catchment and Gorgan Gulf in order to determine the spatial distribution of major ions. In this regard, six hydrometric stations in the catchment and four stations in Gorgan Gulf were chosen and samples were collected. The spatial and temporal distributions of major ions showed similar variation trends for calcium, magnesium, and bicarbonate, while the spatial trends of chloride, sulfate, sodium, and potassium matched those of Electrical Conductivity (EC) and Total Dissolved Solids (TDS). At the Nahar Khoran station, ion concentrations were higher than at the other stations, which may be related to human activities and the role of geology. The Siah Ab station also showed high concentrations, which may be related to its close proximity to Gorgan Gulf and the return of water to the Qare-Sou River. In order to determine the water-rock interaction, the Gibbs diagram was used; the results showed that the river water plots in the rock-dominance field, meaning it is governed more by weathering and water-rock reaction and less by evaporation and crystallization. Assessment of the quality of the river water by graphical methods indicated that the water type in this area is Ca-HCO3-Mg. Major ion concentrations in the Qare-Sou were higher than the world average but did not exceed the limits allowed by the World Health Organization and the China Standard Organization. A comparison of ion concentrations in Gorgan Gulf with those of other seas and oceans showed that the pH in Gorgan Gulf was higher than in the other seas, but the anion and cation concentrations were lower.
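
A minimal sketch of the Gibbs-diagram computation mentioned above. The standard Gibbs ratios are Na+/(Na+ + Ca2+) for cations and Cl-/(Cl- + HCO3-) for anions, each plotted against TDS; the concentrations below are made-up placeholders, not station data:

# Illustrative Gibbs-ratio calculation for one water sample; all
# concentrations are assumed values, not measurements from the study.
na, ca = 1.2, 3.4        # Na+ and Ca2+ in meq/L (assumed)
cl, hco3 = 0.9, 4.1      # Cl- and HCO3- in meq/L (assumed)
tds = 420.0              # total dissolved solids, mg/L (assumed)

cation_ratio = na / (na + ca)      # Gibbs ratio for cations
anion_ratio = cl / (cl + hco3)     # Gibbs ratio for anions

# Low ratios at moderate TDS plot in the rock-dominance field,
# consistent with control by weathering / water-rock interaction.
print(f"Na/(Na+Ca) = {cation_ratio:.2f}, "
      f"Cl/(Cl+HCO3) = {anion_ratio:.2f}, TDS = {tds} mg/L")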

Keywords: Hydro-geochemistry, Qare-Sou River, Gorgan Gulf, major ions, Gibbs diagram, water quality, graphical methods.
