Search results for: one side class algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7706


1916 F-VarNet: Fast Variational Network for MRI Reconstruction

Authors: Omer Cahana, Maya Herman, Ofer Levi

Abstract:

Magnetic resonance imaging (MRI) is a slow medical scan, owing mainly to its long acquisition time. This length is largely due to the traditional sampling theorem, which sets a lower bound on the sampling rate. However, the scan can still be accelerated using approaches such as compressed sensing (CS) or parallel imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. Two properties must hold for this to work: i) the signal must be sparse under a known transform domain, and ii) the sampling must be incoherent. In addition, a nonlinear reconstruction algorithm must be applied to recover the signal. Despite the rapid advances in deep learning (DL), which has demonstrated tremendous success in various computer vision tasks, the field of MRI reconstruction is still at an early stage. In this paper, we present an extension of the state-of-the-art model in MRI reconstruction, VarNet. We augment VarNet with dilated convolutions at different scales, which extend the receptive field to capture more contextual information. Moreover, we simplify the sensitivity map estimation (SME) module, which contains many layers unnecessary for this task. These improvements yield a significant decrease in computation cost as well as higher accuracy.
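[Editorial illustration.] The dilated-convolution idea behind the abstract can be sketched outside any deep learning framework. The pure-Python toy below (not the authors' F-VarNet code; names and the 1-D setting are illustrative) shows how dilation widens the receptive field without adding parameters:

```python
# Illustrative sketch, not the authors' implementation: a 1-D dilated
# convolution and the receptive field of a stack of such layers.

def dilated_conv1d(signal, kernel, dilation=1):
    """Valid-mode 1-D convolution with a dilation factor."""
    span = (len(kernel) - 1) * dilation  # receptive field minus one
    out = []
    for i in range(len(signal) - span):
        acc = 0.0
        for k, w in enumerate(kernel):
            acc += w * signal[i + k * dilation]
        out.append(acc)
    return out

def receptive_field(kernel_size, dilations):
    """Receptive field of stacked dilated convolutions."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf
```

Stacking 3-tap layers with dilations 1, 2, 4 gives a receptive field of 15 samples, versus 7 for the undilated stack, which is the "more contextual information" the abstract refers to.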

Keywords: MRI, deep learning, variational network, computer vision, compressed sensing

Procedia PDF Downloads 162
1915 Effectiveness of Active Learning in Social Science Courses at Japanese Universities

Authors: Kumiko Inagaki

Abstract:

In recent years, Japanese universities have begun to face a dilemma: more than half of all high school graduates now go on to an institution of higher learning, overwhelming universities accustomed to small student bodies. These universities have been forced to embrace qualitative changes to accommodate the increased number and diversity of students entering their establishments, students who differ in their motivations for learning, their levels of eagerness to learn, and their perspectives on the future. One of these changes is an increased awareness among Japanese educators of the importance of active learning, which deepens students’ understanding of course material through a range of activities, including writing, speaking, thinking, and presenting, in addition to conventional “passive learning” methods such as listening to a one-way lecture. The purpose of this study is to examine the effectiveness of a teaching method adapted to promote active learning. The method was implemented in a social science course at one of the most popular universities in Japan. A questionnaire with a five-point response format was given to students in 2,305 courses throughout the university to evaluate the method’s effectiveness on the following measures: ① the ratio of students who were motivated to attend the classes, ② the rate at which students learned new information, and ③ the teaching method adopted in the classes. The results show that the percentage of students who attended the active learning course eagerly, and the rate of new knowledge acquired through the course, both exceeded the averages for the university, the department, and the subject area of social science. In addition, there are strong correlations between teaching method and student motivation, and between teaching method and knowledge acquisition rate. These results indicate that the active learning teaching method was effectively implemented and that it may improve students’ eagerness to attend class and motivation to learn.

Keywords: active learning, Japanese university, teaching method, university education

Procedia PDF Downloads 195
1914 A Probabilistic Theory of the Buy-Low and Sell-High for Algorithmic Trading

Authors: Peter Shi

Abstract:

Algorithmic trading is a rapidly expanding domain within quantitative finance, constituting a substantial portion of trading volume in the US financial market. The demand for rigorous and robust mathematical theories underpinning these trading algorithms is ever-growing. In this study, the author establishes a new stock market model that integrates the Efficient Market Hypothesis and statistical arbitrage. The model, for the first time, derives probabilistic relations between the rational price and the market price in terms of conditional expectation. The theory thereby provides a mathematical justification for the old market adage: buy low and sell high. The thresholds for “low” and “high” are derived precisely using a max-min operation on Bayes’ error. This explicit connection harmonizes the Efficient Market Hypothesis and statistical arbitrage, demonstrating their compatibility in explaining market dynamics; the amalgamation represents a pioneering contribution to quantitative finance. The study culminates in comprehensive numerical tests on historical market data, affirming that the “buy-low, sell-high” algorithm derived from this theory significantly outperforms the general market over the long term in four of six distinct market environments.
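[Editorial illustration.] The mechanics of a threshold-based buy-low/sell-high rule can be sketched in a few lines. Note the paper derives its thresholds from a max-min operation on Bayes' error; the fixed thresholds below are hypothetical stand-ins, and this is not the author's algorithm:

```python
# Naive buy-low / sell-high rule with fixed (hypothetical) thresholds.
# The paper's contribution is how "low" and "high" are derived, not
# this trading loop itself.

def buy_low_sell_high(prices, low, high, cash=100.0):
    """Hold cash until price <= low, then buy; sell once price >= high."""
    shares = 0.0
    for p in prices:
        if shares == 0.0 and p <= low:
            shares = cash / p   # buy on the "low" threshold crossing
            cash = 0.0
        elif shares > 0.0 and p >= high:
            cash = shares * p   # sell on the "high" threshold crossing
            shares = 0.0
    # mark any open position to market at the last price
    return cash + shares * prices[-1]
```

For example, on the price path 10, 8, 9, 12, 11 with thresholds 8 and 12, an initial 100 in cash buys 12.5 shares at 8 and sells at 12, ending at 150.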

Keywords: efficient market hypothesis, behavioral finance, Bayes' decision, algorithmic trading, risk control, stock market

Procedia PDF Downloads 72
1913 Designing Information Systems in Education as Prerequisite for Successful Management Results

Authors: Vladimir Simovic, Matija Varga, Tonco Marusic

Abstract:

This research paper presents matrix technology models and examples of information systems in education (in the Republic of Croatia and in Germany) in support of business, education (learning and teaching) and e-learning. We researched and described the aims and objectives of the main processes in education and technology, together with the main matrix classes of data. The paper gives an example of matrix technology with a detailed description of the processes related to specific data classes in education, and an example module that supports the processes ‘Filling in the directory and the diary of work’ and ‘Evaluation’. At the lower level, we also researched and described all activities that take place within each sub-process in education, as well as the characteristics and functioning of the modules ‘Filling in the directory and the diary of work’ and ‘Evaluation’. For the analysis of the affinity between the aforementioned processes and/or sub-processes, we used an application model created in Visual Basic, based on an algorithm for analyzing the affinity between the observed processes and/or sub-processes.
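[Editorial illustration.] The authors' Visual Basic affinity tool is not reproduced here, but the underlying idea — scoring how strongly two processes are related by the data classes they share — can be sketched as follows; the process names and data classes are hypothetical:

```python
# Hypothetical sketch of process affinity: the Jaccard overlap of the
# data classes each process uses. Not the authors' algorithm or data.

def affinity(classes_a, classes_b):
    a, b = set(classes_a), set(classes_b)
    return len(a & b) / len(a | b) if a | b else 0.0

processes = {
    "fill_directory": {"pupil", "teacher", "lesson"},
    "diary_of_work":  {"teacher", "lesson", "date"},
    "evaluation":     {"pupil", "grade"},
}

def affinity_matrix(procs):
    names = sorted(procs)
    return {p: {q: round(affinity(procs[p], procs[q]), 2)
                for q in names} for p in names}
```

Processes with high mutual affinity would be candidates for grouping into one module.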

Keywords: designing, education management, information systems, matrix technology, process affinity

Procedia PDF Downloads 439
1912 Visual and Chemical Servoing of a Hexapod Robot in a Confined Environment Using Jacobian Estimator

Authors: Guillaume Morin-Duponchelle, Ahmed Nait Chabane, Benoit Zerr, Pierre Schoesetters

Abstract:

Industrial inspection can be achieved through robotic systems that allow visual and chemical servoing. A popular scheme for visual servo-controlled robots is the image-based servoing system. In this paper, an approach to visual and chemical servoing of a hexapod robot using visual and chemical Jacobian matrices is proposed. The basic idea behind the visual Jacobian matrix is to model the differential relationship between the camera system and the robotic control system, in order to detect and track points of interest accurately in confined environments. This approach allows the robot to detect and navigate to a QR code, or to seek a gas source using a surge-cast localization algorithm. To track the QR code target, visual servoing based on the Jacobian matrix is used. For chemical servoing, three gas sensors are embedded on the hexapod, and a Jacobian matrix applied to the gas concentration measurements allows estimating the direction of the main gas source. The effectiveness of the proposed scheme is first demonstrated in simulation. Finally, a hexapod prototype is designed and built, and an experimental validation of the approach is presented and discussed.
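[Editorial illustration.] The chemical-servoing step — estimating the direction of the gas source from three onboard sensors — can be sketched by fitting the local plane c(x, y) = a·x + b·y + c₀ through the three readings; the gradient (a, b) then points toward increasing concentration. This is an illustrative reconstruction of the idea, not the authors' estimator:

```python
import math

# Fit c(x, y) = a*x + b*y + c0 through three sensor readings and
# return the concentration gradient (a, b). Illustrative only.

def gas_gradient(p0, p1, p2, c0, c1, c2):
    """p_i = (x, y) sensor positions on the body, c_i = readings."""
    # Subtracting the first equation eliminates the offset c0:
    #   a*(x1-x0) + b*(y1-y0) = c1 - c0
    #   a*(x2-x0) + b*(y2-y0) = c2 - c0
    ax, ay = p1[0] - p0[0], p1[1] - p0[1]
    bx, by = p2[0] - p0[0], p2[1] - p0[1]
    det = ax * by - ay * bx  # assumes the sensors are not collinear
    a = ((c1 - c0) * by - ay * (c2 - c0)) / det
    b = (ax * (c2 - c0) - (c1 - c0) * bx) / det
    return a, b

def heading(a, b):
    """Bearing (radians) of steepest concentration increase."""
    return math.atan2(b, a)
```

A surge-cast strategy would surge along this heading while the gradient is informative and cast (zig-zag) when it is lost.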

Keywords: chemical servoing, hexapod robot, Jacobian matrix, visual servoing, navigation

Procedia PDF Downloads 125
1911 Content Based Video Retrieval System Using Principal Object Analysis

Authors: Van Thinh Bui, Anh Tuan Tran, Quoc Viet Ngo, The Bao Pham

Abstract:

Video retrieval is the problem of searching for videos or clips whose content is close to an input image or video. Applications of such retrieval include selecting a video in a folder or recognizing a person in security camera footage. However, recent approaches face challenges due to the diversity of video types, frame transitions and camera positions; selecting an appropriate similarity measure for the problem is also an open question. To overcome these obstacles, we propose a content-based video retrieval system whose main steps yield good performance. From an input video, we extract keyframes and principal objects using the Segmentation of Aggregating Superpixels (SAS) algorithm. Speeded Up Robust Features (SURF) are then computed on those principal objects. Finally, a “bag-of-words” model combined with SVM classification is applied to obtain the retrieval result. Our system is evaluated on over 300 videos ranging from music, history, movies, sports and natural scenes to TV shows, and its performance compares favorably with other approaches.
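[Editorial illustration.] The "bag-of-words" step named above quantizes each local descriptor to its nearest codebook centroid and counts the assignments into a histogram, which is what the SVM then classifies. A minimal sketch (toy 2-D descriptors in place of 64-dimensional SURF vectors; not the authors' code):

```python
# Bag-of-visual-words histogram: assign each descriptor to its nearest
# codebook centroid and count. Descriptors here are toy 2-D tuples.

def nearest(desc, codebook):
    best, best_d = 0, float("inf")
    for i, c in enumerate(codebook):
        d = sum((a - b) ** 2 for a, b in zip(desc, c))
        if d < best_d:
            best, best_d = i, d
    return best

def bow_histogram(descriptors, codebook):
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[nearest(d, codebook)] += 1
    # L1-normalize so clips with different keyframe counts compare
    total = sum(hist) or 1
    return [h / total for h in hist]
```

The resulting fixed-length histogram is what makes variable-length videos comparable by a standard classifier.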

Keywords: video retrieval, principal objects, keyframe, segmentation of aggregating superpixels, speeded up robust features, bag-of-words, SVM

Procedia PDF Downloads 302
1910 Analysing Tertiary Lecturers’ Teaching Practices and Their English Major Students’ Learning Practices with Information and Communication Technology (ICT) Utilization in Promoting Higher-Order Thinking Skills (HOTs)

Authors: Malini Ganapathy, Sarjit Kaur

Abstract:

Maximising learning through higher-order thinking skills with Information and Communications Technology (ICT) is deep-rooted and emphasised in various developed countries such as the United Kingdom, the United States of America and Singapore. The transformation of the education curriculum in the Malaysia Education Development Plan (PPPM) 2013-2025 focuses on the concept of Higher Order Thinking (HOT) skills, which aims to produce knowledgeable students who are critical and creative in their thinking and can compete at the international level. HOT skills encourage students to apply, analyse, evaluate and think creatively in and outside the classroom. In this regard, the National Education Blueprint (2013-2025) is grounded in high-performing systems which promote a transformation of the Malaysian education system in line with the vision of Malaysia’s National Philosophy of achieving educational outcomes of world-class status. This study was designed to investigate ESL students’ learning practices, with an emphasis on promoting HOT skills while using ICT in their curricula. Data were collected using stratified random sampling, in which 100 participants were selected to take part in the study. The respondents were a group of undergraduate students taking ESL courses at a public university in Malaysia. A three-part questionnaire covering demographic information, students’ learning experience and ICT utilization practices was administered in the data collection process. The findings provide several important insights into students’ learning experiences and ICT utilization in developing HOT skills.

Keywords: English as a second language students, critical and creative thinking, learning, information and communication technology, higher order thinking skills

Procedia PDF Downloads 490
1909 A Study of ZY3 Satellite Digital Elevation Model Verification and Refinement with Shuttle Radar Topography Mission

Authors: Bo Wang

Abstract:

As the first high-resolution civil optical satellite, the ZY-3 satellite is able to obtain high-resolution multi-view images with three linear array sensors. The images can be used to generate Digital Elevation Models (DEMs) through dense matching of stereo images. However, due to clouds, forest, water and buildings covering the images, the dense matching results contain problems such as outliers and areas that fail to be matched (matching holes). This paper introduces an algorithm to verify the accuracy of the DEM generated by the ZY-3 satellite against the Shuttle Radar Topography Mission (SRTM). Since the accuracy of SRTM (internal accuracy: 5 m; external accuracy: 15 m) is relatively uniform worldwide, it may be used to improve the accuracy of the ZY-3 DEM. Based on the analysis of large volumes of DEM and SRTM data, the processing can be divided into two steps. First, the ZY-3 DEM and SRTM are registered using the conjugate line features and area features matched between the two datasets. Then the ZY-3 DEM is refined by eliminating the matching outliers and filling the matching holes. The matching outliers are eliminated based on statistics from Local Vector Binning (LVB), and the matching holes are filled with elevations interpolated from SRTM. Accuracy statistics for the ZY-3 DEM are also computed.
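[Editorial illustration.] The hole-filling step described above amounts to replacing failed-match cells in the ZY-3 DEM with the co-registered SRTM elevation. A minimal sketch on toy grids (real data would need the registration step first, and interpolation rather than direct substitution):

```python
# Fill matching holes (cells marked None) in a ZY-3 DEM grid with the
# co-registered SRTM elevation. Toy nested-list grids; illustrative only.

NODATA = None

def fill_holes(zy3_dem, srtm_dem):
    filled = []
    for zy3_row, srtm_row in zip(zy3_dem, srtm_dem):
        filled.append([s if z is NODATA else z
                       for z, s in zip(zy3_row, srtm_row)])
    return filled
```

Outlier elimination would run before this step, turning cells that disagree strongly with SRTM into NODATA so they too are refilled.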

Keywords: ZY-3 satellite imagery, DEM, SRTM, refinement

Procedia PDF Downloads 344
1908 Silent Struggles: Unveiling Linguistic Insights into Poverty in Ancient Egypt

Authors: Hossam Mohammed Abdelfattah

Abstract:

In ancient Egypt, poverty, recognized as the foremost challenge, was extensively addressed in teachings, wisdom literature and literary texts. These sources vividly depicted the suffering of a class deprived of life's pleasures. The ancient Egyptian language evolved terms reflecting poverty and hunger, underscoring the society's commitment to acknowledging and cautioning against this prevalent issue. Among the notable expressions, iwty.f emerged during the Middle Kingdom, symbolizing "the one without property" and signifying the destitute poor. iwty n.f, traced back to the Pyramid Texts era, referred to "the one who has nothing", or simply the poor. Another term, iwty-sw, emphasized the state of possessing nothing. rA-awy, originating in the Middle Kingdom Period, initially meant "poverty and poor", and expanded to signify poverty in various texts; with the addition of the preposition "in", it conveyed strength given to the poor. During the First Intermediate Period, sny-mnt denoted going through a crisis or suffering, possibly referencing a widespread disease or plague; it encompassed meanings of sickness, pain and anguish. The term sq-sn, introduced in Middle Kingdom texts, conveyed the notion of becoming miserable. sp-Xsy represented a temporal expression reflecting a period of misery or poverty, with Xsy indicating distress or misery. The term qsnt, appearing in Middle Kingdom texts, held meanings of painful, difficult, harsh, miserable, emaciated and in bad condition; its related form qsn denoted anxiety and turmoil. Finally, tp-qsn encapsulated the essence of misery and unhappiness. In essence, these expressions provide linguistic insights into the multifaceted experience of poverty in ancient Egypt, illustrating the society's keen awareness of and efforts to address this pervasive challenge.

Keywords: poverty, poor, suffering, misery, painful, ancient Egypt

Procedia PDF Downloads 53
1907 Preference Heterogeneity as a Positive Rather Than Negative Factor towards Acceptable Monitoring Schemes: Co-Management of Artisanal Fishing Communities in Vietnam

Authors: Chi Nguyen Thi Quynh, Steven Schilizzi, Atakelty Hailu, Sayed Iftekhar

Abstract:

Territorial Use Rights for Fisheries (TURFs) have emerged as a promising tool for fisheries conservation and management. However, illegal fishing has undermined the effectiveness of TURFs, profoundly degrading global fish stocks and marine ecosystems. Conservation and management of fisheries therefore depend largely on the effective enforcement of fishing regulations, which requires co-enforcement by fishers. However, fishers tend to resist participating in monitoring, as their views on monitoring scheme design have not received adequate attention. Fishers' acceptance of a monitoring scheme is more likely if there is a mechanism allowing them to engage in the early planning and design stages. This study carried out a choice experiment with 396 fishers in Vietnam to elicit fishers' preferences for monitoring schemes and to estimate the relative importance fishers place on the key design elements. Preference heterogeneity was investigated using a Scale-Adjusted Latent Class Model that accounts for both preference and scale variance. Welfare changes associated with the proposed monitoring schemes were also examined. Five distinct preference classes are found, suggesting that no one-size-fits-all scheme suits all fishers. Although fishers prefer to be compensated more for their participation, compensation is not the driving element affecting fishers' choices; most fishers place higher value on other elements, such as institutional arrangements and monitoring capacity. Fishers' preferences are driven by their socio-demographic and psychological characteristics. Understanding how changes in design element levels affect fisher participation could provide policy makers with insights useful for tailoring monitoring schemes to the needs of different fisher classes.
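[Editorial illustration.] The core of a latent class choice model is that, within each class, choice probabilities follow a multinomial logit over class-specific utilities, and population probabilities mix these over class shares. The sketch below illustrates only that mechanic; the utilities and shares are hypothetical, not the study's estimates, and the scale-adjustment is omitted:

```python
import math

# Within-class multinomial logit plus mixing over latent classes.
# All numbers fed to these functions are illustrative.

def choice_probs(utilities):
    """Multinomial logit probabilities for one preference class."""
    exps = [math.exp(u) for u in utilities]
    s = sum(exps)
    return [e / s for e in exps]

def mixed_probs(class_shares, class_utilities):
    """Population choice probabilities: mixture over latent classes."""
    n_alts = len(class_utilities[0])
    mixed = [0.0] * n_alts
    for share, utils in zip(class_shares, class_utilities):
        for i, p in enumerate(choice_probs(utils)):
            mixed[i] += share * p
    return mixed
```

Estimation would search for the shares and utility weights that maximize the likelihood of the observed choices; the five classes reported above correspond to five such utility vectors.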

Keywords: design of monitoring scheme, enforcement, heterogeneity, illegal fishing, territorial use rights for fisheries

Procedia PDF Downloads 324
1906 Unknown Groundwater Pollution Source Characterization in Contaminated Mine Sites Using Optimal Monitoring Network Design

Authors: H. K. Esfahani, B. Datta

Abstract:

Groundwater is one of the most important natural resources in many parts of the world; however, it is widely polluted by human activities. Effective and reliable groundwater management and remediation strategies rely on characterization of groundwater pollution sources, in which measured data at monitoring locations are used to estimate the unknown pollutant source location and magnitude. However, accurately identifying the characteristics of contaminant sources is a challenging task due to uncertainties in predicting source flux injection, hydro-geological and geo-chemical parameters, and the concentration field measurements. Reactive transport of chemical species in contaminated groundwater systems, especially with multiple species, is a complex and highly non-linear geochemical process. Although sufficient concentration measurement data are essential to accurately identify source characteristics, available data are often sparse and limited in quantity. This inverse problem of characterizing unknown groundwater pollution sources is therefore often ill-posed, complex and non-unique. Different methods have been utilized to identify pollution sources; among them, the linked simulation-optimization approach is effective at obtaining acceptable results under uncertainty in complex real-life scenarios. In this approach, numerical flow and contaminant transport simulation models are externally linked to an optimization algorithm, with the objective of minimizing the difference between measured and estimated pollutant concentrations at observation locations. Because concentration measurement data are crucial for accurately estimating pollution source properties, optimal design of the monitoring network is essential to gather adequate measured data at the desired times and locations.
Due to budget and physical restrictions, an efficient and effective approach for groundwater pollutant source characterization is to design an optimal monitoring network, especially when only inadequate and arbitrary concentration measurement data are initially available. In this approach, preliminary concentration observations are used for a first identification of the source location, magnitude and duration of activity, and these results inform the monitoring network design. Feedback from the monitoring network is then used as input for sequential network design, improving the identification of the unknown source characteristics. To design an effective network of observation wells, optimization and interpolation techniques are used. The simulation model must accurately describe the aquifer in terms of hydro-geochemical parameters and boundary conditions; however, simulating the transport processes becomes complex when the pollutants are chemically reactive. Three-dimensional transient flow and reactive contaminant transport are considered. The proposed methodology uses HYDROGEOCHEM 5.0 (HGCH) as the simulation model for flow and transport with multiple chemically reactive species, and Adaptive Simulated Annealing (ASA) as the optimization algorithm in the linked simulation-optimization methodology. The aim of the present study is thus to develop a methodology for optimally designing an effective monitoring network for pollution source characterization with reactive species in polluted aquifers. The performance of the developed methodology will be evaluated for an illustrative polluted aquifer site, such as an abandoned mine site in Queensland, Australia.
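[Editorial illustration.] The linked simulation-optimization loop can be sketched with a deliberately trivial forward model standing in for HYDROGEOCHEM, and plain simulated annealing standing in for ASA: the optimizer proposes source magnitudes, the forward model predicts well concentrations, and the misfit to observations is minimized. Everything below (the decay law, the well distances) is hypothetical:

```python
import math
import random

def forward(magnitude, distances):
    """Hypothetical forward model: concentration decays as 1/(1+d)."""
    return [magnitude / (1.0 + d) for d in distances]

def misfit(magnitude, distances, observed):
    sim = forward(magnitude, distances)
    return sum((s - o) ** 2 for s, o in zip(sim, observed))

def anneal(distances, observed, steps=2000, seed=1):
    """Toy simulated annealing over the unknown source magnitude."""
    rng = random.Random(seed)
    x = 1.0
    best, best_f = x, misfit(x, distances, observed)
    temp = 1.0
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)
        df = misfit(cand, distances, observed) - misfit(x, distances, observed)
        if df < 0 or rng.random() < math.exp(-df / temp):
            x = cand
            f = misfit(x, distances, observed)
            if f < best_f:
                best, best_f = x, f
        temp *= 0.995  # cooling schedule
    return best
```

In the real methodology the decision variables also include source location and activity duration, and each misfit evaluation is a full reactive-transport simulation.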

Keywords: monitoring network design, source characterization, chemical reactive transport process, contaminated mine site

Procedia PDF Downloads 231
1905 Preoperative versus Postoperative Radiation Therapy in Patients with Soft Tissue Sarcoma of the Extremity

Authors: AliAkbar Hafezi, Jalal Taherian, Jamshid Abedi, Mahsa Elahi, Behnam Kadkhodaei

Abstract:

Background: Soft tissue sarcomas (STS) are generally treated with a combination of limb-preservation surgery and radiation therapy. Today, preoperative radiation therapy is favored for its accurate treatment volume and smaller field size. This study was therefore performed to compare preoperative with postoperative radiation therapy in patients with extremity STS. Methods: In this non-randomized clinical trial, patients with localized extremity STS referred to orthopedic clinics in Iran from 2021 to 2023 were studied. Patients were divided into two groups: preoperative and postoperative radiation therapy. The two groups were compared in terms of acute complications (wound dehiscence and infection) and late complications (limb edema, subcutaneous fibrosis and joint stiffness) and their severity, as well as local recurrence and other one-year outcomes. Results: A total of 80 patients with localized extremity STS were evaluated in the two treatment groups. The groups were matched in terms of age, sex, history of diabetes mellitus, hypertension, smoking, involved side, involved extremity, lesion location and tumor histopathology. Acute complications did not differ significantly between the two groups (P > 0.05). Of the late complications, only joint stiffness differed significantly between the groups (P < 0.001), and the severity of all three late complications was significantly higher in the postoperative radiation therapy group (P < 0.05). There was no significant difference between the two groups in the rate of local recurrence or other one-year outcomes (P > 0.05).
Conclusion: This study showed that in patients with localized extremity STS, the two therapeutic approaches of adjuvant and neoadjuvant radiation therapy did not differ significantly in terms of local recurrence and distant metastasis during the one-year follow-up period. Given the fewer late complications in the preoperative radiotherapy group, this approach can be a better choice than postoperative radiation therapy.

Keywords: soft tissue sarcoma, extremity, preoperative radiation therapy, postoperative radiation therapy

Procedia PDF Downloads 45
1904 Mucoadhesive Chitosan-Coated Nanostructured Lipid Carriers for Oral Delivery of Amphotericin B

Authors: S. L. J. Tan, N. Billa, C. J. Roberts

Abstract:

Oral delivery of amphotericin B (AmpB) could eliminate the constraints and side effects associated with intravenous administration, but remains challenging due to the physicochemical properties of the drug, which result in meagre bioavailability (0.3%). In an advanced formulation, 1) nanostructured lipid carriers (NLC) were formulated, as they can accommodate higher levels of cargo and restrict drug expulsion, and 2) a mucoadhesion feature was incorporated so as to impart sluggish transit of the NLC along the gastrointestinal tract and hence maximize uptake and improve the bioavailability of AmpB. The AmpB-loaded NLC formulation was successfully prepared via high-shear homogenisation and ultrasonication, and a chitosan coating was adsorbed onto the formed NLC. Physical properties of the formulations, namely particle size, zeta potential, encapsulation efficiency (%EE), aggregation state and mucoadhesion, as well as the effect of variable pH on the integrity of the formulations, were examined. The particle size of the freshly prepared AmpB-loaded NLC was 163.1 ± 0.7 nm, with a negative surface charge, and remained essentially stable over 120 days. Adsorption of chitosan caused a significant increase in particle size to 348.0 ± 12 nm, with the zeta potential shifting towards positive values. Interestingly, the chitosan-coated AmpB-loaded NLC (ChiAmpB NLC) showed a significant decrease in particle size upon storage, suggesting an ‘anti-Ostwald ripening’ effect. The AmpB-loaded NLC formulation showed a %EE of 94.3 ± 0.02%, and incorporation of chitosan increased the %EE significantly, to 99.3 ± 0.15%. This suggests that the chitosan confers stability on the NLC formulation, interacting with the anionic segment of the NLC and preventing drug leakage. AmpB in both the NLC and ChiAmpB NLC showed polyaggregation, which is the non-toxic conformation. The mucoadhesiveness of the ChiAmpB NLC formulation was observed in both acidic (pH 5.8) and near-neutral (pH 6.8) conditions, as opposed to the AmpB-loaded NLC formulation. Hence, the incorporation of chitosan not only imparted a mucoadhesive property but also protected against the expulsion of AmpB, making ChiAmpB NLC well-primed as a potential oral delivery system for AmpB.

Keywords: Amphotericin B, mucoadhesion, nanostructured lipid carriers, oral delivery

Procedia PDF Downloads 162
1903 A Case Study of Deep Learning for Disease Detection in Crops

Authors: Felipe A. Guth, Shane Ward, Kevin McDonnell

Abstract:

In precision agriculture, one of the main tasks is the automated detection of diseases in crops. Machine learning algorithms have been studied for such tasks in recent decades in view of their potential to improve the economic outcomes that automated disease detection can attain over crop fields. The latest generation of deep convolutional neural networks has achieved significant results in image classification. Accordingly, this work tested a deep convolutional neural network architecture for the detection of diseases in different types of crops. A data augmentation strategy was used to meet the data requirements of the algorithm, which was implemented with a deep learning framework. Two test scenarios were deployed: the first trained a neural network on images extracted from a controlled environment, while the second used images from both the field and the controlled environment. The results evaluate the generalisation capacity of the neural networks with respect to the two types of images, yielding a general classification accuracy of 59% in scenario 1 and 96% in scenario 2.
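[Editorial illustration.] A data augmentation strategy of the kind mentioned above multiplies the effective dataset by applying label-preserving transforms to each image. The framework-free sketch below (toy images as nested pixel lists; real pipelines would also jitter colour, crop, etc.) shows the geometric core; it is not the authors' pipeline:

```python
# Geometric augmentation on a toy image (nested list of pixel rows):
# horizontal flip plus the three 90-degree rotations.

def hflip(img):
    return [list(reversed(row)) for row in img]

def rot90(img):
    """Rotate 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def augment(img):
    """Original plus horizontal flip and three rotations: 5 samples."""
    out = [img, hflip(img)]
    r = img
    for _ in range(3):
        r = rot90(r)
        out.append(r)
    return out
```

Each augmented sample keeps the original disease label, which is what lets a small field-image dataset train a large network without overfitting as quickly.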

Keywords: convolutional neural networks, deep learning, disease detection, precision agriculture

Procedia PDF Downloads 259
1902 Improved Blood Glucose-Insulin Monitoring with Dual-Layer Predictive Control Design

Authors: Vahid Nademi

Abstract:

Wearable medical devices equipped with a continuous glucose monitor (CGM) and an insulin pump are now widely used, yet advanced control methods are still needed to realize the full benefit of these devices. Unlike costly clinical trials, implementing effective insulin-glucose control strategies can provide significant benefits to patients suffering from chronic diseases such as diabetes. This study examines the key role of a two-layer insulin-glucose regulator based on a model predictive control (MPC) scheme, in which the patient’s predicted glucose profile is kept in compliance with the insulin level injected automatically through the insulin pump. This is achieved by an iterative optimization algorithm, the integrated perturbation analysis and sequential quadratic programming (IPA-SQP) solver, which handles uncertainties due to unexpected variations in glucose-insulin values and the body’s characteristics. The feasibility of the proposed control approach is also studied by means of numerical simulations of two case scenarios using measured data. The obtained results verify the superior and reliable performance of the proposed control scheme, with no negative impact on patient safety.
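[Editorial illustration.] The receding-horizon idea behind MPC can be shown with a deliberately tiny stand-in: a one-state linear glucose model, a short horizon, and brute-force search over a few insulin doses per step in place of the IPA-SQP solver. The model coefficients and dose grid are hypothetical:

```python
from itertools import product

def simulate(g, doses, a=0.9, b=-5.0, meal=2.0):
    """Hypothetical linear dynamics: g' = a*g + b*u + meal."""
    traj = []
    for u in doses:
        g = a * g + b * u + meal
        traj.append(g)
    return traj

def mpc_step(g, target=100.0, horizon=3, dose_grid=(0.0, 0.5, 1.0)):
    """Pick the first dose of the cheapest dose sequence over the horizon."""
    best_cost, best_first = float("inf"), 0.0
    for seq in product(dose_grid, repeat=horizon):
        cost = sum((x - target) ** 2 for x in simulate(g, seq))
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first  # apply this dose, then re-plan at the next sample
```

Only the first dose of the optimal sequence is applied; the plan is recomputed at every CGM sample, which is what gives MPC its robustness to model mismatch.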

Keywords: blood glucose monitoring, insulin pump, predictive control, optimization

Procedia PDF Downloads 136
1901 Electromyography Analysis during Walking and Seated Stepping in the Elderly

Authors: P. Y. Chiang, Y. H. Chen, Y. J. Lin, C. C. Chang, W. C. Hsu

Abstract:

The number of elderly people in the world population is increasing, and with it the number of falls among older people. The ageing process is associated with decreasing muscle strength and an increasing risk of falling. Because the effects of seated stepping training on walking performance in the elderly remain unclear, the main purpose of the proposed study is to perform electromyography (EMG) analysis during walking and seated stepping in the elderly. Four surface EMG electrodes were attached to the lower limb muscles, namely the vastus lateralis (VL) and gastrocnemius (GT) of both sides. Before the test, the maximal voluntary contraction (MVC) of each muscle was obtained using manual muscle testing. The analog raw EMG signals were digitized at a sampling frequency of 2000 Hz; the signals were then full-wave rectified, and the linear envelope was calculated. The stepping motion cycle was separated into two phases by the stepping timing (ST) and the pedal return timing (PRT). ST refers to the time when the pedal marker reached its highest point, indicating that the contra-lateral leg was about to release the pedal. PRT refers to the time when the pedal marker reached its lowest point, indicating that the contra-lateral leg was about to step on the pedal. We assumed that ST plays the same role as initial contact during walking, and PRT the same role as toe-off. The period from ST to the next PRT is called the pushing phase (PP), during which the leg steps against resistance; we compare this phase with the stance phase in level walking. The period from PRT to the next ST is called the returning phase (RP), during which the leg moves without resistance; we compare this phase with the swing phase in level walking. VL and GT muscular activation showed similar patterns on both sides, and this ability may transfer to that needed during the loading response, mid-stance, and terminal swing phases.
Users needed to make more effort in stepping than in walking at similar timings; thus, strengthening the VL and GT may help improve walking endurance and efficiency in the elderly.
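The EMG processing pipeline described above (full-wave rectification followed by a linear envelope, normalized to MVC) can be sketched as below. The 2000 Hz sampling rate comes from the abstract; the 100 ms smoothing window and the toy signal are illustrative assumptions, not values from the study.

```python
import numpy as np

FS = 2000  # sampling frequency in Hz, as stated in the abstract

def linear_envelope(raw_emg, fs=FS, window_ms=100):
    """Full-wave rectify the EMG signal and smooth it with a moving-average
    window (window length is an assumed parameter) to obtain the linear envelope."""
    rectified = np.abs(raw_emg - np.mean(raw_emg))  # remove DC offset, then rectify
    win = int(fs * window_ms / 1000)
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

def normalize_to_mvc(envelope, mvc_value):
    """Express the envelope as a percentage of maximal voluntary contraction."""
    return 100.0 * envelope / mvc_value

# toy example: a burst of muscle activity embedded in baseline noise
rng = np.random.default_rng(0)
t = np.arange(0, 2.0, 1 / FS)
burst = (t > 0.5) & (t < 1.0)
emg = rng.normal(0, 0.05, t.size) + burst * rng.normal(0, 0.5, t.size)
env = linear_envelope(emg)
activation = normalize_to_mvc(env, mvc_value=env.max())
```

In a real analysis the MVC value would come from the manual muscle testing trial rather than from the trace itself.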

Keywords: elderly, electromyography, seated stepping, walking

Procedia PDF Downloads 221
1900 Truck Scheduling Problem in a Cross-Dock Centre with Fixed Due Dates

Authors: Mohsen S. Sajadieh, Danyar Molavi

Abstract:

In this paper, a truck scheduling problem is investigated at a two-touch cross-docking centre with due dates for outbound trucks as a hard constraint. The objective is to minimize the total cost, comprising the penalty and delivery costs of delayed shipments. The sequence of unloading shipments is considered, and it is assumed that shipments are sent to the shipping dock doors immediately after unloading, with a First-In-First-Out (FIFO) policy for loading the shipments. A mixed-integer programming model is developed for the proposed problem. Two meta-heuristic algorithms, a genetic algorithm (GA) and variable neighborhood search (VNS), are developed to solve medium- and large-scale instances. The numerical results show that an increase in the due dates for outbound trucks has a crucial impact on reducing the penalty costs of delayed shipments. In addition, as the due dates increase, the objective function improves on average in comparison with the situation in which the cross-dock is multi-touch and shipments are sent to the shipping dock doors only after the whole inbound truck has been unloaded.
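A minimal single-door reading of the objective can be sketched as follows: trucks are unloaded in sequence, each shipment is forwarded immediately (FIFO), and any completion past a fixed due date accrues penalty and delivery costs. The processing times, due dates, and rates are hypothetical toy values, and exhaustive search merely stands in for the paper's GA/VNS on this tiny instance.

```python
from itertools import permutations

def schedule_cost(unload_order, proc_times, due_dates, penalty=10.0, delivery=2.0):
    """Cost of one unloading sequence at a single inbound door: under the
    FIFO assumption each shipment leaves as soon as it is unloaded, so a
    shipment's completion time is the cumulative unloading time; only
    tardy (delayed) shipments incur penalty plus delivery cost."""
    t, cost = 0.0, 0.0
    for truck in unload_order:
        t += proc_times[truck]
        tardiness = max(0.0, t - due_dates[truck])
        cost += (penalty + delivery) * tardiness
    return cost

# toy instance with hypothetical processing times and fixed due dates
proc = {0: 3.0, 1: 2.0, 2: 4.0}
due = {0: 9.0, 1: 2.0, 2: 6.0}

# exhaustive search stands in for GA/VNS on this 3-truck instance
best = min(permutations(proc), key=lambda o: schedule_cost(o, proc, due))
```

Here the optimal order meets every due date exactly, so the optimal cost is zero; tightening any due date would make delays, and hence penalties, unavoidable.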

Keywords: cross-docking, truck scheduling, fixed due date, door assignment

Procedia PDF Downloads 404
1899 Evaluation of Features Extraction Algorithms for a Real-Time Isolated Word Recognition System

Authors: Tomyslav Sledevič, Artūras Serackis, Gintautas Tamulevičius, Dalius Navakauskas

Abstract:

This paper presents a comparative evaluation of feature extraction algorithms for a real-time isolated word recognition system based on FPGA. The Mel-frequency cepstral, linear frequency cepstral, and linear predictive coefficients, as well as the linear predictive cepstral coefficients, were implemented in a hardware/software design. The proposed system was investigated in the speaker-dependent mode for 100 different Lithuanian words. The robustness of the feature extraction algorithms was tested by recognizing speech records at different signal-to-noise ratios. The experiments on clean records show the highest accuracy for the Mel-frequency cepstral and linear frequency cepstral coefficients. For records with a 15 dB signal-to-noise ratio, the linear predictive cepstral coefficients give the best results. The hardware and software parts of the system are clocked at 50 MHz and 100 MHz, respectively. For classification, a pipelined dynamic time warping core was implemented. The proposed word recognition system satisfies real-time requirements and is suitable for applications in embedded systems.
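The classification step rests on dynamic time warping (DTW), which aligns two feature sequences of different lengths. A minimal software sketch of the standard DTW recurrence is below; the Lithuanian word templates and the scalar features are toy stand-ins for real frames of cepstral coefficients, and the FPGA core would pipeline this same recurrence in hardware.

```python
def euclidean(x, y):
    """Distance between two frames; works for scalars or equal-length
    feature vectors (e.g. frames of cepstral coefficients)."""
    if isinstance(x, (int, float)):
        return abs(x - y)
    return sum((xi - yi) ** 2 for xi, yi in zip(x, y)) ** 0.5

def dtw_distance(a, b, dist=euclidean):
    """Classic O(len(a)*len(b)) dynamic time warping: D[i][j] is the best
    alignment cost of the first i frames of a with the first j frames of b."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# speaker-dependent recognition: pick the template with the smallest warped distance
templates = {"vienas": [1, 3, 5, 5, 2], "du": [2, 2, 4, 1, 0]}  # toy "feature" sequences
utterance = [1, 3, 3, 5, 5, 2]
recognized = min(templates, key=lambda w: dtw_distance(templates[w], utterance))
```

Because DTW stretches and compresses the time axis, the slightly elongated utterance still matches its template with zero cost.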

Keywords: isolated word recognition, features extraction, MFCC, LFCC, LPCC, LPC, FPGA, DTW

Procedia PDF Downloads 496
1898 A Crop Growth Subroutine for Watershed Resources Management (WRM) Model 1: Description

Authors: Kingsley Nnaemeka Ogbu, Constantine Mbajiorgu

Abstract:

Vegetation has a marked effect on runoff and has become an important component in hydrologic models. The Watershed Resources Management (WRM) model, a process-based, continuous, distributed-parameter simulation model developed for hydrologic and soil erosion studies at the watershed scale, lacks a crop growth component. As such, the model assumes constant values for the vegetation and hydraulic parameters throughout the duration of a hydrologic simulation. Our approach is to develop a crop growth algorithm based on the original plant growth model used in the Environmental Policy Integrated Climate (EPIC) model. This paper describes the development of a single crop growth model capable of simulating all crops using unique parameter values for each crop. The simulated crop growth processes will reflect the vegetative seasonality of the natural watershed system. An existing model was employed for evaluating vegetative resistance from the hydraulic and vegetative parameters incorporated into the WRM model. The improved WRM model will be able to evaluate the seasonal variation of the vegetative roughness coefficient with depth of flow and will further enhance the hydrologic model's capability for accurate hydrologic studies.
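The EPIC-family plant growth models drive phenology with accumulated heat units. A sketch of that core mechanism is below, under the usual EPIC convention that daily heat units are the mean air temperature above a crop-specific base temperature, and that development is tracked by a heat-unit index (HUI) running from 0 at planting to 1 at maturity; the temperatures, base temperature, and potential heat units (PHU) are illustrative values, not parameters from the WRM work.

```python
def daily_heat_units(t_max, t_min, t_base):
    """EPIC-style daily heat units: mean air temperature above the
    crop's base temperature, never negative."""
    return max(0.0, (t_max + t_min) / 2.0 - t_base)

def heat_unit_index(daily_temps, t_base, phu):
    """Heat-unit index (HUI): the fraction of the crop's potential heat
    units (PHU) accumulated so far, capped at 1.0 (maturity). Returns
    one HUI value per simulated day."""
    hu, index = 0.0, []
    for t_max, t_min in daily_temps:
        hu += daily_heat_units(t_max, t_min, t_base)
        index.append(min(1.0, hu / phu))
    return index

# three hypothetical days of (t_max, t_min) for a crop with t_base=10 C, PHU=40
hui = heat_unit_index([(30, 20), (28, 18), (32, 22)], t_base=10.0, phu=40.0)
```

In a full subroutine, HUI would in turn scale leaf-area development and biomass accumulation, giving the seasonal variation in vegetative roughness that the improved WRM model targets.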

Keywords: runoff, roughness coefficient, PAR, WRM model

Procedia PDF Downloads 378
1897 Blind Channel Estimation for Frequency Hopping System Using Subspace Based Method

Authors: M. M. Qasaymeh, M. A. Khodeir

Abstract:

Subspace channel estimation methods have been studied widely. They depend on a subspace decomposition of the covariance matrix to separate the signal subspace from the noise subspace. The decomposition is normally done by either an Eigenvalue Decomposition (EVD) or a Singular Value Decomposition (SVD) of the Auto-Correlation Matrix (ACM); however, the subspace decomposition process is computationally expensive. In this paper, the multipath channel estimation problem for a Slow Frequency Hopping (SFH) system is considered using a noise-subspace-based method. An efficient method to estimate the multipath time delays is proposed by applying the MUltiple SIgnal Classification (MUSIC) algorithm, which uses the null space extracted by the Rank-Revealing LU (RRLU) factorization. The RRLU provides accurate information about the rank and the numerical null space, which makes it a valuable tool in numerical linear algebra. The proposed novel method approximately halves the computational complexity compared with RRQR-based methods while keeping the same performance. Computer simulations are included to demonstrate the effectiveness of the proposed scheme.
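The idea of MUSIC-based delay estimation can be sketched on a toy frequency-domain channel: each multipath delay appears as a complex exponential across the frequency bins, and the MUSIC pseudospectrum peaks where the delay steering vector is orthogonal to the noise subspace. This sketch uses the standard EVD (`eigh`) to extract the noise subspace, whereas the paper's contribution is to replace exactly that step with the cheaper RRLU factorization; the bin spacing, delays, and amplitudes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# frequency response of a two-path channel: H[k] = sum_p a_p * exp(-j*2*pi*k*df*tau_p)
N, df = 128, 100e3                      # number of bins, bin spacing in Hz (assumed)
taus = np.array([0.8e-6, 2.1e-6])       # true multipath delays in seconds (assumed)
amps = np.array([1.0, 0.7])
k = np.arange(N)
h = sum(a * np.exp(-2j * np.pi * k * df * t) for a, t in zip(amps, taus))
h += 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))  # measurement noise

# sample covariance from sliding M-length windows (spatial smoothing)
M = 16
X = np.array([h[i:i + M] for i in range(N - M + 1)]).T        # M x (N-M+1) snapshots
R = X @ X.conj().T / X.shape[1]

# noise subspace via EVD; the paper extracts this null space with RRLU instead
w, V = np.linalg.eigh(R)                # eigenvalues ascending
En = V[:, : M - len(taus)]              # eigenvectors spanning the noise subspace

def music_spectrum(tau):
    """MUSIC pseudospectrum: large where the steering vector for delay tau
    is (nearly) orthogonal to the estimated noise subspace."""
    a = np.exp(-2j * np.pi * np.arange(M) * df * tau)
    return 1.0 / (np.linalg.norm(En.conj().T @ a) ** 2)

grid = np.linspace(0, 4e-6, 2001)
p_vals = np.array([music_spectrum(t) for t in grid])
est = grid[int(np.argmax(p_vals))]      # strongest estimated delay
```

The pseudospectrum towers over its floor at the two true delays, which is what makes the subsequent peak-picking step robust.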

Keywords: frequency hopping, channel model, time delay estimation, RRLU, RRQR, MUSIC, LS-ESPRIT

Procedia PDF Downloads 410
1896 Quantifying the Impact of Intermittent Signal Priority Given to BRT on Ridership and Climate: A Case Study of Ahmadabad

Authors: Smita Chaudhary

Abstract:

Traffic in India is largely uncontrolled and characterized by chaotic conditions in which lane discipline is not followed. Bus Rapid Transit (BRT) has emerged as a viable option to enhance transportation capacity and provide increased levels of mobility and accessibility. At present, many intersections in Ahmadabad face congestion and delay at the signals due to the transit (BRT) lanes, and most of these intersections, despite being signalized, are operated manually because of the conflict between BRT buses and heterogeneous traffic. Though the BRTS in Ahmadabad has an exclusive lane of its own, with this come certain limitations that Ahmadabad is facing right now. At many intersections, because of these conflicts, interference, and congestion, both the heterogeneous traffic and the transit buses suffer delays of a remarkable 3-4 minutes at each intersection, which has become an issue of great concern. There is no provision for BRT bus priority, so the existing signals play only a minimal role in managing the traffic, which ultimately calls for manual operation. Daily BRTS ridership is falling sharply because commuters no longer find this transit mode time-saving in their routine, ultimately leading to an increased number of private vehicles, whose idling at intersections causes air and noise pollution. To bring these commuters back, the transit facilities need to be improved. A classified volume count survey and a travel time and delay survey were conducted, and a revised signal design was prepared for the whole study stretch, comprising three intersections and one roundabout; one intersection was then simulated to assess the effect of giving priority to BRT on the side-street queue length and on the travel time of heterogeneous traffic.
This paper aims at recommending changes to the signal cycle and the introduction of intermittent priority for transit buses, with simulation of an intersection in the study stretch under the proposed signal cycle using VISSIM, in order to make this transit amenity feasible and attractive for commuters in Ahmadabad.

Keywords: BRT, priority, ridership, signal, VISSIM

Procedia PDF Downloads 441
1895 Using Problem-Based Learning on Teaching Early Intervention for College Students

Authors: Chen-Ya Juan

Abstract:

In recent years, the increasing number of children with special needs has drawn considerable attention from scholars and experts in education and poses a harsh challenge for preschool teachers in the classroom. To protect the right to equal education for all children, enhance the quality of children's learning, and meet the needs of children with special needs, work as a special education paraprofessional has become one of the future employment paths for students in departments of early childhood care and education. Problem-based learning (PBL) is a problem-oriented form of instruction that differs from traditional instruction: the instructor first designed an ambiguous problem direction, and, drawing on basic knowledge of early intervention, students had to find clues to solve the problem they had defined themselves. The instruction totaled 20 hours, two hours per week. The primary purpose of this paper is to investigate the relationships among students' academic scores, self-awareness, learning motivation, learning attitudes, and early intervention knowledge. A total of 105 college students participated in this study, and 97 questionnaires were valid, an effective response rate of 90%. The participants comprised 95 females and two males, with an average age of 19 years. The questionnaire included 125 questions divided into four major dimensions: (1) self-awareness, (2) learning motivation, (3) learning attitudes, and (4) early intervention knowledge. The results indicated that (1) the scores were 58% for self-awareness, 64.9% for learning motivation, and 55.3% for learning attitudes; (2) after the instruction, early intervention knowledge increased from 38.4% to 64.2%; and (3) students' academic performance had a positive relationship with self-awareness (p < 0.05; R = 0.506), learning motivation (p < 0.05; R = 0.487), and learning attitudes (p < 0.05; R = 0.527).
The results imply that although students gained early intervention knowledge through PBL instruction, they had medium scores on self-awareness and learning attitudes and medium-high scores on learning motivation.
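The reported R values are Pearson correlation coefficients between paired student scores. A minimal sketch of that computation is below; the paired score lists are invented toy data, not values from the study's 97 questionnaires.

```python
def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two
    equal-length lists of paired scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# hypothetical paired scores: academic performance vs. learning-motivation scale
academic = [72, 65, 80, 58, 90, 77]
motivation = [60, 55, 70, 50, 85, 66]
r = pearson_r(academic, motivation)
```

With real survey data one would also report the p-value for each coefficient, as the abstract does.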

Keywords: college students, children with special needs, problem-based learning, learning motivation

Procedia PDF Downloads 157
1894 Conjugate Mixed Convection Heat Transfer and Entropy Generation of Cu-Water Nanofluid in an Enclosure with Thick Wavy Bottom Wall

Authors: Sanjib Kr Pal, S. Bhattacharyya

Abstract:

Mixed convection of a Cu-water nanofluid in an enclosure with a thick wavy bottom wall has been investigated numerically. A coordinate transformation method is used to transform the computational domain into an orthogonal coordinate system, and the governing equations in the computational domain are solved through a pressure-correction-based iterative algorithm. The fluid flow and heat transfer characteristics are analyzed for a wide range of Richardson numbers (0.1 ≤ Ri ≤ 5), nanoparticle volume concentrations (0.0 ≤ ϕ ≤ 0.2), amplitudes (0.0 ≤ α ≤ 0.1) of the wavy thick bottom wall, and wave numbers (ω) at a fixed Reynolds number. The obtained results show that the heat transfer rate increases remarkably upon adding the nanoparticles. The heat transfer rate depends on the wavy wall amplitude and wave number and decreases with increasing Richardson number for a fixed amplitude and wave number. The Bejan number and the entropy generation are determined to analyze the thermodynamic optimization of the mixed convection.
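The Bejan number mentioned at the end is conventionally defined as the fraction of the total entropy generation due to heat transfer irreversibility, Be = S_thermal / (S_thermal + S_friction); a trivial sketch with illustrative values (not results from this study) is:

```python
def bejan_number(s_thermal, s_friction):
    """Bejan number: share of total local entropy generation caused by
    heat transfer irreversibility. Be > 0.5 means heat transfer
    irreversibility dominates over fluid friction."""
    return s_thermal / (s_thermal + s_friction)

# hypothetical local entropy generation rates (arbitrary consistent units)
be = bejan_number(s_thermal=3.0, s_friction=1.0)
```

In the paper this ratio is evaluated field-wise from the computed temperature and velocity gradients.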

Keywords: conjugate heat transfer, mixed convection, nanofluid, wall waviness

Procedia PDF Downloads 254
1893 Pegylated Liposomes of Trans Resveratrol, an Anticancer Agent, for Enhancing Therapeutic Efficacy and Long Circulation

Authors: M. R. Vijayakumar, Sanjay Kumar Singh, Lakshmi, Hithesh Dewangan, Sanjay Singh

Abstract:

Trans-resveratrol (RES) is a natural molecule with proven cancer-preventive and therapeutic activities and no notable side effects. However, the therapeutic application of RES in disease management is limited by its rapid elimination from blood circulation and hence its short biological half-life in mammals. Therefore, the main objective of this study is to enhance its circulation time as well as its therapeutic efficacy using PEGylated liposomes. D-α-tocopheryl polyethylene glycol 1000 succinate (vitamin E TPGS) was applied as a steric surface-decorating agent to prepare RES liposomes by the thin-film hydration method. The prepared liposomes were evaluated by various state-of-the-art techniques: dynamic light scattering (DLS) for particle size and zeta potential, TEM for shape, differential scanning calorimetry (DSC) for interaction analysis, and XRD for crystalline changes of the drug. Encapsulation efficiency and in vitro drug release were determined by the dialysis bag method, and cancer cell viability was assessed by MTT assay. Pharmacokinetic studies were performed in Sprague Dawley rats. The prepared liposomes were found to be spherical in shape. The particle size and zeta potential of the prepared formulations varied from 64.5±3.16 to 262.3±7.45 nm and from -2.1 to 1.76 mV, respectively. The DSC study revealed the absence of any potential interaction, and the XRD study revealed the presence of the amorphous form of the drug in the liposomes. The entrapment efficiency was found to be 87.45±2.14%, and the drug release was controlled for up to 24 hours. A reduced minimum effective concentration in the MTT assay and a tremendous enhancement in the circulation time of the RES PEGylated liposomes compared with the pristine form reveal that sterically stabilized PEGylated liposomes can be an alternative tool for commercializing this molecule for chemopreventive and therapeutic applications in cancer.

Keywords: trans resveratrol, cancer nanotechnology, long circulating liposomes, bioavailability enhancement, liposomes for cancer therapy, PEGylated liposomes

Procedia PDF Downloads 589
1892 Studying the Effect of Different Sizes of Carbon Fiber on Locally Developed Copper Based Composites

Authors: Tahir Ahmad, Abubaker Khan, Muhammad Kamran, Muhammad Umer Manzoor, Muhammad Taqi Zahid Butt

Abstract:

Metal matrix composites (MMCs) are a class of weight-efficient structural materials that are becoming popular in engineering applications, especially in the electronics, aerospace, aircraft, packaging, and various other industries. This study focuses on the development of a carbon fiber reinforced copper matrix composite. Keeping in view the vast applications of metal matrix composites, this specific material is produced for its unique mechanical and thermal properties, i.e., high thermal conductivity and a low coefficient of thermal expansion at elevated temperatures. The carbon fibers were not pretreated but were coated with copper by electroless plating in order to increase the wettability of the carbon fiber with the copper matrix. Casting was chosen as the manufacturing route for the C-Cu composite. Four different compositions of the composite were developed by varying the amount of carbon fibers at 0.5, 1, 1.5, and 2 wt. % of the copper. The effect of varying carbon fiber content and sizes on the mechanical properties of the C-Cu composite is studied in this work. Tensile tests were performed on tensile specimens: the yield strength decreases with increasing fiber content, while the ultimate tensile strength increases with increasing fiber content. Rockwell hardness tests also showed an increasing trend with increasing carbon fiber content, with hardness numbers of 30.2, 37.2, 39.9, and 42.5 for samples 1, 2, 3, and 4, respectively. The microstructures of the specimens were examined under an optical microscope, and wear testing and SEM were also carried out to characterize the C-Cu matrix composite. Although casting is one possible route for the production of the C-Cu matrix composite, powder metallurgy may still be the better route to follow, as the wettability of the carbon fiber with the matrix would then be better.

Keywords: copper based composites, mechanical properties, wear properties, microstructure

Procedia PDF Downloads 364
1891 Luminescent Functionalized Graphene Oxide Based Sensitive Detection of Deadly Explosive TNP

Authors: Diptiman Dinda, Shyamal Kumar Saha

Abstract:

In the 21st century, sensitive and selective detection of trace amounts of explosives has become a serious problem. Nitro compounds and their derivatives are used worldwide to prepare different explosives. Recently, TNP (2,4,6-trinitrophenol) has become the most commonly used constituent for preparing powerful explosives all over the world; it is even more powerful than TNT or RDX. As explosives are electron-deficient in nature, it is very difficult to detect one separately from a mixture, and TNP's tremendous water solubility makes its detection in water in the presence of other explosives very challenging. Simple instrumentation, cost-effectiveness, speed, and high sensitivity have made fluorescence-based optical sensing a grand success compared to other techniques. Graphene oxide (GO), with a large number of epoxy groups, incorporates localized nonradiative electron-hole centres on its surface and therefore shows only very weak fluorescence. In this work, GO is functionalized with 2,6-diaminopyridine to remove those epoxy groups through an SN2 reaction. This turns GO into a bright blue luminescent fluorophore (DAP/rGO) that shows an intense PL spectrum at ∼384 nm when excited at a 309 nm wavelength. We have also characterized the material by FTIR, XPS, UV, XRD, and Raman measurements. Using this fluorophore, a large fluorescence quenching (96%) is observed after the addition of only 200 µL of 1 mM TNP in water solution; other nitro explosives give very moderate PL quenching compared to TNP. Such high selectivity is related to the operation of a FRET mechanism from the fluorophore to TNP during the PL quenching experiment. TCSPC measurements also reveal that the lifetime of DAP/rGO drastically decreases from 3.7 to 1.9 ns after the addition of TNP. Our material is also sensitive to TNP down to the 125 ppb level. Finally, we believe that this graphene-based luminescent material will open up a new class of sensing materials for detecting trace amounts of explosives in aqueous solution.
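The two headline numbers in the abstract are connected by two standard textbook relations: the quenching efficiency from steady-state intensities, 1 - F/F0, and the FRET efficiency from donor lifetimes, E = 1 - τ_DA/τ_D. A sketch using the lifetimes reported above (3.7 ns dropping to 1.9 ns) is:

```python
def fret_efficiency_from_lifetime(tau_da, tau_d):
    """FRET efficiency from donor lifetimes measured with (tau_da) and
    without (tau_d) the acceptor: E = 1 - tau_da / tau_d."""
    return 1.0 - tau_da / tau_d

def quenching_efficiency(f0, f):
    """Steady-state quenching efficiency from fluorescence intensities
    before (f0) and after (f) adding the quencher."""
    return 1.0 - f / f0

# lifetimes reported in the abstract: 3.7 ns -> 1.9 ns after adding TNP
e_fret = fret_efficiency_from_lifetime(1.9, 3.7)   # roughly 0.49
```

The lifetime-based FRET efficiency (~49%) being well below the 96% intensity quenching is consistent with an additional static (ground-state complex) contribution on top of the dynamic FRET pathway.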

Keywords: graphene, functionalization, fluorescence quenching, FRET, nitroexplosive detection

Procedia PDF Downloads 440
1890 Hydroinformatics of Smart Cities: Real-Time Water Quality Prediction Model Using a Hybrid Approach

Authors: Elisa Coraggio, Dawei Han, Weiru Liu, Theo Tryfonas

Abstract:

Water is one of the most important resources for human society. The world is currently undergoing a wave of urban growth, and pollution problems have a great impact. Monitoring water quality is a key task for the future of the environment and the human species. In recent times, researchers are using Smart City technologies to try to mitigate the problems generated by population growth in urban areas. The availability of huge amounts of data collected by a pervasive urban IoT can increase the transparency of decision making. Several services have already been implemented in Smart Cities, but more and more services will be involved in the future, and water quality monitoring can successfully be implemented in the urban IoT. The combination of water quality sensors, cloud computing, smart city infrastructure, and IoT technology can lead to a bright future for environmental monitoring. In the past decades, much effort has been put into monitoring and predicting water quality using traditional approaches based on manual collection and laboratory-based analysis, which are slow and laborious. The present study proposes a methodology for implementing a water quality prediction model using artificial intelligence techniques and compares the results obtained with different algorithms. Furthermore, a 3D numerical model will be created using the software D-Water Quality, and the simulation results will be used as a training dataset for the artificial intelligence algorithm. This study derives the methodology and demonstrates its implementation based on information and data collected at the Floating Harbour in the city of Bristol (UK). The city of Bristol is blessed with the Bristol-Is-Open infrastructure, which includes a Wi-Fi network and virtual machines, and it was named the UK's smartest city in 2017.
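The proposed pipeline, training a data-driven predictor on simulated water quality output, can be sketched with the simplest possible baseline: a least-squares linear model fitted to a synthetic stand-in for the D-Water Quality training set. The predictors (temperature, turbidity), the target (dissolved oxygen), and all coefficients are illustrative assumptions, not the paper's variables or results.

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic stand-in for a simulated training set: predict dissolved oxygen
# (mg/L) from water temperature (C) and turbidity (NTU); entirely illustrative
n = 200
temp = rng.uniform(5, 25, n)
turbidity = rng.uniform(0, 50, n)
do = 12.0 - 0.25 * temp - 0.04 * turbidity + rng.normal(0, 0.1, n)

# ordinary least squares as the simplest "AI" baseline the fancier
# algorithms in the study would be compared against
X = np.column_stack([np.ones(n), temp, turbidity])
coef, *_ = np.linalg.lstsq(X, do, rcond=None)

def predict(t, turb):
    """Predict dissolved oxygen for a new (temperature, turbidity) pair."""
    return coef[0] + coef[1] * t + coef[2] * turb
```

The same train/predict structure carries over unchanged when the linear model is swapped for a neural network or other machine learning algorithm, which is the comparison the study performs.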

Keywords: artificial intelligence, hydroinformatics, numerical modelling, smart cities, water quality

Procedia PDF Downloads 188
1889 BFDD-S: Big Data Framework to Detect and Mitigate DDoS Attack in SDN Network

Authors: Amirreza Fazely Hamedani, Muzzamil Aziz, Philipp Wieder, Ramin Yahyapour

Abstract:

In recent years, software-defined networking has come into the sight of many network designers as a successor to traditional networking. Unlike traditional networks, where the control and data planes are engaged together within a single device in the network infrastructure, such as a switch or router, the two planes are kept separate in software-defined networks (SDNs): all critical decisions about packet routing are made on the network controller, and the data plane devices forward the packets based on these decisions. This type of network is vulnerable to DDoS attacks, which degrade the overall functioning and performance of the network by continuously injecting fake flows into it. This places a substantial burden on the controller side and ultimately leads to the inaccessibility of the controller and a lack of network service for legitimate users. Thus, the protection of this novel network architecture against denial-of-service attacks is essential. In the world of cybersecurity, attacks and new threats emerge every day, and it is essential to have tools capable of managing and analyzing all this new information to detect possible attacks in real time. These tools should provide a comprehensive solution to automatically detect, predict, and prevent abnormalities in the network. Big data encompasses a wide range of studies, but it mainly refers to the massive amounts of structured and unstructured data that organizations deal with on a regular basis; it concerns not only the volume of the data but also how data-driven information can be used to enhance decision-making processes, security, and the overall efficiency of a business. This paper presents an intelligent big data framework as a solution to handle the illegitimate traffic burden placed on the SDN network by numerous DDoS attacks.
The framework entails an efficient defence and monitoring mechanism against DDoS attacks, employing state-of-the-art machine learning techniques.
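As a much simpler baseline than the paper's Spark/Kafka machine-learning pipeline, flood detection is often illustrated with traffic entropy: a volumetric DDoS concentrates packets on few destinations, so the Shannon entropy of destination addresses in a time window drops sharply below its baseline. The window contents, addresses, and threshold below are all toy assumptions, not the paper's detector.

```python
import math
from collections import Counter

def entropy(items):
    """Shannon entropy (in bits) of the empirical distribution of items."""
    counts = Counter(items)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def is_attack_window(dst_ips, baseline_entropy, threshold=0.5):
    """Flag a traffic window whose destination-IP entropy falls well below
    the learned baseline: a flood concentrates traffic on few targets."""
    return entropy(dst_ips) < baseline_entropy - threshold

# toy windows of destination IPs seen at the controller
normal = [f"10.0.0.{i % 16}" for i in range(160)]            # spread over 16 hosts
attack = ["10.0.0.1"] * 150 + [f"10.0.0.{i % 16}" for i in range(10)]
base = entropy(normal)
```

A full framework would stream such per-window features into a trained classifier rather than using a single fixed threshold, which is precisely where the machine learning component comes in.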

Keywords: apache spark, apache kafka, big data, DDoS attack, machine learning, SDN network

Procedia PDF Downloads 169
1888 Tibyan Automated Arabic Correction Using Machine-Learning in Detecting Syntactical Mistakes

Authors: Ashwag O. Maghraby, Nida N. Khan, Hosnia A. Ahmed, Ghufran N. Brohi, Hind F. Assouli, Jawaher S. Melibari

Abstract:

The Arabic language is one of the most important languages. Learning it matters to many people around the world because of its religious and economic importance, and the real challenge lies in practicing it without grammatical or syntactical mistakes. This research focused on detecting and correcting syntactic mistakes in Arabic according to their position in the sentence, concentrating on two of the main syntactical rules in Arabic: the dual and the plural. The system analyzes each sentence in the text, using the Stanford CoreNLP morphological analyzer and a machine-learning approach, in order to detect syntactical mistakes and then correct them. A prototype of the proposed system was implemented and evaluated: it uses the support vector machine (SVM) algorithm to detect Arabic grammatical errors and corrects them using a rule-based approach. The prototype system achieves a fair accuracy of 81%. In general, it offers a set of useful grammatical suggestions that the user may overlook while writing due to a lack of familiarity with grammar or as a result of writing speed, such as alerting the user when a plural term is used to indicate one person.
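The detection step can be illustrated with a tiny linear SVM trained by subgradient descent on the regularized hinge loss (a Pegasos-style stand-in for the full SVM toolchain the paper would use). The two-dimensional "agreement features" and labels below are invented for illustration; in the real system the features would come from the Stanford CoreNLP morphological analysis of each sentence.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Minimal linear SVM: stochastic subgradient descent on the
    L2-regularized hinge loss. Labels y must be +1 / -1."""
    rng = np.random.default_rng(0)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            if y[i] * (X[i] @ w + b) < 1:       # inside margin: hinge subgradient
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:                               # outside margin: only shrink w
                w -= lr * lam * w
    return w, b

# hypothetical features per sentence: (number-agreement mismatch magnitude,
# normalized sentence length); +1 = dual/plural agreement error, -1 = correct
X = np.array([[1, 0.5], [1, 0.8], [2, 0.3], [1, 0.6],
              [0, 0.5], [0, 0.8], [0, 0.3], [0, 0.6]], dtype=float)
y = np.array([1, 1, 1, 1, -1, -1, -1, -1])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
```

Sentences flagged by the classifier would then be handed to the rule-based component for the actual correction, as described in the abstract.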

Keywords: Arabic language acquisition and learning, natural language processing, morphological analyzer, part-of-speech

Procedia PDF Downloads 153
1887 Inhalable Lipid-Coated-Chitosan Nano-Embedded Microdroplets of an Antifungal Drug for Deep Lung Delivery

Authors: Ranjot Kaur, Om P. Katare, Anupama Sharma, Sarah R. Dennison, Kamalinder K. Singh, Bhupinder Singh

Abstract:

Respiratory microbial infections, among the leading causes of death worldwide, are difficult to treat because the microbes reside deep inside the airways, where only a small fraction of a drug can reach after traditional oral or parenteral administration. As a result, high doses are required to maintain drug levels above the minimum inhibitory concentration (MIC) at the infection site, unfortunately leading to severe systemic side effects. Delivering antimicrobials directly to the respiratory tract therefore provides an attractive way out in such situations. In this context, the current study embarks on the systematic development of lung lipid-modified chitosan nanoparticles for the inhalation of voriconazole. Following the principles of quality by design, the chitosan nanoparticles were prepared by the ionic gelation method and further coated with the major lung lipid by a precipitation method. Factor screening studies were performed with a fractional factorial design, followed by optimization of the nanoparticles with a Box-Behnken design. The optimized formulation has a particle size range of 170-180 nm, a PDI of 0.3-0.4, a zeta potential of 14-17 mV, an entrapment efficiency of 45-50%, and a drug loading of 3-5%. The presence of the lipid coating was confirmed by FESEM, FTIR, and XRD. Furthermore, the nanoparticles were found to be safe up to 40 µg/ml on A549 and Calu-3 cell lines, and quantitative and qualitative uptake studies revealed the uptake of the nanoparticles into lung epithelial cells. Moreover, data from Spraytec and next-generation impactor studies confirmed the deposition of the nanoparticles in the lower airways, and the interaction of the nanoparticles with DPPC monolayers signifies their biocompatibility with the lungs. Overall, the study describes the methodology and potential of lipid-coated chitosan nanoparticles as a futuristic inhalation nanomedicine for the management of pulmonary aspergillosis.

Keywords: dipalmitoylphosphatidylcholine, nebulization, DPPC monolayers, quality-by-design

Procedia PDF Downloads 143