Search results for: feature method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20002

16372 Studies on Separation of Scandium from Sulfate Environment Using Ion Exchange Technique

Authors: H. Hajmohammadi , A. H. Jafari, M. Eskandari Nasab

Abstract:

The ion exchange method was used to assess the adsorption of scandium from sulfate media prepared from laboratory-grade materials. The Taguchi method was employed to determine the optimum conditions for scandium adsorption. Results show that optimum adsorption of scandium from sulfate was obtained with Purolite C100 cationic resin in 0.1 g/l sulfuric acid at a scandium concentration of 2 g/l and 25 °C. The studies also showed that lowering the H₂SO₄ concentration and the aqueous-phase temperature increases Sc adsorption. Visual MINTEQ software was used to ascertain the possible cation types and the effect of the concentration of scandium ion species on scandium adsorption by cationic resins. The simulation results show that the scandium ion species are predominantly cationic, which is consistent with the experimental data.

Keywords: scandium, ion exchange resin, simulation, leach copper

Procedia PDF Downloads 142
16371 Spectroscopic Characterization of Indium-Tin Laser Ablated Plasma

Authors: Muhammad Hanif, Muhammad Salik

Abstract:

In the present work, we present optical emission studies of the indium (In)-tin (Sn) plasma produced by the fundamental (1064 nm) output of an Nd:YAG nanosecond pulsed laser. The experimentally observed line profiles of neutral indium (In I) and tin (Sn I) are used to extract the electron temperature (Te) using the Boltzmann plot method, whereas the electron number density (Ne) is determined from the Stark-broadened line profile method. Te is calculated by varying the distance from the target surface along the line of propagation of the plasma plume and also by varying the laser irradiance. In addition, we study the variation of Ne as a function of laser irradiance as well as of distance from the target surface.
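The Boltzmann plot method named above reduces to a linear least-squares fit: ln(Iλ/gA) plotted against the upper-level energy has slope -1/Te. A minimal sketch, using hypothetical line data rather than the authors' measurements:

```python
import math

def boltzmann_plot_temperature(lines):
    """Estimate electron temperature (eV) from spectral lines.
    Each line: (intensity I, wavelength lam, statistical weight g,
    transition probability A, upper-level energy E in eV).
    Fits ln(I*lam/(g*A)) against E; the slope is -1/Te."""
    xs = [E for (_, _, _, _, E) in lines]
    ys = [math.log(I * lam / (g * A)) for (I, lam, g, A, _) in lines]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -1.0 / slope  # Te in eV

# Synthetic lines generated at Te = 1.0 eV recover that temperature
# (wavelengths, weights, and A values here are made up):
Te_true = 1.0
lines = [(math.exp(-E / Te_true) * g * A / lam, lam, g, A, E)
         for (lam, g, A, E) in [(410.2, 2, 5e7, 3.0),
                                (451.1, 4, 1e8, 4.1),
                                (325.6, 2, 1.3e8, 5.2)]]
print(round(boltzmann_plot_temperature(lines), 3))  # → 1.0
```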

Keywords: indium-tin plasma, laser ablation, optical emission spectroscopy, electron temperature, electron number density

Procedia PDF Downloads 529
16370 Effect of Design Parameters on Porpoising Instability of a High Speed Planing Craft

Authors: Lokeswara Rao P., Naga Venkata Rakesh N., V. Anantha Subramanian

Abstract:

It is important to estimate, predict, and avoid the dynamic instability of high-speed planing craft. Design parameters such as the location of the center of gravity relative to the dynamic lift centre and the length-to-beam ratio of the craft are known to influence the tendency to porpoise. This paper analyzes the hydrodynamic performance on the basis of the semi-empirical Savitsky method and also estimates it by numerical simulations based on the Reynolds-Averaged Navier-Stokes (RANS) equations using a commercial code, namely STAR-CCM+. Through the same numerical simulation, considering dynamic equilibrium, the paper examines the changing running trim that results in porpoising. Some interesting results emerge from the study, enabling early detection of the instability.

Keywords: CFD, planing hull, porpoising, Savitsky method

Procedia PDF Downloads 180
16369 Structural-Geotechnical Effects of the Foundation of a Medium-Height Structure

Authors: Valentina Rodas, Luis Almache

Abstract:

The interaction effects between the existing soil and the substructure of a 5-story building with one basement level were evaluated, validating the structural-geotechnical concepts through the method of impedance factors with a finite element program. The continuous wall-type foundation had a constant thickness and followed inclined and orthogonal directions, while the ground had homogeneous, medium-type characteristics. The soil considered was type C according to the Ecuadorian Construction Standard (NEC); the corresponding foundation reached a depth of 4.00 meters with a basement wall thickness of 40 centimeters. This project is part of a mid-rise building in the city of Azogues (Ecuador). The model implemented with springs showed a variation with respect to the embedded-base model, yielding conservative results.

Keywords: interaction, soil, substructure, springs, effects, modeling, embedment

Procedia PDF Downloads 230
16368 On Confidence Intervals for the Difference between Inverse of Normal Means with Known Coefficients of Variation

Authors: Arunee Wongkhao, Suparat Niwitpong, Sa-aat Niwitpong

Abstract:

In this paper, we propose two new confidence intervals for the difference between the inverses of normal means with known coefficients of variation. The first is constructed from the generalized confidence interval approach; the second is based on the closed-form method of variance estimation. We examine the performance of these confidence intervals in terms of coverage probabilities and expected lengths via Monte Carlo simulation.
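The Monte Carlo evaluation described here has a generic structure: simulate many samples, build the interval on each, and count how often it covers the true parameter. The sketch below uses a plain Wald interval for a normal mean as a simplified stand-in, not the authors' generalized or closed-form intervals:

```python
import math
import random

def coverage(ci_fn, theta, n, reps=2000, seed=1):
    """Monte Carlo estimate of a confidence interval's coverage
    probability: the fraction of simulated samples whose interval
    contains the true parameter theta."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        sample = [rng.gauss(theta, 1.0) for _ in range(n)]
        lo, hi = ci_fn(sample)
        hits += lo <= theta <= hi
    return hits / reps

def wald_ci(sample, z=1.96):
    # plain 95% Wald interval for a normal mean (illustrative only)
    n = len(sample)
    m = sum(sample) / n
    s = math.sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))
    half = z * s / math.sqrt(n)
    return m - half, m + half

print(coverage(wald_ci, theta=2.0, n=30))  # close to the nominal 0.95
```

Expected length would be estimated the same way, averaging `hi - lo` over the replications.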

Keywords: coverage probability, expected length, inverse of normal mean, coefficient of variation, generalized confidence interval, closed form method of variance estimation

Procedia PDF Downloads 309
16367 Computer-Aided Diagnosis System Based on Multiple Quantitative Magnetic Resonance Imaging Features in the Classification of Brain Tumor

Authors: Chih Jou Hsiao, Chung Ming Lo, Li Chun Hsieh

Abstract:

Although brain tumors do not have a high incidence rate, their high mortality rate and poor prognosis make them a serious concern. On clinical examination, the grading of brain tumors depends on pathological features. However, histopathological analysis has weaknesses that can cause misgrading: interpretations can vary in the absence of a well-established definition, and the heterogeneity of malignant tumors makes it challenging to extract representative tissue in a surgical biopsy. With the development of magnetic resonance imaging (MRI), tumor grading can be accomplished by a noninvasive procedure. To further improve diagnostic accuracy, this study proposed a computer-aided diagnosis (CAD) system based on MRI features to provide suggestions for tumor grading. Gliomas are the most common type of malignant brain tumor (about 70%). This study collected 34 glioblastomas (GBMs) and 73 lower-grade gliomas (LGGs) from The Cancer Imaging Archive. After defining the regions of interest in the MRI images, multiple quantitative morphological features such as region perimeter, region area, compactness, the mean and standard deviation of the normalized radial length, and moment features were extracted from the tumors for classification. As a result, two of the five morphological features and three of the four image moment features achieved p values of <0.001, and the remaining moment feature had a p value of <0.05. The CAD system using the combination of all features achieved an accuracy of 83.18% in classifying the gliomas into LGG and GBM, with a sensitivity of 70.59% and a specificity of 89.04%. The proposed system can serve as a second reader for radiologists on clinical examinations.
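The morphological features named above have standard definitions: compactness is perimeter²/(4π·area), and the normalized radial length is the centroid-to-boundary distance scaled by its maximum. A hedged sketch for a polygonal ROI boundary, not the authors' implementation:

```python
import math

def shape_features(boundary):
    """Compactness and normalized radial length (NRL) statistics for
    an ROI given as an ordered list of (x, y) boundary vertices.
    Requires Python 3.8+ for math.dist."""
    n = len(boundary)
    # shoelace formula for area, edge sum for perimeter
    area = abs(sum(boundary[i][0] * boundary[(i + 1) % n][1]
                   - boundary[(i + 1) % n][0] * boundary[i][1]
                   for i in range(n))) / 2
    perim = sum(math.dist(boundary[i], boundary[(i + 1) % n])
                for i in range(n))
    cx = sum(x for x, _ in boundary) / n
    cy = sum(y for _, y in boundary) / n
    radii = [math.dist((cx, cy), p) for p in boundary]
    nrl = [r / max(radii) for r in radii]
    mean_nrl = sum(nrl) / n
    std_nrl = math.sqrt(sum((r - mean_nrl) ** 2 for r in nrl) / n)
    compactness = perim ** 2 / (4 * math.pi * area)
    return compactness, mean_nrl, std_nrl

# A square ROI: compactness 16/(4*pi) ≈ 1.27, above a circle's 1.0
print(shape_features([(0, 0), (1, 0), (1, 1), (0, 1)]))
```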

Keywords: brain tumor, computer-aided diagnosis, gliomas, magnetic resonance imaging

Procedia PDF Downloads 260
16366 Passive Attenuation of Nitrogen Species at Northern Mine Sites

Authors: Patrick Mueller, Alan Martin, Justin Stockwell, Robert Goldblatt

Abstract:

Elevated concentrations of inorganic nitrogen (N) compounds (nitrate, nitrite, and ammonia) are a ubiquitous feature of mine-influenced drainages due to the leaching of blasting residues and the use of cyanide in the milling of gold ores. For many mines, the management of N is a focus of environmental protection, so understanding the factors controlling the speciation and behavior of N is central to effective decision making. In this paper, the passive attenuation of ammonia and nitrite is described for three northern water bodies (two lakes and a tailings pond) influenced by mining activities. In two of the water bodies, inorganic N compounds originate from explosives residues in mine water and waste rock. The third water body is a decommissioned tailings impoundment, with N compounds largely originating from the breakdown of cyanide compounds used in the processing of gold ores. Empirical observations from water quality monitoring indicate that nitrification (the oxidation of ammonia to nitrate) occurs in all three waterbodies, where enrichment of nitrate occurs commensurately with ammonia depletion. The N species conversions in these systems occurred more rapidly than chemical oxidation kinetics permit, indicating that microbially mediated conversion was occurring despite the cool water temperatures. While nitrification of ammonia and nitrite to nitrate was the primary process, in all three waterbodies nitrite was consistently present at approximately 0.5 to 2.0% of total N, even following ammonia depletion. The persistence of trace amounts of nitrite under these conditions suggests the co-occurrence of denitrification processes in the water column and/or underlying substrates. The implications for N management in mine waters are discussed.

Keywords: explosives, mining, nitrification, water

Procedia PDF Downloads 319
16365 [Keynote Talk]: Analysis of One Dimensional Advection Diffusion Model Using Finite Difference Method

Authors: Vijay Kumar Kukreja, Ravneet Kaur

Abstract:

In this paper, a one-dimensional advection-diffusion model is analyzed using a finite difference method based on the Crank-Nicolson scheme. A practical problem from chemical engineering, the washing of a filter cake, is analyzed. The model is converted into dimensionless form. For the grid Ω × ω = [0, 1] × [0, T], the Crank-Nicolson scheme is used for the spatial derivatives and a forward difference scheme is used in the time domain. The scheme is found to be unconditionally convergent, stable, first-order accurate in time, and second-order accurate in the space domain. For a test problem, numerical results are compared with the analytical ones for different values of the parameters.
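A Crank-Nicolson step for a dimensionless advection-diffusion equation u_t + Pe·u_x = u_xx can be sketched as below: centred spatial differences averaged between time levels, with the implicit tridiagonal system solved by the Thomas algorithm. This is a generic illustration; the paper's exact scheme and boundary treatment may differ:

```python
def crank_nicolson_adv_diff(u0, pe, dx, dt, steps):
    """Crank-Nicolson for u_t + pe * u_x = u_xx on a uniform grid,
    with Dirichlet boundaries held at the initial end values."""
    n = len(u0)
    r = dt / (2 * dx * dx)      # half the diffusion number
    c = pe * dt / (4 * dx)      # half the advective Courant number
    u = list(u0)
    for _ in range(steps):
        # right-hand side: the explicit half of the CN average
        d = [0.0] * n
        d[0], d[-1] = u[0], u[-1]           # Dirichlet BCs
        for i in range(1, n - 1):
            d[i] = ((r + c) * u[i - 1] + (1 - 2 * r) * u[i]
                    + (r - c) * u[i + 1])
        # implicit half: tridiagonal matrix (identity rows at BCs)
        lo = [0.0] + [-(r + c)] * (n - 2) + [0.0]   # sub-diagonal
        di = [1.0] + [1 + 2 * r] * (n - 2) + [1.0]  # main diagonal
        up = [0.0] + [-(r - c)] * (n - 2) + [0.0]   # super-diagonal
        # Thomas algorithm: forward elimination, back substitution
        for i in range(1, n):
            w = lo[i] / di[i - 1]
            di[i] -= w * up[i - 1]
            d[i] -= w * d[i - 1]
        u[-1] = d[-1] / di[-1]
        for i in range(n - 2, -1, -1):
            u[i] = (d[i] - up[i] * u[i + 1]) / di[i]
    return u

# A linear profile is a steady state of pure diffusion (pe = 0),
# so the scheme should leave it unchanged:
u = crank_nicolson_adv_diff([i / 10 for i in range(11)], pe=0.0,
                            dx=0.1, dt=0.01, steps=50)
```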

Keywords: Crank-Nicolson scheme, Lax-Richtmyer theorem, stability, consistency, Peclet number, Gershgorin circle

Procedia PDF Downloads 223
16364 Application of Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Multipoint Optimal Minimum Entropy Deconvolution in Railway Bearings Fault Diagnosis

Authors: Yao Cheng, Weihua Zhang

Abstract:

Although the measured vibration signal contains rich information on machine health conditions, white noise interference and the discrete harmonics coming from the blades, shafts, and gear mesh make the fault diagnosis of rolling element bearings difficult. In order to overcome the interference of useless signals, a new fault diagnosis method combining Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Multipoint Optimal Minimum Entropy Deconvolution (MOMED) is proposed for the fault diagnosis of high-speed train bearings. First, the CEEMDAN technique is applied to adaptively decompose the raw vibration signal into a series of finite intrinsic mode functions (IMFs) and a residue. Compared with Ensemble Empirical Mode Decomposition (EEMD), CEEMDAN provides an exact reconstruction of the original signal and a better spectral separation of the modes, which improves the accuracy of fault diagnosis. An effective sensitivity index based on the Pearson correlation coefficients between the IMFs and the raw signal is adopted to select the sensitive IMFs that contain bearing fault information. The composite signal of the sensitive IMFs is used for further fault identification. Next, to identify the fault information precisely, MOMED is utilized to enhance the periodic impulses in the composite signal. As a non-iterative method, MOMED has better deconvolution performance than classical deconvolution methods such as Minimum Entropy Deconvolution (MED) and Maximum Correlated Kurtosis Deconvolution (MCKD). Third, envelope spectrum analysis is applied to detect the existence of a bearing fault. Simulated bearing fault signals with white noise and discrete harmonic interference are used to validate the effectiveness of the proposed method. Finally, the superiority of the proposed method is further demonstrated on high-speed train bearing fault datasets measured from a test rig. The analysis results indicate that the proposed method has strong practicability.
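The sensitivity-index step, selecting IMFs by their Pearson correlation with the raw signal and summing them into a composite, can be sketched as follows. Synthetic sinusoidal components stand in for real CEEMDAN output here, and the 0.3 threshold is an illustrative choice, not the paper's:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def select_sensitive_imfs(imfs, raw, threshold=0.3):
    """Keep the modes whose correlation with the raw signal exceeds
    the threshold, then sum them into the composite signal used for
    further (envelope) analysis."""
    keep = [imf for imf in imfs if abs(pearson(imf, raw)) >= threshold]
    return [sum(vals) for vals in zip(*keep)]

# A strong 5 Hz component is kept; a weak 37 Hz one is rejected:
t = [i / 100 for i in range(200)]
sig = [math.sin(2 * math.pi * 5 * x) for x in t]
weak = [0.05 * math.cos(2 * math.pi * 37 * x) for x in t]
raw = [s + w for s, w in zip(sig, weak)]
comp = select_sensitive_imfs([sig, weak], raw)
```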

Keywords: bearing, complete ensemble empirical mode decomposition with adaptive noise, fault diagnosis, multipoint optimal minimum entropy deconvolution

Procedia PDF Downloads 374
16363 The Wellness Wheel: A Tool to Reimagine Schooling

Authors: Jennifer F. Moore

Abstract:

The wellness wheel as a tool for school growth and change is currently being piloted by a startup school in Chicago, IL. In this case study, members of the school community engaged in the appreciative inquiry process to plan their organizational development around the wellness wheel. The wellness wheel (comprising physical, emotional, social, spiritual, environmental, cognitive, and financial wellness) is used as a planning tool by teachers, students, parents, and administrators. Through the appreciative inquiry method of change, the community is reflecting on their individual levels of wellness and developing organizational structures to ensure the well-being of children and adults. The goal of the case study is to test the appropriateness of appreciative inquiry (as a method) and the wellness wheel (as a tool) for school growth and development. The research is still in progress; findings of the case study will be available by the conference.

Keywords: education, schools, well being, wellness

Procedia PDF Downloads 178
16362 The Complete Modal Derivatives

Authors: Sebastian Andersen, Peter N. Poulsen

Abstract:

Basis projection is frequently applied in structural dynamic analysis. The purpose of the method is to improve computational efficiency, while maintaining high solution accuracy, by projecting the governing equations onto a small set of carefully selected basis vectors. The present work considers basis projection in kinematically nonlinear systems with a focus on two widely used types of basis vectors: the system mode shapes and their modal derivatives. The latter basis vectors are given special attention, since only approximate modal derivatives have been used until now. In the present work the complete modal derivatives, derived from perturbation methods, are presented and compared to the previously applied approximate modal derivatives. The correctness of the complete modal derivatives is illustrated with an example of a harmonically loaded, kinematically nonlinear structure modeled by beam elements.

Keywords: basis projection, finite element method, kinematic nonlinearities, modal derivatives

Procedia PDF Downloads 237
16361 Stress Variation of Underground Building Structure during Top-Down Construction

Authors: Soo-yeon Seo, Seol-ki Kim, Su-jin Jung

Abstract:

In the construction of a building, it is necessary to minimize the construction period and secure enough work space for stacking materials, especially in city areas. To this end, various top-down construction methods have been developed and are widely used in Korea. This paper investigates, through an analytical approach, the stress variation in the underground structure of a building constructed using SPS (Strut as Permanent System), a top-down method used in Korea. Various types of earth pressure distribution related to ground conditions were considered in the structural analysis of an example structure at each step of the excavation. From the analysis, the highest member forces acting on the beams were found when the ground was medium sandy soil, and a stress concentration was found in the corner areas.

Keywords: construction of building, top-down construction method, earth pressure distribution, member force, stress concentration

Procedia PDF Downloads 306
16360 Application of Federated Learning in the Health Care Sector for Malware Detection and Mitigation Using Software-Defined Networking Approach

Authors: A. Dinelka Panagoda, Bathiya Bandara, Chamod Wijetunga, Chathura Malinda, Lakmal Rupasinghe, Chethana Liyanapathirana

Abstract:

This research builds on the concepts of Federated Learning and Software-Defined Networking (SDN) to introduce an efficient malware detection technique and a mitigation mechanism, yielding a resilient and automated healthcare sector network system with the added feature of extended privacy preservation. With new malware attacks on hospital Integrated Clinical Environments (ICEs) emerging daily, the healthcare industry faces constant uncertainty about its operational continuity. The risks that accompany the array of opportunities offered daily by new medical device inventions and their connected coordination are not yet entirely understood by most healthcare operators and patients. This solution involves four clients, in the form of hospital networks with different geographical participation, to build up the federated learning experimentation architecture and reach the most reasonable accuracy rate with privacy preservation. While logistic regression with a cross-entropy loss performs the detection, SDN comes in handy in the second half of the research, providing malware mitigation based on policy implementation. The overall evaluation demonstrates a system that achieves accuracy with added privacy. It is no longer necessary to continue with traditional centralized systems that offer almost everything except privacy.
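Server-side aggregation in a federated setup like this is commonly a weighted average of the clients' model weights (FedAvg). A minimal sketch with hypothetical hospital weight vectors, assuming (the abstract does not say) that this standard aggregation rule is used:

```python
def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: the server combines client model weight
    vectors as a mean weighted by each client's number of local
    training samples; no raw data leaves the hospitals."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes))
            / total
            for i in range(dim)]

# Hypothetical logistic-regression weights from four hospital networks:
ws = [[0.2, 1.0], [0.4, 1.2], [0.2, 0.8], [0.4, 1.0]]
ns = [100, 100, 100, 100]
avg = federated_average(ws, ns)  # ≈ [0.3, 1.0]
```

With equal sample counts this reduces to a plain mean; unequal counts bias the global model toward the hospitals with more data.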

Keywords: software-defined network, federated learning, privacy, integrated clinical environment, decentralized learning, malware detection, malware mitigation

Procedia PDF Downloads 187
16359 Farmers’ Use of Indigenous Knowledge System (IKS) for Selected Arable Crops Production in Ondo State

Authors: A. M. Omoare, E. O. Fakoya

Abstract:

This study sought to determine the use of indigenous knowledge for selected arable crop production in Ondo State. A multistage sampling method was used, and 112 arable crop farmers were systematically selected. Data were analyzed using both descriptive and inferential statistics. The results showed that the majority of the sampled farmers were male (75.90%). About 75% were married with children. A large proportion of them (62.61%) were within the ages of 30-49 years. Most of them had spent about 10 years in farming (58.92%). The highest raw scores for use of indigenous knowledge were found for planting on mounds in yam production, use of native medicine and the scare-crow method for controlling birds in rice production, timely planting of locally developed resistant varieties in cassava production, and soaking of maize seeds in water to determine their viability, with raw scores of 313, 310, 305, 303, and 300 respectively, while the lowest raw score was obtained for the bell method of controlling birds in rice production, with a raw score of 210. The findings established that proverbs (59.8%) and taboos (55.36%) were the media most commonly used by arable crop farmers to transmit indigenous knowledge. The multiple regression analysis revealed that the age of the farmers and farming experience had a significant relationship with their use of indigenous knowledge, giving R² = 0.83 for the semi-log functional form of the equation, which is the lead equation. The policy implication is that indigenous knowledge should provide a basis for designing modern technologies to enhance sustainable agricultural development.

Keywords: arable crop production, extent of use, indigenous knowledge, farming experience

Procedia PDF Downloads 571
16358 Automatic Identification of Pectoral Muscle

Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina

Abstract:

Mammography is an image modality used worldwide to diagnose breast cancer, even in asymptomatic women. Due to their wide availability, mammograms can be used to measure breast density and to predict cancer development. Women with increased mammographic density have a four- to sixfold increase in their risk of developing breast cancer. Therefore, studies have been made to accurately quantify mammographic breast density. In clinical routine, radiologists perform image evaluations through the BIRADS (Breast Imaging Reporting and Data System) assessment. However, this method has inter- and intraindividual variability. An automatic, objective method to measure breast density could relieve the radiologist's workload by providing an initial opinion. However, the pectoral muscle is a high-density tissue with characteristics similar to fibroglandular tissue, which makes it hard to automatically quantify mammographic breast density; a pre-processing step is needed to segment the pectoral muscle, which may otherwise be erroneously quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract the pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique digital mammograms from São Paulo Medical School. This study was developed with ethical approval from the authors' institutions and national review panels under protocol number 3720-2010. An algorithm was developed on the Matlab® platform for the pre-processing of images, using image processing tools to automatically segment and extract the pectoral muscle from the mammograms. First, a thresholding technique was applied to remove non-biological information from the image. Then, the Hough transform was applied to find the limit of the pectoral muscle, followed by the active contour method, whose seed is placed at the limit of the pectoral muscle found by the Hough transform. An experienced radiologist also manually performed the pectoral muscle segmentation. The two methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison presented a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of the proposed segmentation method. The Bland-Altman statistics compared both methods with respect to the area (mm²) of the segmented pectoral muscle, showing data within the 95% confidence interval and supporting the accuracy of the segmentation relative to the manual method. Thus, the method proved to be accurate and robust, segmenting rapidly and free from intra- and inter-observer variability. It is concluded that the proposed method may be used reliably to segment the pectoral muscle in digital mammography in clinical routine. Segmentation of the pectoral muscle is very important for further quantification of the fibroglandular tissue volume present in the breast.
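The Jaccard index used to compare the manual and automatic segmentations is straightforward to compute on binary masks; an illustrative sketch with made-up masks:

```python
def jaccard_index(mask_a, mask_b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two binary
    segmentation masks given as flat 0/1 sequences."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0

# Two hypothetical 8-pixel masks: 4 pixels agree out of 6 marked
manual    = [0, 1, 1, 1, 0, 1, 1, 0]
automatic = [0, 1, 1, 0, 0, 1, 1, 1]
print(jaccard_index(manual, automatic))  # → 0.6666666666666666
```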

Keywords: active contour, fibroglandular tissue, hough transform, pectoral muscle

Procedia PDF Downloads 350
16357 Earnings vs Cash Flows: The Valuation Perspective

Authors: Megha Agarwal

Abstract:

This research paper compares earnings-based and cash-flow-based methods of valuing an enterprise. The theoretically equivalent methods based on earnings, such as the Residual Earnings Model (REM), the Abnormal Earnings Growth Model (AEGM), the Residual Operating Income Method (ReOIM), the Abnormal Operating Income Growth Model (AOIGM) and its extensions, and multipliers such as the price/earnings and price/book value ratios, or cash-flow-based models such as the Dividend Valuation Method (DVM) and the Free Cash Flow Method (FCFM), all provide different estimates of the valuation of the Indian corporate giant Reliance India Limited (RIL). An ex-post analysis of published accounting and financial data for four financial years, from 2008-09 to 2011-12, was conducted. A comparison of these valuation estimates with the actual market capitalization of the company shows that the complex accounting-based model AOIGM provides the closest forecasts. The different estimates may arise from inconsistencies in the discount rate, growth rates, and the other forecasted variables. Although the inputs for earnings-based models are available to investors and analysts through published statements, precise estimation of free cash flows may be better undertaken by internal management. Estimating value from more stable parameters such as residual operating income and RNOA could be considered superior to valuations from the more volatile return on equity.
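For reference, the Residual Earnings Model named above values equity as current book value plus discounted residual earnings, E_t - r·B_{t-1}. A minimal sketch under a clean-surplus, zero-payout assumption, with illustrative numbers rather than RIL data:

```python
def residual_earnings_value(book0, earnings, r):
    """Residual Earnings Model: equity value = current book value
    plus discounted residual earnings E_t - r * B_{t-1}, with book
    value rolled forward by clean-surplus accounting (zero payout
    assumed in this sketch)."""
    value, book = book0, book0
    for t, e in enumerate(earnings, start=1):
        re = e - r * book            # residual earnings in year t
        value += re / (1 + r) ** t   # discount back to today
        book += e                    # clean surplus, no dividends
    return value

# If earnings are exactly r * book each year, residual earnings are
# zero and value equals book value:
print(residual_earnings_value(100.0, [10.0, 11.0], r=0.10))  # → 100.0
```

Earnings above (below) the required return r on book value push the estimate above (below) book value, which is the sense in which REM and the multiplier approaches can diverge.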

Keywords: earnings, cash flows, valuation, Residual Earnings Model (REM)

Procedia PDF Downloads 376
16356 Improvement of Soft Clay Soil with Biopolymer

Authors: Majid Bagherinia

Abstract:

Lime and cement are frequently used as binders in the Deep Mixing Method (DMM) to improve soft clay soils. The most significant disadvantages of these materials are carbon dioxide emissions and the consumption of natural resources. In this study, three different biopolymers, guar gum, locust bean gum, and sodium alginate, were investigated for the improvement of soft clay using DMM. In the experimental study, the effects of the additive ratio and curing time on the Unconfined Compressive Strength (UCS) of stabilized specimens were investigated. According to the results, the UCS values of the specimens increased as the additive ratio and curing time increased. The most effective additive was sodium alginate, and the highest strength was obtained after 28 days.

Keywords: deep mixing method, soft clays, ground improvement, biopolymers, unconfined compressive strength

Procedia PDF Downloads 80
16355 Degradation of Neonicotinoid Insecticides (Acetamiprid and Imidacloprid) Using Biochar of Rice Husk and Fruit Peels

Authors: Mateen Abbas, Abdul Muqeet Khan, Sadia Bashir, Muhammad Awais Khalid, Aamir Ghafoor, Zara Hussain, Mashal Shahid

Abstract:

The irrational use of insecticides in everyday life has drawn attention worldwide to their harmful effects. To mitigate the toxic effects of insecticides on humans, the present study focused on the degradation/detoxification of the neonicotinoid insecticides imidacloprid and acetamiprid. Biocarbon from fruit peels (banana and watermelon) and biochar (activated or non-activated) from rice husk were utilized as adsorbents for the degradation of the selected pesticides. Both activated and non-activated biochar were prepared and then applied to the insecticides (acetamiprid and imidacloprid) at different concentrations (0.5 to 2.0 ppm) and dosages (1.0 to 2.5 g), and studied at different contact times (30-120 minutes). Reverse-phase high-performance liquid chromatography (RP-HPLC) coupled with a photodiode array detector was used to quantify the insecticides. The results showed that activated biochar of rice husk reduced the concentrations of both insecticides by 73%, while activated watermelon biocarbon degraded 72% of the imidacloprid and 56% of the acetamiprid. The results proved the efficiency of the method employed, and it was also inferred that a higher concentration of biocarbon resulted in a larger percentage of degradation. The applied method is cheap, easy, and accessible, and can be used to minimize pesticide residues in animal feed. Degradation using biochar proved to be a significant, eco-friendly, and economical method of reducing the toxicity of insecticides.

Keywords: insecticides, acetamiprid, imidacloprid, biochar, HPLC

Procedia PDF Downloads 153
16354 Effects of Surface Roughness on a Unimorph Piezoelectric Micro-Electro-Mechanical Systems Vibrational Energy Harvester Using Finite Element Method Modeling

Authors: Jean Marriz M. Manzano, Marc D. Rosales, Magdaleno R. Vasquez Jr., Maria Theresa G. De Leon

Abstract:

This paper discusses the effects of surface roughness on a cantilever beam vibrational energy harvester. A silicon sample was fabricated using MEMS fabrication processes. When etching silicon using deep reactive ion etching (DRIE) at large etch depths, rougher surfaces are observed as a result of increased process pressure, coil power, and helium backside cooling readings. To account for the effects of surface roughness on the characteristics of the cantilever beam, finite element method (FEM) modeling was performed using actual roughness data from the fabricated samples. It was found that when etching about 550 um of silicon, the root mean square roughness parameter, Sq, varies by 1 to 3 um (at 100 um thickness) across a 6-inch wafer. Given this Sq variation, FEM simulations predict an 8 to 148 Hz shift in the resonant frequency while having no significant effect on the output power. The significant shift in the resonant frequency implies that surface roughness from fabrication processes must be carefully considered when designing energy harvesters.
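The sensitivity of the resonant frequency to thickness loss can be illustrated with the textbook formula for a rectangular cantilever's fundamental bending mode. This is a hand-calculation sketch with hypothetical dimensions and material values, not the paper's FEM model:

```python
import math

def cantilever_f1(E, rho, L, h):
    """Fundamental bending resonance of a rectangular cantilever,
    f1 = (1.8751^2 / (2*pi*L^2)) * h * sqrt(E / (12*rho)),
    from Euler-Bernoulli theory with I/A = h^2/12. Thickness h
    enters linearly, so roughness-induced thickness loss shifts
    the resonance proportionally."""
    lam = 1.8751  # first root of the cantilever frequency equation
    return (lam ** 2 / (2 * math.pi * L ** 2)) * h * math.sqrt(E / (12 * rho))

# Hypothetical silicon beam, 3 mm long, nominally 100 um thick
# (SI units; E and rho are typical silicon values):
E, rho = 169e9, 2330.0
f_nom = cantilever_f1(E, rho, L=3e-3, h=100e-6)
f_thin = cantilever_f1(E, rho, L=3e-3, h=98e-6)  # 2 um lost to roughness
print(round(f_nom - f_thin, 1))  # frequency shift in Hz
```

Because f1 scales linearly with h, a 2% thickness loss gives a 2% frequency shift, which is the kind of sensitivity the FEM study quantifies.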

Keywords: deep reactive ion etching, finite element method, microelectromechanical systems, multiphysics analysis, surface roughness, vibrational energy harvester

Procedia PDF Downloads 121
16353 Structural and Optical Characterization of Silica@PbS Core–Shell Nanoparticles

Authors: A. Pourahmad, Sh. Gharipour

Abstract:

The present work describes the preparation and characterization of nanosized SiO2@PbS core-shell particles by a simple wet chemical route. The method forms silica spheres, followed by lead sulphide shell formation assisted by the successive ionic layer adsorption and reaction (SILAR) method. The final product was characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), UV-vis spectroscopy, infrared spectroscopy (IR), and transmission electron microscopy (TEM) experiments. The morphological studies revealed uniformity in the size distribution, with a core size of 250 nm and a shell thickness of 18 nm. The electron microscopy images also indicate the irregular morphology of the lead sulphide shell layer. The structural studies indicate a face-centered cubic system for the PbS shell, with no trace of impurities in the crystal structure.

Keywords: core-shell, nanostructure, semiconductor, optical property, XRD

Procedia PDF Downloads 299
16352 Theoretical Analysis of Photoassisted Field Emission near the Metal Surface Using Transfer Hamiltonian Method

Authors: Rosangliana Chawngthu, Ramkumar K. Thapa

Abstract:

A model calculation of the photoassisted field emission current (PFEC) using the transfer Hamiltonian method is presented here. Photons are incident on the surface of the metal with energy usually less than the work function of the metal under investigation. The incident radiation photoexcites the electrons to a final state which lies below the vacuum level, so the electrons remain confined within the metal surface. A strong static electric field is then applied to the surface of the metal, which causes the photoexcited electrons to tunnel through the surface potential barrier into the vacuum region and constitutes a considerable current called the photoassisted field emission current. The incident radiation, usually a laser beam, causes the transition of electrons from the initial state to the final state, and the matrix element for this transition is written. For the calculation of the PFEC, the transfer Hamiltonian method is used. The initial state wavefunction is calculated using the Kronig-Penney potential model. The effect of the matrix element will also be studied. An appropriate dielectric model for the surface region of the metal is used for the evaluation of the vector potential. A FORTRAN programme is used for the calculation of the PFEC. The results will be checked against experimental data and other theoretical results.

Keywords: photoassisted field emission, transfer Hamiltonian, vector potential, wavefunction

Procedia PDF Downloads 226
16351 Nonlinear Adaptive PID Control for a Semi-Batch Reactor Based on an RBF Network

Authors: Magdi. M. Nabi, Ding-Li Yu

Abstract:

Control of a semi-batch polymerization reactor using an adaptive radial basis function (RBF) neural network method is investigated in this paper. A neural network inverse model is used to estimate the valve position of the reactor, and the controlled system is identified with an RBF neural network identifier. The weights of the adaptive PID controller are adjusted online based on the identification of the plant and the self-learning capability of the RBFNN. A PID controller is used in the feedback loop to regulate the actual temperature by compensating the output of the neural network inverse model. Simulation results show that the proposed control scheme has strong adaptability and robustness, and that satisfactory control of the nonlinear system is achieved.
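As a rough illustration of the general RBF-based adaptive PID idea (not the authors' Chylla-Haase simulation), the sketch below identifies a toy first-order nonlinear plant online with an RBF network and uses the identified input sensitivity to adapt the PID gains. The plant model, gains, learning rates, and centre grid are all invented for the example.

```python
import numpy as np

class RBFIdentifier:
    """Minimal RBF network learning y(k+1) ~ f(y(k), u(k)) online."""
    def __init__(self, lr=0.1, width=0.5):
        g = np.linspace(-1.0, 1.0, 3)
        self.centers = np.array([[a, b] for a in g for b in g])
        self.width, self.lr = width, lr
        self.w = np.zeros(len(self.centers))

    def _phi(self, x):
        d2 = np.sum((self.centers - x) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def predict(self, y, u):
        return self.w @ self._phi(np.array([y, u]))

    def update(self, y, u, y_next):
        phi = self._phi(np.array([y, u]))
        self.w += self.lr * (y_next - self.w @ phi) * phi

    def sensitivity(self, y, u, eps=1e-4):
        # d(y_hat)/du, used in place of the unknown plant Jacobian
        return (self.predict(y, u + eps) - self.predict(y, u - eps)) / (2 * eps)

def plant(y, u):
    # toy nonlinear plant standing in for the reactor temperature loop
    return 0.8 * y + 0.2 * np.tanh(u)

kp, ki, kd, eta = 0.5, 0.1, 0.0, 0.05     # initial gains, adaptation rate
ident = RBFIdentifier()
y, e_int, e_prev, r = 0.0, 0.0, 0.0, 0.5  # state, integrator, setpoint
for _ in range(400):
    e = r - y
    e_int += e
    u = kp * e + ki * e_int + kd * (e - e_prev)
    y_next = plant(y, u)
    ident.update(y, u, y_next)            # online identification
    J = ident.sensitivity(y, u)
    kp += eta * e * J * e                 # gradient-style gain adaptation
    ki += eta * e * J * e_int
    kd += eta * e * J * (e - e_prev)
    e_prev, y = e, y_next
```

The gain-update law scales each correction by the identified sensitivity J, which is the role the RBF identifier plays in the scheme described above.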

Keywords: Chylla-Haase polymerization reactor, RBF neural networks, feed-forward, feedback control

Procedia PDF Downloads 702
16350 Virtual Team Management in Companies and Organizations

Authors: Asghar Zamani, Mostafa Falahmorad

Abstract:

Virtualization is adopted to combine and use the unique capabilities of employees, increasing productivity and agility in providing services regardless of location. Adapting to fast, continuous change and gaining maximum access to human resources are the reasons why virtualization is happening. The distance problem is solved by information technology; flexibility is the most important feature of virtualization, and information will be the main focus of virtualized companies. In this research, we used the window of opportunity created by Covid-19 to compare the productivity of companies that had already moved toward virtualized management before Covid-19 with those that only began planning and developing virtual-management infrastructure after the pandemic crisis occurred. The research process includes the assessment of financial metrics (profitability and customer satisfaction) and behavioral metrics (organizational culture and reluctance to change). In addition to financial and CRM KPIs, a questionnaire was devised to assess how managers' and employees' attitudes have been changing towards the migration to virtualization. The sample companies and questions were selected by consulting experts in the IT industry of Iran. This article concludes that companies that were open to virtualization on the basis of accurate strategic planning, or were willing to pay to train their employees for virtualization before the pandemic, are more agile in adapting to change and moving forward in a recession. The prospective companies in this research could not only compensate for the short-term loss from the first shock of Covid-19, but could also foresee the new needs of their customers sooner than their competitors, and consequently needed to employ new staff to meet the emerging demands. The findings were aligned with the literature review.
The results can be a wake-up call for business owners, especially in developing countries, to be more resilient toward modern management styles instead of continuing with traditional ones.

Keywords: virtual management, virtual organization, competitive advantage, KPI, profit

Procedia PDF Downloads 83
16349 Validation of a Placebo Method with Potential for Blinding in Ultrasound-Guided Dry Needling

Authors: Johnson C. Y. Pang, Bo Peng, Kara K. L. Reeves, Allan C. L. Fud

Abstract:

Objective: Dry needling (DN) has long been used as a treatment for various musculoskeletal pain conditions. However, the evidence level of existing studies is low due to methodological limitations: lack of randomization and inappropriate blinding are potentially the main sources of bias. A method that can differentiate clinical results of the targeted experimental procedure from its placebo effect is needed to enhance the validity of such trials. Therefore, this study aimed to validate a placebo ultrasound (US)-guided DN method for patients with knee osteoarthritis (KOA). Design: This is a randomized controlled trial (RCT). Ninety subjects (25 males and 65 females) aged between 51 and 80 (61.26 ± 5.57) with radiological KOA were recruited and randomly assigned to three groups by a computer program. Group 1 (G1) received real US-guided DN, Group 2 (G2) received placebo US-guided DN, and Group 3 (G3) was the control group. G1 and G2 subjects underwent the same US-guided DN procedure, except that the US monitor was turned off in G2, blinding the G2 subjects to the incorporation of faux US guidance. This arrangement created the placebo effect intended to permit comparison of their results with those of subjects who received actual US-guided DN. Outcome measures, including the visual analog scale (VAS) and the Knee injury and Osteoarthritis Outcome Score (KOOS) subscales of pain, symptoms, and quality of life (QOL), were analyzed by repeated-measures analysis of covariance (ANCOVA) for time and group effects. The data regarding the perception of receiving real or placebo US-guided DN were analyzed by the chi-squared test. Missing data were to be analyzed with the intention-to-treat (ITT) approach if more than 5% of the data were missing. Results: The placebo US-guided DN (G2) subjects had the same perception of receiving real US guidance during the advancement of DN (p = 0.128).
G1 showed significantly higher pain reduction (VAS and KOOS-pain) than G2 and G3 only at 8 weeks (both p < 0.05); there was no significant difference between G2 and G3 at 8 weeks (p > 0.05). Conclusion: Turning off the US monitor during the application of DN is a credible way of blinding participants and allows researchers to incorporate faux US guidance. The validated placebo US-guided DN technique can aid investigations of the short-term pain-reduction effects of US-guided DN for patients with KOA. Acknowledgment: This work was supported by the Caritas Institute of Higher Education [grant number IDG200101].
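The blinding check described above — comparing perceived guidance between the real and placebo US groups — is a standard chi-squared test on a 2×2 contingency table. A minimal sketch with hypothetical counts (the abstract does not report the raw table):

```python
from scipy.stats import chi2_contingency

# hypothetical 2x2 counts: rows = group (real vs placebo US guidance),
# columns = perceived guidance (real vs placebo)
table = [[25, 5],
         [22, 8]]
chi2, p, dof, expected = chi2_contingency(table)
```

A non-significant p-value here would indicate that the placebo group's perception of guidance is statistically indistinguishable from the real-guidance group's, i.e. successful blinding.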

Keywords: ultrasound-guided dry needling, dry needling, knee osteoarthritis, physiotherapy

Procedia PDF Downloads 120
16348 A Study on the Performance of 2-PC-D Classification Model

Authors: Nurul Aini Abdul Wahab, Nor Syamim Halidin, Sayidatina Aisah Masnan, Nur Izzati Romli

Abstract:

The principal component method is widely applied for reducing large sets of variables in various fields, and Fisher's discriminant function is a popular tool for classification. This research focuses on the performance of a combined principal component–Fisher's discriminant model in classifying rice kernels into their defined classes. The data were collected on the smell, or odour, of the rice kernels using an odour-detection sensor, the Cyranose; 32 variables were captured by this electronic nose (e-nose). The objective of this research is to measure how well the combined model of principal components and a linear discriminant performs as a classification model. The principal component method was used to reduce the 32 variables to a smaller, manageable set of components, and the reduced components were then used to develop the Fisher's discriminant function. There are 4 defined classes of rice kernel: Aromatic, Brown, Ordinary and Others. Based on the output of the principal component method, the 32 variables were reduced to only 2 components. According to the classification table from the discriminant analysis, 40.76% of the total observations were correctly classified by the PC-discriminant function; in other words, the model misclassified more than 50% of the observations. In conclusion, the Fisher's discriminant function built on 2 components from PCA (2-PC-D) is not satisfactory for classifying the rice kernels into their defined classes.
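The 2-PC-D pipeline — reduce the 32 e-nose variables to 2 principal components, then fit a linear discriminant on the components — can be sketched as follows. The data here are a synthetic stand-in, not the Cyranose measurements, so the accuracy will differ from the paper's 40.76%.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

# synthetic stand-in for 32 e-nose variables over 4 rice-kernel classes
X, y = make_classification(n_samples=300, n_features=32, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
model = make_pipeline(PCA(n_components=2), LinearDiscriminantAnalysis())
model.fit(X, y)
acc = model.score(X, y)  # resubstitution accuracy of the 2-PC-D model
```

Keeping only 2 of 32 components before the discriminant step discards most of the variance, which is one plausible reason the paper's 2-PC-D model classifies poorly.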

Keywords: classification model, discriminant function, principal component analysis, variable reduction

Procedia PDF Downloads 333
16347 The Effect of Body Positioning on Upper-Limb Arterial Occlusion Pressure and the Reliability of the Method during Blood Flow Restriction Training

Authors: Stefanos Karanasios, Charkleia Koutri, Maria Moutzouri, Sofia A. Xergia, Vasiliki Sakellari, George Gioftsos

Abstract:

The precise calculation of arterial occlusion pressure (AOP) is a critical step in accurately prescribing individualized pressures during blood flow restriction training (BFRT). AOP is usually measured in a supine position before training; however, previous reports suggested that body position significantly influences lower limb AOP. The aim of the study was to investigate the effect of three different body positions on upper limb AOP and the reliability of the method, for its standardization in clinical practice. Forty-two healthy participants (mean age: 28.1, SD: ±7.7) underwent measurements of upper limb AOP in supine, seated, and standing positions by three blinded raters. A cuff with a manual pump and a pocket Doppler ultrasound were used. A significantly higher upper limb AOP was found in the seated compared with the supine position (p < 0.031) and in the supine compared with the standing position (p < 0.031) for all raters. An excellent intraclass correlation coefficient (0.858–0.984, p < 0.001) was found in all positions. Upper limb AOP is strongly dependent on body position; the appropriate measurement position should therefore be selected to accurately calculate AOP before BFRT. The excellent inter-rater reliability and repeatability of the method suggest reliable and consistent results across repeated measurements.
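The inter-rater reliability statistic reported above is an intraclass correlation coefficient. A minimal implementation of the common two-way random-effects, absolute-agreement, single-rater form ICC(2,1) is shown below; the abstract does not state which ICC form was used, so this choice is an assumption for illustration.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings has shape (n_subjects, k_raters)."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
```

With three raters whose readings differ only by small systematic offsets, the coefficient approaches 1, matching the "excellent" range reported in the study.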

Keywords: Kaatsu training, blood flow restriction training, arterial occlusion, reliability

Procedia PDF Downloads 213
16346 High-Yield Synthesis of Nanohybrid Shish-Kebab of Polyethylene on Carbon NanoFillers

Authors: Dilip Depan, Austin Simoneaux, William Chirdon, Ahmed Khattab

Abstract:

In this study, we present a novel approach to synthesizing polymer nanocomposites with a nanohybrid shish-kebab (NHSK) architecture. For this, low-density and high-density polyethylene (PE) were crystallized on various carbon nanofillers using a novel and convenient method to prepare high-yield NHSK. Polymer crystals grew epitaxially on the carbon nanofillers via solution crystallization, and the mixture of polymer and carbon fillers in xylene was flocculated and precipitated in ethanol to improve the product yield. Carbon nanofillers of varying diameter were used as nucleating templates for polymer crystallization. The morphology of the prepared nanocomposites was characterized by scanning electron microscopy (SEM), while differential scanning calorimetry (DSC) was used to quantify the amount of crystalline polymer. Interestingly, whatever the diameter of the carbon nanofiller, the PE lamellae are always perpendicular to the long axis of the nanofiller. Surface area analysis was performed using the BET method. Our results indicate that carbon nanofillers of varying diameter can effectively nucleate the crystallization of the polymer. The effects of molecular weight and polymer concentration are discussed on the basis of the chain mobility and crystallization capability of the polymer matrix. Our work shows a facile, rapid, yet high-yield production method for polymer nanocomposites that reveals the application potential of the NHSK architecture.

Keywords: carbon nanotubes, polyethylene, nanohybrid shish-kebab, crystallization, morphology

Procedia PDF Downloads 329
16345 An Integrated Architecture of E-Learning System to Digitize the Learning Method

Authors: M. Touhidul Islam Sarker, Mohammod Abul Kashem

Abstract:

The purpose of this paper is to improve the e-learning system and digitize the learning method in the educational sector. The learner logs into the e-learning platform and can easily access the digital content, download it, and take an assessment for evaluation; learners can access these digital resources using a tablet, computer, or smartphone. An e-learning system can be defined as teaching and learning with the help of multimedia technologies and the internet through access to digital content; e-learning is replacing the traditional education system with information and communication technology-based learning. This paper designs and implements an integrated e-learning system architecture combining Moodle (Modular Object-Oriented Dynamic Learning Environment) with a university management system. Moodle is a widely used e-learning system, but it has no built-in school or university management system. In this research, we did not consider school students, because they are outside internet facilities; we considered university students, because they have internet access and use the relevant technologies. The university management system covers different types of activities, such as student registration, account management, teacher information, semester registration, and staff information. Integrating these modules with Moodle overcomes this limitation of Moodle and enhances the e-learning system architecture, making effective use of technology. The architecture allows the learner to easily access the resources of the e-learning platform anytime and anywhere, which digitizes the learning method.
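One concrete integration point between a university management system and Moodle is Moodle's REST web-service API, through which UMS events such as student registration can be pushed into Moodle. The sketch below only builds the request URL for Moodle's standard REST endpoint; the host, token, and student details are hypothetical, and the exact web-service functions enabled depend on the Moodle site's configuration.

```python
from urllib.parse import urlencode

def moodle_ws_url(base_url, token, function, params=None):
    """Build a call to Moodle's standard REST web-service endpoint
    (/webservice/rest/server.php). Host and token here are hypothetical."""
    query = {"wstoken": token, "wsfunction": function,
             "moodlewsrestformat": "json", **(params or {})}
    return f"{base_url}/webservice/rest/server.php?{urlencode(query)}"

# e.g. pushing a student registered in the UMS into Moodle
url = moodle_ws_url(
    "https://lms.example.edu", "HYPOTHETICAL_TOKEN", "core_user_create_users",
    {"users[0][username]": "s2024001",
     "users[0][firstname]": "Ada",
     "users[0][lastname]": "Lovelace",
     "users[0][email]": "ada@example.edu",
     "users[0][createpassword]": 1})
```

In a real integration the URL would be sent as an HTTP request from the UMS backend whenever a registration module fires, keeping the two systems synchronized.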

Keywords: database, e-learning, LMS, Moodle

Procedia PDF Downloads 188
16344 An Improved Two-dimensional Ordered Statistical Constant False Alarm Detection

Authors: Weihao Wang, Zhulin Zong

Abstract:

Two-dimensional ordered-statistic constant false alarm rate (OS-CFAR) detection is a widely used method for detecting weak target signals in radar signal processing. The method analyzes the statistical characteristics of the noise and clutter present in the radar signal and uses this information to set an appropriate detection threshold. The reference cells surrounding the cell under test are divided into several reference subunits, which are used to estimate the noise level and adjust the detection threshold with the aim of minimizing the false alarm rate. By sorting the reference samples and selecting an ordered statistic, the method effectively suppresses the influence of clutter and noise, resulting in a low false alarm rate. The detection process involves several steps: filtering the input radar signal to remove noise and clutter, estimating the noise level from the statistical characteristics of the reference subunits, and setting the detection threshold based on the estimated noise level. The main advantage of two-dimensional OS-CFAR detection is its ability to detect weak target signals in the presence of strong clutter and noise, which it achieves by carefully analyzing the statistical properties of the signal and using the ordered statistic to estimate the noise level and adjust the threshold.
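The procedure described above can be sketched directly: for each cell under test, blank a guard region, sort the surrounding training cells, and threshold against a scaled ordered statistic. In this minimal 2-D OS-CFAR sketch the window sizes, ordered-statistic rank, and scale factor are illustrative choices, not values from the paper.

```python
import numpy as np

def os_cfar_2d(power, guard=2, train=4, k_frac=0.75, scale=5.0):
    """Ordered-statistic CFAR over a 2-D power map (e.g. range-Doppler).
    For each cell under test (CUT), the guard region is blanked, the
    remaining training cells are sorted, and the k-th ordered value is
    taken as the noise-level estimate."""
    rows, cols = power.shape
    half = guard + train
    detections = np.zeros_like(power, dtype=bool)
    for i in range(half, rows - half):
        for j in range(half, cols - half):
            window = power[i - half:i + half + 1, j - half:j + half + 1].copy()
            # blank the guard cells and the CUT itself
            window[train:train + 2 * guard + 1,
                   train:train + 2 * guard + 1] = np.nan
            ref = np.sort(window[~np.isnan(window)])
            threshold = scale * ref[int(k_frac * len(ref))]
            detections[i, j] = power[i, j] > threshold
    return detections

# exponential (square-law detected) noise with one strong target cell
rng = np.random.default_rng(0)
power = rng.exponential(1.0, size=(40, 40))
power[20, 20] += 50.0
det = os_cfar_2d(power)
```

Using a mid-to-high ordered statistic rather than the mean makes the noise estimate robust to a few strong interferers inside the training window, which is the key property the abstract attributes to the method.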

Keywords: two-dimensional, ordered statistical, constant false alarm, detection, weak target signals

Procedia PDF Downloads 78
16343 Modelling the Impact of Installation of Heat Cost Allocators in District Heating Systems Using Machine Learning

Authors: Danica Maljkovic, Igor Balen, Bojana Dalbelo Basic

Abstract:

Following the EU Energy Efficiency Directive, specifically Article 9, individual metering in district heating systems had to be introduced by the end of 2016. These provisions have been implemented in the member states' legal frameworks, Croatia among them. The directive allows the installation of both heat metering devices and heat cost allocators. Mainly due to poor communication and public relations, the false impression was created among the general public that heat cost allocators are devices that save energy. Since this notion is wrong, the aim of this work is to develop a model that precisely expresses the influence of installing heat cost allocators on potential energy savings in each unit within multifamily buildings. In recent years, machine learning has gained wider application in various fields, as it has proven to give good results where large amounts of data must be processed to recognize patterns and correlations among the relevant parameters, as well as where the problem is too complex for human intelligence to solve. One machine learning method, the decision tree, has achieved an accuracy of over 92% in predicting general building consumption. In this paper, machine learning algorithms are used to isolate the sole impact of the installation of heat cost allocators in single buildings among multifamily houses connected to district heating systems. Special emphasis is given to regression analysis, logistic regression, support vector machines, decision trees, and the random forest method.
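As an illustration of the decision-tree approach mentioned above, the sketch below fits a regression tree to synthetic heating-consumption data in which an allocator flag is assumed to reduce consumption by about 10%; all features and the data-generating rule are invented for the example and are not the paper's dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 2000
area = rng.uniform(40.0, 120.0, n)          # heated area, m^2
t_out = rng.uniform(-10.0, 15.0, n)         # mean outdoor temperature, degC
has_allocator = rng.integers(0, 2, n)       # 1 if heat cost allocators installed
# invented rule: allocator units assumed to consume ~10% less
consumption = (area * (18.0 - t_out)
               * 0.9 ** has_allocator * rng.normal(1.0, 0.05, n))
X = np.column_stack([area, t_out, has_allocator])
X_tr, X_te, y_tr, y_te = train_test_split(X, consumption, random_state=0)
model = DecisionTreeRegressor(max_depth=8, random_state=0).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)
```

In a study like the one described, comparing predictions with the allocator flag toggled would isolate the allocator's modeled contribution from weather and building-size effects.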

Keywords: district heating, heat cost allocator, energy efficiency, machine learning, decision tree, regression analysis, logistic regression, support vector machine, random forest

Procedia PDF Downloads 249