Search results for: match filter (MF)
270 A Broadband Tri-Cantilever Vibration Energy Harvester with Magnetic Oscillator
Authors: Xiaobo Rui, Zhoumo Zeng, Yibo Li
Abstract:
A novel tri-cantilever energy harvester with a magnetic oscillator is presented, which converts ambient vibration into electrical energy to power low-power devices such as wireless sensor networks. The most common way to harvest vibration energy is based on linear resonant devices such as cantilever beams, since this structure creates the highest strain for a given force. The highest efficiency is achieved when the resonance frequency of the harvester matches the vibration frequency. The limitation of this structure is its narrow effective bandwidth. To overcome this limitation, this article introduces a broadband tri-cantilever harvester with nonlinear stiffness. The energy harvester consists of three thin cantilever beams arranged vertically, each with a neodymium (NdFeB) magnet at its free end and a fixed base at the other end. The three cantilevers are given different resonant frequencies by designing them with different thicknesses, so that an advantage similar to the multiple resonant frequencies of a piezoelectric cantilever array is obtained. To achieve broadband energy harvesting, magnetic interaction is used to introduce nonlinear system stiffness that tunes the resonant frequency to match the excitation. Since the three cantilever tips are all free and the magnetic force is distance-dependent, the resonant frequencies change in a complex manner with the vertical vibration of the free ends. Both a model and an experiment are presented. An electromechanically coupled lumped-parameter model is developed, and an electromechanical formulation with analytical expressions for the coupled nonlinear vibration response and voltage response is given. The entire structure is fabricated and mechanically attached to an electromagnetic shaker as a vibrating body via the fixed base, in order to couple the vibrations to the cantilevers. The cantilevers are bonded with piezoelectric macro-fiber composite (MFC) material (model M8514P2). The cantilevers measure 120 × 20 mm², with thicknesses of 1 mm, 0.8 mm and 0.6 mm, respectively. The prototype generator has a measured performance of 160.98 mW effective electrical power and 7.93 V DC output voltage at an excitation level of 10 m/s². A 130% increase in the operating bandwidth is achieved. This device is promising for supporting low-power devices, peer-to-peer wireless nodes, and small-scale wireless sensor networks in ambient vibration environments.
Keywords: tri-cantilever, ambient vibration, energy harvesting, magnetic oscillator
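To make the lumped-parameter modeling concrete, the sketch below integrates a standard single-degree-of-freedom piezoelectric harvester model for one cantilever under harmonic base excitation, with the magnetic interaction reduced to a simple cubic stiffness term; all parameter values are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative lumped parameters (assumed, not the paper's fitted values)
m, c, k = 5e-3, 0.05, 200.0   # mass [kg], damping [N.s/m], linear stiffness [N/m]
k3 = 1e6                      # cubic stiffness from magnetic interaction [N/m^3] (assumed)
theta = 1e-4                  # electromechanical coupling [N/V] (assumed)
Cp, R = 50e-9, 1e5            # piezo capacitance [F], load resistance [ohm]
A, w = 10.0, 2*np.pi*55       # base acceleration 10 m/s^2 at an assumed 55 Hz

def rhs(t, y):
    x, xdot, v = y                        # tip displacement, velocity, voltage
    a_base = A*np.sin(w*t)                # harmonic base excitation
    xddot = (-c*xdot - k*x - k3*x**3 - theta*v - m*a_base)/m
    vdot = (theta*xdot - v/R)/Cp          # charge balance across the load
    return [xdot, xddot, vdot]

sol = solve_ivp(rhs, (0, 2), [0, 0, 0], max_step=1e-4)
v = sol.y[2]
print("mean electrical power ~", np.mean(v**2/R), "W")
```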
Procedia PDF Downloads 154
269 A Machine Learning-Based Model to Screen Antituberculosis Compound Targeted against LprG Lipoprotein of Mycobacterium tuberculosis
Authors: Syed Asif Hassan, Syed Atif Hassan
Abstract:
Multidrug-resistant tuberculosis (MDR-TB) is an infection caused by resistant strains of Mycobacterium tuberculosis (MTB) that do not respond to isoniazid or rifampicin, the two most important anti-TB drugs. The increasing occurrence of drug-resistant strains of MTB calls for an intensive search for novel target-based therapeutics. In this context, LprG (Rv1411c), a lipoprotein from MTB, plays a pivotal role in the immune evasion of MTB, leading to survival and propagation of the bacterium within the host cell. Therefore, a machine learning method will be developed to generate a computational model that can predict the potential anti-LprG activity of novel antituberculosis compounds. The present study will utilize a dataset from the PubChem database maintained by the National Center for Biotechnology Information (NCBI). The dataset comprises compounds screened against MTB, categorized as active or inactive based upon the PubChem activity score. PowerMV, a molecular descriptor generator and visualization tool, will be used to generate the 2D molecular descriptors for the active and inactive compounds in the dataset. The 2D molecular descriptors generated by PowerMV will be used as features. We feed these features into three different classifiers, namely a random forest, a deep neural network, and a recurrent neural network, to build separate predictive models, choosing the best-performing model based on its accuracy in predicting novel antituberculosis compounds with anti-LprG activity. Additionally, the predicted active compounds will be screened using a SMARTS filter to choose molecules with drug-like features.
Keywords: antituberculosis drug, classifier, machine learning, molecular descriptors, prediction
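A minimal sketch of the classification step described above, assuming the PowerMV descriptors and PubChem activity labels have already been exported to a CSV file (the file name and column names are hypothetical):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical file: rows = compounds, columns = 2D descriptors + 'active' label
data = pd.read_csv("lprg_descriptors.csv")
X, y = data.drop(columns=["active"]), data["active"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_tr, y_tr)
print("hold-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```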
Procedia PDF Downloads 391
268 Engaging Medical Students in Research through Student Research Mentorship Programme
Authors: Qi En Han, Si En Wai, Eugene Quek
Abstract:
As one of the two Academic Medical Centres (AMCs) in Singapore, SingHealth Duke-NUS AMC strives to improve patients' lives through excellent clinical care, research and education. These efforts are enhanced by the establishment of Academic Clinical Programmes (ACPs). Each ACP brings together specialists in a particular discipline from different institutions to maximize the power of shared knowledge and resources. Initiated by the Surgery ACP, the student research mentorship programme is designed to facilitate engagement between medical students and the surgical faculty. The programme offers mentors not only the opportunity to supervise research but also to nurture future clinician scientists. In turn, medical students acquire valuable research experience that may be useful in their future careers. The programme typically lasts one year, depending on the students' commitment. The Surgery ACP matches students' research interests with a mentor's area of expertise whenever possible, and organizes informal tea sessions to bring students and prospective mentors together. Once a match is made, the pair is required to submit a project proposal which includes the title, proposed start and end dates, ethical and biosafety considerations and project details. The mentees either develop their own research question with guidance from their mentors or join an existing project. The mentees may participate in data collection, data analysis, manuscript writing and conference presentation. The progress of each research project is monitored through half-yearly progress reports, in which the mentees report problems encountered or changes made to the existing proposal in addition to the progress made. A total of 18 mentors have been successfully paired with 36 mentees since 2013. Currently, there are 23 ongoing and 13 completed projects. The mentees are encouraged to present their projects at conferences and to publish in peer-reviewed journals. Six mentees have presented their completed projects at local or international conferences, and one mentee has had her work published. To further support student research, the Surgery ACP organized a Research Day in 2015 to recognize students' research efforts and to showcase their wide range of research. The Surgery ACP recognizes that early exposure of medical students to research is important in developing them into clinician scientists. As interest in research takes time to develop and is usually realized during various research attachments, it is crucial that programmes such as the student research mentorship programme exist. The Surgery ACP will continue to build on this programme.
Keywords: academic clinical programme, clinician scientist, medical student, mentoring
Procedia PDF Downloads 218
267 Track and Evaluate Cortical Responses Evoked by Electrical Stimulation
Authors: Kyosuke Kamada, Christoph Kapeller, Michael Jordan, Mostafa Mohammadpour, Christy Li, Christoph Guger
Abstract:
Cortico-cortical evoked potentials (CCEPs) are responses generated at distant brain sites by cortical electrical stimulation. These responses provide insight into the functional networks associated with language or motor functions, and in the context of epilepsy, they can reveal pathological networks. Locating the origin and spread of seizures within the cortex is crucial for pre-surgical planning. This process can be enhanced by employing cortical stimulation at the seizure onset zone (SOZ), leading to the generation of CCEPs in remote brain regions that may be targeted for disconnection. In the case of a 24-year-old male patient suffering from intractable epilepsy, corpus callosotomy was performed as part of the treatment. DTI imaging, conducted using a 3T MRI scanner for fiber tracking, was used along with CCEPs as part of the assessment for surgical planning. Stimulation of the SOZ with alternating monophasic pulses of 300 µs duration and 15 mA current intensity resulted in CCEPs on the contralateral frontal cortex, reaching a peak amplitude of 206 µV at a latency of 31 ms, specifically in the left pars triangularis. The related fiber tracts were identified with a two-tensor unscented Kalman filter (UKF) technique, showing transversal fibers through the corpus callosum. The CCEPs were monitored throughout the course of the surgery. Notably, the SOZ-associated CCEPs exhibited a reduction following the resection of the anterior portion of the corpus callosum, which reached the identified connecting fibers. This intervention demonstrated a potential strategy for mitigating the impact of intractable epilepsy through targeted disconnection of identified cortical regions.
Keywords: CCEP, SOZ, corpus callosotomy, DTI
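A minimal sketch of how a CCEP peak amplitude and latency (such as the 206 µV at 31 ms reported above) can be extracted from stimulation-locked recordings; the sampling rate, search window and placeholder data are assumptions:

```python
import numpy as np

fs = 2000                          # sampling rate in Hz (assumed)
# epochs: (n_stimulations, n_samples) for one contact, time-locked to stimulus onset
epochs = np.random.randn(50, fs)   # placeholder data standing in for recordings
ccep = epochs.mean(axis=0)         # averaging suppresses non-stimulus-locked activity

# search a typical early-response window, e.g. 10-50 ms after the pulse (assumed)
t = np.arange(ccep.size)/fs
win = (t >= 0.010) & (t <= 0.050)
i = np.argmax(np.abs(ccep[win])) + np.flatnonzero(win)[0]
print(f"peak {ccep[i]:.2f} (recording units) at latency {t[i]*1e3:.1f} ms")
```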
Procedia PDF Downloads 66
266 From News Breakers to News Followers: The Influence of Facebook on the Coverage of the January 2010 Crisis in Jos
Authors: T. Obateru, Samuel Olaniran
Abstract:
In an era when the new media afford easy access to the packaging and dissemination of information, social media have become a popular avenue for sharing information for good or ill. It is evident that the traditional role of journalists as 'news breakers' is fast being eroded. People now share information on happenings via social media like Facebook and Twitter, such that journalists themselves now get leads on happenings from such sources. Beyond the access to information provided by the new media is the erosion of the gatekeeping role of journalists, who by their training and calling are supposed to handle information with responsibility. Thus, sensitive information that journalists would normally filter is randomly shared by social media activists. This was the experience of journalists in Jos, Plateau State, in January 2010, when another of the recurring ethnoreligious crises that have engulfed the state resulted in widespread killing, vandalism, looting, and displacement. Considered one of the high points of crises in the state, the crisis was covered by journalists who also relied on some of these sources to get their bearing on the violence. This paper examined the role of Facebook in the work of journalists who covered the 2010 crisis. Taking the gatekeeping perspective, it interrogated the extent to which Facebook impacted their professional duty positively or negatively vis-à-vis the peace journalism model. It employed a survey to elicit information from 50 journalists who covered the crisis, using a questionnaire as the instrument. The paper revealed that the dissemination of hate information via mobile phones and social media, especially Facebook, aggravated the crisis. Journalists became news followers rather than news breakers because many of them were put on their toes by information (much of which was inaccurate or false) circulated on Facebook. It recommended that journalists must remain true to their calling by upholding their gatekeeping role of disseminating only accurate and responsible information if they are to remain the main source of credible information on which their audience relies.
Keywords: crisis, ethnoreligious, Facebook, journalists
Procedia PDF Downloads 294
265 Document-level Sentiment Analysis: An Exploratory Case Study of Low-resource Language Urdu
Authors: Ammarah Irum, Muhammad Ali Tahir
Abstract:
Document-level sentiment analysis in Urdu is a challenging Natural Language Processing (NLP) task due to the difficulty of working with lengthy texts in a language with constrained resources. Deep learning models, which are complex neural network architectures, are well suited to text-based applications in addition to data formats like audio, image, and video. To investigate the potential of deep learning for Urdu sentiment analysis, we implemented five different deep learning models, including Bidirectional Long Short-Term Memory (BiLSTM), Convolutional Neural Network (CNN), Convolutional Neural Network with Bidirectional Long Short-Term Memory (CNN-BiLSTM), and Bidirectional Encoder Representations from Transformers (BERT). In this study, we developed a hybrid deep learning model called BiLSTM-Single Layer Multi Filter Convolutional Neural Network (BiLSTM-SLMFCNN) by fusing the BiLSTM and CNN architectures. The proposed and baseline techniques were applied to the Urdu Customer Support dataset and the IMDB Urdu movie review dataset using pre-trained Urdu word embeddings that are suitable for sentiment analysis at the document level. The results of these techniques were evaluated, and our proposed model outperforms all other deep learning techniques for Urdu sentiment analysis. BiLSTM-SLMFCNN outperformed the baseline deep learning models and achieved 83%, 79%, 83% and 94% accuracy on the small, medium and large sized IMDB Urdu movie review datasets and the Urdu Customer Support dataset, respectively.
Keywords: Urdu sentiment analysis, deep learning, natural language processing, opinion mining, low-resource language
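The exact BiLSTM-SLMFCNN architecture is not specified in the abstract; the sketch below is one plausible reading, a single convolutional layer with multiple filter widths applied on top of BiLSTM outputs, with all dimensions assumed:

```python
import torch
import torch.nn as nn

class BiLSTM_SLMFCNN(nn.Module):
    def __init__(self, vocab, emb=300, hidden=128, widths=(2, 3, 4), nfilt=100, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        # one convolutional layer, several filter widths, over the BiLSTM outputs
        self.convs = nn.ModuleList([nn.Conv1d(2*hidden, nfilt, w) for w in widths])
        self.fc = nn.Linear(nfilt*len(widths), classes)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        h, _ = self.lstm(self.emb(tokens))      # (batch, seq, 2*hidden)
        h = h.transpose(1, 2)                   # Conv1d expects (batch, channels, seq)
        feats = [torch.relu(c(h)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(feats, dim=1))

model = BiLSTM_SLMFCNN(vocab=50_000)
logits = model(torch.randint(0, 50_000, (8, 120)))   # 8 documents, 120 tokens each
```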
Procedia PDF Downloads 72
264 Modeling of Turbulent Flow for Two-Dimensional Backward-Facing Step Flow
Authors: Alex Fedoseyev
Abstract:
This study investigates a simplified model based on the generalized hydrodynamic equations (GHE) for the simulation of turbulent flow over a two-dimensional backward-facing step (BFS) at Reynolds number Re = 132000. The GHE were derived from the generalized Boltzmann equation (GBE). The GBE was obtained from first principles from the chain of Bogolubov kinetic equations and considers particles of finite dimensions. The GHE have additional terms, temporal and spatial fluctuations, compared to the Navier-Stokes equations (NSE). These terms carry a timescale multiplier τ, and the GHE become the NSE when τ is zero. The nondimensional τ is a product of the Reynolds number and the squared length-scale ratio, τ = Re·(l/L)², where l is the apparent Kolmogorov length scale and L is a hydrodynamic length scale. BFS flow modeling results obtained by 2D calculations with the NSE alone cannot match the experimental data for Re > 450; one or two additional equations are required for a turbulence model to be added to the NSE, with typically two to five parameters to be tuned for specific problems. It is shown that the GHE do not require an additional turbulence model, and the turbulent velocity results are in good agreement with the experimental results. A review of several studies on the simulation of flow over the BFS from 1980 to 2023 is provided; most of these studies used different turbulence models when Re > 1000. In this study, the 2D turbulent flow over a BFS with height H = L/3 (where L is the channel height) at Reynolds number Re = 132000 was investigated using numerical solutions of the GHE (by a finite-element method) and compared to the solutions of the Navier-Stokes equations, the k-ε turbulence model, and experimental results. The comparison included the velocity profiles at X/L = 5.33 (near the end of the recirculation zone, available from the experiment), the recirculation zone length, and the velocity flow field. The mean velocity of the NSE was obtained by averaging the solution over the number of time steps. The solution with a standard k-ε model shows a velocity profile at X/L = 5.33 that has no backward flow, and the standard k-ε model underpredicts the experimental recirculation zone length X/L = 7.0 ± 0.5 by a substantial 20-25%, so a more sophisticated turbulence model would be needed for this problem. The obtained data confirm that the GHE results are in good agreement with the experimental results for turbulent flow over a two-dimensional BFS; a turbulence model was not required in this case. The computations were stable. The solution time for the GHE is the same as or less than that for the NSE, and significantly less than that for the NSE with a turbulence model. The proposed approach was limited to 2D and only one Reynolds number; further work will extend this approach to 3D flow and higher Re.
Keywords: backward-facing step, comparison with experimental data, generalized hydrodynamic equations, separation, reattachment, turbulent flow
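For concreteness, the timescale multiplier can be evaluated directly from its definition. Re = 132000 is the value used in the study; the length-scale ratio below is purely a placeholder assumption:

```python
Re = 132_000                  # Reynolds number from the study
l_over_L = 1.0/500            # apparent Kolmogorov / hydrodynamic scale ratio (assumed)
tau = Re * l_over_L**2        # nondimensional timescale multiplier in the GHE
print(tau)                    # tau -> 0 recovers the Navier-Stokes equations
```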
Procedia PDF Downloads 61
263 Impact of Soot on NH3-SCR, NH3 Oxidation and NH3 TPD over Cu/SSZ-13 Zeolite
Authors: Lidija Trandafilovic, Kirsten Leistner, Marie Stenfeldt, Louise Olsson
Abstract:
Ammonia selective catalytic reduction (NH3-SCR) is one of the most efficient post-combustion abatement technologies for removing NOx from diesel engines. In order to remove soot, diesel particulate filters (DPF) are used. Recently, SCR-coated filters have been introduced, which capture soot and are simultaneously active for ammonia SCR. SCR-coated filters offer large advantages, such as decreased volume and better light-off characteristics, since both the SCR function and the filter function are close to the engine. The objective of this work was to examine the effect of soot, produced using an engine bench, on Cu/SSZ-13 catalysts. The impact of soot on Cu/SSZ-13 in standard SCR, NH3 oxidation, and NH3 temperature-programmed desorption (TPD), as well as soot oxidation (with and without water), was examined using flow reactor measurements. In all experiments, prior to the soot loading, the fresh activity of Cu/SSZ-13 was recorded while stepwise increasing the temperature from 100°C to 600°C. Thereafter, the sample was loaded with soot and the experiment was repeated in the temperature range from 100°C to 700°C. The amount of CO and CO2 produced in each experiment is used to calculate the soot oxidized at each steady-state temperature. The soot oxidized during the heating to the next temperature step is included; e.g., the CO+CO2 produced while increasing the temperature to 600°C is added to the 600°C step. Two factors appear to be of the greatest importance to soot oxidation: ammonia and water. Water shifts the maxima of CO2 and CO production towards lower temperatures; thus water increases the soot oxidation. Moreover, when ammonia is added to the system, the soot oxidation is clearly lowered in its presence, resulting in larger integrated COx at 500°C for the O2+H2O case, while the opposite result was obtained at 600°C, where more was oxidised in the O2+H2O+NH3 case. To conclude, the presence of ammonia reduces the soot oxidation, which is in line with the ammonia TPD results, where we found ammonia storage on the soot. Interestingly, during ammonia SCR conditions the activity for soot oxidation is regained at 500°C. At this high temperature the SCR zone is very short; thus the majority of the catalyst is not exposed to ammonia, and the inhibiting effect of ammonia is therefore not observed.
Keywords: NH3-SCR, Cu/SSZ-13, soot, zeolite
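A sketch of the carbon bookkeeping described above, converting measured CO and CO2 traces at one steady-state temperature into the amount of soot oxidized; the total flow rate, sampling interval and concentration traces are invented placeholders:

```python
import numpy as np

# assumed 3500 mL/min total flow -> mol of gas per second (ideal gas at 0 C, 1 atm)
flow = 3500/60/1000/22.4
dt = 1.0                                         # sampling interval [s] (assumed)
t = np.arange(0.0, 600.0, dt)                    # time spent at one temperature step
co_ppm = np.full_like(t, 40.0)                   # placeholder CO trace [ppm]
co2_ppm = np.full_like(t, 350.0)                 # placeholder CO2 trace [ppm]

# mol of carbon leaving as COx over the step (rectangular integration)
mol_c = np.sum((co_ppm + co2_ppm)*1e-6*flow)*dt
print(f"soot oxidized in this step: {mol_c*12.0*1000:.2f} mg C")
```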
Procedia PDF Downloads 236
262 Simplified INS/GPS Integration Algorithm in Land Vehicle Navigation
Authors: Othman Maklouf, Abdunnaser Tresh
Abstract:
Land vehicle navigation is a subject of great interest today. The Global Positioning System (GPS) is the main navigation system for positioning in such applications. GPS alone is incapable of providing continuous and reliable positioning because of its inherent dependency on external electromagnetic signals. Inertial navigation (INS) is the implementation of inertial sensors to determine the position and orientation of a vehicle. The availability of low-cost micro-electro-mechanical-system (MEMS) inertial sensors is now making it feasible to develop INS using an inertial measurement unit (IMU). INS has unbounded error growth, since the error accumulates at each step. Usually, GPS and INS are integrated with a loosely coupled scheme. With the development of low-cost MEMS inertial sensors and GPS technology, integrated INS/GPS systems are beginning to meet the growing demand for lower cost, smaller size, and seamless navigation solutions for land vehicles. Although MEMS inertial sensors are very inexpensive compared to conventional sensors, their cost (especially that of MEMS gyros) is still not acceptable for many low-end civilian applications (for example, commercial car navigation or personal location systems). An efficient way to reduce the expense of these systems is to reduce the number of gyros and accelerometers, and therefore to use a partial IMU (ParIMU) configuration. For land vehicular use, the most important gyroscope is the vertical gyro that senses the heading of the vehicle, together with two horizontal accelerometers for determining the velocity of the vehicle. This paper presents a field experiment for a low-cost strapdown ParIMU/GPS combination, with data post-processing for the determination of the 2D components of position (trajectory), velocity and heading. In the present approach, we have neglected earth rotation and gravity variations, because of the poor gyroscope sensitivities of our low-cost IMU and because of the relatively small area of the trajectory.
Keywords: GPS, IMU, Kalman filter, materials engineering
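A heavily simplified sketch of the loosely coupled ParIMU/GPS idea described above: one vertical gyro for heading, accelerometer-derived speed, and a Kalman filter correcting the 2D dead-reckoned position with GPS fixes. The noise values and measurement model are assumptions, not those of the paper:

```python
import numpy as np

dt = 0.01                                  # IMU step at 100 Hz
x = np.zeros(4)                            # state: [east, north, speed, heading]
P = np.eye(4)
Q = np.diag([0, 0, 0.1, 0.01])*dt          # process noise (assumed)
R = np.diag([3.0**2, 3.0**2])              # GPS position noise, ~3 m (assumed)
H = np.array([[1., 0, 0, 0], [0, 1., 0, 0]])

def predict(x, P, accel, gyro_z):
    e, n, v, psi = x
    x = np.array([e + v*np.sin(psi)*dt, n + v*np.cos(psi)*dt,
                  v + accel*dt, psi + gyro_z*dt])
    F = np.array([[1, 0, np.sin(psi)*dt,  v*np.cos(psi)*dt],
                  [0, 1, np.cos(psi)*dt, -v*np.sin(psi)*dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])            # Jacobian of the motion model
    return x, F @ P @ F.T + Q

def gps_update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

x, P = predict(x, P, accel=0.2, gyro_z=0.01)      # every IMU sample
x, P = gps_update(x, P, z=np.array([0.1, 0.2]))   # whenever a GPS fix arrives
```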
Procedia PDF Downloads 420
261 Validation of the Recovery of House Dust Mites from Fabrics by Means of Vacuum Sampling
Authors: A. Aljohani, D. Burke, D. Clarke, M. Gormally, M. Byrne, G. Fleming
Abstract:
Introduction: House dust mites (HDMs) are a source of allergen particles embedded in textiles and furnishings. Vacuum sampling is commonly used to recover HDMs and determine their abundance, but the efficiency of this method is less than standardized. Here, the efficiency of HDM recovery from home-associated textiles was evaluated using vacuum sampling protocols. Methods/Approach: Live mites (LMs) or dead mites (DMs) of the house dust mite Dermatophagoides pteronyssinus (FERA, UK) were separately seeded onto the surfaces of smooth cotton, denim and fleece (25 mites per 10 × 10 cm square) and left for 10 minutes before vacuuming. Fabrics were vacuumed (SKC Flite 2 pump) at a flow rate of 14 L/min for 60, 90 or 120 seconds, and the number of mites retained by the filter unit (0.4 µm × 37 mm) was determined. Vacuuming was carried out in a linear direction (Protocol 1) or in a multidirectional pattern (Protocol 2). Additional fabrics with LMs were also frozen and then thawed, thereby euthanizing the live mites (now termed EMs). Results/Findings: While the recovery of mites was significantly greater (p = 0.000; 76% greater) from fabrics seeded with DMs than with LMs, irrespective of vacuuming protocol or fabric type, the efficiency of recovery of DMs (72%-76%) did not vary significantly between fabrics. For fabrics containing EMs, recovery was greatest for smooth cotton and denim (65-73% recovered) and least for fleece (15% recovered). There was no significant difference (p = 0.99) between the recovery of mites across all three mite categories from smooth cotton and denim, but significantly fewer (p = 0.000) mites were recovered from fleece. Scanning electron microscopy images of HDM-seeded fabrics showed that live mites burrowed deeply into the fleece weave, which reduced their efficiency of recovery by vacuuming. Research Implications: The results presented here have implications for the recovery of HDMs by vacuuming and for the choice of fabric to ameliorate HDM-dust sensitization.
Keywords: allergy, asthma, dead, fabric, fleece, live mites, sampling
Procedia PDF Downloads 139
260 A Novel Approach to 3D Thrust Vectoring CFD via Mesh Morphing
Authors: Umut Yıldız, Berkin Kurtuluş, Yunus Emre Muslubaş
Abstract:
Thrust vectoring, especially in military aviation, is a concept that sees much use to improve maneuverability in already agile aircraft. As this concept is fairly new and cost-intensive to design and test, computational methods are useful in easing the preliminary design process. Computational Fluid Dynamics (CFD) can be utilized in many forms to simulate nozzle flow, and various CFD studies exist on both 2D mechanical and 3D injection-based thrust vectoring; 3D mechanical thrust vectoring analyses, however, are at this point still lacking in variety. Additionally, the freely available test data are constrained to limited pitch angles and geometries. In this study, based on a test case provided by NASA, both steady and unsteady 3D CFD simulations are conducted to examine the aerodynamic performance of a mechanical thrust vectoring nozzle model and to validate the numerical model used. Steady analyses are performed to verify the flow characteristics of the nozzle at pitch angles of 0, 10 and 20 degrees, and the results are compared with experimental data. The pressure data obtained on the inner surface of the nozzle at each specified pitch angle, under flow conditions with pressure ratios of 1.5, 2 and 4, and at azimuthal angles of 0, 45, 90, 135, and 180 degrees exhibited a high level of agreement with the corresponding experimental results. To validate the CFD model, the insights from the steady analyses are utilized, followed by unsteady analyses covering a wide range of pitch angles from 0 to 20 degrees. Throughout the simulations, a mesh morphing method using a carefully calculated mathematical shape deformation model, which reproduces the vectored nozzle shape exactly at each point of its travel, is employed to dynamically alter the divergent part of the nozzle over time within this pitch angle range. The mesh-morphed vectored nozzle shapes were compared with the drawings provided by NASA, ensuring a complete match. This computational approach allowed the creation of a comprehensive database of results, containing results at every 0.01° increment of nozzle pitch angle, without the need to generate separate solution domains. The unsteady analyses generated using the morphing method are found to be in excellent agreement with the experimental data, further confirming the accuracy of the CFD model.
Keywords: thrust vectoring, computational fluid dynamics, 3D mesh morphing, mathematical shape deformation model
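The study's shape deformation model is not published in detail; the sketch below illustrates the general mesh-morphing idea only, rigidly rotating the nodes of the divergent section about an assumed hinge line as the pitch angle advances in 0.01° increments. Every geometric quantity in it is a placeholder:

```python
import numpy as np

def morph(nodes, pitch_deg, hinge=np.array([0.5, 0.0, 0.0])):
    """Rotate divergent-section nodes (x > hinge x) about the y-axis through the hinge.
    nodes: (N, 3) array of mesh coordinates; geometry is a placeholder."""
    a = np.radians(pitch_deg)
    Ry = np.array([[ np.cos(a), 0, np.sin(a)],
                   [ 0,         1, 0        ],
                   [-np.sin(a), 0, np.cos(a)]])
    out = nodes.copy()
    div = nodes[:, 0] > hinge[0]                   # divergent part only
    out[div] = (nodes[div] - hinge) @ Ry.T + hinge
    return out

mesh = np.random.rand(1000, 3)                     # placeholder nozzle mesh
for ang in np.arange(0.0, 20.0 + 1e-9, 0.01):      # every 0.01 deg of nozzle travel
    deformed = morph(mesh, ang)                    # would be handed to the CFD solver
```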
Procedia PDF Downloads 83
259 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings
Authors: Gaelle Candel, David Naccache
Abstract:
t-SNE is an embedding method that the data science community has used widely. It helps with two main tasks: displaying results by coloring items according to item class or feature value, and, for forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, in which all neighbors in a high-dimensional space cannot be represented correctly in a low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation in which a cluster's area is proportional to its size in number, and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from a high- to a low-dimensional space is described but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of the data. While this approach is highly scalable, points could be mapped at exactly the same position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once with the newly obtained embedding, and the successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. The method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing the birth, evolution, and death of clusters to be observed. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets' dynamics.
Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning
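The paper's exact two-term cost is not reproduced here; a crude approximation of reusing a support embedding, though, is to initialize t-SNE with coordinates carried over from the previous embedding so that cluster positions tend to be preserved. A scikit-learn sketch with toy data:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X_old = rng.normal(size=(500, 50))                  # first snapshot of the dataset
X_new = X_old + 0.05*rng.normal(size=X_old.shape)   # drifted version of the same items

Y_old = TSNE(init="pca", random_state=0).fit_transform(X_old)
# Re-embed the new snapshot, seeding each point at its previous position so the
# two embeddings stay visually comparable (an approximation of the paper's method).
Y_new = TSNE(init=Y_old, random_state=0).fit_transform(X_new)
```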
Procedia PDF Downloads 143
258 Cosmetic Recommendation Approach Using Machine Learning
Authors: Shakila N. Senarath, Dinesh Asanka, Janaka Wijayanayake
Abstract:
The need for cosmetic products arises from consumer needs for personal appearance and hygiene. A cosmetic product consists of various chemical ingredients, which may help keep the skin healthy or may lead to damage, and a given chemical ingredient does not perform the same way on every person. The most appropriate way to select a healthy cosmetic product is to identify the texture of the body first and then select the most suitable product with safe ingredients; the selection process is therefore complicated, and consumer surveys have shown that it is most often done improperly. This study proposes a content-based system that recommends cosmetic products based on human factors, where skin type, gender and price range are considered as the human factors. The proposed system will be implemented using machine learning, with the consumer's skin type, gender and price range taken as inputs. The skin type of the consumer is derived using the Baumann Skin Type Questionnaire, a value-based approach comprising a number of questions that assign the user's skin to one of the 16 skin types of the Baumann Skin Type Indicator (BSTI). Two datasets were collected for the research: a user dataset, gathered through a questionnaire given to the public, and a cosmetic dataset. Product details are included in the cosmetic dataset, which covers 5 different product categories (moisturizer, cleanser, sun protector, face mask, eye cream). TF-IDF (Term Frequency - Inverse Document Frequency) is applied to vectorize the cosmetic ingredients in the generic cosmetic products dataset and the user-preferred dataset. Using the TF-IDF vectors, each user-preferred product and each generic cosmetic product can be represented as a sparse vector. The similarity between each user-preferred product and each generic cosmetic product is calculated using the cosine similarity method, and a similarity matrix is used for the recommendation process: the higher the similarity, the better the match for the consumer. By sorting a user's column of the similarity matrix in descending order, the top recommended products can be retrieved. Even though this returns a list of similar products, further optimization can be done by considering and weighting the gathered user information, such as gender and the price range for product purchasing, once a set of recommended products for a user has been retrieved.
Keywords: content-based filtering, cosmetics, machine learning, recommendation system
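A compact sketch of the ingredient-based matching described above, TF-IDF over ingredient lists plus cosine similarity, then sorting one user's scores; all product and user data here are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

products = {                       # hypothetical generic cosmetic products
    "moisturizer_a": "water glycerin dimethicone ceramide",
    "cleanser_b":    "water cocamidopropyl-betaine glycerin",
    "sunscreen_c":   "water zinc-oxide dimethicone tocopherol",
}
user_preferred = ["water glycerin ceramide squalane"]   # ingredients the user likes

vec = TfidfVectorizer()
P = vec.fit_transform(products.values())   # sparse TF-IDF vectors, one per product
U = vec.transform(user_preferred)

scores = cosine_similarity(U, P).ravel()   # higher = better match for the consumer
ranked = sorted(zip(products, scores), key=lambda kv: -kv[1])
print(ranked)      # gender and price range could further filter/weight this list
```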
Procedia PDF Downloads 134
257 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel
Authors: Hamed Kalhori, Lin Ye
Abstract:
In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by piezoelectric (PZT) sensors installed on the panel, remote from the impact locations, were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of the convolution integral are considered: the traditional one with an explicit transfer function and a modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g., magnitude) of a stochastic force at a defined location, is extended to identify both the location and the magnitude of the impact force among a number of potential impact locations. It is assumed that impact forces are simultaneously exerted at all potential locations, but that the magnitude of all forces except one is zero, implying that the impact occurs at only one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of the responses resulting from impact at each potential location. The problem can be categorized as under-determined (fewer sensors than impact locations), even-determined (as many sensors as impact locations), or over-determined (more sensors than impact locations). The under-determined case considered here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force. Truncated singular value decomposition (TSVD) and Tikhonov regularization are independently applied to regularize the problem, in order to find the most suitable method for this system. The selection of the optimal value of the regularization parameter is investigated through the L-curve and generalized cross validation (GCV) methods. In addition, the effect of different signal window widths on the reconstructed force is examined. It is observed that the impact force generated by the instrumented hammer is sensitive to the impact location on the structure, having a shape ranging from a simple half-sine to a complicated one. The accuracy of the reconstructed impact force is evaluated using the correlation coefficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match the actual forces well in terms of magnitude and duration.
Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction
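A minimal sketch of force reconstruction by discrete deconvolution with Tikhonov regularization: building the convolution (Toeplitz) matrix from an impulse response and solving the regularized normal equations. The impulse response and λ here are toy assumptions, and the location-identification step via superposition is omitted:

```python
import numpy as np
from scipy.linalg import toeplitz

n = 400
t = np.arange(n)*1e-5                              # 10 us time steps (assumed)
h = np.exp(-t/2e-3)*np.sin(2*np.pi*800*t)          # toy impulse response, force -> sensor

H = toeplitz(h, np.zeros(n))                       # discrete convolution matrix
f_true = np.where(t < 1e-3, np.sin(np.pi*t/1e-3), 0.0)  # half-sine impact force
y = H @ f_true + 1e-3*np.random.randn(n)           # simulated PZT signal + noise

lam = 1e-2                                         # Tikhonov parameter (via L-curve/GCV in practice)
f_rec = np.linalg.solve(H.T @ H + lam*np.eye(n), H.T @ y)
print("correlation with true force:", np.corrcoef(f_true, f_rec)[0, 1])
```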
Procedia PDF Downloads 535
256 Single Tuned Shunt Passive Filter Based Current Harmonic Elimination of Three Phase AC-DC Converters
Authors: Mansoor Soomro
Abstract:
The evolution of power electronic equipment has been pivotal in making industrial processes productive, efficient and safe. Despite its attractive features, such equipment constitutes nonlinear loads, which make the power system vulnerable to power quality problems. Harmonics are a power quality problem in which frequency components appear at integer multiples of the supply frequency, so that the supply voltage and current do not remain within their tolerable limits and distorted current and voltage waveforms may appear. Low power quality means that an electrical device or piece of equipment is likely to malfunction, fail promptly or be unable to operate under all applied conditions. The electrical power system is designed to deliver power reliably, namely by maximizing power availability to customers. However, power quality events are largely untracked and, as a result, can take out a process as many as 20 to 30 times a year, costing utilities, customers and suppliers of load equipment losses of millions of dollars. The ill effects of current harmonics reduce system efficiency, cause overheating of connected equipment, and result in increased electrical power and air conditioning costs. The rapid growth of power electronic converters over time has highlighted the damage caused by current harmonics in the electrical power system, and it has therefore become essential to address their detrimental influence when planning any changes to electrical installations. In this paper, an effort has been made to mitigate the effects of the dominant 3rd-order current harmonics. A passive filtering technique with a six-pulse multiplication converter has been employed to mitigate them. Power quality standards require the supply voltage and current to be maintained within certain prescribed limits; the obtained results are therefore validated against the IEEE 519-1992 and IEEE 519-2014 performance standards.
Keywords: current harmonics, power quality, passive filters, power electronic converters
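For reference, the standard sizing of a single-tuned shunt filter for the dominant 3rd harmonic follows directly from the resonance condition; the bus voltage, reactive-power rating, and quality factor below are assumed example values, not the paper's:

```python
import math

V, f1 = 400.0, 50.0        # line-line voltage [V] and fundamental frequency (assumed)
Qc = 10e3                  # reactive power assigned to the filter bank [var] (assumed)
h, Qf = 3, 30              # tuned harmonic order and quality factor (assumed)

w1 = 2*math.pi*f1
C = Qc/(V**2*w1)           # capacitance from the fundamental reactive power
L = 1/((h*w1)**2*C)        # L-C resonance at the 3rd harmonic: h*w1 = 1/sqrt(L*C)
R = math.sqrt(L/C)/Qf      # series resistance sets the filter sharpness
print(f"C = {C*1e6:.1f} uF, L = {L*1e3:.2f} mH, R = {R:.3f} ohm")
```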
Procedia PDF Downloads 301
255 A Method to Predict the Thermo-Elastic Behavior of Laser-Integrated Machine Tools
Authors: C. Brecher, M. Fey, F. Du Bois-Reymond, S. Neus
Abstract:
Additive manufacturing has emerged as a fast-growing segment of manufacturing technology. Established machine tool manufacturers, such as DMG MORI, have recently presented machine tools combining milling and laser welding, with which machine tools can achieve a higher degree of flexibility and a shorter production time. Still, there are challenges that have to be accounted for in maintaining the necessary machining accuracy, especially due to thermal effects arising from the use of high-power laser processing units. To study the thermal behavior of laser-integrated machine tools, it is essential to analyze and simulate the thermal behavior of machine components, both individually and assembled. This information will help in designing a geometrically stable machine tool under the influence of high-power laser processes. This paper presents an approach to decrease the loss of machining precision due to thermal impacts. Real effects of laser machining processes are considered, enabling an optimized design of the machine tool and its components in the early design phase. The core element of this approach is a matched FEM model considering all relevant variables, e.g. laser power, angle of the laser beam, reflection coefficients and heat transfer coefficient. Hence, a systematic approach to obtaining this matched FEM model is essential. Characterizing the thermal behavior of the structural components and predicting the laser beam path, in order to determine the relevant beam intensity on the structural components, are the two constituent aspects of the method. To match the model, both aspects have to be combined and verified empirically. In this context, an essential machine component of a five-axis machine tool, the turn-swivel table, serves as the demonstration object for the verification process. Therefore, a turn-swivel table test bench as well as an experimental set-up to measure the beam propagation were developed and are described in the paper. In addition to the empirical investigation, a simulative counterpart to the described experimental examinations is presented. In conclusion, it is shown that the method, together with a good understanding of its two core aspects, the thermo-elastic machine behavior and the laser beam path, as well as their combination, helps designers minimize the loss of precision in the early stages of the design phase.
Keywords: additive manufacturing, laser beam machining, machine tool, thermal effects
Procedia PDF Downloads 264
254 Spherical Organic Particle (SOP) Emissions from Fixed-Bed Residential Coal-Burning Devices
Authors: Tafadzwa Makonese, Harold Annegarn, Patricia Forbes
Abstract:
Residential coal combustion is one of the largest sources of carbonaceous aerosols in the Highveld region of South Africa, significantly affecting the local and regional climate. In this study, we investigated single coal-burning particles emitted when using different fire-ignition techniques (top-lit up-draft vs bottom-lit up-draft) and air ventilation rates (defined by the number of air holes above and below the fire grate) in selected informal braziers. Aerosol samples were collected on Nuclepore filters at the SeTAR Centre Laboratory, University of Johannesburg. Individual particles (~700) were investigated using a scanning electron microscope equipped with energy-dispersive X-ray spectroscopy (EDS). Two distinct forms of spherical organic particles (SOPs) were identified, one less oxidized than the other. The particles were further classified as "electronically" dark or bright, following China et al. [2014]. EDS analysis showed that 70% of the dark spherical organic particles had higher (~60%) relative oxygen content than the bright SOPs. We quantify the morphology of the spherical organic particles and classify them into four categories: ~50% are bare single particles; ~35% are aggregated and form diffusion accretion chains; 10% have inclusions; and 5% are deformed due to impaction on the filter material during sampling. We conclude that there are two distinct kinds of coal-burning spherical organic particles and that dark SOPs are less volatile than bright SOPs. We also show that these spherical organic particles are similar in nature and characteristics to the tar balls observed in biomass combustion, and that they have the potential to absorb sunlight, thereby affecting the earth's radiative budget and climate. This study provides insights into the mixing states, morphology, and possible formation mechanisms of these organic particles from residential coal combustion in informal stoves.
Keywords: spherical organic particles, residential coal combustion, fixed-bed, aerosols, morphology, stoves
Procedia PDF Downloads 466
253 Control Algorithm Design of Single-Phase Inverter For ZnO Breakdown Characteristics Tests
Authors: Kashif Habib, Zeeshan Ayyub
Abstract:
ZnO voltage-dependent resistors are widely used as components of electrical systems for over-voltage protection. They have wide application prospects in superconducting energy removal, generator de-excitation, and overvoltage protection of electrical and electronic equipment. At present, however, research on ZnO voltage-dependent resistors is confined to their nonlinear voltage-current characteristics and overvoltage protection applications. There has been no further study of their over-voltage breakdown characteristics, such as the combustion phenomena, the voltage and current at breakdown, and the effect on surrounding equipment; this remains a blind spot in their application. Consequently, when testing the characteristics of a ZnO voltage-dependent resistor, a suitable test power supply must be designed that keeps the terminal voltage sinusoidal, simulating the real power-frequency (PF) voltage supply conditions in service. We propose using an inverter to generate such a controllable power supply. This paper focuses on the breakdown characteristic test power supply for the nonlinear ZnO voltage-dependent resistor. Based on mature switching power supply technology, we propose a power control system with an inverter at its core. The supply produces a sinusoidal voltage output from a three-phase PF AC input and provides three control modes (RMS, peak, average) for the current output. The TMS320F2812M was chosen as the control hardware platform. It is used to convert the power from the three-phase input to a controlled single-phase sinusoidal voltage through a rectifier, filter, and inverter. The designed controller produces SPWM to obtain the controlled voltage source via an appropriate multi-loop control strategy, while also executing data acquisition and display, system protection, start-up logic control, etc. The TMS320F2812M is able to complete the multi-loop control quickly and thus performs the inverter output control well.
Keywords: ZnO, multi-loop control, SPWM, non-linear load
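A short numerical sketch of SPWM generation as used in such a controller, comparing a sinusoidal reference with a triangular carrier to produce the gating signal; the switching frequency and modulation index are assumed values:

```python
import numpy as np
from scipy import signal

f_ref, f_car, m = 50.0, 5000.0, 0.8     # fundamental, carrier, modulation index (assumed)
t = np.arange(0, 0.02, 1e-6)            # one fundamental period at 1 MHz resolution

reference = m*np.sin(2*np.pi*f_ref*t)                    # desired output (per unit)
carrier = signal.sawtooth(2*np.pi*f_car*t, width=0.5)    # triangular carrier

gate = (reference > carrier).astype(int)   # PWM gate signal for the high-side switch
print(f"average duty cycle over one cycle: {gate.mean():.2f}")
```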
Procedia PDF Downloads 325
252 Leadership in the Emergence Paradigm: A Literature Review on the Medusa Principles
Authors: Everard van Kemenade
Abstract:
Many quality improvement activities are planned. Leaders are strongly involved in missions, visions and strategic planning. They use, consciously or unconsciously, the PDCA cycle, also known as the Deming cycle: after the planning, the plans are carried out and the results or effects are measured. If the results show that the goals in the plan have not been achieved, adjustments are made in the next plan or in the execution of the processes, and the cycle is run through again. Traditionally, the PDCA cycle is advocated as a means to an end. However, PDCA is especially fit for planned, ordered, certain contexts; it fits the empirical and referential quality paradigms. For uncertain, unordered, unplanned processes, something other than Plan-Do-Check-Act may be needed. Due to the complexity of our society, the influence of context, and the uncertainty in today's world, not every activity can be planned anymore. At the same time, organisations need to be more innovative than ever, which presents leaders with 'wicked tendencies'. That raises the question of how one can innovate without being able to plan. Complexity science studies the interactions of a diverse group of agents that bring about change in times of uncertainty, e.g. when radical innovation is co-created; this process is called emergence. This research study explores the role of leadership in the emergence paradigm. The aim of the article is to study the way leadership can support the emergence of innovation in a complex context. First, the concepts used in the research question are clarified: complexity, emergence, innovation and leadership. Thereafter, a literature search is conducted to answer the research question. The topics 'emergent leadership' and 'complexity leadership' were chosen for an exploratory search in Google and Google Scholar using the berry-picking method. The exclusion criterion was emergence in disciplines other than organizational development, or in the meaning of 'arising'. The literature search gave 45 hits. Twenty-seven articles were excluded after reading the title and abstract because they did not research the topic of emergent leadership and complexity. After reading the remaining articles in full, one more was excluded because it used emergent in the limited meaning of 'arising', and eight more were excluded because the topic did not match the research question of this article. That brings the total of the search to 17 articles. The useful conclusions from the articles are merged and grouped together under overarching topics, using thematic analysis. The findings are that five topics prevail when looking at possibilities for leadership to facilitate innovation: enabling, sharing values, dreaming, interacting, context sensitivity and adaptivity. Together they form, in Dutch, the acronym Medusa.
Keywords: complexity science, emergence, leadership in the emergence paradigm, innovation, the Medusa principles
Procedia PDF Downloads 28
251 Progressive Damage Analysis of Mechanically Connected Composites
Authors: Şeyma Saliha Fidan, Ozgur Serin, Ata Mugan
Abstract:
When performing verification analyses for the static and dynamic loads to which composite structures used in aviation are exposed, it is necessary to obtain the bearing strength limit value for mechanically connected composite structures. For this purpose, various tests are carried out in accordance with aviation standards. There are many companies in the world that perform these tests to aviation standards, but the test costs are very high. In addition, because coupons must be produced, coupon materials are expensive, and test times are long, it is desirable to simulate these tests on the computer. To this end, various test coupons were produced using the reinforcement and alignment angles of composite radomes integrated into aircraft. Glass-fiber-reinforced and quartz prepregs were used in the production of the coupons. The tests performed according to the American Society for Testing and Materials (ASTM) D5961 Procedure C standard were simulated on the computer. The analysis model was created in three dimensions in order to model the bolt-hole contact surface realistically and obtain the exact bearing strength value. The finite element analysis was carried out with ANSYS. Since a physical break cannot occur in analyses carried out in the virtual environment, a hypothetical break is realized by reducing the material properties. The material property reduction coefficient was set to 10%, which is stated in the literature to give the most realistic approach. This method, encompassing various failure theories, is called progressive failure analysis. Because the Hashin theory did not match our experimental results, the Puck progressive damage method was used in all coupon analyses. When the experimental and numerical results are compared, the initial damage and the resulting force-drop points, the maximum damage load values, and the bearing strength value are very close. Furthermore, low error rates and similar damage patterns were obtained in both the test and the simulation models. In addition, the effects of various parameters, such as pre-stress, the use of bushings, the ratio of the distance between the bolt-hole center and the plate edge to the hole diameter (E/D), the ratio of plate width to hole diameter (W/D), and hot-wet environmental conditions, on the bearing strength of the composite structure were investigated.
Keywords: puck, finite element, bolted joint, composite
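A toy sketch of the progressive-failure loop described above: stepping up the load, evaluating a failure criterion, and knocking down the stiffness of failed material to the stated 10% of its original value. The one-element "model" and the placeholder criterion stand in for the actual Puck evaluation performed inside the FE solver:

```python
# Toy progressive failure analysis: one "element", placeholder failure index.
E = 50e3            # laminate modulus [MPa] (assumed)
RETENTION = 0.10    # degraded property = 10% of original, as stated in the study
strength = 600.0    # allowable stress [MPa] (assumed)

def failure_index(stress):
    # Placeholder for the Puck criterion evaluated from the element stress state.
    return stress/strength

for load in range(50, 1001, 50):          # bearing load steps [MPa]
    stress = load                         # trivial stand-in for the FE solution
    if failure_index(stress) >= 1.0:
        E *= RETENTION                    # hypothetical break: degrade stiffness
        print(f"damage at load {load} MPa, modulus now {E:.0f} MPa")
        break
```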
Procedia PDF Downloads 102
250 Affects Associations Analysis in Emergency Situations
Authors: Joanna Grzybowska, Magdalena Igras, Mariusz Ziółko
Abstract:
Association rule learning is an approach for discovering interesting relationships in large databases. The analysis of relations that are invisible at first glance is a source of new knowledge which can subsequently be used for prediction. We used this data mining technique (an automatic and objective method) to learn about interesting associations between affects in a corpus of emergency phone calls, and we also attempted to match the revealed rules with their possible situational context. The corpus was collected and subjectively annotated by two researchers. Each of the 3306 recordings contains information on emotion: (1) type (sadness, weariness, anxiety, surprise, stress, anger, frustration, calm, relief, compassion, contentment, amusement, joy); (2) valence (negative, neutral, or positive); (3) intensity (low, typical, alternating, high). Additional information that gives clues to the speaker's emotional state was also annotated: speech rate (slow, normal, fast), characteristic vocabulary (filled pauses, repeated words) and conversation style (normal, chaotic). Exponentially many rules can be extracted from a set of items (an item being a single piece of previously annotated information). To generate the rules in the form of an implication X → Y (where X and Y are frequent k-itemsets), the Apriori algorithm was used, as it avoids performing needless computations. Then, two basic measures (support and confidence) and several additional symmetric and asymmetric objective measures (e.g. Laplace, conviction, interest factor, cosine, correlation coefficient) were calculated for each rule. Each applied interestingness measure revealed different rules; we selected some top rules for each measure. Owing to the specificity of the corpus (emergency situations), most of the strong rules contain only negative emotions, though there are strong rules including neutral or even positive emotions. Three examples of the strongest rules are: {sadness} → {anxiety}; {sadness, weariness, stress, frustration} → {anger}; {compassion} → {sadness}. Association rule learning revealed the strongest configurations of affects (as well as configurations of affects with affect-related information) in our emergency phone call corpus. The acquired knowledge can be used for prediction, to fill in the emotional profile of a new caller. Furthermore, analysis of a rule's possible context may give a clue to the situation a caller is in.
Keywords: data mining, emergency phone calls, emotional profiles, rules
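A minimal sketch of the rule mining step using the mlxtend library's Apriori implementation on a few toy annotated recordings; the transactions and thresholds are invented for illustration:

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

calls = [                                   # toy annotations, one list per recording
    ["sadness", "anxiety", "fast_speech"],
    ["sadness", "weariness", "stress", "frustration", "anger"],
    ["compassion", "sadness"],
    ["sadness", "anxiety", "chaotic_style"],
]
te = TransactionEncoder()
df = pd.DataFrame(te.fit(calls).transform(calls), columns=te.columns_)

frequent = apriori(df, min_support=0.5, use_colnames=True)   # frequent k-itemsets
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```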
Procedia PDF Downloads 408
249 Coagulation-flocculation Process with Metal Salts, Synthetic Polymers and Biopolymers for the Removal of Trace Metals (Cu, Pb, Ni, Zn) from Wastewater
Authors: Andrew Hargreaves, Peter Vale, Jonathan Whelan, Carlos Constantino, Gabriela Dotro, Pablo Campo
Abstract:
As a consequence of their potential to cause harm, there are strong regulatory drivers requiring metals to be removed as part of the wastewater treatment process. Bioavailability-based standards have recently been specified for copper (Cu), lead (Pb), nickel (Ni) and zinc (Zn) and are expected to reduce acceptable metal concentrations. In order to comply with these standards, wastewater treatment works may require new treatment types to enhance metal removal, and it is therefore important to examine potential treatment options. A substantial proportion of the Cu, Pb, Ni and Zn in effluent is adsorbed to and/or complexed with macromolecules (e.g. proteins, polysaccharides, amino sugars) present in the colloidal size fraction. Therefore, technologies such as coagulation-flocculation (CF) that are capable of removing colloidal particles have good potential to enhance metal removal from wastewater. The present study investigated the effectiveness of CF at removing trace metals from humus effluent using the following coagulants: ferric chloride (FeCl3), the synthetic polymer polyethyleneimine (PEI), and the biopolymers chitosan and Tanfloc. Effluent samples were collected from a trickling filter treatment works operating in the UK. Using jar tests, the influence of coagulant dosage and of the velocity and duration of the slow mixing stage was studied. Chitosan and PEI had a limited effect on the removal of trace metals (<35%). FeCl3 removed 48% Cu, 56% Pb and 41% Zn at the recommended dose of 0.10 mg/L. At the recommended dose of 0.25 mg/L, Tanfloc removed 77% Cu, 68% Pb, 18% Ni and 42% Zn. The dominant mechanism of particle removal by FeCl3 was enmeshment in the precipitates (i.e. sweep flocculation), whereas for Tanfloc, inter-particle bridging was the dominant removal mechanism. Overall, FeCl3 and Tanfloc were found to be the most effective at removing trace metals from wastewater.
Keywords: coagulation-flocculation, jar test, trace metals, wastewater
Procedia PDF Downloads 239
248 A Constructed Wetland as a Reliable Method for Grey Wastewater Treatment in Rwanda
Authors: Hussein Bizimana, Osman Sönmez
Abstract:
Constructed wetlands are currently the most widely recognized wastewater treatment option, especially in developing countries, where they have the potential to improve water quality and create valuable wildlife habitat, with relatively simple operation and maintenance requirements and low cost. The lack of grey wastewater treatment facilities at the Kigali Institute of Science and Technology in Rwanda causes pollution in the surrounding localities of Rugunga sector, where poor sanitation is already a problem. In order to treat the grey water produced at the Kigali Institute of Science and Technology, with its high BOD concentration, high nutrient concentration and high alkalinity, a pilot-scale horizontal subsurface flow constructed wetland was designed and can be operated at the Institute. The study was carried out in a sedimentation tank of 5.5 m × 1.42 m × 1.2 m deep and a horizontal subsurface constructed wetland of 4.5 m × 2.5 m × 1.42 m deep. A grey wastewater flow rate of 2.5 m³/d flowed through the vegetated wetland and sandy pilot plant. The filter medium consisted of 0.6 to 2 mm coarse sand with a hydraulic conductivity of 0.00003472 m/s, and cattails (Typha latifolia) were used as the plant species. The effluent flow rate of the plant is designed to be 1.5 m³/day and the retention time will be 24 hours. Removals of 72% to 79% of BOD, COD and TSS are estimated to be achieved, while nutrient (nitrogen and phosphate) removal is estimated to be in the range of 34% to 53%. Every effluent characteristic will meet the Rwanda Utility Regulatory Agency guidelines, primarily because the retention time allowed is sufficient for the reduction of contaminants in the raw wastewater. A treated water reuse system was developed whereby the water will be used again in the campus irrigation system.
Keywords: constructed wetlands, hydraulic conductivity, grey wastewater, cattails
Procedia PDF Downloads 608
247 Screening of Lactic Acid Bacteria Isolated from Traditional Fermented Products: Potential Probiotic Bacteria with Antimicrobial and Cytotoxic Activities
Authors: Genesis Julyus T. Agcaoili, Esperanza C. Cabrera
Abstract:
Thirty (30) isolates of lactic acid bacteria (LAB) from traditionally prepared fermented products, specifically fermented soy-bean paste, fermented mustard and a fermented rice-fish mixture, were studied for their in vitro antimicrobial and cytotoxic activities. Seventeen (17) isolates were identified as Lactobacillus plantarum, while 13 isolates were identified as Enterococcus spp. using 16S rDNA sequences. The disc diffusion method was used to determine the antibacterial activity of the LAB against Staphylococcus aureus (ATCC 25923) and Escherichia coli (ATCC 25922), while the modified agar overlay method was used to determine the antifungal activity of the LAB isolates against the yeast Candida albicans and the dermatophytes Microsporum gypseum, Trichophyton rubrum and Epidermophyton floccosum. The filter-sterilized LAB supernatants were evaluated for their cytotoxicity to mammalian colon cancer cell lines (HT-29 and HCT116) and normal human dermal fibroblasts (HDFn) using a resazurin assay (PrestoBlue™), with colchicine as the positive control. No antimicrobial activity was observed against the bacterial test organisms or the yeast Candida albicans. On the other hand, all of the tested LAB strains were fungicidal for all the test dermatophytes. Cytotoxicity index profiles of the supernatants of 15 randomly selected LAB isolates and the negative control (brain heart infusion broth) suggest that they are nontoxic to normal cells when compared to colchicine, whereas all LAB supernatants were found to be cytotoxic to the HT-29 and HCT116 colon cancer cell lines. The results provide strong support for a role of the lactic acid bacteria studied in antimicrobial treatment and anticancer therapy.
Keywords: antimicrobial, fermented products, fungicidal activity, lactic acid bacteria, probiotics
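A resazurin-based viability signal is typically reduced to a cytotoxicity index by normalising each treated well to the untreated control. A generic sketch of this computation (the formula is a common convention, not necessarily the authors' exact protocol):

```python
import numpy as np

def cytotoxicity_index(signal_treated, signal_control, signal_blank=0.0):
    """CI = 1 - normalized viability; ~0 means nontoxic, ~1 means fully cytotoxic."""
    viability = (np.asarray(signal_treated) - signal_blank) / (signal_control - signal_blank)
    return 1.0 - viability

# Hypothetical fluorescence readings (a.u.): control at 1000, two treated wells
print(cytotoxicity_index([950.0, 400.0], 1000.0))  # -> [0.05 0.6]
```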
Procedia PDF Downloads 237
246 Assimilating Multi-Mission Satellites Data into a Hydrological Model
Authors: Mehdi Khaki, Ehsan Forootan, Joseph Awange, Michael Kuhn
Abstract:
Terrestrial water storage, as a source of freshwater, plays an important role in human lives. Hydrological models offer important tools for simulating and predicting water storage at global and regional scales. However, their agreement with 'reality' is imperfect, mainly due to high uncertainty in the input data, limitations in accounting for all of the complex water-cycle processes, uncertainties in (unknown) empirical model parameters, and the absence of data with high spatial and temporal resolution. Data assimilation can mitigate this drawback by incorporating new sets of observations into models. In this effort, we use multi-mission satellite-derived remote sensing observations to improve the performance of the World-Wide Water Resources Assessment (W3RA) hydrological model for estimating terrestrial water storage. For this purpose, we assimilate total water storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) and surface soil moisture data from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) into W3RA. This is done to (i) improve model estimates of groundwater and soil moisture storage, and (ii) assess the impact of each satellite data set (GRACE and AMSR-E), and of their combination, on the final terrestrial water storage estimates. The data are assimilated into W3RA using the Ensemble Square-Root Filter (EnSRF) technique over the Mississippi Basin (United States) and the Murray-Darling Basin (Australia) between 2002 and 2013. To evaluate the results, independent ground-based groundwater and soil moisture measurements within each basin are used.
Keywords: data assimilation, GRACE, AMSR-E, hydrological model, EnSRF
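For readers unfamiliar with the EnSRF, a single serial-observation update (in the form of Whitaker and Hamill, 2002) is compact enough to state directly. The sketch below is a generic implementation for one scalar observation with a linear observation operator, not the authors' W3RA-specific code:

```python
import numpy as np

def ensrf_update(ensemble, obs, obs_var, H):
    """One serial EnSRF update: ensemble is (n_state, n_members), obs is a scalar,
    H is the (n_state,) linear observation operator."""
    n = ensemble.shape[1]
    x_mean = ensemble.mean(axis=1)
    X = ensemble - x_mean[:, None]            # ensemble perturbations
    Hx = H @ ensemble                         # observed ensemble
    HX = Hx - Hx.mean()
    PHt = X @ HX / (n - 1)                    # cross covariance P H^T
    HPHt = HX @ HX / (n - 1)                  # innovation variance (scalar)
    K = PHt / (HPHt + obs_var)                # Kalman gain
    alpha = 1.0 / (1.0 + np.sqrt(obs_var / (HPHt + obs_var)))  # square-root factor
    x_mean_new = x_mean + K * (obs - Hx.mean())   # update the ensemble mean
    X_new = X - alpha * np.outer(K, HX)           # deflate perturbations deterministically
    return x_mean_new[:, None] + X_new
```

The square-root factor alpha is what distinguishes the EnSRF from the stochastic EnKF: perturbations are reduced deterministically rather than by adding noise to the observations.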
Procedia PDF Downloads 289
245 Periurban Landscape as an Opportunity Field to Solve Ecological Urban Conflicts
Authors: Cristina Galiana Carballo, Ibon Doval Martínez
Abstract:
In Europe, urban boundaries often produce a contested limit between countryside and city. This peri-urban territory is typically characterized by very limited land uses and an abundance of open space. The scale and dynamics of peri-urbanization in recent decades have increased this land stock, with consequences for economic costs (maintenance, transport), ecological disturbance of the territory, and inhabitants' behaviour. In an increasingly urbanized world with a growing urban population, cities also face challenges such as climate change. In this context, near-future corrective trends, including circular economies for local food supply and decentralized waste management, become key strategies towards more sustainable urban models. These new solutions need to be planned and implemented with the potential conflicts with current land uses in mind. The city of Vitoria-Gasteiz (Basque Country, Spain) has tripled its land consumption per inhabitant in 10 years, resulting in a vast extension of low-density urban fabric confronting rural land and threatening agricultural uses, the landscape and urban sustainability. Urban planning allows land uses to be managed and optimally allocated on the basis of soil vocation and socio-ecosystem needs, while peri-urban space arises as an opportunity for developing uses that fit neither within the compact city nor in open agricultural land, such as medium-size agro-composting systems or biomass plants. A qualitative multi-criteria methodology has therefore been developed for the city of Vitoria-Gasteiz to assess the spatial definition of peri-urban land. Climate change and the circular economy were taken as the frameworks within which to determine future land and soil vocation and urban planning requirements, which were then translated into estimates of the required local food and renewable energy supply and of the implementation of alternative waste management systems. On this basis, an urban planning proposal was developed that overcomes the urban/non-urban dichotomy in Vitoria-Gasteiz. The proposal aims to enhance the rural system and improve urban sustainability performance through the normative recognition of an agricultural peri-urban belt.
Keywords: landscape ecology, land-use management, periurban, urban planning
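Such multi-criteria assessments are often operationalised as a weighted scoring of land parcels against the chosen criteria. The toy sketch below is illustrative only; the criteria, weights and scores are invented, and the study's actual methodology is qualitative:

```python
# Hypothetical criteria and weights for assessing peri-urban land suitability
WEIGHTS = {"soil_vocation": 0.40, "distance_to_city": 0.20,
           "ecological_value": 0.25, "infrastructure": 0.15}

def suitability(scores, weights=WEIGHTS):
    """Weighted-sum suitability in [0, 1]; each score is normalized per criterion."""
    return sum(weights[k] * scores[k] for k in weights)

parcel = {"soil_vocation": 0.9, "distance_to_city": 0.6,
          "ecological_value": 0.3, "infrastructure": 0.7}
print(round(suitability(parcel), 2))  # -> 0.66 on this invented example
```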
Procedia PDF Downloads 163
244 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards
Authors: Golnush Masghati-Amoli, Paul Chin
Abstract:
In recent years, with rapid increases in data availability and computing power, Machine Learning (ML) techniques have been adopted across a range of industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited by a particular challenge: numerous regulations require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes. This is because Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. To bridge this gap between the explainability and the performance of Machine Learning techniques, a Hybrid Model has been developed at Dun and Bradstreet that blends Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to incorporate domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns that scorecard approaches cannot. First, Machine Learning models are developed to identify engineered features, latent variables and feature interactions that demonstrate high information value in the prediction of customer risk. These features are then used to introduce the observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model matches the score distribution generated by a Machine Learning algorithm, which yields an estimate of the WoE for each bin. This capability helps to build powerful scorecards for sparse cases that cannot be handled with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and that of Machine Learning models. The results show that the Hybrid Model can improve the performance of traditional scorecards by carrying the non-linear relationships between explanatory and target variables over from Machine Learning models into the scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while remaining as transparent as traditional scorecards. It is therefore concluded that, with the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without the difficulties of explaining the models for regulatory purposes.
Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering
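The traditional WoE computation that the Hybrid Model departs from is standard: for each bin of a characteristic, WoE = ln(% of goods / % of bads), and the information value (IV) aggregates these across bins. A generic pandas sketch (column names are illustrative; the authors' distribution-matching refinement is not reproduced here):

```python
import numpy as np
import pandas as pd

def woe_and_iv(df, bin_col, target_col):
    """Classic per-bin Weight of Evidence and total Information Value.
    Convention: target == 1 marks a 'bad' account, 0 a 'good' one."""
    grouped = df.groupby(bin_col)[target_col].agg(total="count", bad="sum")
    good = grouped["total"] - grouped["bad"]
    pct_good = good / good.sum()
    pct_bad = grouped["bad"] / grouped["bad"].sum()
    woe = np.log(pct_good / pct_bad)
    iv = ((pct_good - pct_bad) * woe).sum()
    return woe, iv
```

In the Hybrid Model, the per-bin WoE would instead be chosen so that the resulting scores match the ML model's score distribution, which is what allows sensible estimates even in sparse bins.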
Procedia PDF Downloads 133
243 Simulation of Wet Scrubbers for Flue Gas Desulfurization
Authors: Anders Schou Simonsen, Kim Sorensen, Thomas Condra
Abstract:
Wet scrubbers are used for flue gas desulfurization by injecting water directly into the flue gas stream from a set of sprayers. The water droplets flow freely inside the scrubber and run down the scrubber walls as a thin wall film, reacting with the gas phase to remove SO₂. This complex multiphase phenomenon can be divided into three main contributions: the continuous gas phase, the liquid droplet phase, and the liquid wall film phase. This study proposes a complete model in which all three contributions are taken into account, resolved using OpenFOAM for the continuous gas phase and MATLAB for the liquid droplet and wall film phases. The 3D continuous gas phase is composed of five species: CO₂, H₂O, O₂, SO₂, and N₂, which are resolved along with momentum, energy, and turbulence. Source terms are present for four species, energy and momentum, and these affect the steady-state solution. The liquid droplet phase experiences breakup, collisions, dynamics, internal chemistry, evaporation and condensation, species mass transfer, energy transfer and wall film interactions; numerous sub-models have been implemented and coupled to realise these phenomena. The liquid wall film experiences impingement, acceleration, atomization, separation, internal chemistry, evaporation and condensation, species mass transfer, and energy transfer, all of which have likewise been resolved using sub-models. The continuous gas phase is coupled with the liquid phases through source terms, with the two software packages connected via a link structure. The complete CFD model has been verified against 16 experimental tests from an existing scrubber installation, where a pattern-search optimization algorithm was used to tune numerous model parameters to match the experimental results. The CFD model needed to be fast to evaluate in order to apply this optimization routine, since approximately 1000 simulations were required. The results show that the complex multiphase phenomena governing wet scrubbers can be resolved in a single model, and that the optimization routine was able to tune the model to accurately predict the performance of an existing installation. Furthermore, the study shows that a coupling between OpenFOAM and MATLAB is realizable, with the data and source-term exchange increasing the computational requirements by approximately 5%. This makes it possible to exploit the benefits of both software packages.
Keywords: desulfurization, discrete phase, scrubber, wall film
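Pattern search is a derivative-free optimization family well suited to tuning parameters of an expensive black-box simulation such as a CFD run. A minimal compass-search sketch (a textbook form, assumed rather than taken from the paper):

```python
import numpy as np

def pattern_search(objective, x0, step=0.5, shrink=0.5, tol=1e-4, max_evals=1000):
    """Compass/pattern search: poll +/- each coordinate; shrink the step on failure."""
    x = np.asarray(x0, dtype=float)
    fx = objective(x)
    evals = 1
    while step > tol and evals < max_evals:
        improved = False
        for i in range(x.size):
            for direction in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += direction * step
                f_trial = objective(trial)
                evals += 1
                if f_trial < fx:
                    x, fx = trial, f_trial
                    improved = True
                    break
        if not improved:
            step *= shrink          # no poll point improved: refine the mesh
    return x, fx

# Usage: objective would wrap one coupled OpenFOAM/MATLAB simulation and return
# the mismatch against the 16 experimental tests.
```

With a budget of roughly 1000 evaluations, each objective call being a full scrubber simulation, the need for a fast CFD model is evident.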
Procedia PDF Downloads 263
242 Concept of a Pseudo-Lower Bound Solution for Reinforced Concrete Slabs
Authors: M. De Filippo, J. S. Kuang
Abstract:
In the construction industry, reinforced concrete (RC) slabs are fundamental elements of buildings and bridges, and different methods are available for analysing their structural behaviour. In the early decades of the last century, the yield-line method was proposed to address this problem. Problems with simple geometry could easily be solved by traditional hand analyses based on plasticity theory. Nowadays, advanced finite element (FE) analyses have found their way into applications across many engineering fields owing to the wide range of geometries to which they can be applied. In such analyses, the choice between an elastic and a plastic constitutive model completely changes the approach. Elastic methods are popular because they are easily automated; however, they are limited since they do not consider any aspect of material behaviour beyond the yield limit, which is an essential aspect of RC structural performance. Non-linear plastic analyses, by contrast, give very reliable results but are computationally expensive, i.e. not well suited to routine engineering problems. In past years, many researchers have worked on filling this gap between easy-to-implement elastic methods and computationally demanding plastic analyses. This paper proposes a numerical procedure through which a pseudo-lower bound solution, not violating the yield criterion, is achieved. The advantages of moment redistribution are taken into account, so the increase in strength provided by plastic behaviour is captured. The lower bound solution is improved by detecting over-yielded moments, which are then redistributed among the remaining non-yielded elements. The proposed technique obeys Nielsen's yield criterion. The outcome is a simple, accurate and computationally inexpensive tool for predicting the lower-bound collapse load of RC slabs. Using this method, structural engineers can find the fracture patterns and the ultimate load-bearing capacity, with the collapse-triggering mechanism identified by detecting yield-lines. An application to the simple case of a square clamped slab is shown, and a good match was found with the exact values of the collapse load.
Keywords: computational mechanics, lower bound method, reinforced concrete slabs, yield-line
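For context on the benchmark, the classical yield-line (upper-bound) result for a uniformly loaded, isotropically reinforced square slab with sagging capacity m and hogging capacity m' along clamped edges is q_u = 24(m + m')/L² for the diagonal collapse pattern. A sketch with illustrative values (the yield-line figure is an upper bound, which a pseudo-lower-bound solution approaches from below):

```python
def yield_line_collapse_load(m_sag, m_hog, span):
    """Upper-bound collapse load q_u = 24 (m + m') / L^2 for a square slab
    (diagonal yield-line pattern); moments in kNm/m, span in m, q_u in kN/m^2."""
    return 24.0 * (m_sag + m_hog) / span**2

# Illustrative: m = m' = 20 kNm/m on a 5 m clamped square slab
print(yield_line_collapse_load(20.0, 20.0, 5.0))  # -> 38.4 kN/m^2
```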
Procedia PDF Downloads 178
241 Magneto-Transport of Single Molecular Transistor Using Anderson-Holstein-Caldeira-Leggett Model
Authors: Manasa Kalla, Narasimha Raju Chebrolu, Ashok Chatterjee
Abstract:
We have studied the quantum transport properties of a single molecular transistor in the presence of an external magnetic field using the Keldysh Green function technique. The Anderson-Holstein-Caldeira-Leggett model is used to describe the single molecular transistor, which consists of a molecular quantum dot (QD) coupled to two metallic leads and placed on a substrate that acts as a heat bath. The phonons are eliminated by the Lang-Firsov transformation, and the effective Hamiltonian is used to study the effect of an external magnetic field on the spectral density function, tunneling current, differential conductance and spin polarization. A peak in the spectral function corresponds to a possible excitation. In the absence of a magnetic field, the spin-up and spin-down states are degenerate; this degeneracy is lifted by the magnetic field, leading to a splitting of the central peak of the spectral function. The tunneling current decreases with increasing magnetic field. We observe that even the differential conductance peak of the zero-magnetic-field curve is split in the presence of the electron-phonon interaction. As the magnetic field is increased, each peak splits into two, and each peak indicates the existence of an energy level; thus the number of energy levels available for transport in the bias window increases with the magnetic field. In the presence of the electron-phonon interaction, the differential conductance is in general reduced and decreases faster with the magnetic field. As the magnetic field strength increases, the spin polarization of the current increases. Our results show that a strongly interacting QD coupled to metallic leads in the presence of an external magnetic field parallel to the plane of the QD acts as a spin filter at zero temperature.
Keywords: Anderson-Holstein model, Caldeira-Leggett model, spin-polarization, quantum dots
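The Zeeman splitting of the central spectral peak can be visualised with a toy two-Lorentzian model, with resonances at ε₀ ± gμ_B B/2. This sketch is purely illustrative (the broadening and parameters are assumptions) and does not reproduce the full Keldysh/Lang-Firsov calculation:

```python
import numpy as np

MU_B = 5.788e-5  # Bohr magneton in eV/T

def toy_spectral_function(omega, eps0, gamma, B, g=2.0):
    """Two Lorentzians of half-width gamma at eps0 +/- g*mu_B*B/2 (energies in eV).
    At B = 0 they merge into a single central peak."""
    delta = 0.5 * g * MU_B * B
    lorentz = lambda e: (gamma / np.pi) / ((omega - e) ** 2 + gamma ** 2)
    return 0.5 * (lorentz(eps0 - delta) + lorentz(eps0 + delta))

omega = np.linspace(-0.002, 0.002, 1001)
split = toy_spectral_function(omega, eps0=0.0, gamma=1e-4, B=10.0)  # visibly split
```

At B = 10 T the splitting g·μ_B·B ≈ 1.2 meV exceeds the assumed broadening, so the two peaks are resolved, mirroring the field-induced splitting described above.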
Procedia PDF Downloads 185