Search results for: Coulomb modified Glauber model

10918 Design and Implementation of Machine Learning Model for Short-Term Energy Forecasting in Smart Home Management System

Authors: R. Ramesh, K. K. Shivaraman

Abstract:

The main aim of this paper is to handle the energy requirement in an efficient manner by merging advanced digital communication and control technologies for smart grid applications. In order to reduce the user's home load during peak hours, the utility applies several incentives, such as real-time pricing, time-of-use tariffs, and demand response, to residential customers through smart meters. However, this approach is inconvenient in the sense that users need to respond manually to prices that vary in real time. To overcome this inconvenience, this paper proposes a convolutional neural network (CNN) combined with a k-means clustering machine learning model that is able to forecast energy requirements in the short term, i.e., for the hour of the day or the day of the week. Integrating the proposed technique with a home energy management system based on Bluetooth Low Energy provides predicted values to the user for scheduling appliances in advance. This paper describes in detail the CNN configuration and the k-means clustering algorithm for short-term energy forecasting.
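
As a rough illustration of the forecasting pipeline described above (not the authors' implementation), the sketch below clusters daily load profiles with k-means and trains a small 1-D CNN to predict the next hour of consumption from the previous 24 hourly smart-meter readings; the data, layer sizes, and cluster count are all placeholder assumptions.

```python
# Illustrative sketch (not the authors' code): cluster daily load profiles with
# k-means, then train a small 1-D CNN to forecast the next hour from the last
# 24 hourly readings. The data here are synthetic stand-ins for smart-meter logs.
import numpy as np
from sklearn.cluster import KMeans
import tensorflow as tf

rng = np.random.default_rng(0)
hourly_load = 1.0 + 0.5 * np.sin(np.arange(24 * 90) * 2 * np.pi / 24) \
              + 0.1 * rng.standard_normal(24 * 90)          # ~90 days of load, kW

# 1) k-means groups days with similar consumption patterns (e.g. weekday/weekend).
daily_profiles = hourly_load.reshape(-1, 24)
day_cluster = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(daily_profiles)

# 2) Sliding windows: previous 24 hours -> next hour.
window = 24
X = np.stack([hourly_load[i:i + window] for i in range(len(hourly_load) - window)])
y = hourly_load[window:]
X = X[..., np.newaxis]                                       # (samples, 24, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(16, kernel_size=3, activation="relu", input_shape=(window, 1)),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                                 # forecast next-hour load
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print("cluster of first day:", day_cluster[0])
print("next-hour forecast (kW):", float(model.predict(X[-1:], verbose=0)[0, 0]))
```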

Keywords: convolutional neural network, fuzzy logic, k-means clustering approach, smart home energy management

Procedia PDF Downloads 290
10917 Patterns of Malignant and Benign Breast Lesions in Hail Region: A Retrospective Study at King Khalid Hospital

Authors: Laila Seada, Ashraf Ibrahim, Amjad Al Shammari

Abstract:

Background and Objectives: Breast carcinoma is the most common cancer of females in the Hail region, accounting for 31% of all diagnosed cancer cases, followed by thyroid carcinoma (25%) and colorectal carcinoma (13%). Methods: In the present retrospective study, all cases of breast lesions received at the histopathology department of King Khalid Hospital, Hail, during the period from May 2011 to April 2016 were retrieved from department files. For all cases, a trucut biopsy, lumpectomy, or modified radical mastectomy was available for histopathologic diagnosis, while 105/140 (75%) also had preoperative fine needle aspirates (FNA). Results: 49 out of 140 (35%) breast lesions were carcinomas: 44/49 (89.75%) were invasive ductal carcinomas, 2/49 (4.1%) invasive lobular carcinomas, 1/49 (2.05%) intracystic low-grade papillary carcinoma, and 2/49 (4.1%) ductal carcinoma in situ (DCIS). The mean age for malignant cases was 45.06 (±10.58) years: 32.6% were below the age of 40, 30.6% below 50 years, 18.3% below 60, and 16.3% below 70 years. For the benign group, the mean age was 32.52 (±10.5) years. Benign lesions were, in order of frequency: 34 fibroadenomas, 14 fibrocystic disease, 12 chronic mastitis, five granulomatous mastitis, three intraductal papillomas, and three benign phyllodes tumors. Tubular adenoma, lipoma, skin nevus, pilomatrixoma, and breast reduction specimens constituted the remaining specimens. Conclusion: Breast lesions are common in our series, and invasive carcinoma accounts for more than one-third of the lumps, with a 63.2% incidence in pre-menopausal women below the age of 50 years. FNA, as a minimally invasive procedure, proved to be an effective tool in diagnosing both benign and malignant/suspicious breast lumps and should continue to be used as a first-line assessment of palpable breast masses.

Keywords: age incidence, breast carcinoma, fine needle aspiration, hail region

Procedia PDF Downloads 257
10916 White Light Emitting Carbon Dots: Surface Modification of Carbon Dots Using Auxochromes

Authors: Manasa Perikala, Asha Bhardwaj

Abstract:

Fluorescent carbon dots (CDs), a young member of the carbon nanomaterial family, have gained a lot of research attention across the globe due to their highly luminescent and stable emission properties, non-toxic behavior, and zero re-absorption loss. These dots have the potential to replace traditional semiconductor quantum dots in light-emitting devices (LEDs, fiber lasers) and other photonic devices (temperature sensors, UV detectors). However, one major drawback of carbon dots is that, to date, the actual mechanism of photoluminescence (PL) in carbon dots is still an open topic of discussion among researchers. PL mechanisms of CDs based on wide particle size distributions, the effect of surface groups, hybridization in carbon, and charge-transfer mechanisms have been proposed. Although these mechanisms explain the PL of CDs to an extent, no universally accepted mechanism that explains the complete PL behavior of these dots has been put forth. In our work, we report parameters affecting the size and surface of CDs, such as reaction time, synthesis temperature, and precursor concentration, and their effects on the optical properties of the carbon dots. The effect of auxochromes on the emission properties and the re-modification of the carbon surface using an external surface functionalizing agent are discussed in detail. All the explanations are supported by UV-visible absorption and emission spectroscopies, Fourier transform infrared spectroscopy, transmission electron microscopy, and X-ray diffraction. Once the origin of PL in CDs is understood, the parameters affecting PL centers can be modified to tailor the optical properties of these dots, which can enhance their applications in the fabrication of LEDs and other photonic devices.

Keywords: carbon dots, photoluminescence, size effects on emission in CDs, surface modification of carbon dots

Procedia PDF Downloads 119
10915 Design and Synthesis of Copper-Zeolite Composite for Antimicrobial Activity and Heavy Metal Removal From Waste Water

Authors: Feleke Terefe Fanta

Abstract:

Background: The presence of heavy metal and coliform bacteria contaminants in the aquatic system of the Akaki river basin, a sub-city of Addis Ababa, Ethiopia, has become a public concern as the human population increases and land development continues. Hence, it is the right time to design treatment technologies that can handle multiple pollutants. Results: In this study, we prepared synthetic zeolite and copper-doped zeolite composite adsorbents as a cost-effective and simple approach to simultaneously remove heavy metals and total coliforms from wastewater of the Akaki river. The synthesized copper–zeolite X composite was obtained by ion exchange of copper ions into the zeolite framework. An iodine test, XRD, FTIR, and an Autosorb iQ automated gas sorption analyzer were used to characterize the adsorbents. The mean concentrations of Cd, Cr, and Pb in the untreated sample were 0.795, 0.654, and 0.7025 mg/L, respectively. These concentrations decreased to Cd (0.005 mg/L), Cr (0.052 mg/L), and Pb (below detection limit, BDL) for the sample treated with bare zeolite X, while a further decrease in the concentrations of Cd (0.005 mg/L), Cr (BDL), and Pb (BDL) was observed for the sample treated with the copper–zeolite composite. Zeolite X and copper-modified zeolite X showed complete elimination of total coliforms after 90 and 50 min of contact time, respectively. Conclusion: The results obtained in this study showed the high antimicrobial disinfection and heavy metal removal efficiencies of the synthesized adsorbents. Furthermore, these sorbents significantly reduce physical parameters such as electrical conductivity, turbidity, BOD, and COD.

Keywords: wastewater, copper-doped zeolite X, adsorption, heavy metals, disinfection, Akaki river

Procedia PDF Downloads 45
10914 Thin-Film Nanocomposite Membrane with Single-Walled Carbon Nanotubes Axial Positioning in Support Layer for Desalination of Water

Authors: Ahmed A. Alghamdi

Abstract:

Single-walled carbon nanotubes (SWCNTs) are an outstanding material for applications in thermoelectric power generation, nanoelectronics, electrochemical energy storage, photovoltaics, and light emission. They are ultra-lightweight and possess electrical as well as thermal conductivity, flexibility, and mechanical strength. SWCNTs are applicable in water treatment, brine desalination, removal of heavy metal ions associated with pollutants, and oil-water separation. Carbon nanotubes (CNTs) are believed to tackle the trade-off between permeability, selectivity, and fouling in membrane filtration applications. Studying these CNT structures, as well as their interconnection in nanotechnology, assists in finding the precise position at which they should be placed for water desalination. Reverse osmosis (RO) has been used globally for desalination, resulting in purified water, and thin film composite (TFC) membranes are utilized in the RO process. When CNTs are used as a support layer for such a membrane, increasing the sheet thickness increases the salt rejection and decreases the water flux. Thus, axially aligned SWCNTs (AASWCNTs) are fabricated through a temperature-induced phase separation technique (TIPS), and their use with a modified procedure enhances both the salt rejection and the water flux at short reaction times. An evaluation was conducted and compared with prior works in the literature, which showed that the prepared TFC membrane achieved a better outcome.

Keywords: single-walled carbon nanotubes, thin film composite, axially aligned SWCNT, temperature-induced phase separation technique, reverse osmosis

Procedia PDF Downloads 39
10913 Correlation and Prediction of Biodiesel Density

Authors: Nieves M. C. Talavera-Prieto, Abel G. M. Ferreira, António T. G. Portugal, Rui J. Moreira, Jaime B. Santos

Abstract:

The knowledge of biodiesel density over large ranges of temperature and pressure is important for predicting the behavior of fuel injection and combustion systems in diesel engines, and for the optimization of such systems. In this study, cottonseed oil was transesterified into biodiesel and its density was measured at temperatures between 288 K and 358 K and pressures between 0.1 MPa and 30 MPa, with the expanded uncertainty estimated as ±1.6 kg·m⁻³. Experimental pressure-volume-temperature (pVT) cottonseed data were used along with literature data for 18 other biodiesels in order to build a database used to test the correlation of density with temperature and pressure using the Goharshadi–Morsali–Abbaspour equation of state (GMA EoS). To our knowledge, this is the first time that density measurements are presented for cottonseed biodiesel under such high pressures, and that the GMA EoS is used to model biodiesel density. The newly tested EoS allowed correlations within 0.2 kg·m⁻³, corresponding to average relative deviations within 0.02%. The database was then used to develop and test a new fully predictive model derived from the observed linear relation between density and degree of unsaturation (DU), which depends on the biodiesel FAME profile. The average density deviation of this method was only about 3 kg·m⁻³ within the temperature and pressure limits of application. These results represent an appreciable improvement in the context of density prediction at high pressure when compared with other equations of state.
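
A minimal sketch of the kind of linear density-DU relation the predictive model exploits, assuming made-up DU and density values at a fixed temperature and pressure (the paper's actual database and coefficients are not reproduced here):

```python
# Illustrative sketch: fit the reported linear relation between biodiesel density
# and degree of unsaturation (DU) at fixed temperature and pressure. The DU and
# density values below are placeholders, not the paper's database.
import numpy as np

DU = np.array([60.0, 85.0, 100.0, 120.0, 140.0])          # degree of unsaturation
rho = np.array([869.0, 873.5, 876.0, 879.5, 883.0])       # density at 313 K, 0.1 MPa [kg/m^3]

slope, intercept = np.polyfit(DU, rho, deg=1)              # rho ≈ intercept + slope*DU
pred = intercept + slope * DU
avg_abs_dev = np.mean(np.abs(pred - rho))                  # average deviation [kg/m^3]

print(f"rho(DU) ≈ {intercept:.1f} + {slope:.3f}*DU  (kg/m^3)")
print(f"average absolute deviation: {avg_abs_dev:.2f} kg/m^3")
```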

Keywords: biodiesel density, correlation, equation of state, prediction

Procedia PDF Downloads 595
10912 Representational Issues in Learning Solution Chemistry at Secondary School

Authors: Lam Pham, Peter Hubber, Russell Tytler

Abstract:

Students' conceptual understanding of chemistry concepts and phenomena involves the capability to coordinate across the three levels of Johnstone's triangle model. This triplet model is based on reasoning about chemical phenomena across the macro, sub-micro, and symbolic levels. In chemistry education, there is a need to further examine inquiry-based approaches that enhance students' conceptual learning and problem-solving skills. This research adopted a directed inquiry pedagogy, based on students constructing and coordinating representations, to investigate senior school students' capabilities to flexibly move across Johnstone's levels when learning dilution and molar concentration concepts. The participants comprised 50 grade 11 students, 20 grade 10 students, and 4 chemistry teachers selected from 4 secondary schools located in metropolitan Melbourne, Victoria. This research into classroom practices used an ethnographic methodology and involved teachers working collaboratively with the research team to develop representational activities and lesson sequences for the instruction of a unit on solution chemistry. The representational activities included challenges (Representational Challenges, RCs) that used 'representational tools' to assist students to move across Johnstone's three levels for dilution phenomena. In this report, the 'representational tool' called the 'cross and portion' model was developed and used in teaching and learning the molar concentration concept. Students' conceptual understanding and problem-solving skills when learning with this model are analysed through group case studies of year 10 and year 11 chemistry students. In learning dilution concepts, students in both group case studies actively conducted a practical experiment and used their own language and visualisation skills to represent dilution phenomena at the macroscopic level (RC1). At the sub-microscopic level, students generated and negotiated representations of the chemical interactions between solute and solvent underpinning the dilution process. At the symbolic level, students demonstrated their understanding of dilution concepts by drawing chemical structures and performing mathematical calculations. When learning molar concentration with the 'cross and portion' model (RC2), students coordinated across visual and symbolic representational forms and across Johnstone's levels to construct representations. The analysis showed that in RC1, year 10 students needed more 'scaffolding' when being inducted into representations, to make explicit the form and function of sub-microscopic representations. In RC2, year 11 students showed clarity in using visual representations (drawings) and linking them to mathematics to solve representational challenges about molar concentration. In contrast, year 10 students struggled to match up the two systems, the symbolic system of moles per litre ('cross and portion') and the visual representation (drawing). These conceptual problems do not lie in the students' mathematical calculation capability, but rather in their capability to align visual representations with the symbolic mathematical formulations. This research also found that students in both group case studies were able to coordinate representations when probed about the use of the 'cross and portion' model (in RC2) to demonstrate the molar concentration of the diluted solutions (in RC1). Students mostly succeeded in constructing 'cross and portion' models to represent the reduction of molar concentration along the concentration gradients.
In conclusion, this research demonstrated how the strategic introduction and coordination of chemical representations, across modes and across the macro, sub-micro, and symbolic levels, supported student reasoning and problem solving in chemistry.

Keywords: cross and portion, dilution, Johnstone's triangle, molar concentration, representations

Procedia PDF Downloads 127
10911 Analytical Technique for Definition of Internal Forces in Links of Robotic Systems and Mechanisms with Statically Indeterminate and Determinate Structures Taking into Account the Distributed Dynamical Loads and Concentrated Forces

Authors: Saltanat Zhilkibayeva, Muratulla Utenov, Nurzhan Utenov

Abstract:

Distributed inertia forces of a complex nature appear in the links of rod mechanisms during motion. Such loads raise a number of problems, such as destruction caused by large inertia forces; the elastic deformation of the mechanism can also be considerable and can put the mechanism out of action. In this work, a new analytical approach is proposed for the definition of internal forces in the links of robotic systems and mechanisms with statically indeterminate and determinate structures, taking into account distributed inertial and concentrated forces. The relations between the intensity of the distributed inertia forces and link weight and the geometrical, physical, and kinematic characteristics are determined. The distribution laws of the inertia forces and dead weight make it possible, at each position of the links, to deduce the laws of distribution of internal forces along the axis of the link, so that the loads are known at any point of the link. The approximation matrices of the forces of an element under the action of distributed inertia loads with trapezoidal intensity are defined. The obtained approximation matrices establish the dependence between the force vector in any cross-section of the element and the force vectors in the calculated cross-sections, and also allow the physical characteristics of the element, i.e., the compliance matrices of the discrete elements, to be defined. Hence, the compliance matrices of an element under the action of distributed inertial loads of trapezoidal shape along the axis of the element are determined. The internal loads of each continual link are unambiguously determined by the set of internal loads in its separate cross-sections and by the approximation matrices. Therefore, the task is reduced to the calculation of internal forces in a finite number of cross-sections of the elements, which leads to a discrete model for the elastic calculation of the links of rod mechanisms. The discrete models of the elements of mechanisms and robotic systems, and their discrete model as a whole, are constructed. The dynamic equilibrium equations for the discrete model of the elements are also derived in this work, as well as the equilibrium equations of the pin and rigid joints expressed through the required parameters of the internal forces. The obtained systems of dynamic equilibrium equations are sufficient for the definition of internal forces in the links of mechanisms whose structure is statically determinate. For the determination of the internal forces of statically indeterminate mechanisms, it is necessary to build a compliance matrix for the entire discrete model of the rod mechanism, which is achieved in this work. As a result, by means of the developed technique, programs were written in the MAPLE18 system, and animations were obtained of the motion of fourth-class mechanisms of statically determinate and statically indeterminate structures, with the intensity of the transverse and axial distributed inertial loads, the bending moments, and the transverse and axial forces plotted along the links as functions of the kinematic characteristics of the links.
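
To illustrate the idea of recovering internal-force distributions at a finite number of cross-sections from a trapezoidal distributed load, the sketch below computes the shear force and bending moment along a hypothetical straight link treated as a cantilever; the geometry, load intensities, and support conditions are assumptions for illustration only, not the paper's mechanism.

```python
# Illustrative sketch (hypothetical link, not the paper's mechanism): internal
# shear force and bending moment at discrete cross-sections of a straight link
# carrying a trapezoidal distributed transverse load q(x), as for an inertia
# load whose intensity varies linearly along the link. The link is modeled as a
# cantilever fixed at x = 0 with the free end at x = L.
import numpy as np

L = 0.8                      # link length [m]
q0, q1 = 120.0, 40.0         # load intensity at x = 0 and x = L [N/m]
x = np.linspace(0.0, L, 201)
q = q0 + (q1 - q0) * x / L   # trapezoidal intensity

def shear(xi):
    # V(xi) = integral of q(s) ds from xi to L
    mask = x >= xi
    return np.trapz(q[mask], x[mask])

def moment(xi):
    # M(xi) = integral of q(s) * (s - xi) ds from xi to L
    mask = x >= xi
    return np.trapz(q[mask] * (x[mask] - xi), x[mask])

sections = np.linspace(0.0, L, 5)          # calculated cross-sections
for xi in sections:
    print(f"x = {xi:4.2f} m   V = {shear(xi):7.2f} N   M = {moment(xi):7.2f} N·m")
```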

Keywords: distributed inertial forces, internal forces, statically determinate mechanisms, statically indeterminate mechanisms

Procedia PDF Downloads 207
10910 Implementation of the Canadian Emergency Department Triage and Acuity Scale (CTAS) in an Urgent Care Center in Saudi Arabia

Authors: Abdullah Arafat, Ali Al-Farhan, Amir Omair

Abstract:

Objectives: To review and assess the effectiveness of the modified five-level triage and acuity scale implemented in the Al-Yarmook Urgent Care Center (UCC), King Abdulaziz Residential City, Riyadh, Saudi Arabia. Method: The study design was an observational cross-sectional design. A data collection sheet was designed and distributed to triage nurses; data collection was done during the triage process and was directly observed by the co-investigator. The triage system was reviewed by measuring three time intervals as quality indicators: time before triage (TBT), time before being seen by a physician (TBP), and total length of stay (TLS), taking into consideration the timing of presentation and the level of triage. Results: During the study period, a total of 187 patients were included in our study; 118 visits were on weekdays and 68 visits on weekends. Overall, 173 patients (92.5%) were seen by the physician in a timely manner according to the triage guidelines, while 14 patients (7.5%) were not seen at the appropriate time. Overall, the mean time before being seen by the triage nurse (TBT) was 5.36 minutes, the mean time before being seen by a physician (TBP) was 22.6 minutes, and the mean length of stay (TLS) was 59 minutes. The data did not show a significant increase in TBT, TBP, the number of patients not seen at the proper time, the referral rate, or the admission rate during weekends. Conclusion: The CTAS is adaptable to countries beyond Canada and worked properly. The CTAS triage system applied in the Al-Yarmook UCC is considered effective and well applied. Overall, urgent cases were seen by a physician in a timely manner according to the triage system, and there was no delay in the management of urgent cases.

Keywords: CTAS, emergency, Saudi Arabia, triage, urgent care

Procedia PDF Downloads 310
10909 The Projections of Urban Climate Change Using Conformal Cubic Atmospheric Model in Bali, Indonesia

Authors: Laras Tursilowati, Bambang Siswanto

Abstract:

Urban climate change has short- and long-term implications for decision-makers in urban development. The problem for this metropolitan region, important in terms of population and economic value, is that there is very little usable information on climate change. Research on urban climate change has been carried out in Bali, Indonesia, using the Conformal Cubic Atmospheric Model (CCAM) run with Representative Concentration Pathway (RCP) 4.5. The historical data are the average over 1975 to 2005, the RCP4.5 climate projections are the average over 2006 to 2099, and the anomaly (urban climate change) is RCP4.5 minus history. The results show historical temperatures between 22.5 and 27.5 °C and RCP4.5 temperatures between 25.5 and 29.5 °C. Temperature anomalies can be seen over most of northern Bali, which warmed by about 1.6 to 2.9 °C. There is a tendency towards reduced humidity (drier conditions) in most parts of Bali, especially the northern part, while a small portion in the south shows increased moisture (wetter conditions). The comfort index of the Bali region in the historical period is still relatively comfortable (20-26 °C), but under RCP4.5 there is no comfortable area, with the index exceeding 26 °C (hot and dry). This research is expected to be useful in helping the government make good urban plans.
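
A minimal sketch of the anomaly and comfort-index computation described above, using a tiny synthetic temperature grid in place of actual CCAM output:

```python
# Illustrative sketch (synthetic numbers, not CCAM output): compute the urban
# climate-change anomaly as RCP4.5 minus history on a small temperature grid,
# and classify each cell with the simple comfort threshold used in the abstract
# (comfortable for 20-26 °C, uncomfortable above 26 °C).
import numpy as np

history = np.array([[23.0, 24.5], [25.0, 26.5]])    # 1975-2005 mean temperature [°C]
rcp45   = np.array([[25.5, 26.8], [27.2, 29.0]])    # 2006-2099 mean temperature [°C]

anomaly = rcp45 - history                            # urban climate change [°C]
comfortable = (rcp45 >= 20.0) & (rcp45 <= 26.0)      # comfort-index threshold

print("anomaly (°C):\n", anomaly)
print("comfortable area fraction under RCP4.5:", comfortable.mean())
```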

Keywords: CCAM, comfort index, IPCC AR5, temperature, urban climate change

Procedia PDF Downloads 128
10908 Cessna Citation X Business Aircraft Stability Analysis Using Linear Fractional Representation LFRs Model

Authors: Yamina Boughari, Ruxandra Mihaela Botez, Florian Theel, Georges Ghazi

Abstract:

Clearance of the flight control laws of a civil aircraft is a long and expensive process in the aerospace industry. Thousands of flight combinations in terms of speeds, altitudes, gross weights, centers of gravity, and angles of attack have to be investigated and proved to be safe. Nonetheless, with this method a worst-case flight condition can easily be missed, and missing it would lead to a critical situation. Indeed, it would be impossible to analyze a model exhaustively because of the infinite number of cases contained within its flight envelope, which would require more time and therefore more design cost. Therefore, in industry, the technique of meshing the flight envelope is commonly used: for each point of the flight envelope, simulation of the associated model determines whether or not the specifications are satisfied. In order to perform fast, comprehensive, and effective analysis, models with varying parameters were developed by incorporating variations, or uncertainties, into the nominal models; these are known as Linear Fractional Representation (LFR) models, and they are able to describe the aircraft dynamics by taking uncertainties over the flight envelope into account. In this paper, the LFR models are developed using speeds and altitudes as the varying parameters; they were built from several flight conditions expressed in terms of speeds and altitudes. The use of such a method has gained great interest among aeronautical companies, which see a promising future in this kind of modeling, particularly in the design and certification of control laws. In this research paper, we focus on the open-loop stability analysis of the Cessna Citation X. The data are provided by a Level D Research Aircraft Flight Simulator, which corresponds to the highest level of flight dynamics certification; this simulator was developed by CAE Inc., and its development was based on the research requirements of the LARCASE laboratory. The acquired data were used to develop a linear model of the airplane in its longitudinal and lateral motions and were further used to create the LFR models for 12 XCG/weight conditions, and thus for the whole flight envelope, using a friendly graphical user interface developed during this study. The LFR models are then analyzed using an interval analysis method based on a Lyapunov function, as well as the 'stability and robustness analysis' toolbox. The results are presented in the form of graphs, so they offer good readability and are easily exploitable. The weakness of this method lies in a relatively long calculation time, about four hours for the entire flight envelope.
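
The sketch below mimics a flight-envelope mesh stability check: it evaluates the eigenvalues of a hypothetical longitudinal state matrix over a grid of speeds and altitudes and flags each point as stable or not. The matrix entries are invented for illustration and are not the Cessna Citation X model or the LFR/Lyapunov machinery used in the paper.

```python
# Illustrative sketch (made-up matrices, not the Cessna Citation X model): check
# open-loop stability over a grid of flight conditions by testing that every
# eigenvalue of the state matrix A(speed, altitude) has a negative real part,
# mimicking a flight-envelope mesh analysis.
import numpy as np

def state_matrix(speed, altitude):
    # Hypothetical short-period-like A matrix whose entries vary with the
    # flight condition; a real study would use identified aircraft models.
    q_bar = 0.5 * 1.225 * np.exp(-altitude / 8500.0) * speed**2   # dynamic pressure
    return np.array([[-0.8e-3 * q_bar,  1.0],
                     [-2.5e-3 * q_bar, -1.2e-3 * q_bar]])

speeds = np.linspace(100.0, 260.0, 5)        # m/s
altitudes = np.linspace(1500.0, 12000.0, 5)  # m

for v in speeds:
    for h in altitudes:
        eigvals = np.linalg.eigvals(state_matrix(v, h))
        stable = np.all(eigvals.real < 0.0)
        print(f"V={v:5.0f} m/s  h={h:6.0f} m  stable={stable}")
```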

Keywords: flight control clearance, LFR, stability analysis, robustness analysis

Procedia PDF Downloads 339
10907 Design of Black-Seed Pulp biomass-Derived New Bio-Sorbent by Combining Methods of Mineral Acids and High-Temperature for Arsenic Removal

Authors: Mozhgan Mohammadi, Arezoo Ghadi

Abstract:

Arsenic is known as a potential threat to the environment. Therefore, the aim of this research is to assess the arsenic removal efficiency from an aqueous solution with a new biosorbent composed of black seed pulp (BSP). To treat the BSP, a combination of two methods (i.e., treatment with mineral acids and exposure to high temperature) was used, and the designed biosorbent is called BSP-activated/carbonized. BSP-activated and BSP-carbonized sorbents were also prepared, using HCl and a 400 °C treatment respectively, to compare the results of the three methods. Adsorption parameters such as pH, initial ion concentration, biosorbent dosage, contact time, and temperature were then assessed. It was found that the combination method provided a higher adsorption capacity, with up to ~99% arsenic removal observed for BSP-activated/carbonized at a pH of 7.0 and 40 °C. The adsorption capacities for BSP-carbonized and BSP-activated were 87.92% (pH 7, 60 °C) and 78.50% (pH 6, 90 °C), respectively. Moreover, the adsorption kinetics data indicated the best fit with the pseudo-second-order model. The maximum biosorption capacity according to the Langmuir isotherm model was also recorded for BSP-activated/carbonized (53.47 mg/g). It is notable that arsenic adsorption on the studied biosorbents is spontaneous and takes place through chemisorption, along with the endothermic nature of the biosorption process and a reduction of randomness in the solid-liquid phase.
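
As an illustration of the isotherm analysis mentioned above, the sketch below fits a Langmuir isotherm to hypothetical arsenic equilibrium data and reports the fitted maximum biosorption capacity; the numbers are placeholders, not the paper's measurements.

```python
# Illustrative sketch (hypothetical equilibrium data, not the paper's
# measurements): fit the Langmuir isotherm q_e = q_max*K*C_e/(1 + K*C_e) to
# arsenic adsorption data and report the maximum biosorption capacity q_max.
import numpy as np
from scipy.optimize import curve_fit

C_e = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])       # equilibrium conc. [mg/L]
q_e = np.array([12.0, 21.0, 32.0, 43.0, 49.0, 52.0])   # uptake [mg/g] (placeholder)

def langmuir(C, q_max, K):
    return q_max * K * C / (1.0 + K * C)

(q_max, K), _ = curve_fit(langmuir, C_e, q_e, p0=(50.0, 0.5))
print(f"Langmuir q_max = {q_max:.2f} mg/g, K = {K:.3f} L/mg")
```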

Keywords: black seed pulp, bio-sorbents, treatment of sorbents, adsorption isotherms

Procedia PDF Downloads 80
10906 Structural Equation Modeling Approach: Modeling the Impact of Social Marketing Programs on Combating Female Genital Mutilation in the Sudanese Society

Authors: Nada Abdelsadig Moahamed Saied

Abstract:

Female Genital Mutilation (FGM) and other similar traditional cultural practices pose a significant problem for Sudanese society. Such practices are severe and seriously detrimental to people's health, since they are based on false social perceptions. To address these problems, numerous institutions and organizations have been compelled to act rapidly. Female circumcision, or FGM, is one of the riskiest practices; it refers to the excision of the genitalia. Any procedure involving the total or partial removal of the external female genitalia for non-medical reasons falls under this category. The consequences of FGM can vary depending on the kind and degree of the operation, and can be categorized as short-term, mid-term, or long-term issues. The immediate effects include infections (including the human immunodeficiency virus), bleeding, discomfort, and difficulty urinating. FGM is defined by the World Health Organization (WHO) as practices that purposefully damage or modify female genital organs for non-medical purposes. It often takes place between the ages of one and fifteen. The girl's right to decide on important choices affecting her sexual and reproductive health is violated, because the act is usually performed without her consent and frequently against her will. UNICEF, the United Nations International Children's Emergency Fund, aggressively combats the issue of FGM in Sudan, and numerous programs have been started by NGOs to stop the practice. To our knowledge, no scientific study has been conducted to evaluate the effects of such social marketing techniques on modeling and understanding society's feelings surrounding FGM. This study proposes the development of a structural equation model aiming to determine the impact of awareness programs on people's intentions to adopt the behavior of abandoning FGM, based on theoretical models of behavior change. The model incorporates all the relevant factors that contribute to FGM and possible strategic actions to tackle this problem. The theoretical backdrop for FGM is presented in the next section, which also explains the practice's history, justifications, and potential remedies. The methodology section that follows describes the structural equation model. The proposed model, which compiles all the pertinent elements into a single image, is presented in the fourth part. Finally, conclusions are drawn, and suggestions for further research are made.

Keywords: social marketing, policy-making, behavioral change, female genital mutilation, culture

Procedia PDF Downloads 63
10905 CFD Analysis of Flow Regimes of Non-Newtonian Liquids in Chemical Reactor

Authors: Nenashev Yaroslav, Russkin Oleg

Abstract:

The mixing process is one of the most important and critical stages in many industrial sectors, such as chemistry, pharmaceuticals, and the food industry. When designing equipment with mixing impellers, technology developers often encounter working environments with complex physical properties and rheology. In such cases, the use of computational fluid dynamics tools is an excellent solution to mitigate risks and ensure the stable operation of the equipment. The research focuses on one of the designed reactors with mixing impellers intended for polymer synthesis. The study describes an approach to modeling reactors of similar configurations, taking into account the complex properties of the mixed liquids using the computational fluid dynamics (CFD) method. To achieve this goal, a complex 3D model was created, accurately replicating the functionality of chemical equipment. The model allows for the assessment of the hydrodynamic behavior of the reaction mixture inside the reactor, consideration of heat release due to the reaction, and the heat exchange between the reaction mixture and the cooling medium. The results indicate that the choice of the type and size of the mixing device significantly affects the efficiency of the mixing process inside the chemical reactor.

Keywords: CFD, mixing, blending, chemical reactor, non-Newtonian liquids, polymers

Procedia PDF Downloads 8
10904 Educational Data Mining: The Case of the Department of Mathematics and Computing in the Period 2009-2018

Authors: Mário Ernesto Sitoe, Orlando Zacarias

Abstract:

University education is influenced by several factors, ranging from the adoption of strategies to strengthen the whole process to the improvement of the academic performance of the students themselves. This work uses data mining techniques to develop a predictive model to identify students with a tendency towards evasion and retention. To this end, a database of real students' data from the Department of University Admission (DAU) and the Department of Mathematics and Informatics (DMI) was used. The data comprised 388 undergraduate students admitted in the years 2009 to 2014. The Weka tool was used for model building with three different techniques, namely K-nearest neighbors, random forest, and logistic regression. To allow for training on multiple train-test splits, a cross-validation approach was employed with a varying number of folds. To reduce bias and variance and improve the performance of the models, the ensemble methods of bagging and stacking were used. After comparing the results obtained by the three classifiers, logistic regression using bagging with seven folds obtained the best performance, showing results above 90% in all evaluated metrics: accuracy, true positive rate, and precision. Retention is the most common tendency.
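
A minimal scikit-learn sketch of the best-performing setup described above (logistic regression with bagging, evaluated by cross-validation), using random placeholder features rather than the DAU/DMI student records:

```python
# Illustrative sketch (random placeholder features, not the DAU/DMI records):
# logistic regression wrapped in bagging, evaluated with k-fold cross-validation,
# mirroring the Weka setup described above in scikit-learn.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(388, 12))                 # 388 students, 12 admission/grade features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=388) > 0).astype(int)  # 1 = retention

model = BaggingClassifier(LogisticRegression(max_iter=1000),
                          n_estimators=10, random_state=0)
scores = cross_val_score(model, X, y, cv=7, scoring="accuracy")   # seven folds
print("accuracy per fold:", np.round(scores, 3))
print("mean accuracy:", scores.mean())
```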

Keywords: evasion and retention, cross-validation, bagging, stacking

Procedia PDF Downloads 67
10903 Analysis of Wire Coating for Heat Transfer Flow of a Viscoelastic PTT Fluid with Slip Boundary Conditions

Authors: Rehan Ali Shah, A. M. Siddiqui, T. Haroon

Abstract:

The slip boundary value problem in wire coating analysis with heat transfer is examined. The fluid is assumed to be a viscoelastic PTT (Phan-Thien-Tanner) fluid. The rheological constitutive equation of the PTT fluid model simulates various polymer melts; therefore, the current results are valuable in a number of realistic situations. The effects of the slip parameter γ as well as of εDec² (the viscoelastic index) on the axial velocity, shear stress, normal stress, average velocity, volume flux, thickness of the coated wire, force on the total wire, and temperature distribution profiles have been investigated. A new direction is explored by analyzing the flow with the slip parameter. The slippage at the boundaries plays an important role in the thickness of the coated wire. It is noted that as the slip parameter increases, the flow rate and the thickness of the coated wire increase, while the temperature distribution decreases. The results reduce to the no-slip case when the slip parameter vanishes. Furthermore, the results for the Maxwell and viscous models can be obtained by setting ε and λ equal to zero, respectively.

Keywords: wire coating, straight annular die, PTT fluid, heat transfer, slip boundary conditions

Procedia PDF Downloads 346
10902 A General Framework for Knowledge Discovery Using High Performance Machine Learning Algorithms

Authors: S. Nandagopalan, N. Pradeep

Abstract:

The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high-performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and the user level. Techniques such as an active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, a Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks, such as clustering and classification, with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Corel image database, and the results show that their performance is better than the results already reported.
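
As a small illustration of the classification layer, the sketch below applies a Gaussian naive Bayes classifier to synthetic feature vectors; it stands in for the Bayesian classification step and does not use the actual echocardiographic feature database.

```python
# Illustrative sketch (synthetic feature vectors, not echocardiographic data):
# Bayesian classification of stored feature vectors with Gaussian naive Bayes,
# standing in for the framework's classification layer.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
features = np.vstack([rng.normal(0.0, 1.0, size=(100, 8)),
                      rng.normal(1.5, 1.0, size=(100, 8))])   # two image classes
labels = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=1)
clf = GaussianNB().fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```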

Keywords: active contour, bayesian, echocardiographic image, feature vector

Procedia PDF Downloads 406
10901 Training AI to Be Empathetic and Determining the Psychotype of a Person During a Conversation with a Chatbot

Authors: Aliya Grig, Konstantin Sokolov, Igor Shatalin

Abstract:

The report describes the methodology for collecting data and building an ML model for determining the personality psychotype, using profiling and personality-trait methods based on several short messages from a user communicating on an arbitrary topic with a chitchat bot. In the course of the experiments, the minimum amount of text needed to confidently determine aspects of personality was identified. Model accuracy: 85%. The users' language of communication is English. The goal is AI for personalized communication with a user based on their mood, personality, and current emotional state. The features investigated during the research are: personalized communication; providing empathy; adaptation to a user; and predictive analytics. In the report, we describe the processes that capture both structured and unstructured data pertaining to a user in large quantities and diverse forms. This data is then processed through ML tools to construct a knowledge graph and draw inferences about the users of text messages in a comprehensive manner. Specifically, the system analyzes users' behavioral patterns and predicts future scenarios based on this analysis. As a result of the experiments, we provide directions for further research on training AI models to be empathetic and on creating personalized communication for the user.
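
A toy sketch of the described text-to-psychotype idea, using a TF-IDF plus logistic-regression pipeline on invented messages and labels; the report's actual model, features, and psychotype taxonomy are not reproduced here.

```python
# Illustrative sketch (toy messages and labels, not the report's data): a
# TF-IDF + logistic-regression pipeline that assigns a coarse psychotype label
# to short chat messages, standing in for the described ML model.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

messages = ["I love meeting new people at parties",
            "I prefer quiet evenings with a book",
            "Let's organize a big group trip!",
            "I need some time alone to recharge"]
psychotype = ["extravert", "introvert", "extravert", "introvert"]   # placeholder labels

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(messages, psychotype)

print(clf.predict(["Crowds give me energy"]))          # predicted psychotype
print(clf.predict_proba(["Crowds give me energy"]))    # confidence per class
```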

Keywords: AI, empathetic, chatbot, AI models

Procedia PDF Downloads 74
10900 Application of Adaptive Neuro Fuzzy Inference Systems Technique for Modeling of Postweld Heat Treatment Process of Pressure Vessel Steel ASTM A516 Grade 70

Authors: Omar Al Denali, Abdelaziz Badi

Abstract:

ASTM A516 Grade 70 steel is a suitable material for the fabrication of boiler pressure vessels working in moderate and lower temperature services, and it has good weldability and excellent notch toughness. Post-weld heat treatment (PWHT), or stress-relieving heat treatment, has a significant effect in avoiding the martensite transformation and the resulting high hardness, which can lead to cracking in the heat-affected zone (HAZ). An adaptive neuro-fuzzy inference system (ANFIS) was implemented to predict the material tensile strength in post-weld heat treatment (PWHT) experiments. The ANFIS models produced excellent predictions, and the comparison was carried out based on the mean absolute percentage error between the predicted values and the experimental values. The ANFIS model gave a mean absolute percentage error of 0.556%, which confirms the high accuracy of the model.
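
For reference, the reported evaluation criterion can be computed as below; the tensile-strength values are placeholders, not the PWHT experimental data.

```python
# Illustrative sketch (placeholder values, not the PWHT experiments): computing
# the mean absolute percentage error (MAPE) between predicted and measured
# tensile strengths, the criterion reported above.
import numpy as np

measured  = np.array([520.0, 535.0, 548.0, 560.0])   # tensile strength [MPa]
predicted = np.array([522.5, 531.0, 550.0, 557.0])   # ANFIS-style predictions [MPa]

mape = 100.0 * np.mean(np.abs((measured - predicted) / measured))
print(f"MAPE = {mape:.3f} %")
```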

Keywords: prediction, post-weld heat treatment, adaptive neuro-fuzzy inference system, mean absolute percentage error

Procedia PDF Downloads 138
10899 Positive Psychology and the Social Emotional Ability Instrument (SEAI)

Authors: Victor William Harris

Abstract:

This research is a validation study of the Social Emotional Ability Inventory (SEAI), a multi-dimensional self-report instrument informed by positive psychology, emotional intelligence, social intelligence, and sociocultural learning theory. Designed for use in tandem with the Social Emotional Development (SEAD) theoretical model, the SEAI provides diagnostic-level guidance for professionals and individuals interested in investigating, identifying, and understanding social-emotional strengths, as well as remediating specific social competency deficiencies. The SEAI was shown to be psychometrically sound, exhibited strong internal reliability, and supported the a priori hypotheses of the SEAD. Additionally, confirmatory factor analysis provided evidence of goodness of fit and of convergent and divergent validity, and supported a theoretical model that reflected SEAD expectations. The SEAI and SEAD hold potentially far-reaching and important practical implications for theoretical guidance and diagnostic-level measurement of social-emotional competency across a wide range of domains. Strategies that researchers, practitioners, educators, and individuals might use to deploy the SEAI in order to improve quality-of-life outcomes are discussed.

Keywords: emotion, emotional ability, positive psychology-social emotional ability, social emotional ability, social emotional ability instrument

Procedia PDF Downloads 225
10898 A Model of Human Security: A Comparison of Vulnerabilities and Timespace

Authors: Anders Troedsson

Abstract:

For us humans, risks are intimately linked to human vulnerabilities: where there is vulnerability, there is potentially insecurity, and risk. Reducing vulnerability through compensatory measures means increasing security and decreasing risk. The paper suggests that a meaningful way to approach the study of risks (including threats, assaults, crises, etc.) is to understand the vulnerabilities these external phenomena evoke in humans. As is argued, the basis of risk evaluation, as well as of responses, is the more or less subjective perception by the individual person, or a group of persons, exposed to the external event or phenomenon in question. This will be determined primarily by the vulnerability or vulnerabilities that the external factor is perceived to evoke. In this way, risk perception is primarily an inward dynamic rather than an outward one. Therefore, a route towards an understanding of the perception of risks is a closer scrutiny of the vulnerabilities which they can evoke, thereby approaching an understanding of what in the paper is called the essence of risk (including threat, assault, etc.), or that which a certain perceived risk means to an individual or group of individuals. As a necessary basis for gauging the wide spectrum of potential risks and their meaning, the paper proposes a model of human vulnerabilities, drawing, inter alia, from a long tradition of needs theory. In order to account for the subjectivity factor, which mediates between the innate vulnerabilities on the one hand and the event or phenomenon out there on the other, an ensuing ontological discussion about the timespace characteristics of risk/threat/assault as perceived by humans leads to the positing of two dimensions. These two dimensions are applied to the vulnerabilities, resulting in a modelling effort featuring four realms of vulnerabilities which are related to each other and together represent a dynamic whole. In approaching the problem of risk perception, the paper thus defines the relevant realms of vulnerabilities, depicting them as a dynamic whole. With reference to a substantial body of literature and a growing international policy trend since the 1990s, this model is put in the language of human security - a concept relevant not only for international security studies and policy, but also for other academic disciplines and spheres of human endeavor.

Keywords: human security, timespace, vulnerabilities, risk perception

Procedia PDF Downloads 318
10897 Compromising Relevance for Elegance: A Danger of Dominant Growth Models for Backward Economies

Authors: Givi Kupatadze

Abstract:

Backward economies face the challenge of achieving a sustainably high economic growth rate, and dominant growth models represent a roadmap for framing economic development strategy. This paper examines the relevance of the dominant growth models for backward economies. The Cobb-Douglas production function, the Harrod-Domar model of economic growth, the Solow growth model, and the general formula of gross domestic product are examined to undertake a comprehensive study of the dominant growth models. A deductive research method makes it possible to uncover major weaknesses of the dominant growth models and to arrive at practical implications for economic development strategy. The key finding of the paper shows, contrary to what used to be taught in economics textbooks, that the constant-returns-to-scale property of the dominant growth models is a mere coincidence, and that its generalization over space and time can be regarded as one of the most unfortunate mistakes in the whole field of political economy. The major suggestion of the paper for backward economies is that understanding and considering a taxonomy of economic activities based on increasing and diminishing returns to scale represents a cornerstone of a successful economic development strategy.
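
The returns-to-scale point can be checked directly on the Cobb-Douglas form F(K, L) = A·K^α·L^β: scaling both inputs by λ scales output by λ^(α+β), so constant returns hold only when α + β = 1. A small sketch:

```python
# Illustrative sketch: checking returns to scale for a Cobb-Douglas production
# function F(K, L) = A * K**alpha * L**beta. Scaling both inputs by lam
# multiplies output by lam**(alpha + beta), so returns to scale are constant
# only in the special case alpha + beta = 1.
def cobb_douglas(K, L, A=1.0, alpha=0.3, beta=0.7):
    return A * K**alpha * L**beta

K, L, lam = 100.0, 200.0, 2.0
for alpha, beta in [(0.3, 0.7), (0.4, 0.8), (0.3, 0.5)]:
    base = cobb_douglas(K, L, alpha=alpha, beta=beta)
    scaled = cobb_douglas(lam * K, lam * L, alpha=alpha, beta=beta)
    ratio = scaled / base                      # equals lam**(alpha + beta)
    kind = "constant" if abs(ratio - lam) < 1e-9 else ("increasing" if ratio > lam else "diminishing")
    print(f"alpha+beta = {alpha + beta:.1f}: output scales by {ratio:.3f} -> {kind} returns")
```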

Keywords: backward economies, constant returns to scale, dominant growth models, taxonomy of economic activities

Procedia PDF Downloads 357
10896 A Collaborative Learning Model in Engineering Science Based on a Cyber-Physical Production Line

Authors: Yosr Ghozzi

Abstract:

The Cyber-Physical Systems terminology has been well received by the industrial community and specifically appropriated in educational settings. Indeed, our latest educational activities are based on the development of experimental platforms on an industrial scale. In fact, we built a collaborative learning model following an international market study that led us to place ourselves at the heart of this technology. To align with these findings, a competency-based approach study was conducted, and the program content was revised to reflect the project-based approach. Thus, this article deals with the development of educational devices according to the generated curriculum and specific educational activities, while respecting the repository of skills adopted for educational cyber-physical production systems and the laboratories that are compliant with and adapted to them. The implementation of these platforms was systematically carried out in the school's workshop spaces. The objective has been twofold, covering both research and teaching for the students in mechatronics and logistics of the electromechanical department. We act as trainers and industrial experts to involve students in the implementation of possible extension systems around multidisciplinary projects and to reconnect with industrial projects for better professional integration.

Keywords: education 4.0, competency-based learning, teaching factory, project-based learning, cyber-physical systems, industry 4.0

Procedia PDF Downloads 82
10895 Investigation of the Physical Computing in Computational Thinking Practices, Computer Programming Concepts and Self-Efficacy for Crosscutting Ideas in STEM Content Environments

Authors: Sarantos Psycharis

Abstract:

Physical Computing, as an instructional model, is applied in the framework of Engineering Pedagogy to teach "transversal/cross-cutting ideas" in a STEM content approach. LabVIEW and Arduino were used in order to connect the physical world with real data in the framework of the so-called Computational Experiment. Tertiary prospective engineering educators were engaged during their course, and Computational Thinking (CT) concepts were registered before and after the intervention across didactic activities, using validated questionnaires on the relationship between self-efficacy, computer programming, and CT concepts when STEM content epistemology is implemented in alignment with the Computational Pedagogy model. Results show a significant change in students' responses for self-efficacy for CT before and after the instruction. Results also indicate a significant relation between the responses for the different CT concepts/practices. According to the findings, STEM content epistemology combined with Physical Computing should be a good candidate as a learning and teaching approach in university settings that enhances students' engagement in CT concepts/practices.

Keywords: arduino, computational thinking, computer programming, Labview, self-efficacy, STEM

Procedia PDF Downloads 100
10894 Comparison between Some of Robust Regression Methods with OLS Method with Application

Authors: Sizar Abed Mohammed, Zahraa Ghazi Sadeeq

Abstract:

The classic method of least squares (OLS) is used to estimate the linear regression parameters when its assumptions hold, and it then has good properties such as unbiasedness, minimum variance, consistency, and so on. Alternative statistical techniques have been developed to estimate the parameters when the data are contaminated with outliers; these are robust (or resistant) methods. In this paper, three robust methods are studied: the maximum likelihood type estimate (M-estimator), the modified maximum likelihood type estimate (MM-estimator), and the least trimmed squares (LTS) estimator, and their results are compared with the OLS method. These methods were applied to real data taken from the Duhok company for manufacturing furniture, and the obtained results were compared using the criteria mean squared error (MSE), mean absolute percentage error (MAPE), and mean sum of absolute errors (MSAE). Important conclusions of this study are the following: a number of atypical values were detected by the four methods in the furniture-line data and were very close to the data, which indicates that the distribution of the standard errors is close to normal; in the doors-line data, however, OLS detected fewer atypical values than the robust methods, which means that the distribution of the standard errors departs far from normality. Another important conclusion is that the parameter estimates obtained by the least squares line are very far from those estimated using the robust methods for the doors line; the LTS estimator gave better results under the MSE criterion, the M-estimator gave better results under the MAPE criterion, and, under the MSAE criterion, the MM-estimator was better. The programs S-Plus (version 8.0, Professional 2007), Minitab (version 13.2), and SPSS (version 17) were used to analyze the data.
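
A minimal sketch of the OLS-versus-robust comparison on simulated data with injected outliers; statsmodels provides the Huber M-estimator via RLM, while the LTS and MM estimators compared in the paper are not part of its core API, so only the M-estimator is illustrated.

```python
# Illustrative sketch (simulated data with outliers, not the Duhok furniture
# records): comparing OLS with a Huber M-estimator (statsmodels RLM) using MSE
# and MAPE as evaluation criteria.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=60)
y = 3.0 + 2.0 * x + rng.normal(scale=1.0, size=60)
y[:5] += 25.0                                    # inject a few outliers

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
huber = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()

for name, fit in [("OLS", ols), ("Huber M-estimator", huber)]:
    pred = fit.predict(X)
    mse = np.mean((y - pred) ** 2)
    mape = 100.0 * np.mean(np.abs((y - pred) / y))
    print(f"{name:18s}  coef = {np.round(fit.params, 2)}  MSE = {mse:6.2f}  MAPE = {mape:5.2f} %")
```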

Keywords: robust estimation, LTS, M-estimator, MSE

Procedia PDF Downloads 221
10893 Using Deep Learning Neural Networks and Candlestick Chart Representation to Predict Stock Market

Authors: Rosdyana Mangir Irawan Kusuma, Wei-Chun Kao, Ho-Thi Trang, Yu-Yen Ou, Kai-Lung Hua

Abstract:

Stock market prediction is still a challenging problem because there are many factors that affect the stock market price, such as company news and performance, industry performance, investor sentiment, social media sentiment, and economic factors. This work explores the predictability of the stock market using deep convolutional networks and candlestick charts. The outcome is utilized to design a decision support framework that can be used by traders to provide suggested indications of future stock price direction. We perform this work using various types of neural networks, such as convolutional neural networks, residual networks, and Visual Geometry Group (VGG) networks. Historical stock market data were converted to candlestick charts. Finally, these candlestick charts are fed as input for training a convolutional neural network model. This convolutional neural network model helps us to analyze the patterns inside the candlestick charts and predict future movements of the stock market. The effectiveness of our method is evaluated on stock market prediction with promising results: 92.2% and 92.1% accuracy for the Taiwanese and Indonesian stock market datasets, respectively.
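
A minimal sketch of the chart-image classification idea, using random tensors in place of rendered candlestick images and a small Keras CNN; the architectures and datasets evaluated in the paper are not reproduced.

```python
# Illustrative sketch (random tensors, not real candlestick images): a small
# convolutional network that classifies fixed-size candlestick-chart images
# into "up" or "down" next-day movement, in the spirit of the method above.
import numpy as np
import tensorflow as tf

images = np.random.rand(200, 64, 64, 3).astype("float32")   # placeholder chart images
labels = np.random.randint(0, 2, size=200)                   # 1 = price goes up

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),          # up/down probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(images, labels, epochs=2, batch_size=32, verbose=0)
print("training accuracy:", model.evaluate(images, labels, verbose=0)[1])
```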

Keywords: candlestick chart, deep learning, neural network, stock market prediction

Procedia PDF Downloads 422
10892 A Fuzzy TOPSIS Based Model for Safety Risk Assessment of Operational Flight Data

Authors: N. Borjalilu, P. Rabiei, A. Enjoo

Abstract:

A Flight Data Monitoring (FDM) program assists an operator in the aviation industry to identify, quantify, assess, and address operational safety risks in order to improve the safety of flight operations. FDM is a powerful tool for an aircraft operator when integrated into the operator's Safety Management System (SMS), allowing it to detect, confirm, and assess safety issues associated with human errors and to check the effectiveness of corrective actions. This article proposes a model for assessing the safety risk level of flight data across different categories of event focus, based on fuzzy set values. It permits the evaluation of the operational safety level from the point of view of flight activities. The main advantage of this method is the proposed qualitative safety analysis of flight data. This research applies the opinions of aviation experts, collected through a number of questionnaires related to flight data, in four categories of occurrence that can take place during an accident or an incident: runway excursion (RE), controlled flight into terrain (CFIT), mid-air collision (MAC), and loss of control in flight (LOC-I). By weighting each category (by F-TOPSIS) and applying it to the number of risks of the event, the safety risk of each related event can be obtained.
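
The sketch below runs a plain (crisp) TOPSIS ranking of the four occurrence categories against an invented, weighted decision matrix; the paper's fuzzy TOPSIS replaces these crisp scores and weights with fuzzy set values elicited from the expert questionnaires.

```python
# Illustrative sketch (invented decision matrix, not the questionnaire data): a
# crisp TOPSIS ranking of the four occurrence categories (RE, CFIT, MAC, LOC-I)
# against weighted risk criteria.
import numpy as np

# rows: RE, CFIT, MAC, LOC-I; columns: e.g. severity, likelihood, detectability
scores = np.array([[7.0, 6.0, 4.0],
                   [9.0, 3.0, 6.0],
                   [8.0, 2.0, 7.0],
                   [9.0, 4.0, 5.0]])
weights = np.array([0.5, 0.3, 0.2])

norm = scores / np.linalg.norm(scores, axis=0)        # vector normalization
weighted = norm * weights
# all criteria treated as "benefit" criteria (higher score = higher risk)
ideal, anti_ideal = weighted.max(axis=0), weighted.min(axis=0)
d_plus = np.linalg.norm(weighted - ideal, axis=1)
d_minus = np.linalg.norm(weighted - anti_ideal, axis=1)
closeness = d_minus / (d_plus + d_minus)              # higher = closer to worst-case risk profile

for cat, c in zip(["RE", "CFIT", "MAC", "LOC-I"], closeness):
    print(f"{cat:6s} closeness coefficient = {c:.3f}")
```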

Keywords: F-topsis, fuzzy set, flight data monitoring (FDM), flight safety

Procedia PDF Downloads 151
10891 One-Shot Text Classification with Multilingual-BERT

Authors: Hsin-Yang Wang, K. M. A. Salam, Ying-Jia Lin, Daniel Tan, Tzu-Hsuan Chou, Hung-Yu Kao

Abstract:

Detecting user intent from natural language expressions has a wide variety of use cases in different natural language processing applications. Recently, few-shot training has seen a spike in usage in commercial domains. Due to the lack of significant sample features, downstream task performance has been limited or has led to unstable results across different domains. As a state-of-the-art method, the pre-trained BERT model, which gathers sentence-level information from a large text corpus, shows improvements on several NLP benchmarks. In this research, we propose a method to change multi-class classification tasks into binary classification tasks and then use the confidence score to rank the results. As a language model, BERT performs well on sequence data. In our experiment, we change the objective from predicting labels to finding the relations between words in sequence data. Our proposed method achieved 71.0% accuracy on the internal intent detection dataset and 63.9% accuracy on the HuffPost dataset. Acknowledgment: This work was supported by NCKU-B109-K003, which is the collaboration between National Cheng Kung University, Taiwan, and SoftBank Corp., Tokyo.
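
A minimal sketch of the reformulation described above: each candidate intent label is paired with the utterance and scored by a binary classification head on multilingual BERT, and labels are ranked by the positive-class confidence. The checkpoint here is not fine-tuned, and the labels are invented, so the scores are only illustrative.

```python
# Illustrative sketch (pretrained-but-not-fine-tuned weights, made-up intent
# labels): recast multi-class intent detection as binary (text, label) pair
# classification with multilingual BERT and rank labels by confidence.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
model.eval()

utterance = "Can you book me a table for two tonight?"
candidate_labels = ["make a reservation", "check the weather", "play music"]

scores = []
for label in candidate_labels:
    inputs = tokenizer(utterance, label, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    scores.append(torch.softmax(logits, dim=-1)[0, 1].item())   # P("label matches")

for label, score in sorted(zip(candidate_labels, scores), key=lambda t: -t[1]):
    print(f"{score:.3f}  {label}")
```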

Keywords: OSML, BERT, text classification, one shot

Procedia PDF Downloads 87
10890 Predicting Football Player Performance: Integrating Data Visualization and Machine Learning

Authors: Saahith M. S., Sivakami R.

Abstract:

In the realm of football analytics, particularly in predicting football player performance, the ability to forecast player success accurately is of paramount importance for teams, managers, and fans. This study introduces a detailed examination of predicting football player performance through the integration of data visualization methods and machine learning algorithms. The research entails the compilation of an extensive dataset comprising player attributes, followed by data preprocessing, feature selection, model selection, and model training to construct predictive models. The analysis involves examining feature significance using methodologies such as SelectKBest and Recursive Feature Elimination (RFE) to pinpoint the attributes most pertinent to predicting player performance. Various machine learning algorithms, including random forest, decision tree, linear regression, support vector regression (SVR), and artificial neural networks (ANN), are explored to develop predictive models. Each model's performance is evaluated using metrics such as mean squared error (MSE) and R-squared to gauge its efficacy in predicting player performance. Furthermore, the investigation encompasses a top-player analysis to recognize the top-performing players based on the anticipated overall performance scores. Nationality analysis entails scrutinizing the player distribution based on nationality and investigating potential correlations between nationality and player performance. Positional analysis concentrates on examining the player distribution across various positions and assessing the average performance of players in each position. Age analysis evaluates the influence of age on player performance and identifies any discernible trends or patterns associated with player age groups. The primary objective is to predict a football player's overall performance accurately based on their individual attributes, leveraging data-driven insights to enrich the comprehension of player success on the field. By amalgamating data visualization and machine learning methodologies, the aim is to furnish valuable tools for teams, managers, and fans to effectively analyze and forecast player performance. This research contributes to the progression of sports analytics by showcasing the potential of machine learning in predicting football player performance and offering actionable insights for diverse stakeholders in the football industry.
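
A minimal sketch of the modeling pipeline described above (SelectKBest feature selection feeding a random forest regressor, evaluated with MSE and R-squared), using random placeholder attributes rather than a real player dataset:

```python
# Illustrative sketch (random placeholder attributes, not a real player dataset):
# SelectKBest feature selection feeding a random forest regressor, evaluated
# with MSE and R-squared as described above.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                                  # 20 player attributes
overall = 60 + 5 * X[:, 0] + 3 * X[:, 5] + rng.normal(scale=2, size=500)  # overall rating

X_tr, X_te, y_tr, y_te = train_test_split(X, overall, test_size=0.2, random_state=0)
model = make_pipeline(SelectKBest(f_regression, k=8),
                      RandomForestRegressor(n_estimators=200, random_state=0))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("MSE:", round(mean_squared_error(y_te, pred), 2))
print("R^2:", round(r2_score(y_te, pred), 3))
```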

Keywords: football analytics, player performance prediction, data visualization, machine learning algorithms, random forest, decision tree, linear regression, support vector regression, artificial neural networks, model evaluation, top player analysis, nationality analysis, positional analysis

Procedia PDF Downloads 25
10889 Optimization of Thermopile Sensor Performance of Polycrystalline Silicon Film

Authors: Li Long, Thomas Ortlepp

Abstract:

A theoretical model for the optimization of thermopile sensor performance is developed for thermoelectric-based infrared radiation detection. It is shown that the performance of a polycrystalline silicon film thermopile sensor can be optimized according to the thermoelectric quality factor, the sensor layer structure factor, and the sensor layout geometrical form factor. Based on the properties of electrons, phonons, grain boundaries, and their interactions, the thermoelectric quality factor of polycrystalline silicon is analyzed with the relaxation time approximation of the Boltzmann transport equation. The model includes the effects of the grain structure, the grain boundary trap properties, and the doping concentration. The layer structure factor is analyzed with respect to the infrared absorption coefficient. The optimization of the layout design is characterized by the form factor, which is calculated for different sensor designs. A double-layer polycrystalline silicon thermopile infrared sensor on a suspended membrane has been designed and fabricated with a CMOS-compatible process. The theoretical approach is confirmed by measurement results.

Keywords: polycrystalline silicon, relaxation time approximation, specific detectivity, thermal conductivity, thermopile infrared sensor

Procedia PDF Downloads 117