Search results for: three step search
1682 Process Assessment Model for Process Capability Determination Based on ISO/IEC 20000-1:2011
Authors: Harvard Najoan, Sarwono Sutikno, Yusep Rosmansyah
Abstract:
Most enterprises now use information technology services as assets to support their business objectives. These services are provided either by an internal service provider (inside the enterprise) or an external service provider (outside the enterprise). To deliver quality information technology services, the service provider (hereafter called the 'organization'), whether internal or external, must follow a standard for its service management system. At present, the standard recognized as best practice for an organization's service management system is the international standard ISO/IEC 20000:2011. The most important part of this standard is the first part, ISO/IEC 20000-1:2011 Service Management System Requirements, because it specifies 22 organizational processes that must be implemented in an organizational environment in order to build, manage, and deliver quality service to the customer. Assessing the organization's management processes is the first step in implementing ISO/IEC 20000:2011. This assessment requires a Process Assessment Model (PAM) as an assessment instrument. A PAM comprises two parts: a Process Reference Model (PRM) and a Measurement Framework (MF). The PRM is built by transforming the 22 processes of ISO/IEC 20000-1:2011, and the MF is based on ISO/IEC 33020. This assessment instrument was designed to assess the capability of the service management processes in Divisi Teknologi dan Sistem Informasi (Information Systems and Technology Division), an internal organization of PT Pos Indonesia. The results of this assessment model can be used to propose improvements to the capability of the service management system.
Keywords: ISO/IEC 20000-1:2011, ISO/IEC 33020:2015, process assessment, process capability, service management system
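The measurement step of such an assessment can be sketched in a few lines. The following is an illustrative reading of the ISO/IEC 33020-style N-P-L-F rating scale (Not/Partially/Largely/Fully achieved, with the usual 15/50/85 percent thresholds) and its capability-level rule; the attribute structure and all percentages below are hypothetical examples, not data from the paper.

```python
# Illustrative ISO/IEC 33020-style measurement framework sketch:
# attributes are rated N/P/L/F from achievement percentages, and a capability
# level counts as reached when its attributes are at least Largely achieved
# and all lower levels' attributes are Fully achieved.
# Attribute groupings and percentages are hypothetical.

def rate(pct):
    """Map an achievement percentage to an N-P-L-F rating."""
    if pct <= 15: return "N"   # Not achieved
    if pct <= 50: return "P"   # Partially achieved
    if pct <= 85: return "L"   # Largely achieved
    return "F"                 # Fully achieved

def capability_level(attr_pcts):
    """attr_pcts[i] = list of attribute percentages for capability level i+1."""
    level = 0
    for i, attrs in enumerate(attr_pcts):
        if all(rate(p) in ("L", "F") for p in attrs):
            # for a higher level to count, all lower levels must be Fully achieved
            if i > 0 and not all(rate(p) == "F"
                                 for lower in attr_pcts[:i] for p in lower):
                break
            level = i + 1
        else:
            break
    return level

# Example: PA1.1 at 90%, PA2.1 at 60%, PA2.2 at 70% -> capability level 2
example_level = capability_level([[90], [60, 70]])
```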
Procedia PDF Downloads 465
1681 Numerical Solution of Portfolio Selecting Semi-Infinite Problem
Authors: Alina Fedossova, Jose Jorge Sierra Molina
Abstract:
SIP problems belong to non-classical optimization: the number of variables is finite, but the number of constraints is infinite. These are semi-infinite programming problems. Most algorithms for semi-infinite programming reduce the semi-infinite problem to a finite one and solve it by classical methods of linear or nonlinear programming. Typically, at least one of the constraints or the objective function is nonlinear, so the problem often involves nonlinear programming. An investment portfolio is a set of instruments used to reach the specific goals of an investor. The risk of the entire portfolio may be less than the risk of any individual investment in it. For example, suppose we invest M euros in N shares for a specified period. Let yi > 0 be the return at the end of the period on each unit of money invested in stock i (i = 1, ..., N). The goal is then to determine the amount xi to be invested in stock i, i = 1, ..., N, so as to maximize the end-of-period value yTx, where x = (x1, ..., xN) and y = (y1, ..., yN). For us, the optimal portfolio is the portfolio with the best risk-return trade-off that meets the investor's goals and risk tolerance. Investment goals and risk appetite are therefore the factors that influence the choice of an appropriate portfolio of assets. Since the investment returns are uncertain, we have a semi-infinite programming problem. We solve this semi-infinite optimization problem of portfolio selection using outer approximation methods. This approach can be considered a development of the Eaves-Zangwill method, applying the multi-start technique in every iteration to search for the relevant constraint parameters. The stochastic outer approximations method, successfully applied previously to robotics problems, Chebyshev approximation problems, air pollution, and others, is based on the optimality criteria of quasi-optimal functions.
As a result, we obtain a mathematical model and the optimal investment portfolio when the yields are not known in advance. Finally, we apply this algorithm to a specific case of a Colombian bank.
Keywords: outer approximation methods, portfolio problem, semi-infinite programming, numerical solution
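The core reduction described above, replacing the infinite family of yield constraints by a finite sample, can be illustrated with a toy maximin portfolio. This is not the authors' Eaves-Zangwill outer-approximation code: the uncertainty set is discretized up front into three made-up yield scenarios, and the resulting finite maximin problem over the unit simplex is solved by a coarse grid search.

```python
# Minimal sketch of reducing a semi-infinite portfolio problem to a finite
# one: the semi-infinite constraint "return >= t for every admissible yield
# vector y" is discretized to a small scenario sample, and max_x min_y y.x
# over the simplex {x >= 0, sum x = 1} is solved by grid search.
# Scenario data are invented for illustration.
from itertools import product

scenarios = [               # sampled yield vectors y, N = 3 stocks
    (1.05, 1.10, 0.95),
    (1.02, 0.90, 1.15),
    (0.98, 1.05, 1.08),
]

def worst_case_return(x):
    """min over sampled scenarios of y . x (the discretized inner problem)."""
    return min(sum(yi * xi for yi, xi in zip(y, x)) for y in scenarios)

def best_portfolio(step=0.05):
    """Grid search over the simplex in increments of `step`."""
    n = round(1 / step)
    best_x, best_val = None, float("-inf")
    for a, b in product(range(n + 1), repeat=2):
        if a + b > n:
            continue
        x = (a * step, b * step, (n - a - b) * step)
        v = worst_case_return(x)
        if v > best_val:
            best_x, best_val = x, v
    return best_x, best_val
```

A true outer approximation method would instead add, at each iteration, only the scenario that most violates the current constraint set (found here by multi-start search) and re-solve a growing finite problem, rather than fixing the sample in advance.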
Procedia PDF Downloads 309
1680 An Adaptive Back-Propagation Network and Kalman Filter Based Multi-Sensor Fusion Method for Train Location System
Authors: Yu-ding Du, Qi-lian Bao, Nassim Bessaad, Lin Liu
Abstract:
The Global Navigation Satellite System (GNSS) is regarded as an effective means of replacing the large number of track-side balises used in modern train localization systems. This paper describes a method based on the fusion of data from a GNSS receiver and an odometer that can significantly improve positioning accuracy. Because the train's position is constrained to the track, a digital track map is needed as an additional sensor to project the two-dimensional GNSS position onto a one-dimensional along-track distance. A model trained with a BP neural network is used to estimate the trend positioning error, which is related to the specific location and to the approximate processing of the digital track map. Since satellite signal failure can increase the GNSS positioning error under some conditions, a GNSS signal detection step is applied. An adaptive weighted fusion algorithm is presented to reduce the standard deviation of the train speed measurement. Finally, an Extended Kalman Filter (EKF) fuses the projected 1-D GNSS positioning data and the 1-D train speed data to estimate the position. Experimental results suggest that the proposed method performs well and reduces the positioning error notably.
Keywords: multi-sensor data fusion, train positioning, GNSS, odometer, digital track map, map matching, BP neural network, adaptive weighted fusion, Kalman filter
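The final fusion step, combining a 1-D along-track position with a speed measurement, can be sketched as a small constant-velocity Kalman filter. The noise values and simulated track below are invented, and the sketch omits everything that makes the paper's method adaptive (the BP-network error correction, GNSS failure detection, and adaptive measurement weighting); it only shows the basic predict/update fusion.

```python
# Minimal 1-D constant-velocity Kalman filter fusing an along-track position
# measurement (map-projected GNSS) with a speed measurement (odometer).
# State x = [position, velocity]; both components are observed directly
# (H = identity). Noise values q, r_pos, r_vel are illustrative guesses.

def kf_step(x, P, z, dt, q=0.1, r_pos=25.0, r_vel=0.04):
    """One predict+update. x=[pos, vel], P is 2x2, z=[gnss_pos, odo_vel]."""
    # Predict with constant-velocity model F = [[1, dt], [0, 1]], Q = q*I
    xp = [x[0] + dt * x[1], x[1]]
    Pp = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
           P[0][1] + dt * P[1][1]],
          [P[1][0] + dt * P[1][1],
           P[1][1] + q]]
    # Update: innovation covariance S = Pp + R, gain K = Pp * S^-1
    S = [[Pp[0][0] + r_pos, Pp[0][1]],
         [Pp[1][0],         Pp[1][1] + r_vel]]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    Sinv = [[ S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det,  S[0][0] / det]]
    K = [[sum(Pp[i][k] * Sinv[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    y = [z[0] - xp[0], z[1] - xp[1]]                    # innovation
    xn = [xp[i] + K[i][0] * y[0] + K[i][1] * y[1] for i in range(2)]
    IK = [[1 - K[0][0], -K[0][1]], [-K[1][0], 1 - K[1][1]]]
    Pn = [[sum(IK[i][k] * Pp[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    return xn, Pn
```

Fed consistent measurements of a train moving at 10 m/s, the estimate converges to the true trajectory within a few steps.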
Procedia PDF Downloads 252
1679 A Study on the Treatment of Municipal Waste Water Using Sequencing Batch Reactor
Authors: Bhaven N. Tandel, Athira Rajeev
Abstract:
The sequencing batch reactor process is a suspended growth process operating under non-steady-state conditions. It uses a fill-and-draw reactor with complete mixing during the batch reaction step (after filling), and the subsequent aeration and clarification steps occur in the same tank. All sequencing batch reactor systems have five steps in common, carried out in the following sequence: (1) fill, (2) react, (3) settle (sedimentation/clarification), (4) draw (decant), and (5) idle. The study was carried out in a sequencing batch reactor of dimensions 44 cm x 30 cm x 70 cm with a working volume of 40 L. A mechanical stirrer at 100 rpm provided continuous mixing during the react period, and oxygen was supplied by fish tank aerators. A complete sequencing batch reactor cycle lasted 8 hours, divided into the following phases in sequence: 0.25 hours fill, 6 hours react, 1 hour settling, 0.5 hours decant, and 0.25 hours idle. The study consisted of two runs: run 1 with a 6-hour aerobic react period, and run 2 with a 3-hour aerobic react period followed by a 3-hour anoxic react period. The influent wastewater used for the study had COD, BOD, NH3-N, and TKN concentrations of 308.03±48.94 mg/L, 100.36±22.05 mg/L, 14.12±1.18 mg/L, and 24.72±2.21 mg/L, respectively. Run 1 had average removal efficiencies of 41.28% for COD, 56.25% for BOD, 86.19% for NH3-N, and 54.4% for TKN. Run 2 had average removal efficiencies of 63.19% for COD, 73.85% for BOD, 90.74% for NH3-N, and 65.25% for TKN. Run 2 thus performed better than run 1 in the removal of COD, BOD, and TKN.
Keywords: municipal waste water, aerobic, anoxic, sequencing batch reactor
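The removal efficiencies quoted above follow the standard definition (C_in - C_out) / C_in x 100. A small helper makes the arithmetic explicit; the back-calculated effluent concentration is only implied by the abstract's mean influent COD and run 2 efficiency, not reported directly.

```python
# Removal efficiency arithmetic behind the figures in the abstract.
# cod_in is the study's mean influent COD; the effluent value derived from it
# is an implied approximation, not a measurement reported in the abstract.

def removal_efficiency(c_in, c_out):
    """Percent removal given influent and effluent concentrations (mg/L)."""
    return (c_in - c_out) / c_in * 100.0

def effluent(c_in, efficiency_pct):
    """Back-calculate effluent concentration from a removal efficiency."""
    return c_in * (1.0 - efficiency_pct / 100.0)

cod_in = 308.03                            # mg/L, mean influent COD
cod_out_run2 = effluent(cod_in, 63.19)     # implied run 2 effluent, ~113.4 mg/L
```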
Procedia PDF Downloads 550
1678 Development of Orthogonally Protected 2,1':4,6-Di-O-Diisopropylidene Sucrose as the Versatile Intermediate for Diverse Synthesis of Phenylpropanoid Sucrose Esters
Authors: Li Lin Ong, Duc Thinh Khong, Zaher M. A. Judeh
Abstract:
Phenylpropanoid sucrose esters (PSEs) are natural compounds found in various medicinal plants that exhibit important biological activities, such as antiproliferative and α- and β-glucosidase inhibitory activities. Despite their potential as new therapeutics, total synthesis of PSEs has been very limited because their structures contain one or more (substituted) cinnamoyl groups randomly allocated on the sucrose core via ester linkages. Since direct acylation of unprotected sucrose would be complex and tedious owing to the presence of eight free hydroxyl groups, partially protected 2,1’:4,6-di-O-diisopropylidene sucrose was used as the starting material instead. However, the similar reactivity of the remaining four hydroxyl groups still poses a challenge in the total synthesis of PSEs, as the lack of selectivity restricts customisation where acylation at a specific OH is desired. To overcome this problem, a 4-step orthogonal protection scheme was developed. In this scheme, the four remaining hydroxyl groups on 2,1’:4,6-di-O-diisopropylidene sucrose (6’-OH, 3’-OH, 4’-OH, and 3-OH) were protected with different protecting groups in an overall yield of > 40%. This orthogonally protected intermediate provides convenient and divergent access to a wide range of natural and synthetic PSEs, since (substituted) cinnamoyl groups can be introduced selectively at the desired positions. Using this scheme, three series of monosubstituted PSEs were successfully synthesized, with (substituted) cinnamoyl groups introduced selectively at the O-3, O-3’, and O-4’ positions, respectively. The expanded library of PSEs will aid structure-activity relationship studies of PSEs aimed at identifying the key components responsible for their biological activities.
Keywords: orthogonal protection, phenylpropanoid sucrose esters, selectivity, sucrose
Procedia PDF Downloads 158
1677 Q-Map: Clinical Concept Mining from Clinical Documents
Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala
Abstract:
Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, and image analytics. Most of the data in the field are well structured and available in numerical or categorical formats that can be used directly for experiments. At the opposite end of the spectrum, however, lies a wide expanse of data that is intractable for direct analysis owing to its unstructured nature: discharge summaries, clinical notes, and procedural notes, which are written in human narrative form and have neither a relational model nor any standard grammatical structure. An important step in utilizing these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data using information retrieval and data mining techniques. To address this problem, the authors present Q-Map, a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique based on a string matching algorithm indexed on curated knowledge sources, which is both fast and configurable. The authors also briefly examine its comparative performance against MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.
Keywords: information retrieval, unified medical language system, syntax based analysis, natural language processing, medical informatics
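The abstract describes Q-Map only as string matching indexed on curated knowledge sources, so the following is a toy stand-in rather than the actual system: a greedy longest-match scan of a clinical note against a tiny hypothetical vocabulary (the concept identifiers below imitate UMLS-style CUIs but are used purely for illustration).

```python
# Toy dictionary-indexed concept matcher in the spirit of Q-Map.
# The vocabulary and concept IDs are a made-up stand-in for a curated
# knowledge source such as the UMLS; the real system's index and matching
# algorithm are not described in the abstract.

VOCAB = {
    "myocardial infarction": "C0027051",
    "diabetes mellitus": "C0011849",
    "hypertension": "C0020538",
}
MAX_LEN = max(len(term.split()) for term in VOCAB)   # longest phrase, in tokens

def extract_concepts(text):
    """Greedy longest-match scan: returns (phrase, concept-id) pairs."""
    # normalize: lowercase and strip punctuation to whitespace
    cleaned = "".join(c if c.isalnum() or c.isspace() else " "
                      for c in text.lower())
    tokens = cleaned.split()
    found, i = [], 0
    while i < len(tokens):
        for n in range(min(MAX_LEN, len(tokens) - i), 0, -1):
            phrase = " ".join(tokens[i:i + n])
            if phrase in VOCAB:
                found.append((phrase, VOCAB[phrase]))
                i += n
                break
        else:
            i += 1
    return found

note = "History of diabetes mellitus and hypertension; rule out myocardial infarction."
```

A production system would replace the dictionary with an index over millions of curated terms and add normalization such as abbreviation expansion, which this sketch omits.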
Procedia PDF Downloads 133
1676 Physical Property Characterization of Adult Dairy Nutritional Products for Powder Reconstitution
Authors: Wei Wang, Martin Chen
Abstract:
The reconstitution behaviour of nutritional products can impact user experience. Reconstitution issues such as lump formation and white flecks sticking to bottle surfaces are unappealing to consumers during milk preparation. As described in the literature, the controlling steps in dissolving instant milk powders are wetting, swelling, sinking, dispersing, and dissolution. Each stage happens simultaneously with the others during milk preparation, so it is challenging to isolate and measure each step individually. This study characterized three adult nutritional products for different properties, including particle size, density, dispersibility, stickiness, and capillary wetting, to understand the relationship between powder physical properties and reconstitution behaviour. The results show that clump formation can be caused by different factors limiting the critical steps of powder reconstitution: a small particle size distribution, a light particle density limiting powder wetting, or rapid swelling and dissolving of particle surface materials that impedes water penetration into the capillary channels formed by powder agglomerates. The formation of grains or white flecks during milk preparation is believed to be controlled by the dissolution speed of the particles after dispersion into water. Understanding these relationships between fundamental powder structure and the user's reconstitution experience provides multiple new perspectives on how to improve powder characteristics in commercial manufacturing.
Keywords: characterization, dairy nutritional powder, physical property, reconstitution
Procedia PDF Downloads 103
1675 Color Image Compression/Encryption/Contour Extraction using 3L-DWT and SSPCE Method
Authors: Ali A. Ukasha, Majdi F. Elbireki, Mohammad F. Abdullah
Abstract:
Data security is needed in data transmission, storage, and communication. This paper is divided into two parts and deals with color images, which are decomposed into red, green, and blue channels. The blue and green channels are compressed using a 3-level discrete wavelet transform. The Arnold transform is used to change the locations of the red channel's pixels as an image scrambling process. All channels are then encrypted separately using a key image of the same size as the original, generated using private keys and modulo operations. XOR and modulo operations are performed between the encrypted channel images to change the pixel values. Contours can be extracted from the recovered color images with an acceptable level of distortion using the single step parallel contour extraction (SSPCE) method. Experiments have demonstrated that the proposed algorithm can fully encrypt 2D color images, which can then be completely reconstructed without any distortion. They also show that the analyzed algorithm is highly secure against attacks such as salt-and-pepper noise and JPEG compression, proving that color images can be protected at a high security level. The presented method has an easy hardware implementation and is suitable for multimedia protection in real-time applications such as wireless networks and mobile phone services.
Keywords: SSPCE method, image compression and salt and pepper attacks, bitplanes decomposition, Arnold transform, color image, wavelet transform, lossless image encryption
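Two of the building blocks above, Arnold scrambling and XOR whitening, are easy to show in miniature. The Arnold (cat map) transform sends pixel (x, y) of an N x N image to ((x + y) mod N, (x + 2y) mod N); it is periodic, so the original image is recovered exactly after enough iterations, which is what makes the scheme lossless. The 4 x 4 test image and key below are made up; the paper's actual key-image generation and modulo steps are not reproduced.

```python
# Arnold scrambling and XOR encryption of a single channel, in miniature.
# arnold() permutes pixel positions via the cat map; iterating it is
# periodic (period 3 for a 4x4 image), so the scramble is exactly
# reversible. xor_channel() shows the value-changing XOR step; applying
# the same key twice restores the image. Test data are invented.

def arnold(img):
    """One Arnold cat-map iteration on an NxN image (list of rows)."""
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
    return out

def arnold_iter(img, k):
    """Apply the Arnold transform k times."""
    for _ in range(k):
        img = arnold(img)
    return img

def xor_channel(img, key):
    """Pixel-wise XOR with a same-size key image; self-inverse."""
    return [[p ^ k for p, k in zip(row, krow)]
            for row, krow in zip(img, key)]
```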
Procedia PDF Downloads 518
1674 An Exploratory Research of Human Character Analysis Based on Smart Watch Data: Distinguish the Drinking State from Normal State
Authors: Lu Zhao, Yanrong Kang, Lili Guo, Yuan Long, Guidong Xing
Abstract:
Smart watches, handy devices with rich functionality, have become among the most popular wearable devices in the world. Among their various functions, the most basic is health monitoring. The monitoring data can serve as effective evidence or as a clue in the detection of crime cases. For instance, step counting data can help determine whether the watch wearer was still or moving during a given time period. There is, however, still little research on the analysis of human state based on these data. The purpose of this research is to analyze health monitoring data to distinguish the drinking state from the normal state. The results may play a role in cases involving drinking, such as drunk driving. The experiment mainly focused on finding which figures in smart watch health monitoring data change with drinking and on quantifying the extent of the change. The chosen subjects were mostly in their 20s, and each had been wearing the same smart watch for a week. Each subject drank several times during the week and noted down the start and end time of each drinking session. The researchers then extracted and analyzed the health monitoring data from the watch. According to the descriptive statistical analysis, the heart rate changes when drinking: the average heart rate is about 10% higher than normal, and the coefficient of variation is less than about 30% of its normal-state value. Though more research needs to be carried out, this experiment and analysis provide insight into the application of data from smart watches.
Keywords: character analysis, descriptive statistics analysis, drink state, heart rate, smart watch
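The descriptive statistics involved, mean heart rate and coefficient of variation (CV = standard deviation / mean), can be sketched directly, together with a naive threshold rule in the direction the abstract reports (drinking: mean roughly 10% higher, CV markedly lower). The sample windows and thresholds below are invented; the study reports only aggregate tendencies, not a classifier.

```python
# Heart-rate features from a window of samples, plus a toy threshold rule
# following the abstract's reported direction of change. Sample data and the
# 1.05x / 0.5x thresholds are hypothetical illustrations, not study values.
from statistics import mean, stdev

def hr_features(samples):
    """Return (mean, coefficient of variation) of a heart-rate window."""
    m = mean(samples)
    return m, stdev(samples) / m

def looks_like_drinking(samples, baseline_mean, baseline_cv):
    """Naive rule: elevated mean and strongly reduced CV vs. baseline."""
    m, cv = hr_features(samples)
    return m > 1.05 * baseline_mean and cv < 0.5 * baseline_cv

baseline = [68, 72, 75, 70, 66, 74, 69, 71]   # hypothetical normal-state HR
drinking = [79, 80, 81, 79, 80, 81, 80, 79]   # hypothetical drinking-state HR
```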
Procedia PDF Downloads 167
1673 Near-Infrared Optogenetic Manipulation of a Channelrhodopsin via Upconverting Nanoparticles
Authors: Kanchan Yadav, Ai-Chuan Chou, Rajesh Kumar Ulaganathan, Hua-De Gao, Hsien-Ming Lee, Chien-Yuan Pan, Yit-Tsong Chen
Abstract:
Optogenetics is an innovative technology now widely adopted by researchers in different fields of the biological sciences. However, due to the weak tissue penetration capability of the short wavelengths used to activate light-sensitive proteins, an invasive light guide has been used in animal studies for photoexcitation of target tissues. Upconverting nanoparticles (UCNPs), which transform near-infrared (NIR) light to short-wavelength emissions, can help address this issue. To improve optogenetic performance, we enhance the target selectivity for optogenetic controls by specifically conjugating the UCNPs with light-sensitive proteins at a molecular level, which shortens the distance as well as enhances the efficiency of energy transfer. We tagged V5 and Lumio epitopes to the extracellular N-terminal of channelrhodopsin-2 with an mCherry conjugated at the intracellular C-terminal (VL-ChR2m) and then bound NeutrAvidin-functionalized UCNPs (NAv-UCNPs) to the VL-ChR2m via a biotinylated antibody against V5 (bV5-Ab). We observed an apparent energy transfer from the excited UCNP (donor) to the bound VL-ChR2m (receptor) by measuring emission-intensity changes at the donor-receptor complex. The successful patch-clamp electrophysiological test and an intracellular Ca2+ elevation observed in the designed UCNP-ChR2 system under optogenetic manipulation confirmed the practical employment of UCNP-assisted NIR-optogenetic functionality. This work represents a significant step toward improving therapeutic optogenetics.
Keywords: Channelrhodopsin-2, near infrared, optogenetics, upconverting nanoparticles
Procedia PDF Downloads 276
1672 Memristor-A Promising Candidate for Neural Circuits in Neuromorphic Computing Systems
Authors: Juhi Faridi, Mohd. Ajmal Kafeel
Abstract:
Advancements in the field of Artificial Intelligence (AI) and technology have led to an intelligent era. Neural networks, which have computational power and learning ability similar to the brain's, are one of the key AI technologies. A neuromorphic computing system (NCS) consists of synaptic devices, neuronal circuits, and a neuromorphic architecture. Memristors are promising candidates for neuromorphic computing systems, but the conductance behavior of synaptic and neuronal memristors needs to be studied thoroughly in order to connect the neuroscience with the computer science. Furthermore, more simulation work is needed to exploit existing device properties and to guide the development of future devices for different performance requirements. This work aims to provide insight into building neuronal circuits with memristors to achieve a memristor-based NCS. We shed light on the research conducted on memristors for building analog and digital circuits in order to motivate research on NCSs through memristor-based neural circuits for advanced AI applications. This survey is a step in that direction: it describes the key findings about memristors and the analog and digital circuits implemented with them over the years, which can be further utilized in implementing the neuronal circuits of an NCS. It aims to help electronic circuit designers understand how memristor research has progressed and how these findings can be used in implementing the neuronal circuits needed for recent progress in NCSs.
Keywords: analog circuits, digital circuits, memristors, neuromorphic computing systems
Procedia PDF Downloads 174
1671 Heamatological and Biochemical Changes in Cockerels Fed Graded Levels of Wild Sunflower Leaf Meal
Authors: Siyanbola Mojisola Funmilayo, Amao Emmanuel Ayodele
Abstract:
The poultry industry in Nigeria has been plagued by a variety of problems, including the search for feed ingredients that are not competed for by man. This has reduced farmers' interest in the industry, leading to lower animal protein availability for human consumption as a consequence of the high cost of production. The incorporation of wild sunflower meal (Tithonia diversifolia, Hemsl. A. Gray) (WSF meal) and similar ingredients into poultry diets has been reported to yield compounded feed with nutrient profiles that compare favourably with conventional feedstuffs while reducing feed cost, as they reduce competition with humans. A 98-day feeding trial was used to evaluate the effect of varying levels of wild sunflower leaf (WSL) on the hematology and biochemistry of cockerels. A total of one hundred and twenty (120) cockerels were randomly allotted to four experimental diets with three replicates per diet (ten birds per replicate). Wild sunflower leaf was included at four graded levels: 0, 5, 10, and 15%. Packed cell volume, red blood cell count, white blood cell count, hemoglobin, lymphocyte count, neutrophil count, platelets, mean corpuscular hemoglobin concentration (MCHC), mean corpuscular hemoglobin (MCH), aspartate aminotransferase (AST), glucose, urea, chloride, sodium, and potassium ion values differed significantly (p<0.05) among the treatments. Mean values obtained for creatinine, total protein, alanine aminotransferase (ALT), albumin, and mean corpuscular volume (MCV) were not significantly different (p>0.05) across treatments. Based on these results, WSL can be included at up to 15% in the diet of cockerels without any adverse effect on the hematological and biochemical indices of the birds.
Keywords: biochemical changes, cockerels, hematology, wild sunflower leaf
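The significance tests above rest on one-way ANOVA, i.e., comparing between-group and within-group mean squares. A minimal F-statistic computation is sketched below on invented packed-cell-volume values for the four inclusion levels; the study's actual data are not given in the abstract, so the numbers serve only to show the calculation.

```python
# One-way ANOVA F statistic: ratio of between-group mean square to
# within-group mean square. The PCV values per diet group are hypothetical
# illustrations, not data from the study.

def one_way_anova_f(groups):
    """groups: list of lists of observations, one list per treatment."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

pcv = [  # hypothetical packed cell volume (%) per diet: 0, 5, 10, 15% WSL
    [30, 31, 29], [32, 33, 31], [34, 33, 35], [33, 34, 32],
]
f_stat = one_way_anova_f(pcv)
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) is what licenses the "significantly different (p<0.05)" statements in the abstract.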
Procedia PDF Downloads 447
1670 An Efficient Aptamer-Based Biosensor Developed via Irreversible Pi-Pi Functionalisation of Graphene/Zinc Oxide Nanocomposite
Authors: Sze Shin Low, Michelle T. T. Tan, Poi Sim Khiew, Hwei-San Loh
Abstract:
An efficient graphene/zinc oxide (PSE-G/ZnO) platform based on pi-pi stacking, a non-covalent interaction, for the development of aptamer-based biosensors is presented in this study. As a proof of concept, the DNA recognition capability of the as-developed PSE-G/ZnO enhanced aptamer-based biosensor was evaluated using Coconut cadang-cadang viroid disease (CCCVd). The G/ZnO nanocomposite was synthesised via a simple, green, and efficient approach: pristine graphene was produced through a single-step exfoliation of graphite in a sonochemical alcohol-water treatment, while zinc nitrate hexahydrate was mixed with the graphene and subjected to low-temperature hydrothermal growth. This facile, environmentally friendly method provides a safer synthesis procedure by eliminating the need for harsh reducing chemicals and high temperatures. The as-prepared nanocomposite was characterised by X-ray diffractometry (XRD), scanning electron microscopy (SEM), and energy dispersive spectroscopy (EDS) to evaluate its crystallinity, morphology, and purity. Electrochemical impedance spectroscopy (EIS) was employed for the detection of the CCCVd sequence with the use of potassium ferricyanide (K3[Fe(CN)6]). Recognition of the RNA analytes was achieved via the significant increase in resistivity for double-stranded DNA compared to single-stranded DNA. The PSE-G/ZnO enhanced aptamer-based biosensor exhibited higher sensitivity than the bare biosensor, attributable to the synergistic effect of the high electrical conductivity of graphene and the good electroactive properties of ZnO.
Keywords: aptamer-based biosensor, graphene/zinc oxide nanocomposite, green synthesis, screen printed carbon electrode
Procedia PDF Downloads 370
1669 Heuristic Spatial-Spectral Hyperspectral Image Segmentation Using Bands Quartile Box Plot Profiles
Authors: Mohamed A. Almoghalis, Osman M. Hegazy, Ibrahim F. Imam, Ali H. Elbastawessy
Abstract:
This paper presents a new hyperspectral image segmentation scheme that respects both spatial and spectral context. The scheme uses an 8-pixel spatial pattern to build a weight structure that holds, for each pixel, the number of outlier bands among its neighborhood windows in different directions. The number of outlier bands for a pixel is obtained from quartile box plots of the band values within the spatial 8-pixel pattern windows. This quartile box plot weight structure represents the spatial-spectral context of the image. Instead of starting the segmentation process from single pixels, the proposed methodology starts from groups of pixels shown to share the same spectral features with respect to their spatial context. The segmentation scheme therefore starts with jigsaw pieces that build a mosaic image. The next step builds a model for each jigsaw piece in the mosaic image. Jigsaw pieces are then merged pairwise using KNN applied to their bands' quartile box plot profiles. The scheme iterates until the required number of segments is reached. Experiments use two data sets obtained from the Earth Observing-1 (EO-1) sensor for Egypt and France. Qualitative analysis of initial results showed encouraging agreement with ground truth; quantitative analysis of the results will be included in the final paper.
Keywords: hyperspectral image segmentation, image processing, remote sensing, box plot
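The per-pixel weight described above can be sketched as counting, band by band, how often a pixel's value falls outside the box-plot fences (Q1 - 1.5 IQR, Q3 + 1.5 IQR) of its neighbourhood's values in that band. The tiny vectors below stand in for a real hyperspectral neighbourhood window; the paper's exact windowing and direction handling are not reproduced.

```python
# Outlier-band count for one pixel against a neighbourhood window, using
# standard box-plot fences. `window` is a list of spectral vectors (the
# neighbours); `pixel` is the spectral vector being scored. Data shapes are
# illustrative stand-ins for real hyperspectral windows.

def quartiles(vals):
    """Q1 and Q3 by linear interpolation over the sorted values."""
    s = sorted(vals)
    n = len(s)
    def q(p):
        idx = p * (n - 1)
        lo, hi = int(idx), min(int(idx) + 1, n - 1)
        frac = idx - lo
        return s[lo] * (1 - frac) + s[hi] * frac
    return q(0.25), q(0.75)

def outlier_band_count(pixel, window):
    """Number of bands where `pixel` lies outside the window's box-plot fences."""
    count = 0
    for b, value in enumerate(pixel):
        band_vals = [p[b] for p in window]
        q1, q3 = quartiles(band_vals)
        iqr = q3 - q1
        if value < q1 - 1.5 * iqr or value > q3 + 1.5 * iqr:
            count += 1
    return count
```

A low count suggests the pixel belongs spectrally with its neighbourhood (the same "jigsaw piece"); a high count marks a likely boundary pixel.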
Procedia PDF Downloads 605
1668 Evaluation of Anti-Typhoid Effects of Azadirachta indica L. Fractions
Authors: A. Adetutu, T. M. Awodugba, O. A. Owoade
Abstract:
The development of resistance to currently known conventional anti-typhoid drugs has necessitated the search for cheap, more potent, and less toxic anti-typhoid drugs of plant origin. This study therefore investigated the anti-typhoid activity of fractions of A. indica in Salmonella typhi infected rats. Leaves of A. indica were extracted in methanol and fractionated into n-hexane, chloroform, ethyl acetate, and aqueous fractions. The anti-salmonella potential of the fractions of A. indica was assessed via in-vitro inhibition of S. typhi using agar well diffusion, minimum inhibitory concentration (MIC), minimum bactericidal concentration (MBC), and biofilm assays. The biochemical and haematological parameters were determined by spectrophotometric methods. The histological analysis was performed using haematoxylin and eosin staining. Data analysis was performed by one-way ANOVA. The results showed that S. typhi was sensitive to the aqueous and chloroform fractions of A. indica, and the fractions showed biofilm inhibition at concentrations of 12.50, 1.562, and 0.39 mg/mL. In the in-vivo study, the extract and the chloroform fraction had significant (p < 0.05) effects on the number of viable S. typhi recovered from the blood and stopped salmonellosis after 6 days of treatment of rats at 500 mg/kg b.w. Treatment of infected rats with the chloroform and aqueous fractions of A. indica normalized the haematological parameters in the animals. Similarly, treatment with fractions of the plant sustained a normal antioxidant status compared with the normal control group. The chloroform and ethyl acetate fractions of A. indica reversed the liver and intestinal degeneration induced by S. typhi infection in rats. The present investigation indicates that the aqueous and chloroform fractions of A. indica have the potential to provide an effective treatment for salmonellosis, including typhoid fever. The results may justify the ethno-medicinal use of the extract in traditional medicine for the treatment of typhoid and salmonella infections.
Keywords: Azadirachta indica L., salmonella, typhoid, leaf fractions
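The MIC assay mentioned above reports the lowest concentration in a dilution series that prevents visible growth. A toy reading of a hypothetical twofold dilution series is sketched below; the growth flags are invented, since the abstract gives only the concentrations (12.50, 1.562, 0.39 mg/mL) at which biofilm inhibition was observed.

```python
# Toy MIC reading from a (monotone) dilution series: the MIC is the lowest
# concentration at which no growth is observed. The series and growth flags
# are hypothetical; they are not data from the study.

def mic(readings):
    """readings: {concentration mg/mL: growth observed?} -> MIC or None."""
    inhibitory = [c for c, growth in readings.items() if not growth]
    return min(inhibitory) if inhibitory else None

series = {25.0: False, 12.5: False, 6.25: False, 3.125: True, 1.5625: True}
```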
Procedia PDF Downloads 132
1667 Environmental and Safety Studies for Advanced Fuel Cycle Fusion Energy Systems: The ESSENTIAL Approach
Authors: Massimo Zucchetti
Abstract:
In the US, the SPARC and ARC compact tokamak projects are being developed: both aim at the technological demonstration of fusion power reactors with cutting-edge technology, but they follow different design approaches. However, they show more similarities than differences in their fuel cycle, safety, radiation protection, environmental, waste, and decommissioning aspects: all reactors, whether experimental or demonstration machines, have to fulfill certain "essential" requirements to pass from virtual to real machines, i.e., to be built in the real world. The paper discusses these "essential" requirements. Some of the relevant activities in these fields carried out by our research group (the ESSENTIAL group) are briefly reported, with the aim of showing some methodological aspects that have been developed and may be of wider interest. A non-competitive comparison between our results for different projects is included where useful. The question of advanced D-He3 fuel cycles for those machines is addressed briefly: in the past, the IGNITOR project, a compact high-magnetic-field D-T ignition experiment, was found to be able to sustain limited D-He3 plasmas, while the Candor project was a more decisive step toward D-He3 fusion reactors. The following topics are treated: waste management and radioactive safety studies for advanced fusion power plants; development of compact high-field advanced fusion reactors; behavior of nuclear materials under irradiation, including neutron-induced radioactivity due to side D-T reactions and radiation damage; accident analysis; and reactor siting.
Keywords: advanced fuel fusion reactors, deuterium-helium3, high-field tokamaks, fusion safety
Procedia PDF Downloads 82
1666 Research on Hangzhou Commercial Center System Based on Point of Interest Data
Authors: Chen Wang, Qiuxiao Chen
Abstract:
With the advent of the information age and the era of big data, urban planning research is no longer satisfied with the analysis and application of traditional data. Because of the limitations of traditional urban commercial center system research, big data provides new opportunities for urban research. Therefore, based on the quantitative evaluation method of big data, the commercial center system of the main city of Hangzhou is analyzed and evaluated, and the scale and hierarchical structure characteristics of the urban commercial center system are studied. In order to make up for the shortcomings of the existing POI extraction method, it proposes a POI extraction method based on adaptive adjustment of search window, which can accurately and efficiently extract the POI data of commercial business in the main city of Hangzhou. Through the visualization and nuclear density analysis of the extracted Point of Interest (POI) data, the current situation of the commercial center system in the main city of Hangzhou is evaluated. Then it compares with the commercial center system structure of 'Hangzhou City Master Plan (2001-2020)', analyzes the problems existing in the planned urban commercial center system, and provides corresponding suggestions and optimization strategy for the optimization of the planning of Hangzhou commercial center system. Then get the following conclusions: The status quo of the commercial center system in the main city of Hangzhou presents a first-level main center, a two-level main center, three third-level sub-centers, and multiple community-level business centers. Generally speaking, the construction of the main center in the commercial center system is basically up to standard, and there is still a big gap in the construction of the sub-center and the regional-level commercial center, further construction is needed. 
Therefore, the study proposes an optimized hierarchical functional system that organizes commercial centers in an orderly manner, strengthens the radiating effect of centers on surrounding areas, and implements construction guidance for the centers, effectively promoting group development and further improving the commercial center system structure of the main city of Hangzhou.
Keywords: business center system, business format, main city of Hangzhou, POI extraction method
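The adaptive search-window step described above can be sketched as a recursive subdivision: map APIs cap the number of POIs returned per query window, so any window that hits the cap is split into sub-windows and re-queried. The cap, the mock query function, and the point set below are illustrative assumptions, not details from the study.

```python
CAP = 10  # hypothetical per-query result limit of the POI API

def query(points, box):
    """Stand-in for a POI API call: return points inside box, truncated at CAP."""
    x0, y0, x1, y1 = box
    hits = [(x, y) for x, y in points if x0 <= x < x1 and y0 <= y < y1]
    return hits[:CAP], len(hits) >= CAP  # (results, capped-flag)

def extract_pois(points, box):
    """Recursively shrink the search window until no query is truncated."""
    results, capped = query(points, box)
    if not capped:
        return results
    x0, y0, x1, y1 = box
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    pois = []
    for sub in [(x0, y0, mx, my), (mx, y0, x1, my),
                (x0, my, mx, y1), (mx, my, x1, y1)]:
        pois.extend(extract_pois(points, sub))
    return pois

# Usage: a dense cluster that a single capped query would truncate,
# but the adaptive method recovers completely.
pts = [(i * 0.01, i * 0.01) for i in range(50)]
complete = extract_pois(pts, (0.0, 0.0, 1.0, 1.0))
print(len(complete))  # 50
```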
Procedia PDF Downloads 140
1665 Extraction of Dyes Using an Aqueous Two-Phase System in Stratified and Slug Flow Regimes of a Microchannel
Authors: Garima, S. Pushpavanam
Abstract:
In this work, an aqueous two-phase system (ATPS) of the polymer-salt type is analyzed for the extraction of sunset yellow dye. A polymer-salt ATPS, i.e., polyethylene glycol-600 (PEG-600) and anhydrous sodium sulfate, is used for the extraction. Conditions are chosen to ensure that the extraction concentrates the dye in one of the phases; the dye has a propensity to partition into the PEG-600 phase. The extracted sunset yellow dye is then photocatalytically degraded into less harmful components. The cloud point method was used to obtain the binodal curve of the ATPS. From the binodal curve, the composition of salt and PEG-600 was chosen such that the volume of the PEG-600-rich phase is low. This was selected to concentrate the dye from a dilute solution in a large volume of contaminated water into a small volume; this pre-concentration step provides a high reaction rate for the photocatalytic degradation reaction. Experimentally, the dye was extracted from the salt phase into the PEG-600 phase in batch extraction; this was found to be very fast, and all of the dye was extracted. The concentration of sunset yellow dye in the salt and polymer phases was measured at 482 nm by ultraviolet-visible spectrophotometry. The extraction experiment in microchannels under stratified flow is analyzed to determine the factors that affect dye extraction. The focus will be on obtaining slug flow by adding nanoparticles in the microchannel. The primary aim is to exploit the fact that slug flow improves the mass transfer rate from one phase to another through internal circulation in the dispersed phase induced by shear.
Keywords: aqueous two phase system, binodal curve, extraction, sunset yellow dye
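The pre-concentration argument above can be made concrete with the standard single-stage partition formula. The partition coefficient and phase volumes below are illustrative assumptions, not measured values from this work.

```python
def fraction_extracted(K, v_peg, v_salt):
    """Fraction of solute ending up in the PEG-rich phase at equilibrium.

    K is the partition coefficient C_peg / C_salt (dimensionless);
    volumes may be in any consistent unit.
    """
    return K * v_peg / (K * v_peg + v_salt)

# Usage: even a small PEG phase captures most of the dye when K is
# large, which is the pre-concentration effect exploited above.
print(fraction_extracted(K=100, v_peg=1.0, v_salt=10.0))  # 0.9090909090909091
```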
Procedia PDF Downloads 358
1664 TimeTune: Personalized Study Plans Generation with Google Calendar Integration
Authors: Chevon Fernando, Banuka Athuraliya
Abstract:
The purpose of this research is to provide a solution to students' time management, which usually becomes an issue because students must balance study with their personal commitments. 'TimeTune', an AI-based study planner that helps students organize study timeframes by combining modern machine learning algorithms with calendar applications, is presented as a solution. The research focuses on the development of LSTM models that connect to the Google Calendar API to generate learning paths fitted to an individual student's daily life and study history. A key finding of this research is the successful construction of an LSTM model to predict optimal study times which, integrated with real-time Google Calendar data, generates timetables automatically in a personalized and customized manner. The methodology encompasses Agile development practices and Object-Oriented Analysis and Design (OOAD) principles, focusing on user-centric design and iterative development. By adopting this method, students can significantly reduce the stress associated with poor study habits and time management. In conclusion, 'TimeTune' represents an advance in personalized education technology: its application of ML algorithms and calendar integration is steadily changing how students work. By helping students manage their studies, the application promises the stress reduction that comes with a balanced academic and personal life.
Keywords: personalized learning, study planner, time management, calendar integration
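The abstract does not detail how free study slots are derived from calendar data. A minimal sketch of that step, assuming busy intervals of the kind returned by a calendar free/busy query (hours as floats) and a hypothetical minimum session length:

```python
def free_slots(busy, day_start, day_end, min_len):
    """Return gaps between busy intervals (hours, as floats) that are
    long enough for a study session. `busy` is a list of (start, end)."""
    slots, cursor = [], day_start
    for start, end in sorted(busy):
        if start - cursor >= min_len:
            slots.append((cursor, start))
        cursor = max(cursor, end)
    if day_end - cursor >= min_len:
        slots.append((cursor, day_end))
    return slots

# Usage: classes 9-10:30, lunch 12-13, lab 15-17 leave four candidate
# study slots in an 8:00-20:00 day.
busy = [(9.0, 10.5), (12.0, 13.0), (15.0, 17.0)]
print(free_slots(busy, day_start=8.0, day_end=20.0, min_len=1.0))
# [(8.0, 9.0), (10.5, 12.0), (13.0, 15.0), (17.0, 20.0)]
```

In the full system, a learned model would rank these candidate slots by predicted study effectiveness before writing events back to the calendar.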
Procedia PDF Downloads 49
1663 Comparing Hotels' Official Websites with Their Pages on Booking Sites: An Exploratory Study
Authors: Iman Shawky
Abstract:
Hotel websites frequently aim at encouraging visitors to become potential guests by completing their booking procedures, and accordingly they are designed to be attractive and appealing. This might be because they are considered one of the most direct and effective tools for promoting and selling hotels' facilities, as well as for building strong communication with guests to create unforgettable brand images. This study sought to help five-star and four-star hotels develop their websites to meet their visitors' or guests' requirements for an effective site. In addition, it aimed at exploring to what extent hotels' official websites, compared with their pages on hotel booking sites, still influence visitors' or guests' decisions to book. It also investigated to what extent visitors or guests widely trust and use those sites to complete their bookings, and to what extent visitors' or guests' preferences between those sites can influence hotels' financial performance. To achieve these objectives, the researcher conducted an exploratory study by surfing both the official websites and the booking-site pages of such hotels in Alexandria, Egypt, to compare them. Moreover, a separate comparison of Arab and foreign guests' views was conducted using a questionnaire over the past seven months to investigate the effectiveness of hotels' official websites against their pages on booking sites in earning trust and motivating guests to book. The results indicated that hotels' pages on booking sites were widely trusted and used compared with their official websites for completing the booking process, while a few visitors or guests still trusted official hotel websites for their bookings.
Keywords: five-star and four-star hotels, hotel booking sites, hotels' financial performance, hotels' official websites
Procedia PDF Downloads 141
1662 Simulation of Cure Kinetics and Process-Induced Stresses in Carbon Fibre Composite Laminate Manufactured by a Liquid Composite Molding Technique
Authors: Jayaraman Muniyappan, Bachchan Kr Mishra, Gautam Salkar, Swetha Manian Sridhar
Abstract:
Vacuum Assisted Resin Transfer Molding (VARTM), a cost-effective method of Liquid Composite Molding (LCM), is a single-step process in which the resin, at atmospheric pressure, is infused through a preform that is maintained under vacuum. This hydrodynamic pressure gradient is responsible for the flow of resin through the dry fabric preform. The current study has a slight variation on traditional VARTM, wherein the resin infuses through fabric placed on a heated mold to reduce its viscosity. The saturated preform is subjected to a cure cycle in which the resin hardens as it undergoes curing. During this cycle, an uneven temperature distribution through the thickness of the composite and the excess exothermic heat released by different cure rates result in non-uniform curing. Additionally, there is a difference in thermal expansion coefficient between fiber and resin in a given plane and between adjacent plies. All these effects, coupled with the orthotropic coefficient of thermal expansion of the composite, give rise to process-induced stresses in the laminate. Such stresses lead to part deformation as the laminate relieves them when the part is released from the mold. The current study looks at simulating resin infusion, cure kinetics, and the structural response of a composite laminate subjected to process-induced stresses.
Keywords: cure kinetics, process-induced stresses, thermal expansion coefficient, vacuum assisted resin transfer molding
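Cure kinetics of the kind simulated above are commonly described by an autocatalytic (Kamal-type) rate law with Arrhenius rate constants. A minimal sketch using forward-Euler integration follows; all parameter values are illustrative assumptions, not values fitted to the resin system in the study.

```python
import math

def cure_profile(T, t_end, dt=0.1):
    """Integrate an autocatalytic (Kamal-type) cure model
        d(alpha)/dt = (k1 + k2 * alpha**m) * (1 - alpha)**n
    at constant temperature T (K). Pre-exponentials, activation
    energies, and exponents below are illustrative placeholders."""
    R = 8.314                                 # J/(mol K)
    k1 = 1e5 * math.exp(-60000 / (R * T))     # 1/s
    k2 = 1e6 * math.exp(-55000 / (R * T))     # 1/s
    m, n = 0.5, 1.5
    alpha, t = 0.0, 0.0
    while t < t_end:
        alpha += dt * (k1 + k2 * alpha**m) * (1 - alpha)**n
        t += dt
    return min(alpha, 1.0)

# A hotter mold cures faster -- the effect the heated-mold variant
# described above relies on.
a_hot = cure_profile(T=400.0, t_end=600.0)
a_cold = cure_profile(T=350.0, t_end=600.0)
print(a_hot > a_cold)  # True
```

In a full process simulation, this kinetic model would be coupled with heat conduction through the laminate thickness, since the exotherm feeds back into the local temperature and hence the local cure rate.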
Procedia PDF Downloads 240
1661 Towards a Methodology for the Assessment of Neighbourhood Design for Happiness
Authors: Tina Pujara
Abstract:
Urban and regional research in the new and emerging interdisciplinary field of happiness is seemingly limited. However, it is progressively being recognized that there is enormous potential for social and behavioral scientists to add a spatial dimension to it. In fact, the happiness of communities can be notably influenced by the design and maintenance of the neighborhoods they inhabit, probably because places can facilitate human social connections and relationships. While it is increasingly acknowledged that some neighborhood designs appear better suited to social connectedness than others, the reasons why some places deter these characteristics, and their possible influence on happiness, remain largely unknown. In addition, no explicit stepwise methodology to assess neighborhood designs for happiness (of their communities) is known to exist. This paper is an attempt towards developing such a methodological framework for assessing neighborhood designs for happiness, with a particular focus on the outdoor shared spaces in neighborhoods. The developed framework follows a mixed-method approach and draws upon four different sources of information. It proposes an empirical examination of the contribution of neighborhood factors, particularly outdoor shared spaces, to individual happiness. One of the main tools proposed for this empirical examination is Jan Gehl's Public Space Public Life (PSPL) Survey. The developed framework, as presented in the paper, is a contribution towards a consolidated methodology for assessing neighborhood designs for happiness, which can further serve as a unique tool to inform urban designers, architects, and other decision makers.
Keywords: happiness, methodology, neighbourhood design, outdoor shared spaces
Procedia PDF Downloads 163
1660 Exploring the Interplay of Attention, Awareness, and Control: A Comprehensive Investigation
Authors: Venkateswar Pujari
Abstract:
This study investigates the complex interplay between control, awareness, and attention in human cognitive processes. Attention, awareness, and control are fundamental elements of cognitive functioning that play a significant role in shaping perception, decision-making, and behavior. Understanding how they interact can clarify how our minds work and may advance cognitive science and its therapeutic applications. The study uses an empirical methodology, integrating different experimental paradigms and neuropsychological tests, to examine the relationships between attention, awareness, and control. To ensure the generalizability of findings, a wide sample of participants is chosen, including people of various cognitive profiles and ages. The study is structured into four primary phases, each focusing on one component of how attention, awareness, and control interact: 1. Evaluation of Attentional Capacity and Selectivity: In this phase, participants complete established attention tests, including the Stroop task and visual search tasks. 2. Evaluation of Awareness Levels: In the second phase, participants' levels of conscious and unconscious awareness are assessed using perceptual awareness tasks such as masked priming and binocular rivalry. 3. Investigation of Cognitive Control Mechanisms: In the third phase, response inhibition, cognitive flexibility, and working memory capacity are investigated using tasks such as the Wisconsin Card Sorting Test and the Go/No-Go paradigm. 4. Integration and Analysis of Results: Data from all phases are integrated and analyzed in the final phase. Correlational and regression analyses are carried out to investigate potential links and predictive relationships between attention, awareness, and control. The study's conclusions shed light on the intricate relationships between control, awareness, and attention in cognitive function.
The findings may have implications for cognitive psychology, neuroscience, and clinical psychology by providing new understanding of cognitive dysfunctions linked to deficits in attention, awareness, and control systems.
Keywords: attention, awareness, control, cognitive functioning, neuropsychological assessment
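For Go/No-Go data of the kind collected in the third phase, sensitivity is commonly summarized with the signal-detection index d' = z(hit rate) - z(false-alarm rate). A minimal sketch using Python's standard library; the trial counts are illustrative, not data from the study.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' for a Go/No-Go task.

    A log-linear correction (add 0.5 to counts, 1 to totals) avoids
    infinite z-scores when a rate is exactly 0 or 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Usage with illustrative trial counts: 45/50 hits on Go trials,
# 8/50 false alarms on No-Go trials.
print(round(d_prime(hits=45, misses=5, false_alarms=8, correct_rejections=42), 1))
```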
Procedia PDF Downloads 91
1659 Green Accounting and Firm Performance: A Bibliometric Literature Review
Authors: Francesca di Donato, Sara Trucco
Abstract:
Green accounting is a growing topic of interest. Indeed, nowadays most firms affect the environment, and companies are therefore seeking the best way to disclose environmental information. Furthermore, companies are increasingly committed to improving the environment, and the topic is gaining importance among the public, governments, and policymakers. Green accounting is a type of accounting that considers environmental costs and their impact on the financial performance of firms. Thus, the motivation of the current research is to survey the state-of-the-art literature on the relationship between green accounting and firm performance since the birth of the topic and to identify gaps in the literature that represent fruitful terrain for future research. In doing so, this study provides a bibliometric literature review of existing evidence on the link between green accounting and firm performance since 2000. The search, based on the most relevant databases for scientific journals (Scopus, Emerald, Web of Science, Google Scholar, and Econlit), returned 1917 scientific articles. The articles were manually reviewed in order to identify only the relevant studies in the field, excluding articles whose titles and abstracts were out of scope. The final sample comprised 107 articles. A content analysis was carried out on the final sample, and a classification system is proposed. Findings show the most relevant environmental costs and issues considered in previous studies and how green accounting may be linked to the financial and non-financial performance of a firm. The study also offers suggestions for future research in this domain. This study has several practical implications. Indeed, the topic of green accounting may be applied to different sectors and different types of companies.
Therefore, this study may help managers better understand which environmental information is most relevant to disclose and how environmental issues may be managed to improve firm performance. Moreover, the bibliometric literature review may be of interest to those stakeholders who are interested in the historical evolution of the topic.
Keywords: bibliometric literature review, firm performance, green accounting, literature review
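The screening funnel described above (1917 hits narrowed to 107 relevant articles) can be partly automated before the manual review. A minimal sketch; the scope terms and the records are invented for illustration:

```python
# Hypothetical scope terms for a title/abstract screen.
SCOPE = ("green accounting", "environmental cost", "firm performance")

def in_scope(record):
    """Keep records from 2000 onward whose title or abstract mentions
    at least one scope term (case-insensitive)."""
    text = (record["title"] + " " + record["abstract"]).lower()
    return record["year"] >= 2000 and any(term in text for term in SCOPE)

records = [
    {"title": "Green Accounting and ROA", "abstract": "...", "year": 2015},
    {"title": "Tax policy effects",       "abstract": "...", "year": 2018},
    {"title": "Environmental cost disclosure", "abstract": "...", "year": 1997},
]
shortlist = [r for r in records if in_scope(r)]
print(len(shortlist))  # 1 -> only the first record passes both filters
```

A screen like this only shrinks the pile; borderline items still require the manual full-text review described in the abstract.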
Procedia PDF Downloads 69
1658 The Determinants of Enterprise Risk Management: Literature Review, and Future Research
Authors: Sylvester S. Horvey, Jones Mensah
Abstract:
The growing complexities and dynamics of the business environment have led to a new approach to risk management, known as enterprise risk management (ERM). ERM is a system and an approach for managing the risks of an organization in an integrated manner to achieve corporate goals and strategic objectives. Regardless of the diversities in the business environment, ERM has become an essential factor in managing individual and business risks, because ERM is believed to enhance shareholder value and firm growth. Despite the growing literature on ERM, the question of which factors drive ERM remains under-explored. This study provides a comprehensive literature review of the main factors that contribute to ERM implementation. Google Scholar was the leading search engine used to identify empirical literature, and the review spanned 2000 to 2020. Articles published in Scimago-ranked and Scopus-indexed journals were examined. Thirteen firm characteristics and sixteen articles were considered for the empirical review. Most empirical studies agreed that firm size, institutional ownership, industry type, auditor type, industrial diversification, earnings volatility, stock price volatility, and internal audit had a positive relationship with ERM adoption, with firm size, institutional ownership, auditor type, and industry type most often statistically significant. Other factors, such as financial leverage, profitability, asset opacity, international diversification, and firm complexity, yielded inconclusive results. The growing literature on ERM is not without limitations; hence, this study suggests that further research should examine ERM determinants within new geographical contexts while considering new and more robust ways of measuring ERM rather than relying on a simple (dummy) proxy.
Other firm characteristics, such as organizational culture and context, corporate scandals and losses, and governance, could also be considered determinants of ERM adoption.
Keywords: enterprise risk management, determinants, ERM adoption, literature review
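Reviews of this kind often summarize findings by vote counting across studies: for each determinant, tally how many studies report a positive, negative, or non-significant relationship. A minimal sketch; the per-study findings below are invented placeholders, not the sixteen articles reviewed.

```python
from collections import Counter

# Each tuple: (determinant, direction reported by one hypothetical study)
findings = [
    ("firm size", "+"), ("firm size", "+"), ("firm size", "+"),
    ("leverage", "+"), ("leverage", "-"),
    ("profitability", "n.s."), ("profitability", "-"),
]

def tally(findings):
    """Count reported directions per determinant."""
    votes = {}
    for determinant, direction in findings:
        votes.setdefault(determinant, Counter())[direction] += 1
    return votes

# A determinant with consistent '+' votes (firm size here) is the kind
# the review above reports as agreed upon; split votes flag it as
# inconclusive.
for det, counts in tally(findings).items():
    print(det, dict(counts))
```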
Procedia PDF Downloads 173
1657 Focus-Latent Dirichlet Allocation for Aspect-Level Opinion Mining
Authors: Mohsen Farhadloo, Majid Farhadloo
Abstract:
Aspect-level opinion mining, which aims at discovering aspects (aspect identification) and their corresponding ratings (sentiment identification) from customer reviews, has increasingly attracted the attention of researchers and practitioners, as it provides valuable insights about products and services from customers' points of view. Instead of addressing aspect identification and sentiment identification in two separate steps, it is possible to identify both simultaneously. In recent years many graphical models based on Latent Dirichlet Allocation (LDA) have been proposed to solve both aspect and sentiment identification in a single step. Although LDA models have been effective tools for the statistical analysis of document collections, they also have shortcomings in addressing some unique characteristics of opinion mining. Our goal in this paper is to address one of the limitations of topic models to date: they fail to directly model the associations among topics. Indeed, in many text corpora it is natural to expect that subsets of the latent topics have higher probabilities. We propose a probabilistic graphical model, called focus-LDA, to better capture the associations among topics when applied to aspect-level opinion mining. Our experiments on real-life data sets demonstrate the improved effectiveness of the focus-LDA model in terms of the accuracy of the predictive distributions over held-out documents. Furthermore, we demonstrate qualitatively that the focus-LDA topic model provides a natural way of visualizing and exploring unstructured collections of textual data.
Keywords: aspect-level opinion mining, document modeling, Latent Dirichlet Allocation, LDA, sentiment analysis
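The LDA family of models discussed above rests on a simple generative story: each document draws a topic mixture, and each word draws a topic from that mixture and then a word from that topic's distribution. A minimal standard-library sketch with a toy two-topic review vocabulary (the vocabulary and mixture prior are illustrative simplifications; focus-LDA additionally models associations among topics, which this sketch omits):

```python
import random

random.seed(0)

# Toy per-topic word distributions (uniform within each topic).
TOPICS = {
    "service": ["staff", "friendly", "slow", "helpful"],
    "food":    ["pizza", "fresh", "cold", "tasty"],
}

def generate_document(n_words):
    """Sample one document from the LDA-style generative process."""
    # theta ~ document-topic mixture (normalized uniforms stand in for
    # a Dirichlet draw here, to stay stdlib-only)
    weights = [random.random() for _ in TOPICS]
    total = sum(weights)
    theta = [w / total for w in weights]
    names = list(TOPICS)
    words = []
    for _ in range(n_words):
        topic = random.choices(names, weights=theta)[0]  # z ~ Multinomial(theta)
        words.append(random.choice(TOPICS[topic]))       # w ~ Multinomial(phi_z)
    return words

doc = generate_document(8)
print(len(doc))  # 8
```

Inference inverts this process: given only the words, it recovers the topic mixtures and topic-word distributions, which in the aspect-level setting correspond to aspects and their sentiment-bearing vocabulary.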
Procedia PDF Downloads 94
1656 Investigating the Effectiveness of Multilingual NLP Models for Sentiment Analysis
Authors: Othmane Touri, Sanaa El Filali, El Habib Benlahmar
Abstract:
Natural Language Processing (NLP) has gained significant attention lately, having proved its ability to analyze and extract insights from unstructured text data in various languages. One of its most popular applications is sentiment analysis, which aims to identify the sentiment expressed in a piece of text, such as positive, negative, or neutral, across multiple languages. While several multilingual NLP models are available for sentiment analysis, their effectiveness in different contexts and applications needs to be investigated. In this study, we investigate the effectiveness of different multilingual NLP models for sentiment analysis on a dataset of online product reviews in multiple languages. The performance of several NLP models, including Google Cloud Natural Language API, Microsoft Azure Cognitive Services, Amazon Comprehend, Stanford CoreNLP, spaCy, and Hugging Face Transformers, is compared. The models are evaluated on several metrics, including accuracy, precision, recall, and F1 score, and their performance is compared across different categories of product reviews. To run the study, the dataset was preprocessed by cleaning and tokenizing the text data in multiple languages. Each model was then trained and tested using a cross-validation approach: the dataset was randomly divided into training and testing sets, and the process was repeated multiple times. A grid search was applied to optimize the hyperparameters of each model and select the best-performing model for each category of product reviews and each language. The findings of this study provide insights into the effectiveness of different multilingual NLP models for multilingual sentiment analysis and their suitability for different languages and applications.
The strengths and limitations of each model were identified, and recommendations were provided for selecting the most performant model based on the specific requirements of a project. This study contributes to the advancement of research methods in multilingual NLP and provides a practical guide for researchers and practitioners in the field.
Keywords: NLP, multilingual, sentiment analysis, texts
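The evaluation protocol above, k-fold cross-validation wrapped in a grid search over hyperparameters, can be sketched model-agnostically. In this sketch `score` is a stand-in for training and evaluating one NLP model on one split; the toy grid and scoring function are illustrative assumptions.

```python
from itertools import product

def k_fold_indices(n, k):
    """Split range(n) into k contiguous folds of near-equal size."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    return folds

def grid_search(n, k, grid, score):
    """Return (params, mean CV score) for the best hyperparameter combo.

    `score(train_idx, test_idx, params)` trains and evaluates one model;
    in the study this would wrap an actual NLP pipeline."""
    best = None
    for params in (dict(zip(grid, vals)) for vals in product(*grid.values())):
        scores = []
        for fold in k_fold_indices(n, k):
            test = set(fold)
            train = [i for i in range(n) if i not in test]
            scores.append(score(train, fold, params))
        mean = sum(scores) / len(scores)
        if best is None or mean > best[1]:
            best = (params, mean)
    return best

# Usage with a toy scoring function that always favours lr=0.1:
best = grid_search(10, 5, {"lr": [0.01, 0.1]},
                   score=lambda tr, te, p: 1.0 if p["lr"] == 0.1 else 0.5)
print(best)  # ({'lr': 0.1}, 1.0)
```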
Procedia PDF Downloads 105
1655 Preparation and Flame-Retardant Properties of Epoxy Resins Containing Organophosphorus Compounds
Authors: Tachita Vlad-Bubulac, Ionela-Daniela Carja, Diana Serbezeanu, Corneliu Hamciuc, Vicente Javier Forrat Perez
Abstract:
The present work describes the preparation of new organophosphorus compounds with high phosphorus content, followed by the incorporation of these compounds into epoxy resin systems in order to investigate the effect of phosphorus on the thermal stability and the flame-retardant and mechanical properties of the modified epoxy resins. Two new organophosphorus compounds have been synthesized and fully characterized. 6-Oxido-6H-dibenz[c,e][1,2]oxaphosphorinyl-phenylcarbinol was prepared by the addition reaction of the P–H group of 9,10-dihydro-9-oxa-10-phosphaphenanthrene-10-oxide to the carbonyl group of benzaldehyde. By treating the phenylcarbinol derivative with POCl3, a new phosphorus compound was obtained, having a phosphorus content of 12.227%. The organophosphorus compounds were purified by recrystallization, and their chemical structures were confirmed by melting point measurements and FTIR and 1H NMR spectroscopy. In the next step, various flame-retardant epoxy resins with different phosphorus contents were prepared starting from a commercial epoxy resin, using dicyandiamide (DICY) as a latent curing agent in the presence of an accelerator. Differential scanning calorimetry (DSC) was applied to investigate the behavior and kinetics of the curing process of the thermosetting systems. The results showed that the best curing characteristics and glass transition temperature are obtained at an epoxy resin:DICY:accelerator ratio of 94:5:1. The thermal stability of the phosphorus-containing epoxy resins was investigated by thermogravimetric analysis in nitrogen and air, DSC, SEM, and LOI test measurements.
Keywords: epoxy resins, flame retardant properties, phosphorus-containing compounds, thermal stability
Procedia PDF Downloads 313
1654 Setting up Model Hospitals in Health Care Waste Management in Madagascar
Authors: Sandrine Andriantsimietry, Hantanirina Ravaosendrasoa
Abstract:
In 2018, Madagascar set up its first best available technology, an autoclave, to treat health care waste in public hospitals according to best environmental practices in health care waste management. Incineration of health care waste, frequently through open burning, is the most common practice for treating and eliminating health care waste across the country. The autoclave is a best available technology for non-incineration treatment of health care waste that permits recycling of treated waste and prevents environmental harm by reducing unintended persistent organic pollutants from the health sector. A Global Environment Fund project supported the introduction of non-incineration treatment of health care waste to help countries in Africa move towards the Stockholm Convention objectives in the health sector. Two teaching hospitals in Antananarivo and one district hospital in Manjakandriana were equipped with 1300 L, 250 L, and 80 L autoclaves, respectively. The capacity of these model hospitals was strengthened by the donation of equipment and materials and by training health workers in best environmental practices in health care waste management. Proper segregation of waste in the wards, to collect the infectious waste to be treated in the autoclave, was the main step guaranteeing cost-efficient non-incineration of health care waste. Therefore, the switch from incineration to non-incineration treatment was rolled out progressively in each ward under the close supervision of a hygienist. The emissions of unintended persistent organic pollutants avoided during these four months of autoclave use amount to 9.4 g Toxic Equivalent per year. Public hospitals in low-income countries can be models of best environmental practices in health care waste management, but internal efforts must be made to sustain them.
Keywords: autoclave, health care waste management, model hospitals, non-incineration
Procedia PDF Downloads 163
1653 Railway Transport as a Potential Source of Polychlorinated Biphenyls in Soil
Authors: Nataša Stojić, Mira Pucarević, Nebojša Ralević, Vojislava Bursić, Gordan Stojić
Abstract:
Surface soil (0–10 cm) samples from 52 sampling sites along railway tracks on the territory of Srem (the western part of the Autonomous Province of Vojvodina, itself part of Serbia) were collected and analyzed for 7 polychlorinated biphenyls (PCBs) in order to see how distance from the railroad, on the one hand, and from the landfill, on the other, affects the concentration of PCBs (CPCB) in the soil. Samples were taken at distances of 0.03 to 4.19 km from the railway and 0.43 to 3.35 km from the landfills. Soxhlet extraction (USEPA 3540S) was used for the soil extraction. The extracts were purified on a silica-gel column (USEPA 3630C), and their analysis was performed by gas chromatography with tandem mass spectrometry. PCBs were not detected at only two locations. The mean total concentration of PCBs across all other sampling locations was 0.0043 ppm dry weight (dw), with a range of 0.0005 to 0.0227 ppm dw. Principal component analysis (PCA) was applied to the part of the data relevant to this research to isolate the factors that affect the concentration of PCBs. The data were also analyzed using Pearson's chi-squared test, which showed that the hypothesis of independence between CPCB and distance from the railway can be rejected. The hypothesis of independence between CPCB and the percentage of humus in the soil can also be rejected; in contrast, for CPCB and distance from the landfill, the hypothesis of independence cannot be rejected. Based on these results, it can be said that railway transport is a potential source of PCBs. The next step in this research is to establish the positions of transformers located near the sampling sites, as another important factor affecting the concentration of PCBs in the soil.
Keywords: GC/MS, landfill, PCB, railway, soil
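Pearson's chi-squared test of independence used above can be computed by hand from a contingency table: expected counts come from the row and column totals, and the statistic sums the squared deviations over all cells. The counts in the example are illustrative, not the Srem soil data.

```python
def chi_squared(table):
    """Pearson chi-squared statistic for an r x c contingency table
    (list of rows of observed counts)."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n   # expected under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# e.g. samples cross-classified as near/far from the railway (rows)
# vs high/low PCB concentration (columns):
print(chi_squared([[30, 10], [10, 30]]))  # 20.0
```

The statistic is then compared against the chi-squared distribution with (r-1)(c-1) degrees of freedom; for a 2x2 table at the 0.05 level the critical value is 3.84, so 20.0 would reject independence.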
Procedia PDF Downloads 335