Search results for: Optimization effort
996 Battery Grading Algorithm in 2nd-Life Repurposing Li-Ion Battery System
Authors: Ya L. V., Benjamin Ong Wei Lin, Wanli Niu, Benjamin Seah Chin Tat
Abstract:
This article introduces a methodology that improves the reliability and cyclability of a 2nd-life Li-ion battery system repurposed as an energy storage system (ESS). Most 2nd-life retired battery systems on the market have only a module/pack-level state-of-health (SOH) indicator, which is used to guide an appropriate depth-of-discharge (DOD) in the ESS application. Because cell-level SOH indication is lacking, the differing degradation behaviors of individual cells cannot be identified once the pack reaches retired status; as a result, considering end-of-life (EOL) loss and pack-level DOD, the repurposed ESS has to be oversized by more than 1.5 times to meet the application's reliability and cyclability requirements. The proposed battery grading algorithm, using a non-invasive methodology, detects outlier cells from historical voltage data and estimates cell-level historical maximum temperature using a semi-analytic methodology. In this way, each battery cell in the 2nd-life battery system can be graded in terms of SOH on the basis of historical voltage fluctuation and estimated historical maximum temperature variation. These grades are then mapped to corresponding DOD grades in the repurposed ESS to enhance system reliability and cyclability. In all, the introduced battery grading algorithm is non-invasive, compatible with all kinds of retired Li-ion battery systems that lack cell-level SOH indication, and can potentially be embedded into battery management software for preventive maintenance and real-time cyclability optimization.
Keywords: battery grading algorithm, 2nd-life repurposing battery system, semi-analytic methodology, reliability and cyclability
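The abstract does not specify how outlier cells are detected from historical voltage data. A minimal sketch of one plausible approach, flagging cells whose mean historical voltage deviates from the pack median by a large modified z-score (the function name, data layout, and threshold are illustrative assumptions, not the authors' algorithm):

```python
import statistics

def flag_outlier_cells(cell_voltages, mz_threshold=3.5):
    """Flag cells whose mean historical voltage is a robust outlier.
    cell_voltages: {cell_id: [historical voltage readings]}."""
    # Mean historical voltage per cell
    means = {cell: statistics.mean(v) for cell, v in cell_voltages.items()}
    med = statistics.median(means.values())
    # Median absolute deviation (robust spread estimate)
    mad = statistics.median(abs(m - med) for m in means.values())
    if mad == 0:
        return []  # all cells effectively identical; nothing to flag
    # Modified z-score criterion (Iglewicz-Hoaglin constant 0.6745)
    return sorted(cell for cell, m in means.items()
                  if 0.6745 * abs(m - med) / mad > mz_threshold)
```

A robust (median-based) criterion is used here because a single badly degraded cell would inflate a plain standard deviation and mask itself.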
Procedia PDF Downloads 203
995 An Insight into the Distribution of Lineaments over Sheared Terrains to Hydraulically Characterize the Shear Zones in Precambrian Hard Rock Aquifer System
Authors: Tamal Sur, Tapas Acharya
Abstract:
Identifying water resources in hard crystalline rock terrain has been a huge challenge over the decades, as such terrain is considered a poor groundwater province. Over the years, the use of satellite imagery to delineate groundwater potential zones in sheared hard rock terrain has been only occasionally successful, and in numerous circumstances groundwater potential zones delineated by satellite imagery alone have failed to yield satisfactory results. The present study argues that zones with a high concentration of lineaments oblique to the general trend of the shear fabric can be good groundwater potential zones within a shear zone in a crystalline fractured rock aquifer system: over such a region, the density of lineaments and the number of intersecting lineaments increase, making it a suitable locale for groundwater recharge. The study area, in the Purulia district of West Bengal, NE India, is mostly composed of Precambrian metamorphic rocks, i.e., quartzite, granite gneisses, porphyroclastic granite-gneiss, quartzo-feldspathic granite-gneiss, mylonitic granites, quartz-biotite-granite gneiss, and some phyllites. This study attempts to demonstrate the relationship between high lineament accumulation and intersection and zones of high water table fluctuation, i.e., good groundwater potential zones, and on that basis to characterize the shear zones with respect to their groundwater potentiality. Satellite imagery (IRS-P6 LISS IV standard FCC image) analysis reveals the bifurcating nature of the North Purulia shear zone (NPSZ) and South Purulia shear zone (SPSZ) over the study area.
Careful analysis of lineament rose diagrams, the lineament density map, the lineament intersection density map, and frequency diagrams for water table depths, with an emphasis on high water table fluctuations, shows that the different structural features over the North and South Purulia shear zones can affect the hydraulic potential of the region.
Keywords: crystalline hard rock terrain, groundwater recharge, hydrogeology, lineaments, shear zone, water table fluctuation
Procedia PDF Downloads 77
994 Effect of Injection Moulding Process Parameters on Tensile Strength Using Taguchi Method
Authors: Gurjeet Singh, M. K. Pradhan, Ajay Verma
Abstract:
The plastic industry plays a very important role in the economy of any country, generally accounting for a leading share of it. Since metals and their alloys are only rarely available on earth, producing plastic products and components, which find application in many industrial as well as household consumer products, is beneficial; about 50% of plastic products are manufactured by injection moulding. To produce better-quality products, the quality characteristics and performance of the product must be controlled. The process parameters play a significant role in the production of plastics, hence their control is essential. This paper describes the effect of parameter selection on the injection moulding process, with the aim of defining suitable parameters for producing a plastic product. Selecting process parameters by trial and error is neither desirable nor acceptable, as it tends to increase cost and time; hence, optimization of the processing parameters of the injection moulding process is essential. The experiments were designed with Taguchi's orthogonal array to achieve the result with the least number of experiments. The plastic material studied here is polypropylene. Tensile strength tests of specimens produced on the injection moulding machine were performed on a universal testing machine. Using the Taguchi technique with the help of Minitab 14 software, the best values of injection pressure, melt temperature, packing pressure, and packing time were obtained. We found that the process parameter packing pressure contributes most to the production of a plastic product with good tensile strength.
Keywords: injection moulding, tensile strength, poly-propylene, Taguchi
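The Taguchi analysis behind such results typically ranks factor settings by a signal-to-noise (S/N) ratio; for tensile strength, the larger-the-better form applies. A small sketch of that standard formula (not code from the study; the example strength values are made up):

```python
import math

def sn_larger_the_better(responses):
    """Taguchi larger-the-better S/N ratio (dB) for replicate
    measurements at one factor setting:
    S/N = -10 * log10( (1/n) * sum(1 / y_i^2) )."""
    n = len(responses)
    return -10.0 * math.log10(sum(1.0 / y**2 for y in responses) / n)
```

The factor level with the highest S/N ratio across the orthogonal-array runs is taken as optimal, and the spread of S/N means per factor gives the contribution ranking (here, packing pressure).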
Procedia PDF Downloads 288
993 Teaching Accounting through Critical Accounting Research: The Origin and Its Relevance to the South African Curriculum
Authors: Rosy Makeresemese Qhosola
Abstract:
South Africa has maintained the effort to uphold the guiding principles of its constitution, such as equity, social justice, peace, freedom, and hope, to mention but a few. Such principles form the basis for any legislation and policies that guide all fields/departments of government. Education is one of those departments and is expected to abide by such principles as outlined in its policies. Therefore, as expected, education policies and legislation outline their intention to ensure the development of students' critical thinking and creative capacities by creating learning contexts and opportunities that accommodate effective teaching and learning strategies that are learner-centred and compatible with the prescripts of the country's democratic constitution. This paper aims at exploring and analyzing the progress of conventional accounting in terms of its adherence to the effective use of principles of good teaching, as per policy expectations in South Africa. The progress is traced by comparing conventional accounting to Critical Accounting Research (CAR), highlighting the history of accounting as intended in the South African curriculum and in CAR. The Critical Accounting Research framework is used as a lens and mode of teaching in this paper, since it can create a space for the learning of accounting that is optimal, marked by the use of more learner-centred methods of teaching. The curriculum of South Africa also emphasises learner-centred methods of teaching that encourage an active and critical approach to learning, rather than rote and uncritical learning of given truths. The study seeks to maintain that conventional accounting is in contrast with principles of good teaching as per South African policy expectations.
The paper further maintains that a possible move beyond this, toward adherence to the effective use of good teaching, is for CAR to form the basis of teaching. Data are generated through Participatory Action Research, in which meetings, dialogues, and discussions are conducted with focus groups consisting of lecturers, students, subject heads, coordinators, NGOs, and departmental officials. The results are analysed through Critical Discourse Analysis, since it allows for the use of participants' own text. The study concludes that any teacher who aspires to achieve in the teaching and learning of accounting should first meet the minimum requirements stated at NQF level 4, which form the basic principles of good teaching and are in line with Critical Accounting Research.
Keywords: critical accounting research, critical discourse analysis, participatory action research, principles of good teaching
Procedia PDF Downloads 309
992 Bovine Sperm Capacitation Promoters: A Comparison between Serum and Non-Serum Albumin Originating from Fish
Authors: Haris Setiawan, Phongsakorn Chuammitri, Korawan Sringarm, Montira Intanon, Anucha Sathanawongs
Abstract:
Capacitation is a prerequisite for sperm to become competent to penetrate the oocyte; it occurs naturally in vivo throughout the female reproductive tract, involving secretory fluid and epithelial cells. One of the crucial capacitation-promoting compounds in the oviductal fluid is albumin, which is secreted in major concentrations. However, the difficulty of collecting oviductal fluid and the inconsistency of its composition throughout the estrous cycle have led to its replacement with serum-based albumins such as bovine serum albumin (BSA). BSA has been widely used and shown to stabilize the acrosome and keep it intact during the capacitation process, modulate hyperactivation, and elevate the number of sperm bound to the zona pellucida. Contrary to these benefits, the use of blood-derived products in culture systems is not sustainable and increases the risk of disease transmission, such as Creutzfeldt-Jakob disease (CJD) and bovine spongiform encephalopathy (BSE). Moreover, it has been asserted that this substance is an aeroallergen that can cause allergies and respiratory problems. In an effort to identify an alternative, sustainable, and non-toxic albumin source, the present work evaluated sperm responses to a capacitation medium containing albumin derived from the flesh of the snakehead fish (Channa striata). Before examining the ability of this non-serum albumin to promote capacitation in bovine sperm, albumin was detected using bromocresol purple (BCP) at a level of 25% in the snakehead fish extract. Following SDS-PAGE and densitometric analysis, two major bands at 40 kDa and 47 kDa, comprising 57% and 16% of total loaded protein, were detected as potential albumin-related bands. Significant differences were observed in all kinematic parameters upon incubation in the capacitation medium.
Moreover, consistently higher values were obtained for the kinematic parameters related to hyperactivation, such as amplitude of lateral head displacement (ALH), curvilinear velocity (VCL), and linearity (LIN), when sperm were treated with 3 mg/mL of snakehead fish albumin compared with the other treatments. Likewise, a substantially higher proportion of intact acrosomes was observed in sperm incubated with various concentrations of snakehead fish albumin for 90 minutes, indicating that this level of snakehead fish albumin can be used to replace bovine serum albumin. However, further study is required to purify the albumin from snakehead fish extract for more reliable findings.
Keywords: capacitation promoter, snakehead fish, non-serum albumin, bovine sperm
Procedia PDF Downloads 112
991 Surfactant-Assisted Aqueous Extraction of Residual Oil from Palm-Pressed Mesocarp Fibre
Authors: Rabitah Zakaria, Chan M. Luan, Nor Hakimah Ramly
Abstract:
The extraction of vegetable oil using an aqueous extraction process assisted by an ionic extended surfactant has been investigated as an alternative to hexane extraction. However, ionic extended surfactants have not been commercialised, and their safety with respect to food processing is uncertain. Hence, food-grade non-ionic surfactants (Tween 20, Span 20, and Span 80) were proposed for the extraction of residual oil from palm-pressed mesocarp fibre. Palm-pressed mesocarp fibre contains a significant amount of residual oil (5-10 wt%), and its recovery is beneficial, as the oil contains a much higher content of vitamin E, carotenoids, and sterols than crude palm oil. In this study, the formulation of food-grade surfactants using a combination of high hydrophilic-lipophilic balance (HLB) surfactants and low-HLB surfactants to produce a micro-emulsion with very low interfacial tension (IFT) was investigated. The suitable surfactant formulation was used in the oil extraction process, and the efficiency of the extraction was correlated with IFT, droplet size, and viscosity. It was found that a ternary surfactant mixture with an HLB value of 15 (82% Tween 20, 12% Span 20, and 6% Span 80) produced a micro-emulsion with very low IFT compared to other HLB combinations. The results suggest that IFT and droplet size strongly affect oil recovery efficiency. Finally, optimization of the operating parameters shows that the highest extraction efficiency of 78% was achieved at a 1:31 solid-to-liquid ratio, 2 wt% surfactant solution, a temperature of 50˚C, and 50 minutes of contact time.
Keywords: food-grade surfactants, aqueous extraction of residual oil, palm-pressed mesocarp fibre, interfacial tension
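The stated blend HLB of 15 is consistent with the conventional mass-weighted HLB mixing rule, using the commonly tabulated values Tween 20 ≈ 16.7, Span 20 ≈ 8.6, and Span 80 ≈ 4.3 (these individual HLB values are assumed here, not given in the abstract):

```python
def blend_hlb(fractions):
    """Mass-weighted average HLB of a surfactant blend.
    fractions: {surfactant name: weight fraction}, summing to 1."""
    # Commonly tabulated HLB values (assumed, not from the abstract)
    HLB = {"Tween 20": 16.7, "Span 20": 8.6, "Span 80": 4.3}
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    return sum(f * HLB[s] for s, f in fractions.items())
```

With the reported 82/12/6 composition this rule gives roughly 15.0, matching the abstract's ternary mixture.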
Procedia PDF Downloads 390
990 Research on Spatial Distribution of Service Facilities Based on Innovation Function: A Case Study of Zhejiang University Zijin Co-Maker Town
Authors: Zhang Yuqi
Abstract:
Service facilities are boosters for the cultivation and development of innovative functions in innovation cluster areas; at the same time, reasonable service facilities planning can better link the internal functional blocks. This paper takes Zhejiang University Zijin Co-Maker Town as the research object and, based on a combination of network data mining with field research and verification, together with the needs of its internal innovative groups, studies the distribution characteristics and existing problems of the service facilities, and then proposes targeted planning suggestions. The main conclusions are as follows: (1) In terms of facility type, the town is rich in general life-supporting services but lacks targeted and distinctive service facilities for innovative groups; (2) In terms of scale structure, small-scale street shops are the main business form, and a large-scale service center is lacking; (3) In terms of spatial structure, the service facilities layout of each functional block does not fit the 'aggregation-distribution' characteristics of innovation and entrepreneurial activities; (4) The optimization of service facilities planning should be guided by the goal of fostering the innovation and entrepreneurship function and should meet the actual needs of the innovation and entrepreneurial groups.
Keywords: the cultivation of innovative function, Zhejiang University Zijin Co-Maker Town, service facilities, network data mining, space optimization advice
Procedia PDF Downloads 116
989 Cache Analysis and Software Optimizations for Faster On-Chip Network Simulations
Authors: Khyamling Parane, B. M. Prabhu Prasad, Basavaraj Talawar
Abstract:
Fast simulations are critical for reducing time to market for CMPs and SoCs. Several simulators have been used to evaluate the performance and power consumed by Networks-on-Chip, and researchers and designers rely upon these simulators for design space exploration of NoC architectures. Our experiments show that simulating large NoC topologies takes hours to several days to complete. To speed up the simulations, it is necessary to investigate and optimize the hotspots in the simulator source code. Among the several simulators available, we chose Booksim2.0, as it is extensively used in the NoC community. In this paper, we analyze the cache and memory system behaviour of Booksim2.0 to accurately monitor input-dependent performance bottlenecks. Our measurements show that cache and memory usage patterns vary widely based on the input parameters given to Booksim2.0. Based on these measurements, the cache configuration with the fewest misses was identified. To further reduce cache misses, we apply software optimization techniques such as removal of unused functions, loop interchange, and replacing the post-increment operator with the pre-increment operator for non-primitive data types; these techniques reduced cache misses by 18.52%, 5.34%, and 3.91%, respectively. We also employ thread parallelization and vectorization to improve the overall performance of Booksim2.0. The OpenMP programming model and SIMD are used for parallelizing and vectorizing the more time-consuming portions of Booksim2.0. Speedups of 2.93x and 3.97x were observed for the Mesh topology with a 30 × 30 network size by employing thread parallelization and vectorization, respectively.
Keywords: cache behaviour, network-on-chip, performance profiling, vectorization
Procedia PDF Downloads 197
988 Optimization of the Drinking Water Treatment Process: Improvement of the Treated Water Quality by Using the Sludge Produced by the Water Treatment Plant
Authors: M. Derraz, M. Farhaoui
Abstract:
Problem statement: In water treatment, the coagulation and flocculation processes produce sludge in proportion to the level of water turbidity. Aluminum sulfate is the most common coagulant used in the water treatment plants of Morocco, as well as in many other countries. The sludge produced by the treatment plant is difficult to manage; however, it can be reused in the process to improve the quality of the treated water and reduce the aluminum sulfate dose. Approach: In this study, the effectiveness of sludge was evaluated at different turbidity levels (low, medium, and high) and coagulant dosages to find the optimal operational conditions. The influence of settling time was also studied. A set of jar test experiments was conducted to find the sludge and aluminum sulfate dosages that improve the produced water quality at each turbidity level. Results: The results demonstrate that using sludge produced by the treatment plant can improve the quality of the produced water and reduce aluminum sulfate use by 40 to 50%, depending on the turbidity level (10, 20, and 40 NTU). Conclusions/Recommendations: The highest turbidity removal efficiency was observed with 6 mg/l of aluminum sulfate and 35 mg/l of sludge at low turbidity, 20 mg/l of aluminum sulfate and 50 mg/l of sludge at medium turbidity, and 20 mg/l of aluminum sulfate and 60 mg/l of sludge at high turbidity. The turbidity removal efficiencies are 97.56%, 98.96%, and 99.47% for the low, medium, and high turbidity levels, respectively.
Keywords: coagulation process, coagulant dose, sludge reuse, turbidity removal
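The reported percentages follow the standard removal-efficiency formula used to evaluate jar tests; a one-line sketch (the residual turbidity value in the comment is back-calculated from the reported 99.47%, not taken from the study):

```python
def removal_efficiency(initial_ntu, final_ntu):
    """Percent turbidity removal: 100 * (initial - final) / initial."""
    return 100.0 * (initial_ntu - final_ntu) / initial_ntu

# e.g. raw water at 40 NTU settling to ~0.21 NTU corresponds to
# the ~99.47% removal reported for the high-turbidity case
```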
Procedia PDF Downloads 237
987 Effect of Saponin-Enriched Soapwort Powder on Structural and Sensorial Properties of Turkish Delight
Authors: Ihsan Burak Cam, Ayhan Topuz
Abstract:
Turkish delight was produced by bleaching the plain delight mix (refined sugar, water, and starch) with soapwort extract and powdered sugar. Soapwort extract, which contains a high amount of saponin, is an additive used in Turkish delight and tahini halvah production to improve consistency, chewiness, and color; owing to its bioactive saponin content, it acts as an emulsifier. In this study, soapwort powder was produced after determining the optimum process conditions for the soapwort extract using the response-surface method. The extract was enriched in saponin by reverse osmosis (to 63% saponin on a dry basis), and a Büchi mini spray dryer B-290 was used to produce spray-dried soapwort powder (aw = 0.254) from the enriched concentrate. The optimized processing steps and saponin enrichment of the soapwort extract were tested in Turkish delight production. Delight samples produced with the soapwort powder were compared with samples produced with commercial extract (control) in terms of chewiness, springiness, stickiness, adhesiveness, hardness, color, and sensorial characteristics. According to the results, all textural properties except hardness of the delights produced with the powder were statistically different from the control samples: the chewiness, springiness, stickiness, adhesiveness, and hardness values (powder delights / control delights) were 361.9/1406.7, 0.095/0.251, -120.3/-51.7, 781.9/1869.3, and 3427.3 g/3118.4 g, respectively. Quality analysis of the end products showed no statistically significant negative effect of the soapwort extract or the soapwort powder on the color and appearance of the Turkish delight.
Keywords: saponin, delight, soapwort powder, spray drying
Procedia PDF Downloads 253
986 Optimization of Personnel Selection Problems via Unconstrained Geometric Programming
Authors: Vildan Kistik, Tuncay Can
Abstract:
From a business perspective, cost and profit are two key factors. The intent of most businesses is to minimize cost and maximize profit, so as to provide the greatest benefit to themselves. However, the physical system is very complicated because of technological constraints, the rapid growth of competitive environments, and similar factors; in such a system it is not easy to maximize profits or minimize costs. Businesses must decide on the competence and suitability of the personnel to be recruited, taking many criteria into consideration. There are many criteria for determining the competence of a staff member: factors such as level of education, experience, psychological and sociological position, and the human relationships existing in the field are just some of the important factors in selecting staff for a firm. Personnel selection is a very important and costly process for businesses in today's competitive market. Although many mathematical methods have been developed for personnel selection, their use is unfortunately rarely encountered in real life. In this study, unlike other methods, an exponential programming model was established based on the probability that selected personnel fail after starting work. With the necessary transformations, the problem was converted into an unconstrained geometric programming problem, and the personnel selection problem was approached with the geometric programming technique. Personnel selection scenarios for a classroom were constructed with the help of the normal distribution, and optimum solutions were obtained; in the most appropriate solutions, the personnel selection process for the classroom was achieved at minimum cost.
Keywords: geometric programming, personnel selection, non-linear programming, operations research
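For a two-term unconstrained posynomial in one variable, the geometric programming dual has a closed form: the dual weights solve the normality condition d1 + d2 = 1 and the orthogonality condition a1*d1 + a2*d2 = 0, and the dual optimum equals the primal minimum. A sketch under those assumptions (one decision variable, exponents of opposite sign; the example coefficients are made up, not the study's cost data):

```python
def min_two_term_posynomial(c1, a1, c2, a2):
    """Minimize c1*x**a1 + c2*x**a2 over x > 0, assuming a1 > 0 > a2,
    via the geometric-programming dual."""
    d1 = -a2 / (a1 - a2)   # normality + orthogonality conditions
    d2 = a1 / (a1 - a2)
    v = (c1 / d1) ** d1 * (c2 / d2) ** d2   # dual optimum = primal optimum
    # At the optimum each term equals d_i * v, so recover the minimizer:
    x = (d1 * v / c1) ** (1.0 / a1)
    return v, x
```

For example, minimizing 4x + 9/x (say, a hiring cost that grows with screening effort plus a failure-risk cost that shrinks with it) gives a minimum of 12 at x = 1.5.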
Procedia PDF Downloads 271
985 A Prediction Model for Dynamic Responses of Building from Earthquake Based on Evolutionary Learning
Authors: Kyu Jin Kim, Byung Kwan Oh, Hyo Seon Park
Abstract:
Seismic-response-based structural health monitoring has been performed to prevent seismic damage. Structural seismic damage to a building is caused by instantaneous stress concentration, which is related to the dynamic characteristics of the earthquake. Meanwhile, seismic response analysis to estimate the dynamic responses of a building demands significantly high computational cost. To prevent the failure of structural members due to the characteristics of the earthquake, and to avoid the significantly high computational cost of seismic response analysis, this paper presents an artificial neural network (ANN) based prediction model for the dynamic responses of a building over a specific time length. From the measured dynamic responses, the input and output nodes of the ANN are formed by the length of the specific time window and adopted for training. In the model, an evolutionary radial basis function neural network (ERBFNN) is implemented, in which a radial basis function network (RBFN) is integrated with an evolutionary optimization algorithm to find the RBF variables. The effectiveness of the proposed model is verified through an analytical study applying responses from dynamic analysis of a multi-degree-of-freedom system as training data for the ERBFNN.
Keywords: structural health monitoring, dynamic response, artificial neural network, radial basis function network, genetic algorithm
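The forward pass of the RBFN inside such a model has the standard Gaussian form; a minimal sketch (the evolutionary search for the centers, widths, and weights, which is the core of the ERBFNN, is not shown, and all parameter values here are placeholders):

```python
import math

def rbf_predict(x, centers, widths, weights, bias=0.0):
    """Forward pass of a Gaussian radial basis function network:
    y = bias + sum_j w_j * exp(-||x - c_j||^2 / (2 * s_j^2))."""
    y = bias
    for c, s, w in zip(centers, widths, weights):
        r2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        y += w * math.exp(-r2 / (2.0 * s * s))
    return y
```

In a response-prediction setting, x would be a window of measured responses and y the predicted response at the next step; an evolutionary algorithm would tune centers, widths, and weights to minimize training error.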
Procedia PDF Downloads 304
984 Culturable Diversity of Halophilic Bacteria in Chott Tinsilt, Algeria
Authors: Nesrine Lenchi, Salima Kebbouche-Gana, Laddada Belaid, Mohamed Lamine Khelfaoui, Mohamed Lamine Gana
Abstract:
Saline lakes are extreme hypersaline environments that are five to ten times saltier than seawater (150-300 g L-1 salt concentration). Hypersaline regions differ from each other in salt concentration, chemical composition, and geographical location, which determine the nature of the inhabitant microorganisms. In order to explore the diversity of moderately and extremely halophilic bacteria in Chott Tinsilt (eastern Algeria), an isolation program was performed. First, water samples were collected from the saltern during the pre-salt-harvesting phase. The salinity, pH, and temperature of the sampling site were determined in situ. Chemical analysis of the water sample indicated that Na+ and Cl- were the most abundant ions. Isolates were obtained by plating out the samples on complex and synthetic media. In this study, seven halophilic bacterial cultures were isolated. The isolates were studied for Gram's reaction, cell morphology, and pigmentation. Enzymatic assays (oxidase, catalase, nitrate reductase, and urease) and optimization of growth conditions were carried out. The results indicated that the salinity optima varied from 50 to 250 g L-1, whereas the optimum temperature ranged from 25°C to 35°C. Molecular identification of the isolates was performed by sequencing the 16S rRNA gene. The results showed that the cultured isolates included members of the Halomonas, Staphylococcus, Salinivibrio, Idiomarina, Halobacillus, Thalassobacillus, and Planococcus genera, and what may represent a new bacterial genus.
Keywords: bacteria, Chott, halophilic, 16S rRNA
Procedia PDF Downloads 281
983 The Analysis of Emergency Shutdown Valves Torque Data in Terms of Its Use as a Health Indicator for System Prognostics
Authors: Ewa M. Laskowska, Jorn Vatn
Abstract:
Industry 4.0 focuses on the digital optimization of industrial processes. The idea is to use extracted data to build a decision support model enabling their use for real-time decision making. In terms of predictive maintenance, the desired decision support tool would be a model enabling prognostics of a system's health based on the current condition of the considered equipment. Within the area of system prognostics and health management, a commonly used health indicator is the Remaining Useful Lifetime (RUL) of a system. Because the RUL is a random variable, it has to be estimated from available health indicators, which can be of different types and come from different sources: process variables, equipment performance variables, data related to the number of experienced failures, etc. The aim of this study is the analysis of performance variables of emergency shutdown valves (ESVs) used in the oil and gas industry. ESVs are inspected periodically, and at each inspection the torque and operating time of the valve are registered. The data will be analyzed by means of machine learning or statistical analysis. The purpose is to investigate whether the available data could be used as a health indicator for prognostic purposes. The second objective is to examine the most efficient way to incorporate the data into a predictive model: whether the data can be applied as explanatory variables in a Markov process, or whether another stochastic process would be more convenient for building an RUL model based on the information coming from the registered data.
Keywords: emergency shutdown valves, health indicator, prognostics, remaining useful lifetime, RUL
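One simple way periodic torque readings could serve as a health indicator is to fit a degradation trend and extrapolate it to an allowable torque limit; a sketch assuming torque drifts linearly upward as the valve degrades (the linear model, the limit, and all numbers are illustrative assumptions, not the study's model):

```python
def estimate_rul(inspection_times, torques, torque_limit):
    """Least-squares line through inspection torque readings;
    RUL = time until the fitted trend crosses the allowable limit."""
    n = len(inspection_times)
    mt = sum(inspection_times) / n
    my = sum(torques) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(inspection_times, torques))
             / sum((t - mt) ** 2 for t in inspection_times))
    if slope <= 0:
        return float("inf")   # no upward drift toward the limit
    intercept = my - slope * mt
    t_fail = (torque_limit - intercept) / slope
    return max(t_fail - inspection_times[-1], 0.0)
```

A stochastic-process model (e.g. the Markov formulation the abstract considers) would additionally quantify the uncertainty around this point estimate.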
Procedia PDF Downloads 91
982 Actionable Personalised Learning Strategies to Improve a Growth-Mindset in an Educational Setting Using Artificial Intelligence
Authors: Garry Gorman, Nigel McKelvey, James Connolly
Abstract:
This study will evaluate a growth mindset intervention with Junior Cycle Coding and Senior Cycle Computer Science students in Ireland, where gamification will be used to incentivise growth mindset behaviour. An artificial intelligence (AI) driven personalised learning system will be developed to present computer programming learning tasks in a manner that is best suited to the individuals’ own learning preferences while incentivising and rewarding growth mindset behaviour of persistence, mastery response to challenge, and challenge seeking. This research endeavours to measure mindset with before and after surveys (conducted nationally) and by recording growth mindset behaviour whilst playing a digital game. This study will harness the capabilities of AI and aims to determine how a personalised learning (PL) experience can impact the mindset of a broad range of students. The focus of this study will be to determine how personalising the learning experience influences female and disadvantaged students' sense of belonging in the computer science classroom when tasks are presented in a manner that is best suited to the individual. Whole Brain Learning will underpin this research and will be used as a framework to guide the research in identifying key areas such as thinking and learning styles, cognitive potential, motivators and fears, and emotional intelligence. This research will be conducted in multiple school types over one academic year. Digital games will be played multiple times over this period, and the data gathered will be used to inform the AI algorithm. The three data sets are described as follows: (i) Before and after survey data to determine the grit scores and mindsets of the participants, (ii) The Growth Mind-Set data from the game, which will measure multiple growth mindset behaviours, such as persistence, response to challenge and use of strategy, (iii) The AI data to guide PL. 
This study will highlight the effectiveness of an AI-driven personalised learning experience. The data will position AI within the Irish educational landscape, with a specific focus on the teaching of CS. These findings will benefit coding and computer science teachers by providing a clear pedagogy for the effective delivery of personalised learning strategies for computer science education. This pedagogy will help prevent students from developing a fixed mindset while helping pupils to exhibit persistence of effort, use of strategy, and a mastery response to challenges.
Keywords: computer science education, artificial intelligence, growth mindset, pedagogy
Procedia PDF Downloads 87
981 Finite Element Molecular Modeling: A Structural Method for Large Deformations
Authors: A. Rezaei, M. Huisman, W. Van Paepegem
Abstract:
Atomic interactions in molecular systems are mainly studied by particle mechanics. Nevertheless, researchers have also put considerable effort into simulating them using continuum methods. In the early 2000s, simple equivalent finite element models were developed to study the mechanical properties of carbon nanotubes and graphene in composite materials. Afterward, many researchers employed similar structural simulation approaches to obtain mechanical properties of nanostructured materials, to simplify the interface behavior of fiber-reinforced composites, and to simulate defects in carbon nanotubes or graphene sheets. These structural approaches, however, are limited to small deformations due to complicated local rotational coordinates. This article proposes a method for the finite element simulation of molecular mechanics. For ease of reference, it is here called Structural Finite Element Molecular Modeling (SFEMM). The SFEMM method improves the available structural approaches for large deformations without using any rotational degrees of freedom. Moreover, the method simulates molecular conformation, which is a big advantage over the previous approaches. Technically, this method uses nonlinear multipoint constraints to simulate the kinematics of the atomic multibody interactions. Only truss elements are employed, and the bond potentials are implemented through constitutive material models. Because the equilibrium bond length, bond angles, and bond-torsion potential energies are intrinsic material parameters, the model is independent of initial strains or stresses. In this paper, the SFEMM method has been implemented in the ABAQUS finite element software. The constraints and material behaviors are modeled through two Fortran subroutines. The method is verified for the bond stretch, bond angle, and bond torsion of carbon atoms.
Furthermore, the capability of the method in the conformation simulation of molecular structures is demonstrated via a case study of a graphene sheet. Briefly, SFEMM builds up a framework that offers more flexible features than the conventional molecular finite element models, supporting structural relaxation modeling and large deformations without incorporating local rotational degrees of freedom. Potentially, the method is a big step towards comprehensive molecular modeling with the finite element technique, and thereby towards concurrently coupling an atomistic domain to a solid continuum domain within a single finite element platform.
Keywords: finite element, large deformation, molecular mechanics, structural method
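The abstract does not give the bond potentials themselves; a minimal sketch of the kind of constitutive bond-stretch law such a truss-based model might encode (with illustrative constants, not values from the paper) is:

```python
def bond_stretch_energy(r, r0=0.142, k_r=650.0):
    """Harmonic bond-stretch potential U(r) = 0.5 * k_r * (r - r0)^2.

    r and r0 are bond lengths in nm (0.142 nm is the C-C bond length
    in graphene); k_r is an illustrative stiffness, not a force-field
    constant from the paper. In an SFEMM-style model this energy would
    be encoded in the truss element's constitutive material law.
    """
    return 0.5 * k_r * (r - r0) ** 2


def bond_stretch_force(r, r0=0.142, k_r=650.0):
    """Axial restoring force dU/dr; it vanishes at the equilibrium
    length r0, so the model carries no initial strain or stress."""
    return k_r * (r - r0)
```

Because the potential depends only on the deviation from the intrinsic equilibrium length, the model's independence from initial strains or stresses, as claimed in the abstract, follows directly.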
Procedia PDF Downloads 152
980 Optimization of Reliability Test Plans: Increase Wafer Fabrication Equipment Uptime
Authors: Swajeeth Panchangam, Arun Rajendran, Swarnim Gupta, Ahmed Zeouita
Abstract:
Semiconductor processing chambers tend to operate in controlled but aggressive conditions (chemistry, plasma, high temperature, etc.). Owing to this, the design of this equipment requires developing robust and reliable hardware and software. Any equipment downtime due to reliability issues can have cost implications both for customers, in terms of tool downtime (reduced throughput), and for equipment manufacturers, in terms of high warranty costs and a customer trust deficit. A thorough reliability assessment of critical parts and a plan for preventive maintenance/replacement schedules need to be completed before tool shipment. This helps to save significant warranty costs and tool downtime in the field. However, designing a proper reliability test plan that accurately demonstrates reliability targets with the proper sample size and test duration is quite challenging. This is mainly because components can fail in different failure modes that follow distributions with different Weibull beta values. Without the a priori Weibull beta of the failure mode under consideration, the plan leads to over- or under-utilization of resources, which eventually ends up in false-positive or false-negative estimates. This paper proposes a methodology to design a reliability test plan with optimal sample size, test duration, or both (independent of the a priori Weibull beta). This methodology can be used in demonstration tests and can be extended to accelerated life tests to further decrease sample size and test duration.
Keywords: reliability, stochastics, preventive maintenance
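For context, the dependence of a demonstration test plan on the Weibull shape parameter can be sketched with the classical zero-failure (success-run) formula; this is a standard textbook relation, not the paper's proposed beta-independent method:

```python
import math


def zero_failure_sample_size(r_target, confidence, beta, t_ratio=1.0):
    """Units required in a zero-failure demonstration test.

    Under a Weibull model R(t) = exp(-(t/eta)^beta), testing n units
    for t_test = t_ratio * t_target with no failures demonstrates
    reliability r_target at t_target with the stated confidence when
    n >= ln(1 - confidence) / (t_ratio**beta * ln(r_target)).
    """
    n = math.log(1.0 - confidence) / (t_ratio ** beta * math.log(r_target))
    return math.ceil(n)
```

With r_target = 0.9 at 90% confidence and t_ratio = 1, the requirement is 22 units regardless of beta; extending the test to twice the target life cuts this to 6 units if beta = 2 but only to 11 if beta = 1. This is exactly why an incorrectly assumed a priori beta drives over- or under-testing.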
Procedia PDF Downloads 15
979 An Open Trial of Mobile-Assisted Cognitive Behavioral Therapy for Negative Symptoms in Schizophrenia: Pupillometry Predictors of Outcome
Authors: Eric Granholm, Christophe Delay, Jason Holden, Peter Link
Abstract:
Negative symptoms are an important unmet treatment need in schizophrenia. We conducted an open trial of a novel blended intervention called mobile-assisted cognitive behavior therapy for negative symptoms (mCBTn). mCBTn is a weekly group therapy intervention combining in-person and smartphone-based CBT (the CBT2go app) to improve experiential negative symptoms in people with schizophrenia. Both the therapy group and the CBT2go app included recovery goal setting, thought challenging, scheduling of pleasurable activities and social interactions, and pleasure-savoring interventions to modify defeatist attitudes, a target mechanism associated with negative symptoms, and to improve experiential negative symptoms. We tested whether participants with schizophrenia or schizoaffective disorder (N=31) who met prospective criteria for persistent negative symptoms showed improvement in experiential negative symptoms. Retention was excellent (87% at 18 weeks), and the severity of defeatist attitudes and of motivation and pleasure negative symptoms declined significantly in mCBTn with large effect sizes. We also tested whether pupillary responses, a measure of cognitive effort, predicted improvement in negative symptoms in mCBTn. Pupillary responses were recorded at baseline using a Tobii pupillometer during the digit span task with 3-, 6- and 9-digit spans. Mixed models showed that greater dilation during the task at baseline significantly predicted a greater reduction in experiential negative symptoms. Pupillary responses may provide a much-needed prognostic biomarker of which patients are most likely to benefit from CBT. Greater pupil dilation during a cognitive task predicted greater improvement in experiential negative symptoms. Pupil dilation has been linked to motivation and engagement of executive control, so these factors may contribute to benefits in interventions that train cognitive skills to manage negative thoughts and emotions.
The findings suggest mCBTn is a feasible and effective treatment for experiential negative symptoms and justify a larger randomized controlled clinical trial. The findings also provide support for the defeatist attitude model of experiential negative symptoms and suggest that mobile-assisted interventions like mCBTn can strengthen and shorten intensive psychosocial interventions for schizophrenia.
Keywords: cognitive-behavioral therapy, mobile interventions, negative symptoms, pupillometry, schizophrenia
Procedia PDF Downloads 180
978 A Proposal of Advanced Key Performance Indicators for Assessing Six Performances of Construction Projects
Authors: Wi Sung Yoo, Seung Woo Lee, Youn Kyoung Hur, Sung Hwan Kim
Abstract:
Large-scale construction projects are continuously increasing, and the need for tools to monitor and evaluate project success is emphasized. At the construction industry level, there are limitations in deriving performance evaluation factors that reflect the diversity of construction sites, and in systems that can objectively evaluate and manage performance. Additionally, there are difficulties in integrating the structured and unstructured data generated at construction sites and in deriving improvements. In this study, we propose Key Performance Indicators (KPIs) to enable performance evaluation that reflects the increased diversity of construction sites and the unstructured data generated, and we present a model for measuring performance with the derived indicators. The comprehensive performance of a unit construction site is assessed based on 6 areas (time, cost, quality, safety, environment, productivity) and 26 indicators. We collect performance indicator information from 30 construction sites that meet legal standards and have been completed successfully, and we apply data augmentation and optimization techniques to establish measurement standards for each indicator. In other words, the KPIs for construction site performance evaluation presented in this study provide standards for evaluating performance in six areas using institutional requirement data and document data. This can be expanded to establish a performance evaluation system considering the scale and type of construction project. The indicators are also expected to serve as a comprehensive measure for the construction industry and as basic data for tracking competitiveness at the national level and establishing policies.
Keywords: key performance indicator, performance measurement, structured and unstructured data, data augmentation
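The abstract does not state how the per-area indicators are aggregated into a comprehensive score; one plausible sketch (an assumed weighted mean over the six areas, for illustration only, not the authors' calibrated model) is:

```python
def composite_score(area_scores, weights):
    """Composite site performance as a weighted mean of per-area
    scores, each assumed normalized to [0, 1] against its measurement
    standard. Both the normalization and the weights are illustrative
    assumptions, not values from the paper.
    """
    total_weight = sum(weights[a] for a in area_scores)
    return sum(weights[a] * s for a, s in area_scores.items()) / total_weight


# Hypothetical normalized scores for the six areas named in the abstract.
areas = {"time": 0.8, "cost": 0.6, "quality": 0.9,
         "safety": 1.0, "environment": 0.7, "productivity": 0.5}
equal_weights = {a: 1.0 for a in areas}
```

A real system would derive the weights and per-indicator standards from the institutional requirement and document data the study describes.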
Procedia PDF Downloads 42
977 Conditions of the Anaerobic Digestion of Biomass
Authors: N. Boontian
Abstract:
Biological conversion of biomass to methane has received increasing attention in recent years. Grasses have been explored for their potential anaerobic digestion to methane. In this review, extensive literature data have been tabulated and classified, and the influences of several parameters on the potential of these feedstocks to produce methane are presented. Lignocellulosic biomass represents a mostly unused source for biogas and ethanol production. Many factors, including lignin content, crystallinity of cellulose, and particle size, limit the digestibility of the hemicellulose and cellulose present in lignocellulosic biomass. Pretreatments have been used to improve the digestibility of lignocellulosic biomass. Each pretreatment has its own effects on cellulose, hemicellulose, and lignin, the three main components of lignocellulosic biomass. Solid-state anaerobic digestion (SS-AD) generally occurs at solid concentrations higher than 15%. In contrast, liquid anaerobic digestion (AD) handles feedstocks with solid concentrations between 0.5% and 15%. Animal manure, sewage sludge, and food waste are generally treated by liquid AD, while organic fractions of municipal solid waste (OFMSW) and lignocellulosic biomass such as crop residues and energy crops can be processed through SS-AD. An increase in operating temperature can improve both the biogas yield and the production efficiency; other practices, such as using AD digestate or leachate as an inoculant or decreasing the solid content, may increase biogas yield but have a negative impact on production efficiency. Focus is placed on substrate pretreatment in anaerobic digestion as a means of increasing biogas yields using today's diversified substrate sources.
Keywords: anaerobic digestion, lignocellulosic biomass, methane production, optimization, pretreatment
Procedia PDF Downloads 379
976 Effect of the Orifice Plate Specifications on Coefficient of Discharge
Authors: Abulbasit G. Abdulsayid, Zinab F. Abdulla, Asma A. Omer
Abstract:
Because the orifice plate is relatively inexpensive, requires very little maintenance, and is calibrated only during plant turnarounds, it has come into very widespread use in the gas industry. Inaccuracy of measurement in the fiscal metering stations may well be the most significant factor behind mischarges in the natural gas industry in Libya. A very trivial error in measurement can add a rapidly escalating financial burden to the custody transactions. The unaccounted gas quantity transferred annually via orifice plates in Libya could be estimated at a level of multiple millions of dollars. As oil and gas wealth is the sole source of income for Libya, every effort is now being exerted to improve the accuracy of existing orifice metering facilities. The discharge coefficient has become pivotal in current research undertaken in this regard. Hence, increasing knowledge of the flow field in a typical orifice meter is indispensable. Recently, and at a drastic pace, CFD has become the most time- and cost-efficient versatile tool for in-depth analysis of the fluid mechanics and heat and mass transfer of various industrial applications. Getting deeper into the underlying physical phenomena and predicting all relevant parameters and variables with high spatial and temporal resolution have been the greatest advantages counting for CFD. In this paper, flow phenomena for air passing through an orifice meter were numerically analyzed with CFD-code-based modeling, giving important information about the effect of orifice plate specifications on the discharge coefficient for three different tapping locations, i.e., flange tappings and D and D/2 tappings, compared with vena contracta tappings. The discharge coefficients were compared with discharge coefficients estimated by ISO 5167.
The influences of orifice plate bore thickness, orifice plate thickness, bevel angle, and perpendicularity and buckling of the orifice plate were all duly investigated. A case of an orifice meter with a pipe diameter of 2 in, a beta ratio of 0.5, and a Reynolds number of 91100 was taken as a model. The results highlighted that the discharge coefficients were highly responsive to the variation of plate specifications and that, under all cases, the discharge coefficients for D and D/2 tappings were very close to those of vena contracta tappings, which are considered an ideal arrangement. Also, in a general sense, it was found that the standard equation in ISO 5167, by which the discharge coefficient is calculated, cannot capture the variation of the plate specifications, and thus further thorough consideration is still needed.
Keywords: CFD, discharge coefficients, orifice meter, orifice plate specifications
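For reference, once a discharge coefficient is known, the ISO 5167 mass-flow relation ties it to the measured differential pressure; a minimal sketch (with made-up inputs loosely echoing the 2 in, beta = 0.5 case above) is:

```python
import math


def orifice_mass_flow(cd, d, beta, dp, rho, eps=1.0):
    """ISO 5167 orifice mass-flow relation:
    q_m = cd / sqrt(1 - beta^4) * eps * (pi/4) * d^2 * sqrt(2 * dp * rho)

    d: orifice bore diameter [m], beta: diameter ratio d/D,
    dp: differential pressure [Pa], rho: upstream density [kg/m^3],
    eps: expansibility factor (1.0 for incompressible flow).
    """
    return (cd / math.sqrt(1.0 - beta ** 4)
            * eps * (math.pi / 4.0) * d ** 2
            * math.sqrt(2.0 * dp * rho))
```

The structure makes the paper's point visible: any bias in cd caused by plate bevel, thickness, or buckling propagates proportionally into the inferred flow rate and hence into fiscal metering totals.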
Procedia PDF Downloads 119
975 Commercial Automobile Insurance: A Practical Approach of the Generalized Additive Model
Authors: Nicolas Plamondon, Stuart Atkinson, Shuzi Zhou
Abstract:
The insurance industry is usually not the first topic one has in mind when thinking about applications of data science. However, the use of data science in the finance and insurance industry is growing quickly for several reasons, including an abundance of reliable customer data and ferocious competition requiring more accurate pricing. Among the top use cases of data science, we find pricing optimization, customer segmentation, customer risk assessment, fraud detection, marketing, and triage analytics. The objective of this paper is to present an application of the generalized additive model (GAM) to a commercial automobile insurance product: an individually rated commercial automobile. These are vehicles used for commercial purposes, but for which there is not enough volume to apply pricing to several vehicles at the same time. The GAM was selected as an improvement over the GLM for its ease of use and its wide range of applications. The model was trained using the largest split of the data to determine the model parameters. The remaining part of the data was used as testing data to verify the quality of the modeling activity. We used the Gini coefficient to evaluate the performance of the model. For long-term monitoring, commonly used metrics such as RMSE and MAE will be used. Another topic of interest in the insurance industry is the process of producing the model. We will discuss at a high level the interactions between the different teams within an insurance company that need to work together to produce a model and then monitor its performance over time. Moreover, we will discuss the regulations in place in the insurance industry. Finally, we will discuss the maintenance of the model, the fact that new data do not arrive constantly, and the fact that some metrics can take a long time to become meaningful.
Keywords: insurance, data science, modeling, monitoring, regulation, processes
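The Gini coefficient used here for model evaluation can be computed from a ranking of actual outcomes by predicted risk; a minimal sketch (the common normalized form, an assumption rather than the paper's exact implementation) is:

```python
def gini(actual, predicted):
    """Unnormalized Gini: rank records by prediction (descending),
    accumulate the actual values, and measure the area between that
    cumulative curve and the diagonal of a random ordering."""
    order = sorted(range(len(actual)), key=lambda i: predicted[i], reverse=True)
    running, area = 0.0, 0.0
    for i in order:
        running += actual[i]
        area += running
    return (area / sum(actual) - (len(actual) + 1) / 2.0) / len(actual)


def gini_normalized(actual, predicted):
    """Scale by the Gini of a perfect ordering: 1.0 means perfect risk
    ranking, 0.0 no better than random, negative an inverted ranking."""
    return gini(actual, predicted) / gini(actual, actual)
```

Unlike RMSE or MAE, this metric scores only the ordering of risks, which is why it is a natural headline metric for pricing models while the error metrics are kept for long-term monitoring.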
Procedia PDF Downloads 76
974 Introducing Principles of Land Surveying by Assigning a Practical Project
Abstract:
A practical project is used in an engineering surveying course to expose sophomore and junior civil engineering students to several important issues related to the use of basic principles of land surveying. The project, the design of a two-lane rural highway connecting two arbitrary points, requires students to draw the profile of the proposed highway along with the existing ground level. Areas of all cross-sections are then computed to enable quantity computations between them. Lastly, the mass-haul diagram is drawn with all important parts and features shown on it for clarity. At the beginning, students faced challenges getting started on the project. They had to spend time and effort thinking of the best way to proceed and how the work would flow. It was even more challenging when they had to visualize images of cut, fill, and mixed cross-sections in three dimensions before they could draw them to complete the necessary computations. These difficulties were then somewhat overcome with the help of the instructor and thorough discussions among team members and/or between different teams. The method of assessment used in this study was a well-prepared end-of-semester questionnaire distributed to students after the completion of the project and the final exam. The survey contained a wide spectrum of questions, from students' learning experience when this course development was implemented to their satisfaction with the class instruction provided to them and the instructor's competency in presenting the material and helping with the project. It also covered the adequacy of the project as a sample of a real-life civil engineering application and whether any excitement was added by implementing this idea. At the end of the questionnaire, students had the chance to provide their constructive comments and suggestions for future improvements of the land surveying course. Outcomes will be presented graphically and in a tabular format.
Graphs provide a visual explanation of the results, while tables summarize numerical values for each student along with some descriptive statistics, such as the mean, standard deviation, and coefficient of variation for each student and each question as well. In addition to gaining experience in teamwork, communications, and customer relations, students felt the benefit of being assigned such a project. They noticed the beauty of the practical side of civil engineering work and how theories are utilized in real-life engineering applications. It was even recommended by students that such a project be assigned every time this course is offered so future students can have the same learning opportunity they had.
Keywords: land surveying, highway project, assessment, evaluation, descriptive statistics
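The quantity computations the students perform between cross-sections can be sketched with the average end-area method; the section areas below are made-up inputs, not values from the course:

```python
def earthwork(areas, spacing):
    """Average end-area method: the volume between two stations is
    spacing * (A1 + A2) / 2, with cut areas positive and fill areas
    negative. Accumulating the signed volumes gives the mass-haul
    ordinates plotted against station."""
    volumes = [spacing * (a1 + a2) / 2.0 for a1, a2 in zip(areas, areas[1:])]
    mass_haul = [0.0]
    for v in volumes:
        mass_haul.append(mass_haul[-1] + v)
    return volumes, mass_haul
```

For example, section areas of 10, 20, and -10 m^2 at 20 m station spacing give volumes of 300 and 100 m^3 and mass-haul ordinates of 0, 300, and 400 m^3, which is the kind of result the students plot on their mass-haul diagrams.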
Procedia PDF Downloads 229
973 Integrated Free Space Optical Communication and Optical Sensor Network System with Artificial Intelligence Techniques
Authors: Yibeltal Chanie Manie, Zebider Asire Munyelet
Abstract:
5G and 6G technology offers enhanced quality of service with high data transmission rates, which necessitates the implementation of the Internet of Things (IoT) in the 5G/6G architecture. In this paper, we propose the integration of free-space optical communication (FSO) with fiber sensor networks for IoT applications. FSO is gaining popularity as an effective alternative to the limited availability of radio frequency (RF) spectrum, owing to its flexibility, high achievable optical bandwidth, and low power consumption in several communication applications, such as disaster recovery, last-mile connectivity, drones, surveillance, backhaul, and satellite communications. Hence, high-speed FSO is an optimal choice for wireless networks to satisfy the full potential of 5G/6G technology, offering speeds of 100 Gbit/s or more in IoT applications. Moreover, machine learning must be integrated into the design, planning, and optimization of future optical wireless communication networks in order to actualize this vision of intelligent processing and operation. In addition, fiber sensors are important for achieving real-time, accurate, and smart monitoring in IoT applications. Accordingly, we propose deep learning techniques to estimate the strain changes and peak wavelengths of multiple fiber Bragg grating (FBG) sensors using only the spectrum of the FBGs obtained from a real experiment.
Keywords: optical sensor, artificial intelligence, Internet of Things, free-space optics
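As a classical baseline for the peak-wavelength and strain estimation the deep learning model performs, the centroid of the reflection spectrum and the standard Bragg strain relation can be sketched as follows (the photo-elastic coefficient is a typical silica value, not a figure from the paper):

```python
def peak_centroid(wavelengths_nm, intensities):
    """Intensity-weighted centroid of an FBG reflection spectrum;
    a simple classical estimator of the Bragg peak wavelength that a
    learned model would be trained to match or beat."""
    total = sum(intensities)
    return sum(w * p for w, p in zip(wavelengths_nm, intensities)) / total


def strain_from_shift(lam_nm, lam0_nm, p_e=0.22):
    """Axial strain from the Bragg peak shift:
    delta_lam / lam0 = (1 - p_e) * strain, where p_e ~ 0.22 is a
    typical effective photo-elastic coefficient for silica fiber."""
    return (lam_nm - lam0_nm) / (lam0_nm * (1.0 - p_e))
```

The deep learning approach becomes valuable precisely where this baseline fails: when the spectra of multiple FBGs overlap and a single centroid no longer identifies each sensor's peak.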
Procedia PDF Downloads 63
972 The Principal-Agent Model with Moral Hazard in the Brazilian Innovation System: The Case of 'Lei do Bem'
Authors: Felippe Clemente, Evaldo Henrique da Silva
Abstract:
The need to adopt some type of industrial and innovation policy in Brazil is a recurring theme in the discussion of public interventions aimed at boosting economic growth. For many years, the country has adopted various policies to change its productive structure in order to increase the participation of sectors with the greatest potential to generate innovation and economic growth. Only in the 2000s did Brazil begin to adopt tax incentives as a policy to support industrial and technological innovation, a phenomenon associated with rates of productivity growth and economic development. In this context, in late 2004 and 2005, Brazil reformulated its institutional apparatus for innovation in order to approach the OECD conventions and the Frascati Manual. The Innovation Law (2004) and the 'Lei do Bem' (2005) reduced some institutional barriers to innovation, provided incentives for university-business cooperation, and modified access to tax incentives for innovation. Chapter III of the 'Lei do Bem' (no. 11,196/05) is currently the most comprehensive fiscal incentive to stimulate innovation. It stipulates that the Union should encourage innovation in companies and industry by granting tax incentives. With its introduction, the bureaucratic procedure was simplified by not requiring pre-approval of projects or participation in bidding documents. However, preliminary analysis suggests that this instrument has not yet been able to stimulate the sectoral diversification of these investments in Brazil, since its benefits are mostly captured by sectors that already carried out this activity, thus showing problems with moral hazard. It is necessary, then, to analyze the 'Lei do Bem' to determine whether some change is indeed needed, investigating what changes should be implemented in Brazilian innovation policy.
This work, therefore, is a first effort to analyze a current national problem, evaluating the effectiveness of the 'Lei do Bem' and suggesting public policies that help and direct the State in the elaboration of legislation capable of encouraging agents to comply with what it prescribes. As a preliminary result, it is known that 130 firms used fiscal incentives for innovation in 2006, 320 in 2007, and 552 in 2008. Although this number is on the rise, it is still small, considering that there are around 6 thousand firms performing Research and Development (R&D) activities in Brazil. Moreover, another concern with the 'Lei do Bem' is the percentages of tax incentives provided to companies. These percentages reveal a significant sectoral correlation between the R&D expenditures of large companies and the R&D expenses of companies that accessed the 'Lei do Bem', reaching a correlation of 95.8% in 2008. With these results, it becomes relevant to investigate the law's ability to stimulate private investment in R&D.
Keywords: Brazilian innovation system, moral hazard, R&D, Lei do Bem
Procedia PDF Downloads 337
971 Comparative Study of the Effects of Process Parameters on the Yield of Oil from Melon Seed (Cococynthis citrullus) and Coconut Fruit (Cocos nucifera)
Authors: Ndidi F. Amulu, Patrick E. Amulu, Gordian O. Mbah, Callistus N. Ude
Abstract:
Comparative analysis of the properties of melon seed, coconut fruit, and their oil yields was carried out in this work using standard AOAC analytical techniques. The results revealed that the moisture contents of the samples studied are 11.15% (melon) and 7.59% (coconut). The crude lipid contents are 46.10% (melon) and 55.15% (coconut). The treatment combinations used (leaching time, leaching temperature, and solute:solvent ratio) showed a significant difference (p < 0.05) in yield between the samples, with melon seed flour having a higher percentage range of oil yield (41.30 – 52.90%) than coconut (36.25 – 49.83%). Physical characterization of the extracted oils was also carried out. The refractive indices are 1.487 (melon seed oil) and 1.361 (coconut oil), and the viscosities are 0.008 (melon seed oil) and 0.002 (coconut oil). Chemical analysis of the extracted oils shows acid values of 1.00 mg NaOH/g oil (melon oil) and 10.050 mg NaOH/g oil (coconut oil), and saponification values of 187.00 mg/KOH (melon oil) and 183.26 mg/KOH (coconut oil). The iodine value of the melon oil is 75.00 mg I2/g, and that of the coconut oil is 81.00 mg I2/g. The standard statistical package Minitab version 16.0 was used for the regression analysis and analysis of variance (ANOVA), and the same software was used to optimize the leaching process. Both samples gave their highest oil yields at the same optimal conditions. The optimal conditions to obtain the highest oil yields, ≥ 52% (melon seed) and ≥ 48% (coconut seed), are a solute:solvent ratio of 40 g/ml, a leaching time of 2 hours, and a leaching temperature of 50 °C. Both samples studied have the potential to yield oil, with melon seed giving the higher yield.
Keywords: coconut, melon, optimization, processing
Procedia PDF Downloads 442
970 Disconnect between Water, Sanitation and Hygiene Related Behaviours of Children in School and Family
Authors: Rehan Mohammad
Abstract:
Background: Improved water, sanitation, and hygiene (WASH) practices in schools ensure children's health, well-being, and cognitive performance. In India, under various WASH interventions in schools, teachers and other staff make every possible effort to educate children about personal hygiene, sanitation practices, and the harms of open defecation. However, once children get back to their families, they see others practicing inappropriate WASH behaviors, and they consequently start following them. This shows a disconnect between school behavior and family behavior, which needs to be bridged to achieve the desired WASH outcomes. Aims and Objectives: The aim of this study is to assess the factors causing the disconnect in WASH-related behaviors between the school and the family of children. It also suggests behavior change interventions to bridge the gap. Methodology: The present study has chosen a mixed-method approach; both quantitative and qualitative methods of data collection have been used. Purposive sampling was chosen for data collection. The data have been collected from 20% of children in each of the age groups 04-08 years and 09-12 years, spread over three primary schools, and from 20% of the households to which they belong, spread over three slum communities in the South district of Delhi. Results: The present study shows that, despite several behavior change interventions at the school level, children still practice inappropriate WASH behaviors due to the disconnect between school and family behaviors. These behaviors vary from one age group to another. The inappropriate WASH behaviors being practiced by children include open defecation, wrong disposal of garbage, not maintaining personal hygiene, not washing hands at critical junctures, and not washing fruits and vegetables before eating.
The present study has highlighted that 80% of children in the age group of 04-08 years still practice inappropriate WASH behaviors when they go back to their families after school, whereas this percentage falls to 40% among children in the age group of 09-12 years. The present study uncovers the association between school and family teaching that creates a huge gap between WASH-related behavioral practices. The study has established that children learn and de-learn WASH behaviors due to the evident disconnect between behavior change interventions at the school and household levels. The study has also made it clear that children understand the significance of appropriate WASH practices, but owing to the disconnect, the behaviors remain unsettled. The study proposes several behavior change interventions to synchronize the behaviors of children at the school and family levels to ensure children's health, well-being, and cognitive performance.
Keywords: behavioral interventions, child health, family behavior, school behavior, WASH
Procedia PDF Downloads 111
969 Optimization of Gastro-Retentive Matrix Formulation and Its Gamma Scintigraphic Evaluation
Authors: Swapnila V. Shinde, Hemant P. Joshi, Sumit R. Dhas, Dhananjaysingh B. Rajput
Abstract:
The objective of the present study is to develop a hydrodynamically balanced system for atenolol, a β-blocker, as a single-unit floating tablet. Atenolol shows pH-dependent solubility, resulting in a bioavailability of 36%. Thus, a site-specific oral controlled-release floating drug delivery system was developed. The formulation includes the novel use of a rate-controlling polymer, locust bean gum (LBG), in combination with HPMC K4M and the gas-generating agent sodium bicarbonate. The tablet was prepared by the direct compression method and evaluated for physico-mechanical properties. A statistical method was utilized to optimize the effect of the independent variables, namely the amounts of HPMC K4M and LBG, on three dependent responses: cumulative drug release, floating lag time, and floating time. Graphical and mathematical analysis of the results allowed the identification and quantification of the formulation variables influencing the selected responses. To study the gastrointestinal transit of the optimized gastro-retentive formulation, in vivo gamma scintigraphy was carried out in six healthy rabbits after radiolabeling the formulation with 99mTc. The transit profiles demonstrated that the dosage form was retained in the stomach for more than 5 hrs. The study signifies the potential of the developed system for stomach-targeted delivery of atenolol with improved bioavailability.
Keywords: floating tablet, factorial design, gamma scintigraphy, antihypertensive model drug, HPMC, locust bean gum
Procedia PDF Downloads 275
968 Time of Week Intensity Estimation from Interval Censored Data with Application to Police Patrol Planning
Authors: Jiahao Tian, Michael D. Porter
Abstract:
Law enforcement agencies are tasked with crime prevention and crime reduction under limited resources. Having an accurate temporal estimate of the crime rate would be valuable in achieving such a goal. However, estimation is usually complicated by the interval-censored nature of crime data. We cast the problem of intensity estimation as a Poisson regression, using an EM algorithm to estimate the parameters. Two special penalties are added that provide smoothness over the time of day and the day of the week. The approach presented here provides accurate intensity estimates and can also uncover day-of-week clusters that share the same intensity patterns. Anticipating where and when crimes might occur is a key element of successful policing strategies. However, this task is complicated by the presence of interval-censored data. Censored data refers to data for which the event time is known only to lie within an interval rather than being observed exactly. This type of data is prevalent in the field of criminology because of the absence of victims for certain types of crime. Despite its importance, research on the temporal analysis of crime has lagged behind the spatial component. Inspired by the success of solving crime-related problems with a statistical approach, we propose a statistical model for the temporal intensity estimation of crime with censored data. The model is built on Poisson regression and has special penalty terms added to the likelihood. An EM algorithm was derived to obtain maximum likelihood estimates, and the resulting model shows superior performance to the competing model. Our research is in line with the Smart Policing Initiative (SPI) proposed by the Bureau of Justice Assistance (BJA) as an effort to support law enforcement agencies in building evidence-based, data-driven law enforcement tactics. The goal is to identify strategic approaches that are effective in crime prevention and reduction.
In our case, we allow agencies to deploy their resources for a relatively short period of time to achieve the maximum level of crime reduction. By analyzing a particular area within cities where data are available, our proposed approach can provide not only accurate intensity estimates for the time unit considered but also the time-varying pattern of crime incidence. Both help in allocating limited resources, whether by refining the existing patrol plan using the discovered day-of-week clusters or by directing any extra resources that become available.Keywords: cluster detection, EM algorithm, interval censoring, intensity estimation
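The core idea of the abstract, treating interval-censored event times with an EM algorithm for Poisson intensity estimation, can be illustrated with a minimal sketch. This is not the authors' implementation: all names are hypothetical, and the smoothness penalties over time of day and day of week are omitted for brevity (the E-step/M-step structure is the part being illustrated).

```python
import numpy as np

def em_intensity(intervals, n_hours=24, n_iter=100):
    """EM estimate of hourly crime intensity from interval-censored events.

    intervals: list of (start, end) hour ranges, end exclusive; each event
    is known only to have occurred at some hour within its interval.
    Returns an array of expected event counts (intensities) per hour.
    """
    lam = np.ones(n_hours)  # start from a uniform intensity
    for _ in range(n_iter):
        # E-step: spread each censored event over its interval in
        # proportion to the current intensity estimate.
        counts = np.zeros(n_hours)
        for s, e in intervals:
            hours = np.arange(s, e) % n_hours  # hours covered by the interval
            w = lam[hours]
            counts[hours] += w / w.sum()       # expected fraction per hour
        # M-step: the Poisson MLE is just the expected count per hour.
        lam = counts
    return lam

# Usage: six events somewhere in 20:00-23:00, two known to be at 02:00.
rates = em_intensity([(20, 23)] * 6 + [(2, 3)] * 2)
```

In a full treatment, the M-step would maximize the penalized likelihood instead of the raw counts, which is where the time-of-day and day-of-week smoothness penalties described in the abstract enter.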
Procedia PDF Downloads 66967 Interaction Evaluation of Silver Ion and Silver Nanoparticles with Dithizone Complexes Using DFT Calculations and NMR Analysis
Authors: W. Nootcharin, S. Sujittra, K. Mayuso, K. Kornphimol, M. Rawiwan
Abstract:
Silver has distinct antibacterial properties and is used as a component of commercial products in many applications. The increasing number of such products raises the risk of silver exposure for humans and the environment, including symptoms of argyria and the release of silver into the environment. Therefore, the detection of silver in the aquatic environment is important. A colorimetric chemosensor is designed around the interaction of a ligand with a metal ion, which produces a signal change visible to the naked eye, a very useful property for this application. The dithizone ligand is considered one of the most effective chelating reagents for metal ions owing to the high selectivity and sensitivity of its photochromic reaction with silver; in addition, the linear backbone of dithizone permits rotation into various isomeric forms. The present study focuses on the conformation and interaction of the silver ion and silver nanoparticles (AgNPs) with dithizone using density functional theory (DFT). The interaction parameters were determined in terms of the binding energies of the complexes; geometry optimization, frequency calculations, and binding-energy calculations employed the density functional approach B3LYP with the 6-31G(d,p) basis set. Moreover, the silver–dithizone interaction was supported by UV–Vis spectroscopy, an FT-IR spectrum simulated at the B3LYP/6-31G(d,p) level, and 1H NMR spectra calculated with the B3LYP/6-311+G(2d,p) method, all compared with the experimental data. The results showed an ion-exchange interaction between the hydrogen of dithizone and the silver atom, with minimized binding energies for the silver–dithizone interaction. The AgNPs likewise formed complexes with dithizone, which were confirmed by transmission electron microscopy (TEM).
Therefore, these results provide useful information for determining complex interactions through the analysis of computer simulations.Keywords: silver nanoparticles, dithizone, DFT, NMR
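The binding energies reported from the B3LYP/6-31G(d,p) calculations follow the conventional definition: the energy of the optimized complex minus the energies of the isolated fragments. A minimal sketch of that bookkeeping, with illustrative energy values that are not taken from the paper:

```python
def binding_energy(e_complex, e_metal, e_ligand):
    """Binding energy of a metal-ligand complex.

    Delta E_bind = E(complex) - [E(metal) + E(ligand)]
    A negative value indicates the complex is lower in energy than the
    separated fragments, i.e. binding is favorable. All energies must be
    in the same unit (e.g. hartree, as DFT codes typically report).
    """
    return e_complex - (e_metal + e_ligand)

# Hypothetical total energies (hartree) for an Ag-dithizone complex:
de = binding_energy(e_complex=-10.5, e_metal=-5.0, e_ligand=-5.2)
# de < 0, so this illustrative complex would be bound.
```

In practice, each of the three energies comes from a separate geometry optimization at the same level of theory, and corrections such as basis-set superposition error may be applied before comparing complexes.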
Procedia PDF Downloads 207