Search results for: steering angle error

299 Necessity for a Standardized Occupational Health and Safety Management System: An Exploratory Study from the Danish Offshore Wind Sector

Authors: Dewan Ahsan

Abstract:

Denmark is well ahead in generating electricity from renewable sources, and the offshore wind sector plays a pivotal role in achieving this target. Despite the rapid growth of the offshore wind sector in Denmark, there is still a lack of synchronization in OHS (occupational health and safety) regulation and standards. This paper therefore attempts to ascertain: i) what are the major challenges of company-specific OHS standards? ii) why does the offshore wind industry need a standardized OHS management system? and iii) who can play the key role in this process? To achieve these objectives, this research applies interview and survey techniques. The study identifies several key challenges in the OHS management system: gaps in coordination and communication among stakeholders, gaps in incident reporting systems, the absence of a harmonized OHS standard, and a blame culture. Furthermore, the research identifies eleven key stakeholders who are actively involved in the offshore wind business in Denmark. The relationships among these stakeholders are complex, especially between operators and sub-contractors. The technicians interviewed are concerned about compliance with the various third-party OHS standards (e.g., ISO 31000, ISO 29400, and the Good Practice Guidelines by G+) applied by various offshore companies. On top of these standards, operators also impose their own OHS requirements. From the technicians' point of view, many of these standards are not even specific to the offshore wind sector. Complying with different company-specific standards is therefore a major challenge for technicians and sub-contractors, and it also raises the price of the services they offer to operators. For instance, when a sub-contractor competes in a bidding process, it must fulfill a number of OHS requirements (demanding a great deal of extra documentation) set by the individual operator and/or turbine supplier. From the sub-contractors' point of view, this extra work consumes too much time in preparing bidding documents, and they must also train their employees to pass specific OHS certification courses to satisfy the demands of individual clients and individual projects. The sub-contractors argued that in many cases this extra documentation and these OHS certificates are inessential for ensuring quality service, so a standardized OHS management procedure applicable to all clients could easily solve the problem. In conclusion, this study highlights that i) development of a harmonized OHS standard applicable to all operators and turbine suppliers, ii) encouragement of technicians' active participation in OHS management, iii) development of good safety leadership, and iv) sharing of experiences among stakeholders (especially between operators and sub-contractors) are the most vital strategies for overcoming the existing challenges and achieving the goal of 'zero accident/harm' in the offshore wind industry.

Keywords: green energy, offshore, safety, Denmark

Procedia PDF Downloads 190
298 Definition of Aerodynamic Coefficients for Microgravity Unmanned Aerial System

Authors: Gamaliel Salazar, Adriana Chazaro, Oscar Madrigal

Abstract:

The evolution of Unmanned Aerial Systems (UAS) has made it possible to develop new vehicles capable of performing microgravity experiments which, due to their cost and complexity, were previously beyond the reach of many institutions. In this study, the aerodynamic behavior of a UAS is studied through its deceleration stage, which follows an initial free-fall phase (where the microgravity effect is generated), using Computational Fluid Dynamics (CFD). Because the payload is analyzed under a microgravity environment, and because of the nature of the payload itself, the speed of the UAS must be reduced smoothly. Moreover, the terminal speed of the vehicle should be low enough to preserve the integrity of the payload and vehicle during the landing stage. The UAS model consists of a study pod, control surfaces with fixed and mobile sections, landing gear, and two semicircular wing sections. The speed of the vehicle is decreased by increasing the angle of attack (AoA) of each wing section from 2° (where the S1091 airfoil has its greatest aerodynamic efficiency) to 80°, creating a circular wing geometry. Drag coefficients (Cd) and drag forces (Fd) are obtained from the CFD analysis. A simplified 3D model of the vehicle is analyzed using Ansys Workbench 16. The distance between the object of study and the walls of the control volume is eight times the length of the vehicle. The domain is discretized using an unstructured mesh based on tetrahedral elements; the mesh is refined by defining an element size of 0.004 m on the wing and control surfaces, in order to resolve the fluid behavior in the most important zones and obtain accurate approximations of Cd. The k-epsilon turbulence model is selected to solve the governing equations of the fluid, while monitors are placed on both the wing and the full vehicle to visualize the variation of the coefficients during the simulation. Employing a statistical response surface methodology, the case study is parametrized with the AoA of the wing as the input parameter and Cd and Fd as output parameters. Based on a Central Composite Design (CCD), Design Points (DP) are generated so that Cd and Fd can be estimated for each DP. Applying a 2nd-degree polynomial approximation, the drag coefficient for every AoA is determined. Using these values, the terminal speed at each position is calculated for the corresponding Cd. Additionally, the distance required to reach the terminal velocity at each AoA is calculated, so the minimum distance for the entire deceleration stage without compromising the payload can be determined. The maximum Cd of the vehicle is 1.18, so its maximum drag will be almost like the drag generated by a parachute. This guarantees that the vehicle can be braked aerodynamically, so it could be utilized for several missions, allowing repeatability of microgravity experiments.
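
The response-surface step above reduces to two short computations: fitting a 2nd-degree polynomial Cd(AoA) to the design points and evaluating the terminal speed from the force balance m·g = ½·ρ·v²·Cd·A. A minimal sketch, where the sample Cd values, mass and reference area are illustrative assumptions rather than data from the study:

```python
import numpy as np

# Hypothetical CFD design points: AoA (degrees) vs. drag coefficient.
aoa = np.array([2.0, 20.0, 40.0, 60.0, 80.0])
cd = np.array([0.25, 0.45, 0.80, 1.05, 1.18])

# 2nd-degree polynomial approximation of Cd(AoA), mirroring the
# response-surface fit described in the abstract.
cd_model = np.poly1d(np.polyfit(aoa, cd, deg=2))

# Terminal speed from the force balance m*g = 0.5*rho*v^2*Cd*A.
m, g, rho, A = 5.0, 9.81, 1.225, 0.5  # assumed mass (kg), gravity, air density, ref. area (m^2)

def terminal_speed(angle_deg):
    return np.sqrt(2.0 * m * g / (rho * cd_model(angle_deg) * A))

for a in (2, 40, 80):
    print(f"AoA {a:2d} deg: Cd = {cd_model(a):.3f}, v_t = {terminal_speed(a):.1f} m/s")
```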

Keywords: microgravity effect, response surface, terminal speed, unmanned system

Procedia PDF Downloads 147
297 Dual-Layer Microporous Layer of Gas Diffusion Layer for Proton Exchange Membrane Fuel Cells under Various RH Conditions

Authors: Grigoria Athanasaki, Veerarajan Vimala, A. M. Kannan, Louis Cindrella

Abstract:

Energy usage has increased over the years, leading to severe environmental impacts. Since the majority of energy is currently produced from fossil fuels, there is a global need for clean energy solutions. Proton Exchange Membrane Fuel Cells (PEMFCs) offer a very promising solution for transportation applications because of their solid configuration and low-temperature operation, which allows them to start quickly. One of the main components of PEMFCs is the Gas Diffusion Layer (GDL), which manages water and gas transport and directly influences fuel cell performance. In this work, a novel dual-layer GDL with gradient porosity was prepared, using polyethylene glycol (PEG) as a pore former, to improve gas diffusion and water management in the system. The microporous layer (MPL) of the fabricated GDL consists of PUREBLACK carbon powder, sodium dodecyl sulfate as a surfactant, and 34 wt.% PTFE; the gradient porosity was created by applying one layer containing 30 wt.% PEG on the carbon substrate, followed by a second layer without any pore former. The total carbon loading of the microporous layer is ~3 mg.cm⁻². For the assembly of the catalyst layer, a Nafion membrane (Ion Power, Nafion Membrane NR211) and a Pt/C electrocatalyst (46.1 wt.%) were used. The catalyst ink was deposited on the membrane via a microspraying technique. The Pt loading is ~0.4 mg.cm⁻², and the active area is 5 cm². The sample was characterized ex-situ via wetting angle measurement, Scanning Electron Microscopy (SEM), and Pore Size Distribution (PSD) analysis to evaluate its characteristics. Furthermore, for the performance evaluation, in-situ characterization via fuel cell testing took place using H2/O2 and H2/air as reactants, under 50, 60, 80, and 100% relative humidity (RH). The results were compared to a single-layer GDL, fabricated with the same carbon powder and loading as the dual-layer GDL, and to a commercially available GDL with MPL (AvCarb2120). The findings reveal highly hydrophobic microporous layers for both PUREBLACK-based samples, while the commercial GDL demonstrates hydrophilic behavior. The dual-layer GDL shows high and stable fuel cell performance under all RH conditions, whereas the single-layer GDL manifests a drop in performance at high RH in both oxygen and air, caused by catalyst flooding. The commercial GDL shows very low and unstable performance, possibly because of its hydrophilic character and thinner microporous layer. In conclusion, the dual-layer GDL with PEG appears to have improved gas diffusion and water management in the fuel cell system. Because its porosity increases from the catalyst layer towards the carbon substrate, it allows easier access of the reactant gases from the flow channels to the catalyst layer, and more efficient water removal from the catalyst layer, leading to higher performance and stability.

Keywords: gas diffusion layer, microporous layer, proton exchange membrane fuel cells, relative humidity

Procedia PDF Downloads 105
296 DEMs: A Multivariate Comparison Approach

Authors: Juan Francisco Reinoso Gordo, Francisco Javier Ariza-López, José Rodríguez Avi, Domingo Barrera Rosillo

Abstract:

The evaluation of the quality of a data product is based on comparing the product with a reference of greater accuracy. In the case of DEM data products, quality assessment usually focuses on positional accuracy, and few studies consider other terrain characteristics, such as slope and orientation. The proposal made here consists of evaluating the similarity of two DEMs (a product and a reference) through the joint analysis of the distribution functions of the variables of interest, for example, elevations, slopes and orientations. This is a multivariate approach that focuses on distribution functions, not on single parameters such as mean values or dispersions (e.g., root mean squared error or variance), and is considered more holistic. The use of the Kolmogorov-Smirnov test is proposed due to its non-parametric nature, since the distributions of the variables of interest cannot always be adequately modeled by parametric models (e.g., the Normal distribution model). In addition, its application to the multivariate case is carried out jointly by means of a single test on the convolution of the distribution functions of the variables considered, which avoids the use of corrections such as Bonferroni when several statistical hypothesis tests are carried out together. In this work, two DEM products have been considered: DEM02, with a resolution of 2x2 meters, and DEM05, with a resolution of 5x5 meters, both generated by the National Geographic Institute of Spain. DEM02 is considered the reference and DEM05 the product to be evaluated. In addition, the slope and aspect derived models have been calculated by GIS operations on the two DEM datasets. Through sample simulation processes, the adequate behavior of the Kolmogorov-Smirnov statistical test has been verified when the null hypothesis is true, which allows calibrating the value of the statistic for the desired significance level (e.g., 5%). Once the process has been calibrated, it can be applied to compare the similarity of different DEM datasets (e.g., DEM05 versus DEM02). In summary, an innovative alternative for the comparison of DEM datasets, based on a multivariate non-parametric perspective, has been proposed by means of a single Kolmogorov-Smirnov test. This new approach could be extended to other DEM features of interest (e.g., curvature) and to more than three variables.
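
The abstract does not spell out the convolution construction, but one simplified reading is that the convolution of the distribution functions corresponds to the distribution of a sum of the (standardized, independent) variables, so a single two-sample Kolmogorov-Smirnov test on that sum stands in for a joint test. A minimal sketch with synthetic stand-ins for the DEM02 and DEM05 samples:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

def summary_variable(elev, slope, aspect):
    """Standardize each variable and sum them, so one two-sample KS test
    on the sum plays the role of a joint test on the convolution of the
    three distribution functions (a simplified reading of the method)."""
    z = lambda v: (v - v.mean()) / v.std()
    return z(elev) + z(slope) + z(aspect)

# Synthetic stand-ins; real inputs would be samples drawn from the rasters.
ref = summary_variable(rng.normal(500, 50, 10_000),     # elevations, m
                       rng.gamma(2.0, 4.0, 10_000),     # slopes, deg
                       rng.uniform(0, 360, 10_000))     # aspects, deg
prod = summary_variable(rng.normal(501, 52, 10_000),
                        rng.gamma(2.1, 4.0, 10_000),
                        rng.uniform(0, 360, 10_000))

stat, p = ks_2samp(ref, prod)
print(f"KS statistic = {stat:.4f}, p-value = {p:.4g}")  # reject similarity if p < 0.05
```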

Keywords: data quality, DEM, kolmogorov-smirnov test, multivariate DEM comparison

Procedia PDF Downloads 91
295 God, The Master Programmer: The Relationship Between God and Computers

Authors: Mohammad Sabbagh

Abstract:

Anyone who reads the Torah or the Quran learns that GOD created everything that is around us, seen and unseen, in six days. Within HIS plan of creation, HE placed for us a key proof of HIS existence, which is essentially computers and the ability to program them. Digital computer programming began with binary instructions, which eventually evolved into what are known as high-level programming languages. Any programmer in our modern time can attest that you are essentially giving the computer commands in words, and when the program is compiled, whatever is processed as output is limited to what the computer was given as an ability and, furthermore, as an instruction. So one can deduce that GOD created everything around us with HIS words, programming everything around us in six days, just as we can program a virtual world on the computer. GOD did mention in the Quran that one day where GOD's throne is equals 1000 years of what we count; therefore, one might understand that GOD spoke non-stop for 6000 years of what we count and gave everything its functions, attributes, classes, methods and interactions, similar to what we do in object-oriented programming. Of course, GOD has the higher example, and what HE created is much more than OOP. So when GOD said that everything is already predetermined, it is because any input, whether physical, spiritual or by thought, outputted by any of HIS creatures has already been programmed with an answer. Any path, any thought, any idea has already been laid out with a reaction to any decision an inputter makes. Exalted is GOD! GOD refers to HIMSELF as The Fastest Accountant in The Quran; the Arabic word that was used is close to processor or calculator. If you create a 3D simulation of a supernova explosion to understand how GOD produces certain elements and fuses protons together to spread more of HIS blessings around HIS skies, then in 2022 you are going to require one of the strongest, fastest, most capable supercomputers in the world, with a theoretical speed of 50 petaFLOPS, to accomplish that; a petaFLOPS is the ability to perform one quadrillion (10¹⁵) floating-point operations per second, a number a human cannot even fathom. To put it in more perspective, while the computer is going through those calculations, GOD is calculating them too, and HE is also calculating all the physics of every atom, and of what is smaller than that, in the actual explosion, and it is all in truth. When GOD said HE created the world in truth, one of the meanings a person can understand is that when certain things occur around you, whether how a car crashes or how a tree grows, there is a science and a way to understand it, and whatever programming or science you deduce from an event you observed can relate to other similar events. That is why GOD might have said in The Quran that it is the people of knowledge, scholars, or scientists who fear GOD the most! One thing that is essential for us, to keep up with what the computer is doing and to track our progress along with any errors, is that we incorporate logging mechanisms and backups. GOD said in The Quran, 'WE used to copy what you used to do'. Essentially, as the world is running, think of it as an interactive movie being played out in front of you, in a fully immersive, non-virtual reality setting, with GOD recording it, from every angle to every thought to every action. This brings the idea of how scary the Day of Judgment will be, when one might realize that it is going to be a fully immersive video as we receive and read our book.

Keywords: programming, the Quran, object orientation, computers and humans, GOD

Procedia PDF Downloads 87
294 A Disappearing Radiolucency of the Mandible Caused by Inadvertent Trauma Following IMF Screw Placement

Authors: Anna Ghosh, Dominic Shields, Ceri McIntosh, Stephen Crank

Abstract:

A 29-year-old male was referred to the maxillofacial unit by his general dental practitioner via a routine pathway regarding a large periapical lesion on the LR4 with root resorption. The patient was asymptomatic, the LR4 was vital and unrestored, and this was an incidental finding at a routine check-up. The patient's past medical history was unremarkable. Examination revealed no extra- or intra-oral pathology and non-mobile teeth. No focal neurology was detected. An orthopantomogram demonstrated a well-defined, unilocular, corticated radiolucency associated with the LR4. The root appeared shortened, with the radiolucency lying between the root and a radio-opacity possibly representing the displaced apical tip of the tooth. It was recommended that the referring general practitioner proceed with orthograde root canal therapy, after which exploration, enucleation, and retrograde root filling of the LR4 would be carried out by the maxillofacial unit. The patient was reviewed six months later when, due to the COVID-19 pandemic, he had been unable to access general dental services for the root canal treatment. He was still entirely asymptomatic. A one-year review was planned in the hope that this would allow time for the orthograde root canal therapy to be completed. At this review, the orthograde root canal therapy had still not been completed. Interestingly, a repeat orthopantomogram revealed good bony infill and a significant reduction in the size of the lesion. Due to the ongoing delays with primary care dental therapy, the patient was subsequently referred internally to the restorative dentistry department for care. The patient was seen again by oral and maxillofacial surgery in mid-2022, where he still reported the tooth as asymptomatic with no focal neurology. The patient's history was fully reviewed, and it was noted that 15 years previously he had undergone open reduction and internal fixation of a left angle of mandible fracture. Temporary inter-maxillary fixation (IMF) involving IMF screws and fixation wires was employed to maintain occlusion during plating and was removed post-operatively. It is proposed that the radiolucency resulted from the IMF screw placement penetrating the LR4 root, causing resorption of the tooth root and the development of a radiolucency. This case highlights the importance of careful screw size selection, site location, and placement of IMF screws, as there can be permanent damage to a patient's dentition.

Keywords: facial trauma, inter-maxillary fixation, mandibular radiolucency, oral and maxillo-facial surgery

Procedia PDF Downloads 96
293 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is in fact more efficient to calculate the transform of the distribution function in the Fourier domain; inverting back to the real domain can then be done in just one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a niche in the literature, to the best of our knowledge, of accurate numerical methods for risk allocation, but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are shown to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, driven primarily by the number of factors instead of the number of obligors, as in the case of Monte Carlo simulation. The limitation of this method lies in the 'curse of dimensionality' that is intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension-reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The method has a wide range of potential applications: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even risk types other than credit risk.
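
The paper's specific adjustments to the COS method are not given in the abstract, but the core inversion step is standard: cosine-series coefficients are read off the characteristic function, and the CDF is the term-wise integral of the density expansion. A minimal sketch, verified against the standard normal rather than an actual portfolio-loss characteristic function:

```python
import numpy as np
from scipy.stats import norm

def cos_cdf(phi, x, a=-10.0, b=10.0, N=256):
    """Recover F(x) from a characteristic function phi via the COS method:
    the density is expanded in cosines with coefficients taken from phi,
    and the CDF is obtained by integrating the expansion term by term."""
    k = np.arange(N)
    u = k * np.pi / (b - a)
    Fk = (2.0 / (b - a)) * np.real(phi(u) * np.exp(-1j * u * a))
    Fk[0] *= 0.5                            # first cosine term carries weight 1/2
    x = np.atleast_1d(np.asarray(x, dtype=float))
    V = np.empty((N, x.size))               # integrals of the cosine basis over [a, x]
    V[0] = x - a
    kk = k[1:, None]
    V[1:] = (b - a) / (kk * np.pi) * np.sin(kk * np.pi * (x - a) / (b - a))
    return Fk @ V

# Sanity check on a known case: standard normal, phi(u) = exp(-u^2/2).
# In the credit setting, phi would be the (conditional) characteristic
# function of the portfolio loss under the chosen factor-copula model.
xs = np.array([-1.0, 0.0, 1.5])
print(cos_cdf(lambda u: np.exp(-0.5 * u**2), xs))   # ~ [0.1587, 0.5000, 0.9332]
print(norm.cdf(xs))
```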

Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method

Procedia PDF Downloads 130
292 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality

Authors: Qian Yi Ooi

Abstract:

At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, with slight deviations arising from scale differences. Insufficient parameters or poor surface mesh quality are likely to occur, however, if these small deviations are embedded in a future civil aircraft whose size differs greatly from conventional aircraft, such as a blended-wing-body (BWB) aircraft with future potential, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, a study of the geometric similarity of airfoil parameters and surface mesh quality in CFD calculation is conducted, to establish how well different parameterization methods perform at different airfoil scales. The research objects are three airfoil scales, comprising the wing root and wingtip of a conventional civil aircraft and the wing root of a giant hybrid wing, each parameterized by three methods so that the calculation differences between different airfoil sizes can be compared. In this study, the constants are the NACA 0012 profile, a Reynolds number of 10 million, an angle of attack of zero, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The experimental variables are three airfoil parameterization methods: the point cloud method, the B-spline curve method, and the class function/shape function transformation (CST) method. The airfoil dimensions are set to 3.98 meters, 17.67 meters, and 48 meters, respectively. In addition, this study uses different numbers of edge mesh divisions with the same bias factor in the CFD simulation. The results show that as the airfoil scale changes, different parameterization methods, numbers of control points, and mesh division counts should be used to maintain the accuracy of the wing's aerodynamic performance. When the airfoil scale increases, the most basic point cloud parameterization method requires more and larger data to support the accuracy of the airfoil's aerodynamic performance, and so faces the severe test of insufficient computer capacity. When using the B-spline curve method, the number of control points and mesh divisions must be balanced appropriately to obtain higher accuracy; this balance cannot be defined directly but must be found iteratively, by adding and subtracting. Lastly, when using the CST method, a limited number of control points is found to be enough to accurately parameterize the larger-sized wing, and a higher degree of accuracy and stability can be obtained even on a lower-performance computer.
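
For reference, the CST method named above writes the surface as a class function multiplied by a Bernstein-polynomial shape function, so an airfoil of any chord is described by a handful of weights. A brief sketch; the weights below are illustrative assumptions, not values from the study:

```python
import numpy as np
from math import comb

def cst_airfoil(A_upper, A_lower, n_pts=201, n1=0.5, n2=1.0):
    """Class/shape function transformation: y = C(psi) * S(psi), with class
    function C(psi) = psi^n1 * (1 - psi)^n2 (round nose, sharp trailing edge)
    and a Bernstein-polynomial shape function S weighted by the A_i."""
    psi = np.linspace(0.0, 1.0, n_pts)        # chordwise coordinate x/c
    C = psi**n1 * (1.0 - psi)**n2

    def shape(A):
        n = len(A) - 1
        return sum(a * comb(n, i) * psi**i * (1.0 - psi)**(n - i)
                   for i, a in enumerate(A))

    return psi, C * shape(A_upper), C * shape(A_lower)

# Illustrative weights; scaling the chord to 3.98 m, 17.67 m or 48 m only
# rescales x and y while the weights are unchanged, which is why few
# control points suffice even for the largest wing.
psi, y_up, y_lo = cst_airfoil(A_upper=[0.17, 0.16, 0.15],
                              A_lower=[-0.17, -0.16, -0.15])
chord = 48.0                                  # largest scale considered, m
print(f"max thickness at 48 m chord: {chord * np.max(y_up - y_lo):.2f} m")
```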

Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality

Procedia PDF Downloads 196
291 Investigation of the Psychological and Sociological Consequences of Facebook Usage towards Saudi Arabia University Students

Authors: Abdullah Alassiri

Abstract:

Prompted by the widespread saturation of Facebook usage among university students in Saudi Arabia who socialize with online members, this study investigated the usage, self-presentation, and psychological and sociological consequences of the Facebook social networking site among undergraduate students in Saudi Arabia. The problem statement of this study was addressed by answering the following questions: 1) What motivation do undergraduate students have for joining Facebook? 2) How do undergraduate students consume Facebook? 3) In what conditions do undergraduate students need Facebook? 4) How do undergraduate students manage their self-presentation via Facebook? 5) What experiences do undergraduate students obtain from Facebook psychologically? 6) What experiences do undergraduate students obtain from Facebook sociologically? 7) How have Facebook activities affected the lifestyle of undergraduate students? These questions were answered by analyzing in-depth interview data collected from twenty male undergraduate students between the ages of 18 and 24 years, selected from King Saud University (KSU) and King Khalid University (KKU), Saudi Arabia. Using thematic analysis, informants' data were coded 'R1' to 'R20', validated, and transcribed, with translation between Arabic and English checked to minimize error. Using a purposive sampling method, informant perspectives within the research context were explored. Data collection was confined to students' motivations for engaging in online activities and their self-presentation, and the psychological and sociological consequences for their everyday lives were investigated based on the theoretical and philosophical underpinnings of the uses and gratifications paradigm and social influence theory. The findings contributed to the development of important study themes that supported the development of a new research framework. Based on the analysis, all the study questions were answered. The findings of this study showed that the students use Facebook for the purpose of interacting with others, getting information, and as a knowledge source. In terms of self-presentation, this study revealed that the students portray themselves in a real, not fake, image while socializing with others. Psychological and sociological consequences of Facebook usage ranged from cheerfulness to stress and from loneliness to having many friends. In conclusion, this study found that Facebook is a very persuasive medium of communication among university students in Saudi Arabia, one that bridges socio-cultural boundaries and unites students to interact as a community.

Keywords: Saudi Arabia, Facebook, undergraduate students, social network

Procedia PDF Downloads 139
290 Newspaper Headlines as Tool for Political Propaganda in Nigeria: Trend Analysis of Implications on Four Presidential Elections

Authors: Muhammed Jamiu Mustapha, Jamiu Folarin, Stephen Obiri Agyei, Rasheed Ademola Adebiyi, Mutiu Iyanda Lasisi

Abstract:

The role of the media in political discourse cannot be overemphasized, as they form an important part of societal development. The media institution is considered the fourth estate of the realm because it serves as a check and balance on the arms of government (executive, legislature and judiciary), especially in a democratic setup, and makes public office holders accountable to the people. The media scrutinize political candidates and conduct holistic analyses of government achievements in order to make the people's representatives accountable to the electorate. The media in Nigeria play a seminal role in shaping how people vote during elections. Newspaper headlines are catchy phrases that easily capture the attention of the audience and call them to action. Research conducted on newspaper headlines has examined the linguistic aspect and how the tenses used affect people's attitudes and behaviour. Communication scholars have also conducted studies that interrogate whether newspaper headlines influence people's voting patterns and decisions. Propaganda and negative stories about political opponents are staple features of electioneering campaigns. Nigerian newspaper readers characteristically scan newspaper headlines, and the question is whether politicians have effectively played into this tendency to brand opponents negatively, based on half-truths and inadequate information. This study illustrates major trends in the Nigerian political landscape, looking at the past four presidential elections, and frames the progress of the research within the extant body of political propaganda research in Africa. The study will use quantitative content analysis of newspaper headlines from 2007 to 2019 to ascertain whether newspaper headlines had any effect on the results of the presidential elections during those years. This will be supplemented by key informant interviews with political scientists or experts to draw further inferences from the quantitative data. Drawing on headlines with a political propaganda angle from selected Nigerian newspapers covering the presidential elections, the analysis will correspond to and complement extant descriptions of how the field of political propaganda has developed in Nigeria, providing evidence from four presidential elections that have shaped Nigerian politics. Understanding the development of behavioural change among the electorate provides useful context for trend analysis in political propaganda communication. The findings will contribute to understanding how newspaper headlines are used, partly or wholly, to decide the outcome of presidential elections in Nigeria.

Keywords: newspaper headlines, political propaganda, presidential elections, trend analysis

Procedia PDF Downloads 210
289 Detection of Alzheimer's Protein on Nano Designed Polymer Surfaces in Water and Artificial Saliva

Authors: Sevde Altuntas, Fatih Buyukserin

Abstract:

Alzheimer's disease is responsible for irreversible neural damage to parts of the brain. One of the disease markers is the Amyloid-β 1-42 protein, which accumulates in the brain in the form of plaques. The basic problem in detecting the protein is its low concentration, which cannot be measured properly in body fluids such as blood, saliva or urine. Tests like ELISA or PCR have been proposed to solve this problem, but they are expensive, require specialized personnel, and can involve complex protocols. Surface-enhanced Raman spectroscopy (SERS) is therefore a good candidate for the detection of the Amyloid-β 1-42 protein: the spectroscopic technique can potentially allow even single-molecule detection from liquid and solid surfaces, the SERS signal can be improved by using nanopatterned surfaces, and the signal is molecule-specific. In this context, our study proposes to fabricate diagnostic test models that utilize Au-coated nanopatterned polycarbonate (PC) surfaces modified with Thioflavin-T to detect low concentrations of the Amyloid-β 1-42 protein in water and artificial saliva by enhancement of the protein SERS signal. The nanopatterned PC surface used to enhance the SERS signal was fabricated using Anodic Alumina Membranes (AAM) as a template. It is possible to produce AAMs with different column structures and varying thicknesses depending on voltage and anodization time, and after fabrication the pore diameter of AAMs can be adjusted by treatment with a dilute acid solution. In this study, two different column structures were prepared. After a surface modification to decrease their surface energy, the AAMs were treated with PC solution. Following solvent evaporation, nanopatterned PC films with tunable pillared structures were peeled off from the membrane surface. The PC film was then modified with Au and Thioflavin-T for the detection of the Amyloid-β 1-42 protein. Protein detection studies were conducted first in water via this biosensor platform; the same measurements were then conducted in artificial saliva. SEM, SERS and contact angle measurements were carried out for the characterization of the different surfaces and further demonstration of protein attachment. SERS enhancement factor calculations were also completed from the experimental results. In summary, our research group fabricated diagnostic test models that utilize Au-coated nanopatterned polycarbonate surfaces modified with Thioflavin-T to detect low concentrations of the Alzheimer's Amyloid-β 1-42 protein in water and artificial saliva. This work was supported by The Scientific and Technological Research Council of Turkey (TUBITAK), Grant No: 214Z167.
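
The enhancement-factor calculation mentioned above is commonly performed (in one of several conventions used in the SERS literature) by comparing the signal per probed molecule on the SERS substrate with that of a normal Raman reference. A minimal sketch with placeholder numbers, not values from this study:

```python
# Analytical enhancement factor EF = (I_SERS / N_SERS) / (I_ref / N_ref):
# SERS intensity per probed molecule relative to normal Raman intensity
# per molecule. All numbers below are placeholders, not measured data.
def enhancement_factor(i_sers, n_sers, i_ref, n_ref):
    return (i_sers / n_sers) / (i_ref / n_ref)

ef = enhancement_factor(i_sers=5.0e4,   # SERS peak intensity (counts)
                        n_sers=1.0e6,   # molecules in the SERS-probed spot
                        i_ref=2.0e3,    # reference Raman intensity (counts)
                        n_ref=1.0e10)   # molecules probed in the reference
print(f"EF = {ef:.2e}")                 # 2.50e+05 for these placeholders
```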

Keywords: alzheimer, anodic aluminum oxide, nanotopography, surface enhanced Raman spectroscopy

Procedia PDF Downloads 268
288 A Regression Model for Predicting Sugar Crystal Size in a Fed-Batch Vacuum Evaporative Crystallizer

Authors: Sunday B. Alabi, Edikan P. Felix, Aniediong M. Umo

Abstract:

Crystal size distribution is of great importance in sugar factories: it determines the market value of granulated sugar and also influences the cost of producing sugar crystals. Typically, sugar is produced using a fed-batch vacuum evaporative crystallizer. Crystallization quality is assessed from the crystal size distribution at the end of the process, which is quantified by two parameters: the mean aperture (MA), the average crystal size of the distribution, and the coefficient of variation (CV), the width of the distribution. The lack of real-time measurement of sugar crystal size hinders its feedback control and the eventual optimisation of the crystallization process. An attractive alternative is to use a soft sensor (model-based method) for online estimation of the sugar crystal size. Unfortunately, the available models of the sugar crystallization process are not suitable, as they do not contain variables that can be measured easily online. The main contribution of this paper is the development of a regression model for estimating sugar crystal size as a function of input variables that are easy to measure online. This has the potential to provide real-time estimates of crystal size for effective feedback control. Using 7 input variables, namely initial crystal size (L0), temperature (T), vacuum pressure (P), feed flowrate (Ff), steam flowrate (Fs), initial super-saturation (S0) and crystallization time (t), preliminary studies were carried out using Minitab 14 statistical software. Based on the existing sugar crystallizer models and the typical ranges of these 7 input variables, 128 datasets were obtained from a 2-level factorial experimental design. These datasets were used to obtain a simple but online-implementable 6-input crystal size model; the initial crystal size (L0) was found not to play a significant role. The goodness of the resulting regression model was evaluated: the coefficient of determination, R², was 0.994, and the maximum absolute relative error (MARE) was 4.6%. The high R² (~1.0) and the reasonably low MARE are an indication that the model is able to predict sugar crystal size accurately as a function of the 6 easy-to-measure online variables. Thus, the model can be used as a soft sensor to provide real-time estimates of sugar crystal size during crystallization in a fed-batch vacuum evaporative crystallizer.
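
The fitting and validation steps translate directly into a few lines of code. A minimal sketch on synthetic stand-ins for the 128 factorial-design runs (the coefficients, noise level and use of scikit-learn are assumptions; the study itself used Minitab 14):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Stand-in for the 128 runs over the six retained inputs (T, P, Ff, Fs, S0, t).
X = rng.uniform(size=(128, 6))
y = 0.9 + X @ np.array([0.3, -0.2, 0.15, 0.1, 0.25, 0.4]) \
        + 0.01 * rng.normal(size=128)            # synthetic "crystal size"

model = LinearRegression().fit(X, y)
y_hat = model.predict(X)

r2 = r2_score(y, y_hat)
mare = 100.0 * np.max(np.abs((y - y_hat) / y))   # max absolute relative error, %
print(f"R^2 = {r2:.3f}, MARE = {mare:.2f}%")
```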

Keywords: crystal size, regression model, soft sensor, sugar, vacuum evaporative crystallizer

Procedia PDF Downloads 187
287 A Heteroskedasticity Robust Test for Contemporaneous Correlation in Dynamic Panel Data Models

Authors: Andreea Halunga, Chris D. Orme, Takashi Yamagata

Abstract:

This paper proposes a heteroskedasticity-robust Breusch-Pagan test of the null hypothesis of zero cross-section (or contemporaneous) correlation in linear panel-data models, without necessarily assuming independence of the cross-sections. The procedure allows for either fixed, strictly exogenous and/or lagged dependent regressor variables, as well as quite general forms of both non-normality and heteroskedasticity in the error distribution. The asymptotic validity of the test procedure is predicated on the number of time series observations, T, being large relative to the number of cross-section units, N, in that: (i) either N is fixed as T→∞; or (ii) N²/T→0, as both T and N diverge, jointly, to infinity. Given this, it is not expected that asymptotic theory would provide an adequate guide to finite-sample performance when T/N is 'small'. Because of this, we also propose, and establish the asymptotic validity of, a number of wild bootstrap schemes designed to provide improved inference when T/N is small. Across a variety of experimental designs, a Monte Carlo study suggests that the predictions from asymptotic theory do, in fact, provide a good guide to the finite-sample behaviour of the test when T is large relative to N. However, when T and N are of similar orders of magnitude, discrepancies between the nominal and empirical significance levels occur, as predicted by the first-order asymptotic analysis. On the other hand, for all the experimental designs, the proposed wild bootstrap approximations do improve agreement between nominal and empirical significance levels when T/N is small, with a recursive-design wild bootstrap scheme performing best in general and providing quite close agreement between the nominal and empirical significance levels of the test even when T and N are of similar size. Moreover, in comparison with the wild bootstrap 'version' of the original Breusch-Pagan test, our experiments indicate that the corresponding version of the heteroskedasticity-robust Breusch-Pagan test appears reliable. As an illustration, the proposed tests are applied to a dynamic growth model for a panel of 20 OECD countries.
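
For concreteness, the classical Breusch-Pagan LM statistic and a wild bootstrap p-value can be sketched in a few lines. This is a deliberately simplified scheme: independent Rademacher weights impose the null of zero cross-section correlation while preserving each residual's variance; the paper's heteroskedasticity-robust statistic and recursive-design scheme involve further corrections not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(7)

def bp_lm(E):
    """Breusch-Pagan LM statistic on a T x N residual matrix:
    LM = T * (sum of squared pairwise cross-section correlations)."""
    T, N = E.shape
    R = np.corrcoef(E, rowvar=False)
    iu = np.triu_indices(N, k=1)
    return T * np.sum(R[iu] ** 2)

def wild_bootstrap_pvalue(E, B=999):
    """Independent Rademacher weights per observation kill cross-section
    correlation in expectation (imposing the null) while keeping each
    residual's variance, and hence its heteroskedasticity pattern."""
    stat = bp_lm(E)
    boot = np.empty(B)
    for b in range(B):
        w = rng.choice([-1.0, 1.0], size=E.shape)
        boot[b] = bp_lm(E * w)
    return stat, (1 + np.sum(boot >= stat)) / (B + 1)

# Heteroskedastic residuals satisfying the null (no cross-section correlation).
E = rng.normal(size=(50, 20)) * rng.gamma(2.0, 0.5, size=(50, 1))
stat, p = wild_bootstrap_pvalue(E)
print(f"LM = {stat:.1f}, bootstrap p-value = {p:.3f}")
```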

Keywords: cross-section correlation, time-series heteroskedasticity, dynamic panel data, heteroskedasticity robust Breusch-Pagan test

Procedia PDF Downloads 409
286 The Legal Nature of Grading Decisions and the Implications for Handling of Academic Complaints in or out of Court: A Comparative Legal Analysis of Academic Litigation in Europe

Authors: Kurt Willems

Abstract:

This research examines complaints against grading in higher education institutions in four European regions: England and Wales, Flanders, the Netherlands, and France. The aim of the research is to examine the correlation between the applicable type of complaint handling on the one hand, and selected qualities of the higher education landscape and of public law on the other. All selected regions report a rising number of complaints against grading decisions, not only in internal complaint handling within the institution but also judicially if the dispute persists. Some regions deem their administrative court system appropriate for dealing with grading disputes (France) or have even erected a specialty administrative court to facilitate access (Flanders, the Netherlands). At the same time, however, different types of (governmental) dispute resolution bodies have been established outside the judicial court system (England and Wales, and to a lesser extent France and the Netherlands). Those dispute procedures do not seem coincidental. Public law issues, such as the underlying legal nature of the education institution and, eventually, of the grading decision itself, have an impact on the way academic complaint procedures are developed. Indeed, in most of the selected regions, contractual disputes enjoy different legal protection than administrative decisions, making the legal qualification of the relationship between student and higher education institution highly relevant. At the same time, the scope of governmental competence over different types of higher education institutions, whether direct or indirect (e.g., through financing and quality control), is also relevant to understanding why certain dispute handling procedures have been established for students. To answer the above questions, the doctrinal and comparative legal method is used. The normative framework is distilled from the relevant national legislative rules and their preparatory texts, the legal literature, the (published) case law on academic complaints, and the available governmental reports. The research is mainly theoretical in nature, examining different topics of public law (mainly administrative law) and procedural law in the context of grading decisions. The internal appeal procedure within the education institution is largely left out of the scope of the research, as are different types of cooperation between education institutions not imposed by government, given the public law angle of the research questions. The research results in a categorization of different academic complaint systems and an analysis of the possibility of introducing each of those systems in different countries, depending on their public law system and higher education system. By doing so, the research also adds to the debate on the public-private divide in higher education systems and its effect on the handling of academic complaints.

Keywords: higher education, legal qualification of education institution, legal qualification of grading decisions, legal protection of students, academic litigation

Procedia PDF Downloads 207
285 Development of a Feedback Control System for a Lab-Scale Biomass Combustion System Using Programmable Logic Controller

Authors: Samuel O. Alamu, Seong W. Lee, Blaise Kalmia, Marc J. Louise Caballes, Xuejun Qian

Abstract:

The application of combustion technologies for the thermal conversion of biomass and solid wastes to energy has long been a major solution for the effective handling of wastes. Lab-scale biomass combustion systems have been observed to be economically viable and socially acceptable, but major concerns are the environmental impacts of the process and deviations of the temperature distribution within the combustion chamber. Both excessively high and low combustion chamber temperatures may affect the overall combustion efficiency and gaseous emissions. Therefore, there is an urgent need to develop a control system that measures the deviations of chamber temperature from set target values, feeds these deviations (which act as disturbances in the system) back as an input signal, and adjusts operating conditions to correct the errors. In this research study, the major components of the feedback control system were determined, assembled, and tested. In addition, control algorithms were developed to actuate the operating conditions (e.g., air velocity, fuel feeding rate) using ladder logic functions embedded in a Programmable Logic Controller (PLC). The developed control algorithm, with chamber temperature as the feedback signal, was integrated into the lab-scale swirling fluidized bed combustor (SFBC) to investigate the temperature distribution at different heights of the combustion chamber under various operating conditions. The air blower rates and fuel feeding rates obtained from automatic control operations were correlated with manual inputs; no observable difference was found in the correlated results, indicating that the written PLC program functions were adequate for the experimental study of the lab-scale SFBC. The experimental results were analyzed to study the effect of air velocity in the range 222-273 ft/min and fuel feeding rate in the range 60-90 rpm on the chamber temperature. The developed temperature-based feedback control system was shown to be adequate in controlling the airflow and fuel feeding rate for the overall biomass combustion process, as it helps to minimize the steady-state error.
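
The control law itself runs as ladder logic on the PLC, but its behavior can be sketched as a simple feedback loop with integral-style trimming of the two actuators. Everything below (setpoint, gains, and the toy thermal model) is an illustrative assumption; only the actuator ranges come from the abstract:

```python
import numpy as np

SETPOINT = 850.0              # target chamber temperature, assumed (deg C)
KI_FUEL, KI_AIR = 0.1, 0.2    # trim gains per degree of error, assumed

air, fuel, temp = 250.0, 75.0, 800.0    # ft/min, rpm, deg C

def plant(temp, air, fuel, dt=1.0):
    """Toy first-order thermal response: more fuel heats, more air cools."""
    return temp + dt * (0.8 * fuel - 0.15 * air - 0.03 * (temp - 25.0))

for _ in range(400):
    error = SETPOINT - temp                          # feedback signal
    fuel = np.clip(fuel + KI_FUEL * error, 60, 90)   # fuel feeding rate, rpm
    air = np.clip(air - KI_AIR * error, 222, 273)    # air velocity, ft/min
    temp = plant(temp, air, fuel)

print(f"final T = {temp:.1f} deg C, air = {air:.0f} ft/min, fuel = {fuel:.1f} rpm")
```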

Keywords: air flow, biomass combustion, feedback control signal, fuel feeding, ladder logic, programmable logic controller, temperature

Procedia PDF Downloads 103
284 Trajectory Tracking of Fixed-Wing Unmanned Aerial Vehicle Using Fuzzy-Based Sliding Mode Controller

Authors: Feleke Tsegaye

Abstract:

This thesis focuses on trajectory tracking of a fixed-wing unmanned aerial vehicle (FWUAV) using a fuzzy-based sliding mode controller (FSMC) for surveillance applications. Unmanned Aerial Vehicles (UAVs) are general-purpose aircraft built to fly autonomously. The technology is applied in a variety of sectors, including the military, to improve defense, surveillance, and logistics. The model of a FWUAV is complex due to its high non-linearity and coupling effects. In this thesis, input decoupling is achieved by extracting the dominant inputs during controller design and treating the remaining inputs as uncertainty. Proper and steady flight maneuvering of UAVs under uncertain and unstable circumstances is the most critical problem for researchers studying UAVs, and an FSMC technique is suggested here to tackle the complexity of FWUAV systems. The trajectory tracking control algorithm primarily uses the sliding-mode (SM) variable structure control method to address the system's control issue; in the SM control, a fuzzy logic control (FLC) algorithm is utilized in place of the discontinuous part of the SM controller to reduce the chattering effect. In the reaching and sliding stages of SM control, Lyapunov theory is used to assure finite-time convergence. A comparison between the conventional SM controller and the suggested controller is made with respect to both the chattering effect and tracking performance. It is evident that chattering is effectively reduced, the suggested controller provides a quick response with a minimal steady-state error, and the controller is robust in the face of unknown disturbances. The designed control strategy is simulated with the nonlinear model of the FWUAV in the MATLAB®/Simulink® environment. The simulation results show that the suggested controller operates effectively, maintains the aircraft's stability, and holds the aircraft's targeted flight path despite the presence of uncertainty and disturbances.
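
The chattering-reduction idea can be illustrated on a toy tracking problem. The sketch below (Python rather than the MATLAB/Simulink setup of the thesis) uses a double-integrator plant and replaces the discontinuous sign(s) term with a smooth saturation, a simple stand-in for the role the fuzzy logic block plays in the FSMC:

```python
import numpy as np

lam, eta = 2.0, 5.0     # surface slope and switching gain (eta > disturbance bound)

def smooth_switch(s):
    """Stand-in for the fuzzy inference output: behaves like sign(s) for
    large |s| but varies continuously inside a boundary layer."""
    return np.tanh(s / 0.5)

def simulate(switch, dt=0.002, t_end=10.0):
    x, xd, u_prev = 0.0, 0.0, 0.0
    err, du = [], []
    for t in np.arange(0.0, t_end, dt):
        r, rd, rdd = np.sin(t), np.cos(t), -np.sin(t)   # reference trajectory
        e, ed = x - r, xd - rd
        s = ed + lam * e                                # sliding surface
        u = rdd - lam * ed - eta * switch(s)            # equivalent + switching control
        d = 0.5 * np.sin(5.0 * t)                       # unknown bounded disturbance
        xd += (u + d) * dt                              # double-integrator plant
        x += xd * dt
        err.append(abs(e)); du.append(abs(u - u_prev)); u_prev = u
    n = len(err) // 2                                   # discard the transient
    return max(err[n:]), float(np.mean(du[n:]))

for name, sw in (("sign(s) ", np.sign), ("smoothed", smooth_switch)):
    e_max, chat = simulate(sw)
    print(f"{name}: steady |e| <= {e_max:.4f}, mean |du| = {chat:.3f}")
```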

Keywords: fixed-wing UAVs, sliding mode controller, fuzzy logic controller, chattering, coupling effect, surveillance, finite-time convergence, Lyapunov theory, flight path

Procedia PDF Downloads 22
283 Polypropylene Matrix Enriched With Silver Nanoparticles From Banana Peel Extract For Antimicrobial Control Of E. coli and S. epidermidis To Maintain Fresh Food

Authors: Michail Milas, Aikaterini Dafni Tegiou, Nickolas Rigopoulos, Eustathios Giaouris, Zaharias Ioannou

Abstract:

Nanotechnology, a relatively new scientific field, addresses the manipulation of nanoscale materials and devices, which are governed by unique properties, and is applied in a wide range of industries, including food packaging. The incorporation of nanoparticles into the polymer matrices used for food packaging is a highly researched field today; one such combination is silver nanoparticles with polypropylene (PP). In the present study, the synthesis of the silver nanoparticles was carried out by a natural method, using a ripe banana peel extract (BPE). This method stands out from others for its environmental friendliness, high efficiency, and low cost. Specifically, a 1.75 mM silver nitrate (AgNO₃) solution was used, with a BPE concentration of 1.7% v/v, an incubation period of 48 hours at 70°C, and a pH of 4.3; after its preparation, the polypropylene films were soaked in it. For the PP films, random PP spheres were melted at 170-190°C in molds of 0.8 cm diameter. This polymer was chosen because it is suitable for plastic parts and reusable plastic containers of various types intended to come into contact with food without compromising its quality and safety. The antimicrobial test against Escherichia coli DFSNB1 and Staphylococcus epidermidis DFSNB4 was performed on the films. The films with silver nanoparticles showed a reduction of at least 100-fold compared to those without silver nanoparticles, in both strains. The limit of detection is the lower limit of the vertical error bars in the presence of nanoparticles, which is 3.11. The main reasons for the adsorption of nanoparticles are the porous nature of polypropylene and the adsorption capacity of the nanoparticles on the surface of the films due to hydrophobic-hydrophilic forces. The most significant parameters contributing to the results of the experiment include the ripening stage of the banana during preparation of the plant extract, the temperature and residence time of the nanoparticle solution in the oven, the residence time of the polypropylene films in the nanoparticle solution, the amount of nanoparticles inoculated onto the films, and, finally, the time the films stayed in the refrigerator to dry and be ready for antimicrobial testing.

Keywords: antimicrobial control, banana peel extract, E. coli, natural synthesis, microbe, plant extract, polypropylene films, S. epidermidis, silver nano, random PP

Procedia PDF Downloads 146
282 Development of a Data-Driven Method for Diagnosing the State of Health of Battery Cells, Based on the Use of an Electrochemical Aging Model, with a View to Their Use in Second Life

Authors: Desplanches Maxime

Abstract:

Accurate estimation of the remaining useful life of lithium-ion batteries for electronic devices is crucial. Data-driven methodologies encounter challenges related to data volume and acquisition protocols, particularly in capturing a comprehensive range of aging indicators. To address these limitations, we propose a hybrid approach that integrates an electrochemical model with state-of-the-art data analysis techniques, yielding a comprehensive database. Our methodology involves introducing an aging phenomenon into a Newman model, leading to the creation of an extensive database capturing various aging states in terms of non-destructive parameters. This database serves as a robust foundation for subsequent analysis. Leveraging advanced data analysis techniques, notably principal component analysis and t-Distributed Stochastic Neighbor Embedding, we extract the pivotal information from the data. This information is harnessed to construct a regression function using either random forest or support vector machine algorithms. The resulting predictor demonstrates a 5% error margin in estimating remaining battery life, providing actionable insights for optimizing usage. The database was built from the Newman model calibrated for aging and performance using data from a European project called Teesmat; the model was then initialized numerous times with different aging values, for instance with varying thicknesses of the SEI (Solid Electrolyte Interphase). This comprehensive approach ensures a thorough exploration of battery aging dynamics, enhancing the accuracy and reliability of our predictive model. Of particular importance is our reliance on the database generated through integration of the electrochemical model, which serves as a crucial asset in advancing our understanding of aging states. Beyond its capability for precise remaining-life predictions, this database-driven approach offers valuable insights for optimizing battery usage and adapting the predictor to various scenarios, underscoring the practical significance of our method in facilitating better decision-making in lithium-ion battery management.
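
The analysis pipeline described above maps onto a standard dimensionality-reduction-plus-regression stack. A minimal sketch on synthetic stand-ins for the model-generated database; the feature count, coefficients, and hyperparameters are assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)

# Rows: simulated cells from the aged Newman model; columns: non-destructive
# indicators (capacity, resistance, voltage-curve features, ...).
X = rng.normal(size=(2000, 30))
rul = 1000 - 40 * X[:, 0] + 25 * X[:, 3] + 5 * rng.normal(size=2000)  # synthetic RUL, cycles

X_tr, X_te, y_tr, y_te = train_test_split(X, rul, test_size=0.25, random_state=0)

# PCA condenses the indicators; a random forest then regresses remaining
# useful life on the leading components (the paper also evaluates an SVM).
model = make_pipeline(PCA(n_components=10),
                      RandomForestRegressor(n_estimators=300, random_state=0))
model.fit(X_tr, y_tr)

rel_err = np.abs(model.predict(X_te) - y_te) / y_te
print(f"mean relative error: {100 * rel_err.mean():.1f}%")
```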

Keywords: Li-ion battery, aging, diagnostics, data analysis, prediction, machine learning, electrochemical model, regression

Procedia PDF Downloads 43
281 A Dynamic Mechanical Thermal T-Peel Test Approach to Characterize Interfacial Behavior of Polymeric Textile Composites

Authors: J. R. Büttler, T. Pham

Abstract:

A basic understanding of interfacial mechanisms is important for the development of polymer composites. For this purpose, we need techniques to analyze the quality of interphases, their chemical and physical interactions, and their strength and fracture resistance. Advanced characterization techniques are favorable for investigating these interfacial phenomena in detail, and dynamic mechanical thermal analysis (DMTA) using a rheological system is a sensitive tool. T-peel tests were performed with this system to investigate the temperature-dependent peel behavior of woven textile composites. A model system was made of polyamide (PA) woven fabric laminated with films of polypropylene (PP) or PP modified by grafting with maleic anhydride (PP-g-MAH). Firstly, control measurements were performed on the PP matrices alone. Polymer melt investigations, as well as the extensional stress, extensional viscosity and extensional relaxation modulus at -10°C, 100°C and 170°C, demonstrate similar viscoelastic behavior for films made of PP-g-MAH and the non-modified PP control. Frequency sweeps show that PP-g-MAH has a zero-shear viscosity of around 1600 Pa·s and the PP control a similar zero-shear viscosity of 1345 Pa·s. The gelation points are also similar, at 2.42×10⁴ Pa (118 rad/s) and 2.81×10⁴ Pa (161 rad/s) for the PP control and PP-g-MAH, respectively. Secondly, the textile composite was analyzed: the extensional stress of PA66 fabric laminated with either PP control or PP-g-MAH was investigated at -10°C, 25°C and 170°C for strain rates of 0.001-1 s⁻¹. The laminates containing the modified PP require more stress for T-peeling. However, the strengthening effect of the modification decreases with increasing temperature, and at 170°C, just above the melting temperature of the matrix, the difference disappears. Independent of the matrix used in the textile composite, the extensional stress decreases with increasing temperature. It appears that the more viscous the matrix, the weaker the laminar adhesion. Possibly, the measurement is influenced by the fact that the laminate becomes stiffer at lower temperatures. Adhesive lap-shear testing at room temperature supports the findings obtained with the T-peel test. Additional analysis of the textile composite at the microscopic level confirms that the fibers are well embedded in the matrix: atomic force microscopy (AFM) imaging of a cross-section of the composite shows no gaps between the fibers and the matrix. Measurements of the water contact angle show that the MAH-grafted PP is more polar than the virgin PP, which suggests a more favorable chemical interaction of PP-g-MAH with PA compared to the non-modified PP. Overall, this study indicates that T-peel testing by DMTA is a technique that can yield deeper insights into polymeric textile composites.

Keywords: dynamic mechanical thermal analysis, interphase, polyamide, polypropylene, textile composite

Procedia PDF Downloads 109
280 The ‘Quartered Head Technique’: A Simple, Reliable Way of Maintaining Leg Length and Offset during Total Hip Arthroplasty

Authors: M. Haruna, O. O. Onafowokan, G. Holt, K. Anderson, R. G. Middleton

Abstract:

Background: Requirements for satisfactory outcomes following total hip arthroplasty (THA) include restoration of femoral offset, version, and leg length. Various techniques have been described for restoring these biomechanical parameters, with leg length restoration the most commonly described. We describe a 'quartered head technique' (QHT) which uses a stepwise series of femoral head osteotomies to identify and preserve the centre of rotation of the femoral head during THA, in order to ensure reconstruction of leg length, offset and stem version such that hip biomechanics are restored as near to normal as possible. This study aims to identify whether using the QHT during hip arthroplasty effectively restores leg length and femoral offset to within acceptable parameters. Methods: A retrospective review of 206 hips was carried out, leaving 124 hips in the final analysis. Power analysis indicated a minimum of 37 patients was required. All operations were performed using an anterolateral approach by a single surgeon. All femoral implants were cemented, collarless, polished double-taper CPT® stems (Zimmer, Swindon, UK). Both cemented and uncemented acetabular components were used (Zimmer, Swindon, UK). Leg length, version, and offset were assessed intra-operatively and reproduced using the QHT. Post-operative leg length and femoral offset were determined and compared with the contralateral native hip, and the difference was calculated. For the determination of leg length discrepancy (LLD), we used the method described by Williamson & Reckling, which has been shown to be reproducible with a measurement error of ±1 mm. As references, the inferior margin of the acetabular teardrop and the most prominent point of the lesser trochanter were used. A discrepancy of less than 6 mm LLD was chosen as acceptable. All peri-operative radiographs were assessed by two independent observers. Results: The mean absolute post-operative difference in leg length from the contralateral leg was +3.58 mm. 84% of patients (104/124) had an LLD within ±6 mm of the contralateral limb. The mean absolute post-operative difference in offset from the contralateral leg was +3.88 mm (range -15 to +9 mm, median 3 mm). 90% of patients (112/124) were within ±6 mm offset of the contralateral limb. There was no statistical difference between observer measurements. Conclusion: The QHT provides a simple, inexpensive, yet effective method of maintaining femoral leg length and offset during total hip arthroplasty. Combining this technique with pre-operative templating or other described techniques may enable surgeons to further reduce the discrepancies between the pre-operative state and post-operative outcome.

Keywords: leg length discrepancy, technical tip, total hip arthroplasty, operative technique

Procedia PDF Downloads 53
279 The Use of Corpora in Improving Modal Verb Treatment in English as Foreign Language Textbooks

Authors: Lexi Li, Vanessa H. K. Pang

Abstract:

This study aims to demonstrate how native and learner corpora can be used to enhance the treatment of modal verbs in EFL textbooks in mainland China. It contributes to a corpus-informed and learner-centered design of grammar presentation in EFL textbooks that enhances the authenticity and appropriateness of textbook language for target learners. The linguistic focus is will, would, can, could, may, might, shall, should, and must. The native corpus is the spoken component of BNC2014 (hereafter BNCS2014). The spoken part was chosen because the pedagogical purpose of the textbooks is communication-oriented. Using the standard query option of CQPweb, 5% of the occurrences of each of the nine modals was sampled from BNCS2014. The learner corpus is the POS-tagged Ten-thousand English Compositions of Chinese Learners (TECCL); all essays under the 'secondary school' section were selected. A series of five secondary coursebooks comprises the textbook corpus. All data in both the learner and the textbook corpora were retrieved through the concordance functions of WordSmith Tools (version 5.0). Data analysis was divided into two parts. The first part compared the patterns of modal verbs in the textbook corpus and BNCS2014 with respect to distributional features, semantic functions, and co-occurring constructions, to examine whether the textbooks reflect authentic use of English. Secondly, the learner corpus was analyzed in terms of the use (distributional features, semantic functions, and co-occurring constructions) and the misuse (syntactic errors, e.g., *she can sings) of the nine modal verbs, to uncover potential difficulties that confront learners. The analysis of distribution indicates several discrepancies between the textbook corpus and BNCS2014. The four most frequent modal verbs in BNCS2014 are can, would, will, could, while can, will, should, could are the top four in the textbooks. Most strikingly, there is an unusually high proportion of can (41.1%) in the textbooks. The results on the different meanings show that will, would and must are the most problematic. For example, for will, the textbooks contain 20% more occurrences of 'volition' and 20% fewer occurrences of 'prediction' than BNCS2014. Regarding co-occurring structures, the textbooks over-represent the structure 'modal + do' across the nine modal verbs. Another major finding is that the structure 'modal + have done', which frequently co-occurs with could, would, should, and must, is underused in textbooks. Moreover, these four modal verbs are the most difficult for learners, as the error analysis shows. This study demonstrates how the synergy of native and learner corpora can be harnessed to improve the presentation of modal verbs in EFL textbooks, so that textbooks provide not only authentic language used in natural discourse but also a design tailored to the needs of target learners.
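One standard way to quantify distributional discrepancies of the kind described above is a chi-square goodness-of-fit test of textbook modal counts against reference-corpus proportions. The sketch below illustrates this; the counts are hypothetical, not the study's figures, and the test itself is a common technique rather than the authors' stated method.

```python
# Minimal sketch (hypothetical counts): comparing the distribution of nine
# modal verbs in a textbook corpus against a reference corpus, scaled to the
# same total, with a chi-square goodness-of-fit test.
from scipy.stats import chisquare

modals = ["will", "would", "can", "could", "may", "might", "shall", "should", "must"]
textbook = [310, 120, 680, 150, 90, 40, 20, 180, 65]       # hypothetical
reference = [950, 880, 1020, 760, 310, 240, 35, 420, 180]  # hypothetical

total_t, total_r = sum(textbook), sum(reference)
expected = [total_t * r / total_r for r in reference]      # reference profile
stat, p = chisquare(textbook, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p:.3g}")
for m, obs, exp in zip(modals, textbook, expected):
    print(f"{m:>6}: observed {obs:>4}, expected {exp:7.1f}")
```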

Keywords: English as Foreign Language, EFL textbooks, learner corpus, modal verbs, native corpus

Procedia PDF Downloads 121
278 Landslide Susceptibility Analysis in the St. Lawrence Lowlands Using High Resolution Data and Failure Plane Analysis

Authors: Kevin Potoczny, Katsuichiro Goda

Abstract:

The St. Lawrence lowlands extend from Ottawa to Quebec City and are known for large deposits of sensitive Leda clay. Leda clay deposits are responsible for many large landslides, such as the 1993 Lemieux and 2010 St. Jude (4 fatalities) landslides. Due to the large extent and sensitivity of Leda clay, regional hazard analysis is an important tool in landslide risk management. A 2018 regional study by Farzam et al. on the susceptibility of Leda clay slopes to landslide hazard used 1 arc-second topographical data and a qualitative method known as Hazus, which estimates susceptibility by checking various criteria at a location and assigning a susceptibility rating on a scale of 0 (no susceptibility) to 10 (very high susceptibility). These criteria are slope angle, geological group, soil wetness, and distance from waterbodies. Given the flat nature of the St. Lawrence lowlands, that assessment fails to capture short local slopes, such as those at the St. Jude site, and its data did not allow failure planes to be analyzed accurately. This study substantially improves the analysis performed by Farzam et al. in two respects. First, regional assessment with high-resolution data allows identification of locations that may previously have been classed as low susceptibility. Slopes derived from 1 arc-second data are relatively gentle (0-10 degrees) across the region; however, the 1- and 2-meter resolution 2022 HRDEM provided by NRCAN shows that short, steep slopes are present. At a regional level, 1 arc-second data can therefore underestimate the susceptibility of short, steep slopes, which is dangerous because Leda clay landslides behave retrogressively and travel upwards into flatter terrain. At the location of the St. Jude landslide, the slope differences are significant: the 1 arc-second data show a maximum slope of 12.80 degrees and a mean slope of 4.72 degrees, while the HRDEM data show a maximum slope of 56.67 degrees and a mean slope of 10.72 degrees. This equates to a difference of three susceptibility levels when the soil is dry and one susceptibility level when wet. GIS software is used to create a regional susceptibility map across the St. Lawrence lowlands at 1- and 2-meter resolutions. Second, failure planes are necessary to differentiate between small and large landslides, which have so far been ignored in regional analysis. Leda clay failures can only retrogress as far as their failure planes, so the regional analysis must transition smoothly into a more robust local analysis. It is expected that slopes within the region previously assessed with low susceptibility scores contain local areas of high susceptibility. The goal is to create opportunities for local failure plane analysis, which has not been possible before. With the low resolution of previous regional analyses, any slope near a waterbody had to be considered hazardous; high-resolution regional analysis allows hazard sites to be determined far more precisely.
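To illustrate the slope-derivation step at the heart of the comparison above, here is a minimal sketch of computing slope angle from a gridded DEM and binning it into a coarse 0-10 score. The DEM array, bin edges, and the slope-only scoring rule are all illustrative assumptions; the actual Hazus method combines slope with geology, wetness, and distance to water.

```python
# Minimal sketch (hypothetical DEM and bins): slope angle from a 2 m
# elevation grid, then a coarse Hazus-style 0-10 slope score.
import numpy as np

def slope_degrees(dem, cell_size):
    """Slope angle per cell (degrees) from a 2D elevation array in metres."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

rng = np.random.default_rng(1)
dem = np.cumsum(rng.normal(0, 0.5, (200, 200)), axis=0)  # synthetic terrain
slope = slope_degrees(dem, cell_size=2.0)                # 2 m HRDEM resolution

bins = [0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 90]        # hypothetical edges
score = np.digitize(slope, bins) - 1                     # 0 (flat) .. 10 (steep)
print(f"max slope {slope.max():.1f} deg, mean {slope.mean():.1f} deg")
print(f"max slope-only score: {score.max()}")
```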

Keywords: Hazus, high-resolution DEM, Leda clay, regional analysis, susceptibility

Procedia PDF Downloads 49
277 The Relationships between Carbon Dioxide (CO2) Emissions, Energy Consumption and GDP for Iran: Time Series Analysis, 1980-2010

Authors: Jinhoa Lee

Abstract:

The relationships between environmental quality, energy use and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of carbon dioxide (CO2) emissions and energy use in affecting economic output, this paper is an effort to fill the gap of a comprehensive case study at the country level using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, and electricity), CO2 emissions and gross domestic product (GDP) for Iran, using time series analysis for the years 1980-2010. To investigate the relationships between the variables, this paper employs the Augmented Dickey-Fuller (ADF) test for stationarity, Johansen's maximum likelihood method for cointegration, and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables. All variables in this study, except the CO2 emissions, show significant effects on GDP in the long term. The long-run equilibrium in the VECM suggests that the consumption of petroleum products and the direct combustion of crude oil and natural gas have positive impacts on GDP, while the consumption of electricity and coal has adverse impacts on GDP in the long term. In the short run, electricity use enhances GDP over the period 1980-2010 in Iran. Overall, the results partly support arguments that there are relationships between energy use and economic output, but the associations can differ by the source of energy in the case of Iran over the period 1980-2010. However, there is no significant relationship between the CO2 emissions and GDP, or between the CO2 emissions and energy use, in either the short term or the long term.
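The three-step econometric pipeline named above (ADF tests, Johansen cointegration, VECM) can be sketched with statsmodels as follows. The series here are random-walk placeholders with a reduced set of columns, not the Iranian data, so the outputs are purely illustrative.

```python
# Minimal sketch (synthetic data, not the paper's): ADF unit-root tests,
# Johansen cointegration, and a VECM with statsmodels.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

rng = np.random.default_rng(42)
n = 31                                            # annual observations, 1980-2010
df = pd.DataFrame(
    np.cumsum(rng.normal(0, 1, (n, 3)), axis=0),  # random-walk stand-ins
    columns=["gdp", "oil", "electricity"],
)

for col in df:                                    # step 1: stationarity
    stat, pval, *_ = adfuller(df[col])
    print(f"ADF {col}: statistic = {stat:.2f}, p = {pval:.3f}")

jres = coint_johansen(df, det_order=0, k_ar_diff=1)  # step 2: cointegration
print("Johansen trace statistics:", jres.lr1)

vecm = VECM(df, k_ar_diff=1, coint_rank=1).fit()     # step 3: error correction
print(vecm.alpha)                                    # short-run adjustment speeds
```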

Keywords: CO2 emissions, energy consumption, GDP, Iran, time series analysis

Procedia PDF Downloads 568
276 Landsat Data from Pre Crop Season to Estimate the Area to Be Planted with Summer Crops

Authors: Valdir Moura, Raniele dos Anjos de Souza, Fernando Gomes de Souza, Jose Vagner da Silva, Jerry Adriani Johann

Abstract:

The estimated area of land to be planted with annual crops, and its stratification by municipality, are important variables in crop forecasting. Nowadays in Brazil, this information is obtained by the Brazilian Institute of Geography and Statistics (IBGE) and published in the report Assessment of the Agricultural Production. Due to the high cloud cover in the main crop growing season (October to March), it is difficult to acquire good orbital images; one alternative is therefore to work with remote sensing data from dates before the crop growing season. This work presents the use of multitemporal Landsat data gathered in July and September (before the summer growing season) to estimate the area of land to be planted with summer crops in an area of São Paulo State, Brazil. Geographic Information Systems (GIS) and digital image processing techniques were applied for the treatment of the available data. Supervised and unsupervised classifications were used for data in digital number and reflectance formats, as well as for the multitemporal Normalized Difference Vegetation Index (NDVI) images. The objective was to discriminate the tracts with a higher probability of being planted with summer crops. Classification accuracies were evaluated using a sampling system developed specifically for this study region, and the estimated areas were corrected using the error matrix derived from these evaluations. The classification techniques performed at an excellent level according to the kappa index. The proportion of crops, stratified by municipality, was derived from field work during the crop growing season. These proportion coefficients were applied to the area of land to be planted with summer crops (derived from Landsat data), making it possible to derive the area of each summer crop by municipality. The discrepancies between official statistics and our results were attributed to the sampling and stratification procedures. Nevertheless, this methodology can be improved to provide good crop area estimates from remote sensing data, despite the cloud cover during the growing season.
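As an illustration of the NDVI step, the sketch below computes a two-date NDVI stack and flags candidate tracts. The reflectance arrays and the change threshold are hypothetical stand-ins, not the authors' data or decision rule; in practice the bands would be read from Landsat GeoTIFFs.

```python
# Minimal sketch (synthetic bands): multitemporal NDVI from Landsat red and
# near-infrared reflectance, with a hypothetical change rule for flagging
# tracts likely to be prepared for summer-crop planting.
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index, NaN where both bands are zero."""
    denom = nir + red
    return np.where(denom == 0, np.nan, (nir - red) / denom)

rng = np.random.default_rng(7)
red_jul, nir_jul = rng.uniform(0.02, 0.3, (2, 100, 100))
red_sep, nir_sep = rng.uniform(0.02, 0.3, (2, 100, 100))

stack = np.stack([ndvi(red_jul, nir_jul), ndvi(red_sep, nir_sep)])
candidates = (stack[0] - stack[1]) > 0.1       # hypothetical decision rule
print(f"{candidates.mean():.1%} of pixels flagged as candidate tracts")
```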

Keywords: area intended for summer crops, estimated area planted, agriculture, Landsat, planting schedule

Procedia PDF Downloads 120
275 Optimal Data Selection in Non-Ergodic Systems: A Tradeoff between Estimator Convergence and Representativeness Errors

Authors: Jakob Krause

Abstract:

Past financial crises have shown that contemporary risk management models provide an unjustified sense of security and fail miserably in the situations in which they are needed most. In this paper, we start from the assumption that risk is a notion that changes over time, and therefore past data points have only limited explanatory power for the current situation. Our objective is to derive the optimal amount of representative information by optimizing between two adverse forces: estimator convergence, which incentivizes us to use as much data as possible, and the aforementioned non-representativeness, which does the opposite. In this endeavor, the cornerstone assumption of having access to identically distributed random variables is weakened and substituted by the assumption that the law of the data generating process changes over time. Hence, this paper gives a quantitative theory of how to perform statistical analysis in non-ergodic systems. As an application, we discuss the impact of a paragraph in the latest iteration of proposals by the Basel Committee on Banking Supervision. We start from the premise that the severity of assumptions should correspond to the robustness of the system they describe; in the formal description of physical systems, the level of assumptions can be much higher. It follows that every concept carried over from the natural sciences to economics must be checked for its plausibility in the new surroundings. Most of probability theory has been developed for the analysis of physical systems and is based on the independent and identically distributed (i.i.d.) assumption. In economics, both parts of the i.i.d. assumption are inappropriate; however, only dependence has so far been weakened to a sufficient degree. In this paper, an appropriate class of non-stationary processes is used, and their law is tied to a formal object measuring representativeness. Subsequently, the data set is identified that, on average, minimizes the estimation error stemming from both insufficient and non-representative data. Applications are far-reaching across a variety of fields. In the paper itself, we apply the results to analyze a paragraph in the Basel III framework on banking regulation with severe implications for financial stability. Beyond the realm of finance, other potential applications include the reproducibility crisis in the social sciences (but not in the natural sciences) and modeling limited understanding and learning behavior in economics.
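The tradeoff at the core of the argument can be made concrete with a stylized toy model, which is my own illustration and not the paper's formal construction: if the estimation error shrinks like a/√n while a representativeness (drift) penalty grows linearly in the window length n, an interior optimum window emerges.

```python
# Minimal sketch (stylized toy model, not the paper's): total error as the
# sum of a convergence term a/sqrt(n) and a representativeness term b*n,
# minimized over the window length n.
import numpy as np

a, b = 1.0, 1e-3                       # hypothetical error weights
n = np.arange(5, 2000)
total_error = a / np.sqrt(n) + b * n

n_star = n[np.argmin(total_error)]
print(f"optimal window length: {n_star} observations")
# Analytically, d/dn (a*n**-0.5 + b*n) = 0 gives n* = (a / (2*b))**(2/3).
print(f"analytic optimum: {(a / (2 * b)) ** (2 / 3):.0f}")
```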

Keywords: banking regulation, non-ergodicity, risk management, semimartingale modeling

Procedia PDF Downloads 120
274 Development of an Instrument for Measurement of Thermal Conductivity and Thermal Diffusivity of Tropical Fruit Juice

Authors: T. Ewetumo, K. D. Adedayo, Festus Ben

Abstract:

Knowledge of the thermal properties of foods is of fundamental importance in the food industry for the design of processing equipment. However, for tropical fruit juice there is very little information in the literature, seriously hampering processing procedures. This research work describes the development of an instrument for automated measurement of the thermal conductivity and thermal diffusivity of tropical fruit juice using a transient thermal probe technique based on the line heat source principle. The system consists of two thermocouple sensors, a constant current source, a heater, a thermocouple amplifier, a microcontroller, a microSD card shield and an intelligent liquid crystal display. A fixed distance of 6.50 mm was maintained between the two probes. When heat is applied, the temperature rise at the heater probe is measured at intervals of 4 s for 240 s. The measuring element conforms as closely as possible to an infinite line source of heat in an infinite fluid. Under these conditions, thermal conductivity and thermal diffusivity are measured simultaneously: thermal conductivity is determined from the slope of a plot of the temperature rise of the heating element against the logarithm of time, while thermal diffusivity is determined from the time taken for the sample to attain its peak temperature over a fixed diffusion distance. A constant current source was designed to apply a power input of 16.33 W/m to the probe throughout the experiment. The thermal probe was interfaced with a digital display and data logger using an application program written in C++. The instrument was calibrated by determining the thermal properties of distilled water; error due to convection was avoided by adding 1.5% agar to the water. The instrument has been used to measure the thermal properties of banana, orange and watermelon. Thermal conductivity values of 0.593, 0.598 and 0.586 W/(m·°C) and thermal diffusivity values of 1.053×10⁻⁷, 1.086×10⁻⁷ and 0.959×10⁻⁷ m²/s were obtained for banana, orange and watermelon, respectively. Measured values were stored on the microSD card. The instrument performed very well: statistical analysis (ANOVA) showed no significant difference (p>0.05) between literature standards and the averages estimated with the developed instrument for each sample investigated.
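The conductivity step described above follows from the line-heat-source solution: at late times the probe temperature rise is ΔT ≈ (q / 4πk)·ln t + C, so k is recovered from the slope of ΔT against ln t. The sketch below illustrates this analysis on a synthetic probe response; it is not the instrument's C++ firmware, and the sample trace is simulated.

```python
# Minimal sketch (synthetic probe response): line-heat-source analysis.
# For a line source of power q (W/m), dT ~ (q / (4*pi*k)) * ln(t) + C,
# so the thermal conductivity k follows from the slope of dT vs ln(t).
import numpy as np

q = 16.33                                   # applied power per length, W/m
t = np.arange(4.0, 244.0, 4.0)              # 4 s sampling for 240 s
k_true = 0.593                              # e.g. banana, W/(m·°C)
dT = q / (4 * np.pi * k_true) * np.log(t) + 0.2   # simulated temperature rise

slope, _ = np.polyfit(np.log(t), dT, 1)     # linear fit in ln(t)
k_est = q / (4 * np.pi * slope)
print(f"estimated thermal conductivity: {k_est:.3f} W/(m·°C)")
```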

Keywords: thermal conductivity, thermal diffusivity, tropical fruit juice, diffusion equation

Procedia PDF Downloads 329
273 Exploring Ugliness as an Aesthetic Theme in Contemporary Chinese Literature through Analyzing Five Dragons, Protagonist in Rice by Xianfeng Writer Su Tong

Authors: Ku Yu Yiu

Abstract:

Writers have included the ugly in their works for centuries, but ugliness long served merely as a contrast to bring out the beautiful, not emerging as an independent aesthetic category until recent history. In the 1980s, China was going through a series of changes and transformations; the wounds and scars of the Cultural Revolution, the freer literary atmosphere of the time, and the introduction of Western thought into China gave rise among writers to a trend of penning the ugly and the repulsive. This trend of taking 'Ugliness' as a theme of writing in Chinese literature is especially marked among Xianfeng writers (China's pioneer or avant-garde writers). As a prominent Xianfeng writer, Su Tong (1963-) also incorporates ugliness into his novels: a shoddy environment, a degenerate and ruthless society, and a distorted, decadent humanity are part and parcel of his deliberate efforts to explore and depict the ugly aspects of the world. His full-length novel Rice, staging the appalling protagonist Five Dragons, is a prime example. In fact, all characters in Rice exhibit Ugliness, but Five Dragons' transformation into a figure of ugly spite is the most thorough and complete, making Rice a masterpiece of Su Tong's art of projecting the Ugliness embedded in society and human nature. Approaching Rice from the angle of the aesthetics of the Ugly and selecting Five Dragons as the subject of close reading and analysis, this paper offers insights into both Su Tong's distinct style of foregrounding and unfolding Ugliness in his novel and the workings of such a text when he deploys the Ugly as a central component of his writing. In addition to citing the discussion of Rice by literary critics and the author himself, this paper presents textual evidence and analyzes the imageries/motifs and calculated vocabulary/narration employed by Su Tong, to illustrate how Five Dragons' extreme behaviors and psychological states are integral to the plot and ultimately to the manifestation of Ugliness as the novel's theme. This study reveals that although the psyche and doings of Five Dragons and other 'ugly' characters are, as the author once stated, imagined products of the writer Su Tong himself, Rice sheds light on the ugly aspects of life in China in the 1920s-30s. Three aspects of Ugliness are identified and discussed in the paper. Lastly, this paper also suggests some effects of Su Tong's exploration of Ugliness in Rice, proposing that the portrayal of Ugliness per se is not the end of Su Tong's mastery of the aesthetics of the Ugly but rather a means of making his writing transcend the provocation of spontaneous moral judgment in readers on the doings of Five Dragons, prompting them instead to ponder philosophical questions such as how humanity can remain possible when an individual confronts the dark sides of a self, a society, and his/her fate.

Keywords: aesthetics, Rice, Su Tong, Ugly

Procedia PDF Downloads 151
272 Application of Compressed Sensing and Different Sampling Trajectories for Data Reduction of Small Animal Magnetic Resonance Image

Authors: Matheus Madureira Matos, Alexandre Rodrigues Farias

Abstract:

Magnetic Resonance Imaging (MRI) is a vital imaging technique used in both clinical and pre-clinical settings to obtain detailed anatomical and functional information. However, MRI scans can be expensive and time-consuming, and they often require anesthetics to keep animals still during the imaging process. Prolonged or repeated exposure to anesthetics can have adverse effects on animals, including physiological alterations and potential toxicity, so minimizing the duration and frequency of anesthesia is crucial for the well-being of research animals. In recent years, various sampling trajectories have been investigated to reduce the number of MRI measurements, leading to shorter scanning times and minimizing the animals' exposure to the effects of anesthetics. Compressed sensing (CS) and sampling trajectories such as Cartesian, spiral, and radial have emerged as powerful tools to reduce MRI data while preserving diagnostic quality. This work aims to apply CS with Cartesian, spiral, and radial sampling trajectories to the reconstruction of mouse abdominal MRI sub-sampled at levels below that defined by the Nyquist theorem. The methodology consists of using a fully sampled reference MRI of a female C57BL/6 mouse, acquired experimentally in a 4.7 Tesla small-animal MRI scanner using spin echo pulse sequences. The image is down-sampled along Cartesian, radial, and spiral sampling paths and then reconstructed by CS. The quality of the reconstructed images is objectively assessed by three quality assessment techniques: RMSE (root mean square error), PSNR (peak signal-to-noise ratio), and SSIM (structural similarity index measure). The utilization of optimized sampling trajectories and the CS technique has demonstrated the potential for a significant reduction, of up to 70%, in image data acquisition. This result translates into shorter scan times, minimizing the duration and frequency of anesthesia administration and reducing the potential risks associated with it.
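The three quality metrics named above can be computed as in the sketch below, which scores a reconstruction against a fully sampled reference using scikit-image. The images here are synthetic stand-ins for the mouse abdominal MRI, not the study's data.

```python
# Minimal sketch (synthetic images): RMSE, PSNR, and SSIM between a
# reference image and a CS reconstruction, via scikit-image.
import numpy as np
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

rng = np.random.default_rng(3)
reference = rng.uniform(0.0, 1.0, (256, 256))
reconstruction = np.clip(reference + rng.normal(0, 0.05, (256, 256)), 0, 1)

rmse = np.sqrt(mean_squared_error(reference, reconstruction))
psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=1.0)
ssim = structural_similarity(reference, reconstruction, data_range=1.0)
print(f"RMSE {rmse:.4f}, PSNR {psnr:.1f} dB, SSIM {ssim:.3f}")
```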

Keywords: compressed sensing, magnetic resonance, sampling trajectories, small animals

Procedia PDF Downloads 44
271 A Laser Instrument Rapid-E+ for Real-Time Measurements of Airborne Bioaerosols Such as Bacteria, Fungi, and Pollen

Authors: Minghui Zhang, Sirine Fkaier, Sabri Fernana, Svetlana Kiseleva, Denis Kiselev

Abstract:

The real-time identification of bacteria and fungi is difficult because they emit much weaker signals than pollen. In 2020, Plair developed Rapid-E+, which extends the abilities of Rapid-E to detect smaller bioaerosols, such as bacteria and fungal spores with diameters down to 0.3 µm, while keeping similar or even better capability for measurements of large bioaerosols like pollen. Rapid-E+ enables simultaneous measurements of (1) time-resolved, polarization- and angle-dependent Mie scattering patterns, (2) fluorescence spectra resolved in 16 channels, and (3) the fluorescence lifetime of individual particles. Moreover, (4) it provides 2D Mie scattering images which give full information on particle morphology. The parameters of every single bioaerosol aspirated into the instrument are subsequently analysed by machine learning. First, pure species of microbes, e.g., Bacillus subtilis (a bacterium) and Penicillium chrysogenum (a species of fungal spores), were aerosolized in a bioaerosol chamber for Rapid-E+ training. Afterwards, we tested microbes at different concentrations. Several steps of data analysis were used to classify and identify the microbes; all single particles were analysed by their light scattering and fluorescence parameters as follows. (1) A smart filter block removes non-microbial particles. (2) A classification algorithm verifies, against the calibration data, that the filtered particles are microbes. (3) A user-defined probability threshold assigns each particle a probability of being a microbe, ranging from 0 to 100%. We demonstrate how Rapid-E+ simultaneously identified microbes based on the results for Bacillus subtilis (bacteria) and Penicillium chrysogenum (fungal spores). Using machine learning, Rapid-E+ achieved an identification precision of 99% against the background; further classification gave precisions of 87% and 89% for Bacillus subtilis and Penicillium chrysogenum, respectively. The developed algorithm was subsequently used to evaluate the performance of microbe classification and quantification in real time: the bacteria and fungi were aerosolized again in the chamber at different concentrations, and Rapid-E+ classified the different types of microbes and then quantified them in real time. Rapid-E+ can also identify pollen down to the species level, with similar or even better performance than the previous version (Rapid-E). Rapid-E+ is therefore an all-in-one instrument which classifies and quantifies not only pollen but also bacteria and fungi. Based on the machine learning platform, users can further develop proprietary algorithms for specific microbes (e.g., virus aerosols) and other aerosols (e.g., combustion-related particles that contain polycyclic aromatic hydrocarbons).
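The classify-then-threshold logic of steps (2)-(3) can be sketched with a generic classifier, as below. This is not Plair's proprietary algorithm; the feature vectors, the random-forest choice, and the 0.85 confidence cut are all illustrative assumptions.

```python
# Minimal sketch (hypothetical features, generic classifier): per-particle
# classification with a user-defined probability threshold, as in steps
# (2) and (3) above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
X_train = rng.normal(0, 1, (600, 16))           # 16 fluorescence channels
y_train = rng.integers(0, 2, 600)               # 0 = bacteria, 1 = fungal spore

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

X_new = rng.normal(0, 1, (100, 16))
proba = clf.predict_proba(X_new)                # per-class probability
threshold = 0.85                                # user-defined confidence cut
confident = proba.max(axis=1) >= threshold
print(f"{confident.sum()} of {len(X_new)} particles classified above threshold")
```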

Keywords: bioaerosols, laser-induced fluorescence, Mie-scattering, microorganisms

Procedia PDF Downloads 67
270 AI Predictive Modeling of Excited State Dynamics in OPV Materials

Authors: Pranav Gunhal, Krish Jhurani

Abstract:

This study tackles the significant computational challenge of predicting excited state dynamics in organic photovoltaic (OPV) materials—a pivotal factor in the performance of solar energy solutions. Time-dependent density functional theory (TDDFT), though effective, is computationally prohibitive for larger and more complex molecules. As a solution, the research explores the application of transformer neural networks, a type of artificial intelligence (AI) model known for its superior performance in natural language processing, to predict excited state dynamics in OPV materials. The methodology involves a two-fold process. First, the transformer model is trained on an extensive dataset comprising over 10,000 TDDFT calculations of excited state dynamics from a diverse set of OPV materials. Each training example includes a molecular structure and the corresponding TDDFT-calculated excited state lifetimes and key electronic transitions. Second, the trained model is tested on a separate set of molecules, and its predictions are rigorously compared to independent TDDFT calculations. The results indicate a remarkable degree of predictive accuracy. Specifically, for a test set of 1,000 OPV materials, the transformer model predicted excited state lifetimes with a mean absolute error of 0.15 picoseconds, a negligible deviation from TDDFT-calculated values. The model also correctly identified key electronic transitions contributing to the excited state dynamics in 92% of the test cases, signifying a substantial concordance with the results obtained via conventional quantum chemistry calculations. The practical integration of the transformer model with existing quantum chemistry software was also realized, demonstrating its potential as a powerful tool in the arsenal of materials scientists and chemists. The implementation of this AI model is estimated to reduce the computational cost of predicting excited state dynamics by two orders of magnitude compared to conventional TDDFT calculations. The successful utilization of transformer neural networks to accurately predict excited state dynamics provides an efficient computational pathway for the accelerated discovery and design of new OPV materials, potentially catalyzing advancements in the realm of sustainable energy solutions.
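To make the modeling approach concrete, here is a minimal sketch of a transformer encoder regressing an excited-state lifetime from a tokenized molecular representation. The architecture sizes, vocabulary, and featurization are assumptions for illustration, not the authors' model.

```python
# Minimal sketch (hypothetical architecture): a transformer encoder that
# regresses an excited-state lifetime (ps) from tokenized molecular input.
import torch
import torch.nn as nn

class LifetimeRegressor(nn.Module):
    def __init__(self, vocab_size=128, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)            # predicted lifetime, ps

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))         # (batch, seq, d_model)
        return self.head(h.mean(dim=1)).squeeze(-1)  # pool over the sequence

model = LifetimeRegressor()
tokens = torch.randint(0, 128, (8, 40))              # 8 hypothetical molecules
print(model(tokens).shape)                           # torch.Size([8])
```

In practice such a model would be trained with a regression loss (e.g., mean absolute error, matching the 0.15 ps metric reported above) against TDDFT-calculated lifetimes.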

Keywords: transformer neural networks, organic photovoltaic materials, excited state dynamics, time-dependent density functional theory, predictive modeling

Procedia PDF Downloads 82