Search results for: linear density of reinforcement
93 Early Impact Prediction and Key Factors Study of Artificial Intelligence Patents: A Method Based on LightGBM and Interpretable Machine Learning
Authors: Xingyu Gao, Qiang Wu
Abstract:
Patents play a crucial role in protecting innovation and intellectual property. Early prediction of the impact of artificial intelligence (AI) patents helps researchers and companies allocate resources and make better decisions. Understanding the key factors that influence patent impact can assist researchers in gaining a better understanding of the evolution of AI technology and innovation trends. Therefore, identifying highly impactful patents early and providing support for them holds immeasurable value in accelerating technological progress, reducing research and development costs, and mitigating market positioning risks. Despite the extensive research on AI patents, accurately predicting their early impact remains a challenge. Traditional methods often consider only single factors or simple combinations, failing to comprehensively and accurately reflect the actual impact of patents. This paper utilized the artificial intelligence patent database from the United States Patent and Trademark Office and the Len.org patent retrieval platform to obtain specific information on 35,708 AI patents. Using six machine learning models, namely Multiple Linear Regression, Random Forest Regression, XGBoost Regression, LightGBM Regression, Support Vector Machine Regression, and K-Nearest Neighbors Regression, and using early indicators of patents as features, the paper comprehensively predicted the impact of patents from three aspects: technical, social, and economic. These aspects include the technical leadership of patents, the number of citations they receive, and their shared value. The SHAP (Shapley Additive exPlanations) metric was used to explain the predictions of the best model, quantifying the contribution of each feature to the model's predictions. The experimental results on the AI patent dataset indicate that, for all three target variables, LightGBM regression shows the best predictive performance. Specifically, patent novelty has the greatest impact on predicting the technical impact of patents and has a positive effect. Additionally, the number of owners, the number of backward citations, and the number of independent claims are all crucial and have a positive influence on predicting technical impact. In predicting the social impact of patents, the number of applicants is considered the most critical input variable, but it has a negative impact on social impact. At the same time, the number of independent claims, the number of owners, and the number of backward citations are also important predictive factors, and they have a positive effect on social impact. For predicting the economic impact of patents, the number of independent claims is considered the most important factor and has a positive impact on economic impact. The number of owners, the number of sibling countries or regions, and the size of the extended patent family also have a positive influence on economic impact. The study primarily relies on data from the United States Patent and Trademark Office for artificial intelligence patents. Future research could consider more comprehensive data sources, including artificial intelligence patent data, from a global perspective. While the study takes into account various factors, there may still be other important features not considered. In the future, factors such as patent implementation and market applications may be considered as they could have an impact on the influence of patents.Keywords: patent influence, interpretable machine learning, predictive models, SHAP
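The modelling-plus-explanation pipeline described above (gradient-boosted regression on early patent indicators, interpreted with SHAP) can be sketched as follows. The feature names and synthetic data are hypothetical placeholders, not the study's actual variables or results.

```python
# Minimal sketch of a LightGBM-regression + SHAP workflow.
# Feature names and the synthetic data are placeholders, not the study's dataset.
import numpy as np
import pandas as pd
import lightgbm as lgb
import shap
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "novelty": rng.random(n),
    "n_owners": rng.integers(1, 5, n),
    "n_backward_citations": rng.integers(0, 50, n),
    "n_independent_claims": rng.integers(1, 20, n),
})
# Synthetic target standing in for a patent-impact measure (e.g., forward citations).
y = 2.0 * X["novelty"] + 0.3 * X["n_backward_citations"] + rng.normal(0, 1, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = lgb.LGBMRegressor(n_estimators=200, learning_rate=0.05)
model.fit(X_train, y_train)

# SHAP quantifies each feature's contribution to individual predictions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print("mean |SHAP| per feature:",
      dict(zip(X.columns, np.abs(shap_values).mean(axis=0).round(3))))
```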
Procedia PDF Downloads 50
92 International Coffee Trade in Solidarity with the Zapatista Rebellion: Anthropological Perspectives on Commercial Ethics within Political Antagonistic Movements
Authors: Miria Gambardella
Abstract:
The influence of solidarity demonstrations towards the Zapatista National Liberation Army has been constantly present over the years, both locally and internationally, guaranteeing visibility to the cause, shaping the movement’s choices, and influencing its hopes of impact worldwide. Most of the coffee produced by the autonomous cooperatives from Chiapas is exported, therefore making coffee trade the main income from international solidarity networks. The question arises about the implications of the relations established between the communities in resistance in Southeastern Mexico and international solidarity movements, specifically on the strategies adopted to conciliate army's demands for autonomy and economic asymmetries between Zapatista cooperatives producing coffee and European collectives who hold purchasing power. In order to deepen the inquiry on those topics, a year-long multi-site investigation was carried out. The first six months of fieldwork were based in Barcelona, where Zapatista coffee was first traded in Spain and where one of the historical and most important European solidarity groups can be found. The last six months of fieldwork were carried out directly in Chiapas, in contact with coffee producers, Zapatista political authorities, international activists as well as vendors, and the rest of the network implicated in coffee production, roasting, and sale. The investigation was based on qualitative research methods, including participatory observation, focus groups, and semi-structured interviews. The analysis did not only focus on retracing the steps of the market chain as if it could be considered a linear and unilateral process, but it rather aimed at exploring actors’ reciprocal perceptions, roles, and dynamics of power. Demonstrations of solidarity and the money circulation they imply aim at changing the system in place and building alternatives, among other things, on the economic level. This work analyzes the formulation of discourse and the organization of solidarity activities that aim at building opportunities for action within a highly politicized economic sphere to which access must be regularly legitimized. The meaning conveyed by coffee is constructed on a symbolic level by the attribution of moral criteria to transactions. The latter participate in the construction of imaginaries that circulate through solidarity movements with the Zapatista rebellion. Commercial exchanges linked to solidarity networks turned out to represent much more than monetary transactions. The social, cultural, and political spheres are invested by ethics, which penetrates all aspects of militant action. It is at this level that the boundaries of different collective actors connect, contaminating each other: merely following the money flow would have been limiting in order to account for a reality within which imaginary is one of the main currencies. The notions of “trust”, “dignity” and “reciprocity” are repeatedly mobilized to negotiate discontinuous and multidirectional flows in the attempt to balance and justify commercial relations in a politicized context that characterizes its own identity through demonizing “market economy” and its dehumanizing powers.Keywords: coffee trade, economic anthropology, international cooperation, Zapatista National Liberation Army
Procedia PDF Downloads 88
91 A Geospatial Approach to Coastal Vulnerability Using Satellite Imagery and Coastal Vulnerability Index: A Case Study Mauritius
Authors: Manta Nowbuth, Marie Anais Kimberley Therese
Abstract:
The vulnerability of coastal areas to storm surges is a critical global concern. The increasing frequency and intensity of extreme weather events have increased the risks faced by communities living along coastlines worldwide. Small Island Developing States (SIDS) are exceptionally vulnerable: their coastal regions, where human habitation meets natural forces, are on the frontlines of climate-induced challenges, and the intensification of storm surges underscores the urgent need for a comprehensive understanding of coastal vulnerability. With limited landmass, low-lying terrain, and reliance on coastal resources, SIDS face amplified consequences from storm surges, and the delicate balance between human activities and environmental dynamics in these island nations increases the urgency of tailored strategies for assessing and mitigating coastal vulnerability. This research combines satellite imagery analysis with a coastal vulnerability index to evaluate the vulnerability of coastal communities in Mauritius. The satellite imagery analysis uses Sentinel imagery, the modified normalised difference water index, classification techniques, and the DSAS add-on to quantify the extent of shoreline erosion or accretion, providing a spatial perspective on coastal vulnerability. The Coastal Vulnerability Index (CVI) is applied using the Gornitz et al. formula; the index considers factors such as coastal slope, sea level rise, mean significant wave height, and tidal range. Weighted assessments identify regions with varying levels of vulnerability, ranging from low to high. The study was carried out in Rivière des Galets, a village in the south of Mauritius with a population of about 500 people over an area of 60,000 m². Because the southern coast of Mauritius is exposed to the open Indian Ocean, the village is vulnerable to swells: swells generated by the southeast trade winds can produce large waves and rough sea conditions along the southern coastline, affecting coastal activities including fishing, tourism, and coastal infrastructure. On the one hand, the results show that along a 123 km stretch of coastline the linear regression rate over the five-year span varies from -24.1 m/yr to 8.2 m/yr; the maximum rate of change in terms of eroded land is -24 m/yr and the maximum rate of accretion is 8.2 m/yr. On the other hand, the coastal vulnerability index varies from 9.1 to 45.6 and was categorised into low, moderate, high, and very high risk zones. Regions that lack protective barriers and consist of sandy beaches fall into the high-risk category; it is therefore imperative to prioritise high-risk regions for immediate attention and intervention, as they are most likely to be exposed to coastal hazards and climate change impacts, which demand proactive measures for enhanced resilience and sustainable adaptation strategies.
Keywords: climate change, coastal vulnerability, disaster management, remote sensing, satellite imagery, storm surge
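The Gornitz-style CVI computation amounts to ranking each variable and taking the square root of the product of ranks divided by the number of variables. A minimal sketch follows; the variable list and ranks are illustrative, not the study's measured values.

```python
# Gornitz-style Coastal Vulnerability Index sketch:
# each variable is first ranked 1 (low risk) to 5 (very high risk),
# then CVI = sqrt(product of ranks / number of variables).
import math

def cvi(ranks):
    """Compute the CVI from a list of risk ranks (1-5)."""
    return math.sqrt(math.prod(ranks) / len(ranks))

# Example transect ranks (hypothetical): [shoreline change, coastal slope,
# sea level rise, significant wave height, tidal range]
transect_ranks = [4, 3, 4, 5, 2]
print(f"CVI = {cvi(transect_ranks):.1f}")  # higher values -> more vulnerable
```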
Procedia PDF Downloads 13
90 Bio-Hub Ecosystems: Profitability through Circularity for Sustainable Forestry, Energy, Agriculture and Aquaculture
Authors: Kimberly Samaha
Abstract:
The Bio-Hub Ecosystem model was developed to address a critical area of concern within the global energy market regarding biomass as a feedstock for power plants. Yet the lack of an economically-viable business model for bioenergy facilities has resulted in the continuation of idled and decommissioned plants. This study analyzed data and submittals to the Born Global Maine Innovation Challenge. The Innovation Challenge was a global innovation challenge to identify process innovations that could address a ‘whole-tree’ approach of maximizing the products, byproducts, energy value and process slip-streams into a circular zero-waste design. Participating companies were at various stages of developing bioproducts and included biofuels, lignin-based products, carbon capture platforms and biochar used as both a filtration medium and as a soil amendment product. This case study shows the QCA (Qualitative Comparative Analysis) methodology of the prequalification process and the resulting techno-economic model that was developed for the maximizing profitability of the Bio-Hub Ecosystem through continuous expansion of system waste streams into valuable process inputs for co-hosts. A full site plan for the integration of co-hosts (biorefinery, land-based shrimp and salmon aquaculture farms, a tomato green-house and a hops farm) at an operating forestry-based biomass to energy plant in West Enfield, Maine USA. This model and process for evaluating the profitability not only proposes models for integration of forestry, aquaculture and agriculture in cradle-to-cradle linkages of what have typically been linear systems, but the proposal also allows for the early measurement of the circularity and impact of resource use and investment risk mitigation, for these systems. In this particular study, profitability is assessed at two levels CAPEX (Capital Expenditures) and in OPEX (Operating Expenditures). Given that these projects start with repurposing facilities where the industrial level infrastructure is already built, permitted and interconnected to the grid, the addition of co-hosts first realizes a dramatic reduction in permitting, development times and costs. In addition, using the biomass energy plant’s waste streams such as heat, hot water, CO₂ and fly ash as valuable inputs to their operations and a significant decrease in the OPEX costs, increasing overall profitability to each of the co-hosts bottom line. This case study utilizes a proprietary techno-economic model to demonstrate how utilizing waste streams of a biomass energy plant and/or biorefinery, results in significant reduction in OPEX for both the biomass plants and the agriculture and aquaculture co-hosts. Economically viable Bio-Hubs with favorable environmental and community impacts may prove critical in garnering local and federal government support for pilot programs and more wide-scale adoption, especially for those living in severely economically depressed rural areas where aging industrial sites have been shuttered and local economies devastated.Keywords: bio-economy, biomass energy, financing, zero-waste
Procedia PDF Downloads 134
89 Thermodynamic Modeling of Cryogenic Fuel Tanks with a Model-Based Inverse Method
Authors: Pedro A. Marques, Francisco Monteiro, Alessandra Zumbo, Alessia Simonini, Miguel A. Mendez
Abstract:
Cryogenic fuels such as Liquid Hydrogen (LH₂) must be transported and stored at extremely low temperatures. Without expensive active cooling solutions, preventing fuel boil-off over time is impossible. Hence, one must resort to venting systems at the cost of significant energy and fuel mass loss. These losses increase significantly in propellant tanks installed on vehicles, as the presence of external accelerations induces sloshing. Sloshing increases heat and mass transfer rates and leads to significant pressure oscillations, which might further trigger propellant venting. To make LH₂ economically viable, it is essential to minimize these factors by using advanced control techniques. However, these require accurate modelling and a full understanding of the tank's thermodynamics. The present research aims to implement a simple thermodynamic model capable of predicting the state of a cryogenic fuel tank under different operating conditions (i.e., filling, pressurization, fuel extraction, long-term storage, and sloshing). Since this model relies on a set of closure parameters to drive the system's transient response, it must be calibrated using experimental or numerical data. This work focuses on the former approach, wherein the model is calibrated through an experimental campaign carried out on a reduced-scale model of a cryogenic tank. The thermodynamic model of the system is composed of three control volumes: the ullage, the liquid, and the insulating walls. Under this lumped formulation, the governing equations are derived from energy and mass balances in each region, with mass-averaged properties assigned to each of them. The gas-liquid interface is treated as an infinitesimally thin region across which both phases can exchange mass and heat. This results in a coupled system of ordinary differential equations, which must be closed with heat and mass transfer coefficients between each control volume. These parameters are linked to the system evolution via empirical relations derived from different operating regimes of the tank. The derivation of these relations is carried out using an inverse method to find the optimal relations that allow the model to reproduce the available data. This approach extends classic system identification methods beyond linear dynamical systems via a nonlinear optimization step. Thanks to the data-driven assimilation of the closure problem, the resulting model accurately predicts the evolution of the tank's thermodynamics at a negligible computational cost. The lumped model can thus be easily integrated with other submodels to perform complete system simulations in real time. Moreover, by setting the model in a dimensionless form, a scaling analysis allowed us to relate the tested configurations to a representative full-size tank for naval applications. It was thus possible to compare the relative importance of different transport phenomena between the laboratory model and the full-size prototype among the different operating regimes.Keywords: destratification, hydrogen, modeling, pressure-drop, pressurization, sloshing, thermodynamics
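A minimal sketch of the lumped formulation is given below: two mass-averaged temperature states exchanging heat through closure coefficients, integrated as a coupled ODE system. All property values and coefficients are illustrative placeholders, not the calibrated model of the study.

```python
# Toy 0-D tank model: ullage and liquid temperatures exchange heat with the
# wall and with each other through assumed closure coefficients (UA terms).
import numpy as np
from scipy.integrate import solve_ivp

UA_wall_ullage, UA_wall_liquid, UA_interface = 0.5, 2.0, 1.0   # W/K, assumed
C_ullage, C_liquid = 5e2, 5e4                                  # lumped heat capacities, J/K
T_wall = 120.0                                                 # K, fixed wall temperature (toy case)

def rhs(t, y):
    T_u, T_l = y
    dT_u = (UA_wall_ullage * (T_wall - T_u) - UA_interface * (T_u - T_l)) / C_ullage
    dT_l = (UA_wall_liquid * (T_wall - T_l) + UA_interface * (T_u - T_l)) / C_liquid
    return [dT_u, dT_l]

sol = solve_ivp(rhs, (0.0, 3600.0), y0=[22.0, 20.0], max_step=10.0)
print("final ullage / liquid temperatures [K]:", sol.y[0, -1], sol.y[1, -1])
```

In the approach described above, coefficients of this kind are not fixed constants but closure parameters recovered by the inverse, nonlinear-optimization step against the experimental data.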
Procedia PDF Downloads 95
88 Deep Learning for SAR Images Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR Systems are often considered to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and experimented reconstruction techniques are undeniable, the existing methods are, however, mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. From the experiments, the reconstruction performance of the proposed framework is superior to conventional reconstruction methods. 
The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
Keywords: SAR image, polarimetric SAR image, convolutional neural network, deep learning, deep neural network
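The abstract does not specify the network architecture; a minimal PyTorch sketch of the general idea (hybrid-polarimetric channels in, a full-polarimetric representation out, trained against co-registered fully polarimetric acquisitions) might look like the following. The channel counts, layer sizes, and plain MSE loss are assumptions, not the authors' design.

```python
# Illustrative fully convolutional mapping from 2-channel hybrid-pol input
# to an assumed 9-channel full-pol representation, trained with an MSE stand-in
# for the paper's multi-term loss.
import torch
import torch.nn as nn

class PolAugmentNet(nn.Module):
    def __init__(self, in_ch=2, out_ch=9):  # output channel count is assumed
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = PolAugmentNet()
x = torch.randn(4, 2, 64, 64)           # batch of hybrid-pol patches (synthetic)
target = torch.randn(4, 9, 64, 64)      # corresponding full-pol patches (synthetic)
loss = nn.MSELoss()(model(x), target)   # the paper combines several loss terms; MSE is a stand-in
loss.backward()
print("loss:", float(loss))
```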
Procedia PDF Downloads 71
87 Bio-Hub Ecosystems: Expansion of Traditional Life Cycle Analysis Metrics to Include Zero-Waste Circularity Measures
Authors: Kimberly Samaha
Abstract:
In order to attract new types of investors into the emerging Bio-Economy, a new set of metrics and measurement system is needed to better quantify the environmental, social and economic impacts of circular zero-waste design. The Bio-Hub Ecosystem model was developed to address a critical area of concern within the global energy market regarding the use of biomass as a feedstock for power plants. Lack of an economically-viable business model for bioenergy facilities has resulted in the continuation of idled and decommissioned plants. In particular, the forestry-based plants which have been an invaluable outlet for woody biomass surplus, forest health improvement, timber production enhancement, and especially reduction of wildfire risk. This study looked at repurposing existing biomass-energy plants into Circular Zero-Waste Bio-Hub Ecosystems. A Bio-Hub model that first targets a ‘whole-tree’ approach and then looks at the circular economics of co-hosting diverse industries (wood processing, aquaculture, agriculture) in the vicinity of the Biomass Power Plants facilities. It proposes not only models for integration of forestry, aquaculture, and agriculture in cradle-to-cradle linkages of what have typically been linear systems, but the proposal also allows for the early measurement of the circularity and impact of resource use and investment risk mitigation, for these systems. Typically, life cycle analyses measure environmental impacts of different industrial production stages and are not integrated with indicators of material use circularity. This concept paper proposes the further development of a new set of metrics that would illustrate not only the typical life-cycle analysis (LCA), which shows the reduction in greenhouse gas (GHG) emissions, but also the zero-waste circularity measures of mass balance of the full value chain of the raw material and energy content/caloric value. These new measures quantify key impacts in making hyper-efficient use of natural resources and eliminating waste to landfills. The project utilized traditional LCA using the GREET model where the standalone biomass energy plant case was contrasted with the integration of a jet-fuel biorefinery. The methodology was then expanded to include combinations of co-hosts that optimize the life cycle of woody biomass from tree to energy, CO₂, heat and wood ash both from an energy/caloric value and for mass balance to include reuse of waste streams which are typically landfilled. The major findings of both a formal LCA study resulted in the masterplan for the first Bio-Hub to be built in West Enfield, Maine. Bioenergy facilities are currently at a critical juncture where they have an opportunity to be repurposed into efficient, profitable and socially responsible investments, or be idled and scrapped. If proven as a model, the expedited roll-out of these innovative scenarios can set a new standard for circular zero-waste projects that advance the critical transition from the current ‘take-make-dispose’ paradigm inherent in the energy, forestry and food industries to a more sustainable bio-economy paradigm where waste streams become valuable inputs, supporting local and rural communities in simple, sustainable ways.Keywords: bio-economy, biomass energy, financing, metrics
Procedia PDF Downloads 158
86 Shear Strength Characterization of Coal Mine Spoil in Very-High Dumps with Large Scale Direct Shear Testing
Authors: Leonie Bradfield, Stephen Fityus, John Simmons
Abstract:
The shearing behavior of current and planned coal mine spoil dumps up to 400m in height is studied using large-sample-high-stress direct shear tests performed on a range of spoils common to the coalfields of Eastern Australia. The motivation for the study is to address industry concerns that some constructed spoil dump heights ( > 350m) are exceeding the scale ( ≤ 120m) for which reliable design information exists, and because modern geotechnical laboratories are not equipped to test representative spoil specimens at field-scale stresses. For more than two decades, shear strength estimation for spoil dumps has been based on either infrequent, very small-scale tests where oversize particles are scalped to comply with device specimen size capacity such that the influence of prototype-sized particles on shear strength is not captured; or on published guidelines that provide linear shear strength envelopes derived from small-scale test data and verified in practice by slope performance of dumps up to 120m in height. To date, these published guidelines appear to have been reliable. However, in the field of rockfill dam design there is a broad acceptance of a curvilinear shear strength envelope, and if this is applicable to coal mine spoils, then these industry-accepted guidelines may overestimate the strength and stability of dumps at higher stress levels. The pressing need to rationally define the shearing behavior of more representative spoil specimens at field-scale stresses led to the successful design, construction and operation of a large direct shear machine (LDSM) and its subsequent application to provide reliable design information for current and planned very-high dumps. The LDSM can test at a much larger scale, in terms of combined specimen size (720mm x 720mm x 600mm) and stress (σn up to 4.6MPa), than has ever previously been achieved using a direct shear machine for geotechnical testing of rockfill. The results of an extensive LDSM testing program on a wide range of coal-mine spoils are compared to a published framework that widely accepted by the Australian coal mining industry as the standard for shear strength characterization of mine spoil. A critical outcome is that the LDSM data highlights several non-compliant spoils, and stress-dependent shearing behavior, for which the correct application of the published framework will not provide reliable shear strength parameters for design. Shear strength envelopes developed from the LDSM data are also compared with dam engineering knowledge, where failure envelopes of rockfills are curved in a concave-down manner. The LDSM data indicates that shear strength envelopes for coal-mine spoils abundant with rock fragments are not in fact curved and that the shape of the failure envelope is ultimately determined by the strength of rock fragments. Curvilinear failure envelopes were found to be appropriate for soil-like spoils containing minor or no rock fragments, or hard-soil aggregates.Keywords: coal mine, direct shear test, high dump, large scale, mine spoil, shear strength, spoil dump
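The contrast between a linear (Mohr-Coulomb) envelope and a concave-down, curvilinear envelope of the kind used in rockfill dam practice can be illustrated with a small fitting sketch. The stress-strength pairs below are synthetic, not LDSM results.

```python
# Fit a linear Mohr-Coulomb envelope and a curvilinear (power-law) envelope,
# tau = A * sigma_n**b, to direct-shear results. Data are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

sigma_n = np.array([0.1, 0.3, 0.6, 1.2, 2.3, 4.6])    # normal stress, MPa
tau = np.array([0.12, 0.30, 0.55, 1.00, 1.75, 3.20])  # peak shear stress, MPa

# Linear: tau = c + sigma_n * tan(phi)
lin = linregress(sigma_n, tau)
print(f"c = {lin.intercept:.2f} MPa, phi = {np.degrees(np.arctan(lin.slope)):.1f} deg")

# Curvilinear (power-law) envelope
popt, _ = curve_fit(lambda s, A, b: A * s**b, sigma_n, tau, p0=[1.0, 0.9])
print(f"A = {popt[0]:.2f}, b = {popt[1]:.2f}  (b < 1 indicates concave-down curvature)")
```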
Procedia PDF Downloads 162
85 Polysaccharide Polyelectrolyte Complexation: An Engineering Strategy for the Development of Commercially Viable Sustainable Materials
Authors: Jeffrey M. Catchmark, Parisa Nazema, Caini Chen, Wei-Shu Lin
Abstract:
Sustainable and environmentally compatible materials are needed for a wide variety of volume commercial applications. Current synthetic materials such as plastics, fluorochemicals (such as PFAS), adhesives and resins in form of sheets, laminates, coatings, foams, fibers, molded parts and composites are used for countless products such as packaging, food handling, textiles, biomedical, construction, automotive and general consumer devices. Synthetic materials offer distinct performance advantages including stability, durability and low cost. These attributes are associated with the physical and chemical properties of these materials that, once formed, can be resistant to water, oils, solvents, harsh chemicals, salt, temperature, impact, wear and microbial degradation. These advantages become disadvantages when considering the end of life of these products which generate significant land and water pollution when disposed of and few are recycled. Agriculturally and biologically derived polymers offer the potential of remediating these environmental and life-cycle difficulties, but face numerous challenges including feedstock supply, scalability, performance and cost. Such polymers include microbial biopolymers like polyhydroxyalkanoates and polyhydroxbutirate; polymers produced using biomonomer chemical synthesis like polylactic acid; proteins like soy, collagen and casein; lipids like waxes; and polysaccharides like cellulose and starch. Although these materials, and combinations thereof, exhibit the potential for meeting some of the performance needs of various commercial applications, only cellulose and starch have both the production feedstock volume and cost to compete with petroleum derived materials. Over 430 million tons of plastic is produced each year and plastics like low density polyethylene cost ~$1500 to $1800 per ton. Over 400 million tons of cellulose and over 100 million tons of starch are produced each year at a volume cost as low as ~$500 to $1000 per ton with the capability of increased production. Cellulose and starches, however, are hydroscopic materials that do not exhibit the needed performance in most applications. Celluloses and starches can be chemically modified to contain positive and negative surface charges and such modified versions of these are used in papermaking, foods and cosmetics. Although these modified polysaccharides exhibit the same performance limitations, recent research has shown that composite materials comprised of cationic and anionic polysaccharides in polyelectrolyte complexation exhibit significantly improved performance including stability in diverse environments. Moreover, starches with added plasticizers can exhibit thermoplasticity, presenting the possibility of improved thermoplastic starches when comprised of starches in polyelectrolyte complexation. In this work, the potential for numerous volume commercial products based on polysaccharide polyelectrolyte complexes (PPCs) will be discussed, including the engineering design strategy used to develop them. Research results will be detailed including the development and demonstration of starch PPC compositions for paper coatings to replace PFAS; adhesives; foams for packaging, insulation and biomedical applications; and thermoplastic starches. In addition, efforts to demonstrate the potential for volume manufacturing with industrial partners will be discussed.Keywords: biomaterials engineering, commercial materials, polysaccharides, sustainable materials
Procedia PDF Downloads 18
84 An Infrared Inorganic Scintillating Detector Applied in Radiation Therapy
Authors: Sree Bash Chandra Debnath, Didier Tonneau, Carole Fauquet, Agnes Tallet, Julien Darreon
Abstract:
Purpose: Inorganic scintillating dosimetry is the most recent promising technique to solve several dosimetric issues and provide quality assurance in radiation therapy. Despite several advantages, the major issue of using scintillating detectors is the Cerenkov effect, typically induced in the visible emission range. In this context, the purpose of this research work is to evaluate the performance of a novel infrared inorganic scintillator detector (IR-ISD) in the radiation therapy treatment to ensure Cerenkov free signal and the best matches between the delivered and prescribed doses during treatment. Methods: A simple and small-scale infrared inorganic scintillating detector of 100 µm diameter with a sensitive scintillating volume of 2x10-6 mm3 was developed. A prototype of the dose verification system has been introduced based on PTIR1470/F (provided by Phosphor Technology®) material used in the proposed novel IR-ISD. The detector was tested on an Elekta LINAC system tuned at 6 MV/15MV and a brachytherapy source (Ir-192) used in the patient treatment protocol. The associated dose rate was measured in count rate (photons/s) using a highly sensitive photon counter (sensitivity ~20ph/s). Overall measurements were performed in IBATM water tank phantoms by following international Technical Reports series recommendations (TRS 381) for radiotherapy and TG43U1 recommendations for brachytherapy. The performance of the detector was tested through several dosimetric parameters such as PDD, beam profiling, Cerenkov measurement, dose linearity, dose rate linearity repeatability, and scintillator stability. Finally, a comparative study is also shown using a reference microdiamond dosimeter, Monte-Carlo (MC) simulation, and data from recent literature. Results: This study is highlighting the complete removal of the Cerenkov effect especially for small field radiation beam characterization. The detector provides an entire linear response with the dose in the 4cGy to 800 cGy range, independently of the field size selected from 5 x 5 cm² down to 0.5 x 0.5 cm². A perfect repeatability (0.2 % variation from average) with day-to-day reproducibility (0.3% variation) was observed. Measurements demonstrated that ISD has superlinear behavior with dose rate (R2=1) varying from 50 cGy/s to 1000 cGy/s. PDD profiles obtained in water present identical behavior with a build-up maximum depth dose at 15 mm for different small fields irradiation. A low dimension of 0.5 x 0.5 cm² field profiles have been characterized, and the field cross profile presents a Gaussian-like shape. The standard deviation (1σ) of the scintillating signal remains within 0.02% while having a very low convolution effect, thanks to lower sensitive volume. Finally, during brachytherapy, a comparison with MC simulations shows that considering energy dependency, measurement agrees within 0.8% till 0.2 cm source to detector distance. Conclusion: The proposed scintillating detector in this study shows no- Cerenkov radiation and efficient performance for several radiation therapy measurement parameters. Therefore, it is anticipated that the IR-ISD system can be promoted to validate with direct clinical investigations, such as appropriate dose verification and quality control in the Treatment Planning System (TPS).Keywords: IR-Scintillating detector, dose measurement, micro-scintillators, Cerenkov effect
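The dose-linearity result (a linear count-rate response from 4 cGy to 800 cGy) corresponds to a simple regression check of photon-counter readings against delivered dose. A sketch with hypothetical readings:

```python
# Dose-linearity check: fit measured count rate against delivered dose and
# report the coefficient of determination. Values are illustrative only.
import numpy as np

dose_cGy = np.array([4, 50, 100, 200, 400, 800], dtype=float)
counts_per_s = np.array([90, 1120, 2260, 4490, 9010, 17950], dtype=float)  # hypothetical

slope, intercept = np.polyfit(dose_cGy, counts_per_s, 1)
pred = slope * dose_cGy + intercept
r2 = 1 - np.sum((counts_per_s - pred) ** 2) / np.sum((counts_per_s - counts_per_s.mean()) ** 2)
print(f"slope = {slope:.1f} counts/s per cGy, R^2 = {r2:.4f}")
```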
Procedia PDF Downloads 183
83 High School Gain Analytics From National Assessment Program – Literacy and Numeracy and Australian Tertiary Admission Rank Linkage
Authors: Andrew Laming, John Hattie, Mark Wilson
Abstract:
Nine Queensland independent high schools provided deidentified student-matched ATAR and NAPLAN data for all 1217 ATAR graduates since 2020 who also sat NAPLAN at the school. Graduating cohorts from the nine schools contained a mean of 100 ATAR graduates with previous NAPLAN data from their school. Excluded were vocational students (mean = 27) and any ATAR graduates without NAPLAN data (mean = 20). Based on Index of Community Socio-Educational Access (ICSEA) prediction, all schools had larger than predicted proportions of their students graduating with ATARs. An additional 173 students (14%) did not release their ATARs to their school, requiring this data to be inferred by schools. Gain was established by first converting each student's strongest NAPLAN domain to a statewide percentile, then subtracting this result from the final ATAR. The resulting 'percentile shift' was corrected for plausible ATAR participation at each NAPLAN level. The strongest NAPLAN domain had the highest correlation with ATAR (R² = 0.58). Results: School mean NAPLAN scores fitted ICSEA closely (R² = 0.97). Schools achieved a mean cohort gain of two ATAR rankings, but only 66% of students gained. This ranged from 46% of top-NAPLAN-decile students gaining, rising to 75% achieving gains outside the top decile. The 54% of top-decile students whose ATAR fell short of prediction lost a mean of 4.0 percentiles (or 6.2 percentiles prior to correction for regression to the mean). 71% of students in smaller schools gained, compared to 63% in larger schools. NAPLAN variability in each of the 13 ICSEA-1100 cohorts was 17%, with both intra-school and inter-school variation of these values extremely low (0.3% to 1.8%). Mean ATAR change between years in each school was just 1.1 ATAR ranks. This suggests consecutive school cohorts and ICSEA-similar schools share very similar distributions and outcomes over time. Quantile analysis of the NAPLAN/ATAR relationship revealed heteroscedasticity, but splines offered little additional benefit over simple linear regression. The NAPLAN/ATAR R² was 0.33. Discussion: Standardised data like NAPLAN and ATAR offer educators a simple no-cost progression metric to analyse performance in conjunction with their internal test results. Change is expressed in percentiles, or ATAR shift per student, which is intuitive for lay readers. Findings may also reduce ATAR/vocational stream mismatch, reveal the proportions of cohorts meeting or falling short of expectation, and demonstrate by how much. Finally, 'crashed' ATARs well below expectation are revealed, which schools can reasonably work to minimise. The percentile shift method is neither value-add nor a growth percentile. In the absence of exit NAPLAN testing, this metric is unable to discriminate academic gain from legitimate ATAR-maximising strategies. But by controlling for ICSEA, ATAR proportion variation, and student mobility, it uncovers progression-to-ATAR metrics which are not currently publicly available. However achieved, ATAR maximisation is a sought-after private good. So long as standardised nationwide data is available, this analysis offers useful analytics for educators and reasonable predictivity when counselling subsequent cohorts about their ATAR prospects.
Keywords: NAPLAN, ATAR, analytics, measurement, gain, performance, data, percentile, value-added, high school, numeracy, reading comprehension, variability, regression to the mean
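The 'percentile shift' metric can be sketched directly. Note this sketch omits the study's correction for plausible ATAR participation at each NAPLAN level, and the statewide score distribution is assumed, not the actual NAPLAN data.

```python
# Percentile-shift sketch: a student's strongest NAPLAN domain score is
# converted to a statewide percentile and subtracted from the final ATAR
# (itself a percentile-like rank). Scores and the state distribution are synthetic.
import numpy as np
from scipy import stats

statewide_naplan = np.random.default_rng(1).normal(600, 70, 50_000)  # assumed distribution

def percentile_shift(strongest_naplan_score, atar):
    naplan_pct = stats.percentileofscore(statewide_naplan, strongest_naplan_score)
    return atar - naplan_pct  # positive = gain relative to NAPLAN expectation

print(round(percentile_shift(strongest_naplan_score=650, atar=88.0), 1))
```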
Procedia PDF Downloads 68
82 Deep Learning Based Polarimetric SAR Images Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring . SAR Systems are often considered to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and experimented reconstruction techniques are undeniable, the existing methods are, however, mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. From the experiments, the reconstruction performance of the proposed framework is superior to conventional reconstruction methods. 
The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry
Procedia PDF Downloads 93
81 Rheological and Microstructural Characterization of Concentrated Emulsions Prepared by Fish Gelatin
Authors: Helen S. Joyner (Melito), Mohammad Anvari
Abstract:
Concentrated emulsions stabilized by proteins are systems of great importance in food, pharmaceutical and cosmetic products. Controlling emulsion rheology is critical for ensuring desired properties during formation, storage, and consumption of emulsion-based products. Studies on concentrated emulsions have focused on rheology of monodispersed systems. However, emulsions used for industrial applications are polydispersed in nature, and this polydispersity is regarded as an important parameter that also governs the rheology of the concentrated emulsions. Therefore, the objective of this study was to characterize rheological (small and large deformation behaviors) and microstructural properties of concentrated emulsions which were not truly monodispersed as usually encountered in food products such as margarines, mayonnaise, creams, spreads, and etc. The concentrated emulsions were prepared at different concentrations of fish gelatin (0.2, 0.4, 0.8% w/v in the whole emulsion system), oil-water ratio 80-20 (w/w), homogenization speed 10000 rpm, and 25oC. Confocal laser scanning microscopy (CLSM) was used to determine the microstructure of the emulsions. To prepare samples for CLSM analysis, FG solutions were stained by Fluorescein isothiocyanate dye. Emulsion viscosity profiles were determined using shear rate sweeps (0.01 to 100 1/s). The linear viscoelastic regions (LVRs) of the emulsions were determined using strain sweeps (0.01 to 100% strain) for each sample. Frequency sweeps were performed in the LVR (0.1% strain) from 0.6 to 100 rad/s. Large amplitude oscillatory shear (LAOS) testing was conducted by collecting raw waveform data at 0.05, 1, 10, and 100% strain at 4 different frequencies (0.5, 1, 10, and 100 rad/s). All measurements were performed in triplicate at 25oC. The CLSM results revealed that increased fish gelatin concentration resulted in more stable oil-in-water emulsions with homogeneous, finely dispersed oil droplets. Furthermore, the protein concentration had a significant effect on emulsion rheological properties. Apparent viscosity and dynamic moduli at small deformations increased with increasing fish gelatin concentration. These results were related to increased inter-droplet network connections caused by increased fish gelatin adsorption at the surface of oil droplets. Nevertheless, all samples showed shear-thinning and weak gel behaviors over shear rate and frequency sweeps, respectively. Lissajous plots, or plots of stress versus strain, and phase lag values were used to determine nonlinear behavior of the emulsions in LAOS testing. Greater distortion in the elliptical shape of the plots followed by higher phase lag values was observed at large strains and frequencies in all samples, indicating increased nonlinear behavior. Shifts from elastic-dominated to viscous dominated behavior were also observed. These shifts were attributed to damage to the sample microstructure (e.g. gel network disruption), which would lead to viscous-type behaviors such as permanent deformation and flow. Unlike the small deformation results, the LAOS behavior of the concentrated emulsions was not dependent on fish gelatin concentration. Systems with different microstructures showed similar nonlinear viscoelastic behaviors. The results of this study provided valuable information that can be used to incorporate concentrated emulsions in emulsion-based food formulations.Keywords: concentrated emulsion, fish gelatin, microstructure, rheology
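The phase lag between the imposed strain and the measured stress, used above to interpret the Lissajous plots, can be extracted from the raw oscillatory waveforms. A minimal sketch with synthetic waveforms follows; real inputs would be the rheometer's recorded stress and strain signals.

```python
# Extract the fundamental-harmonic phase lag (delta) between strain and stress.
# delta -> 0 deg indicates elastic-dominated, -> 90 deg viscous-dominated behaviour.
import numpy as np

f = 1.0                                    # excitation frequency, Hz
t = np.linspace(0, 5, 5000, endpoint=False)
strain = 0.5 * np.sin(2 * np.pi * f * t)
stress = 100 * np.sin(2 * np.pi * f * t + np.radians(35)) + 5 * np.sin(3 * 2 * np.pi * f * t)

def fundamental_phase(signal, t, f):
    # project onto sin/cos at frequency f to get the first-harmonic phase
    s = np.trapz(signal * np.sin(2 * np.pi * f * t), t)
    c = np.trapz(signal * np.cos(2 * np.pi * f * t), t)
    return np.arctan2(c, s)

delta = fundamental_phase(stress, t, f) - fundamental_phase(strain, t, f)
print(f"phase lag = {np.degrees(delta):.1f} deg")
```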
Procedia PDF Downloads 276
80 Heat Transfer Phenomena Identification of a Non-Active Floor in a Stack-Ventilated Building in Summertime: Empirical Study
Authors: Miguel Chen Austin, Denis Bruneau, Alain Sempey, Laurent Mora, Alain Sommier
Abstract:
An experimental study in a Plus Energy House (PEH) prototype was conducted in August 2016. It aimed to highlight the energy charge and discharge of a concrete-slab floor submitted to the day-night-cycles heat exchanges in the southwestern part of France and to identify the heat transfer phenomena that take place in both processes: charge and discharge. The main features of this PEH, significant to this study, are the following: (i) a non-active slab covering the major part of the entire floor surface of the house, which include a concrete layer 68 mm thick as upper layer; (ii) solar window shades located on the north and south facades along with a large eave facing south, (iii) large double-glazed windows covering the majority of the south facade, (iv) a natural ventilation system (NVS) composed by ten automatized openings with different dimensions: four are located on the south facade, four on the north facade and two on the shed roof (north-oriented). To highlight the energy charge and discharge processes of the non-active slab, heat flux and temperature measurement techniques were implemented, along with airspeed measurements. Ten “measurement-poles” (MP) were distributed all over the concrete-floor surface. Each MP represented a zone of measurement, where air and surface temperatures, and convection and radiation heat fluxes, were intended to be measured. The airspeed was measured only at two points over the slab surface, near the south facade. To identify the heat transfer phenomena that take part in the charge and discharge process, some relevant dimensionless parameters were used, along with statistical analysis; heat transfer phenomena were identified based on this analysis. Experimental data, after processing, had shown that two periods could be identified at a glance: charge (heat gain, positive values) and discharge (heat losses, negative values). During the charge period, on the floor surface, radiation heat exchanges were significantly higher compared with convection. On the other hand, convection heat exchanges were significantly higher than radiation, in the discharge period. Spatially, both, convection and radiation heat exchanges are higher near the natural ventilation openings and smaller far from them, as expected. Experimental correlations have been determined using a linear regression model, showing the relation between the Nusselt number with relevant parameters: Peclet, Rayleigh, and Richardson numbers. This has led to the determination of the convective heat transfer coefficient and its comparison with the convective heat coefficient resulting from measurements. Results have shown that forced and natural convection coexists during the discharge period; more accurate correlations with the Peclet number than with the Rayleigh number, have been found. This may suggest that forced convection is stronger than natural convection. Yet, airspeed levels encountered suggest that it is natural convection that should take place rather than forced convection. Despite this, Richardson number values encountered indicate otherwise. During the charge period, air-velocity levels might indicate that none air motion occurs, which might lead to heat transfer by diffusion instead of convection.Keywords: heat flux measurement, natural ventilation, non-active concrete slab, plus energy house
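The empirical Nusselt-Peclet correlation step amounts to a linear regression in log-log space. A sketch with synthetic (Pe, Nu) pairs, not the measured PEH values:

```python
# Derive an empirical correlation Nu = a * Pe**b by linear regression of
# log10(Nu) on log10(Pe), then recover a convective coefficient h = Nu*k/L.
import numpy as np
from scipy.stats import linregress

Pe = np.array([2e3, 5e3, 1e4, 3e4, 8e4])   # synthetic placeholders
Nu = np.array([18.0, 30.0, 44.0, 80.0, 140.0])

fit = linregress(np.log10(Pe), np.log10(Nu))
a, b = 10 ** fit.intercept, fit.slope
print(f"Nu = {a:.3f} * Pe^{b:.2f},  R^2 = {fit.rvalue**2:.3f}")

# Convective coefficient for an assumed conductivity k and characteristic length L.
k_air, L = 0.026, 3.0
h = a * (5e4) ** b * k_air / L
print(f"h = {h:.2f} W/m^2K at Pe = 5e4")
```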
Procedia PDF Downloads 417
79 Mixed Mode Fracture Analyses Using Finite Element Method of Edge Cracked Heavy Annulus Pulley
Authors: Bijit Kalita, K. V. N. Surendra
Abstract:
A pulley works under both compressive loading, due to the contacting belt in tension, and a central torque that causes rotation. In a power transmission system, the belt-pulley assembly presents a contact problem in the form of two mating cylindrical parts. In this work, we modeled a pulley as a heavy two-dimensional circular disk. Stress analysis due to contact loading in the pulley mechanism is performed. Finite element analysis (FEA) is conducted for a pulley to investigate the stresses experienced on its inner and outer periphery. In heavy-duty applications such as automotive engines and industrial machines, the belt drive is one of the most frequently used mechanisms to transmit power, and very heavy circular disks are commonly used as pulleys. A pulley can be described as a drum and may have a groove between two flanges around its circumference. A rope, belt, cable, or chain can be the driving element of a pulley system that runs over the pulley inside the groove. A pulley experiences normal and shear tractions on its contact regions in the process of motion transmission; these regions may be the belt-pulley contact surface or the pulley-shaft contact surface. In 1882, Hertz solved the elastic contact problem for point contact and line contact of ideally smooth bodies, and this theory is still generally used for computing the actual contact zone. Detailed stress analysis in the contact regions of such pulleys is necessary to prevent early failure. In this paper, the results of the finite element analyses carried out on the compressed disk of a belt-pulley arrangement using fracture mechanics concepts are presented. Based on the literature on contact stress problems in a wide range of applications, the stress distribution generated on the shaft-pulley and belt-pulley interfaces due to the application of high tension and torque was evaluated in this study using FEA concepts. Finally, the results obtained from ANSYS (APDL) were compared with Hertzian contact theory. The study is mainly focused on the fatigue life estimation of a rotating part, as a component of an engine assembly, using the well-known Paris equation. Digital Image Correlation (DIC) analyses have been performed using open-source software. From the displacements computed using images acquired at minimum and maximum force, the displacement field amplitude is computed. From these fields, the crack path is defined, and the stress intensity factors and crack tip position are extracted. A non-linear least-squares projection is used for the estimation of fatigue crack growth. Further study will be extended to various applications of rotating machinery, such as rotating flywheel disks, jet engines, compressor disks, and roller disk cutters, where stress intensity factor (SIF) calculation plays a significant role in the accuracy and reliability of a safe design. Additionally, this study will be extended to predict crack propagation in the pulley using the maximum tangential stress (MTS) criterion for mixed-mode fracture.
Keywords: crack-tip deformations, contact stress, stress concentration, stress intensity factor
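The Paris-equation fatigue-life estimation mentioned above amounts to integrating da/dN = C(ΔK)^m between an initial and a critical crack length. A generic sketch follows; the material constants, geometry factor, and stress range are assumed values, not the pulley's.

```python
# Paris-law fatigue life estimate: integrate dN/da = 1 / (C * dK**m)
# from an initial crack length a0 to a critical length ac.
import numpy as np

C, m = 1e-12, 3.0          # Paris constants (m/cycle, MPa*sqrt(m) units), assumed
d_sigma = 80.0             # stress range, MPa, assumed
Y = 1.12                   # geometry factor, assumed constant with crack length

def cycles_to_failure(a0, ac, n_steps=10_000):
    a = np.linspace(a0, ac, n_steps)
    dK = Y * d_sigma * np.sqrt(np.pi * a)     # stress intensity factor range
    dNda = 1.0 / (C * dK ** m)                # cycles per unit crack growth
    return np.trapz(dNda, a)

print(f"estimated life: {cycles_to_failure(a0=1e-3, ac=20e-3):.3e} cycles")
```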
Procedia PDF Downloads 125
78 Optical-Based Lane-Assist System for Rowing Boats
Authors: Stephen Tullis, M. David DiDonato, Hong Sung Park
Abstract:
Rowing boats (shells) are often steered by a small rudder operated by one of the backward-facing rowers; the attention required of that athlete then slightly decreases the power that that athlete can provide. Reducing the steering distraction would then increase the overall boat speed. Races are straight 2000 m courses with each boat in a 13.5 m wide lane marked by small (~15 cm) widely-spaced (~10 m) buoys, and the boat trajectory is affected by both cross-currents and winds. An optical buoy recognition and tracking system has been developed that provides the boat’s location and orientation with respect to the lane edges. This information is provided to the steering athlete as either: a simple overlay on a video display, or fed to a simplified autopilot system giving steering directions to the athlete or directly controlling the rudder. The system is then effectively a “lane-assist” device but with small, widely-spaced lane markers viewed from a very shallow angle due to constraints on camera height. The image is captured with a lightweight 1080p webcam, and most of the image analysis is done in OpenCV. The colour RGB-image is converted to a grayscale using the difference of the red and blue channels, which provides good contrast between the red/yellow buoys and the water, sky, land background and white reflections and noise. Buoy detection is done with thresholding within a tight mask applied to the image. Robust linear regression using Tukey’s biweight estimator of the previously detected buoy locations is used to develop the mask; this avoids the false detection of noise such as waves (reflections) and, in particular, buoys in other lanes. The robust regression also provides the current lane edges in the camera frame that are used to calculate the displacement of the boat from the lane centre (lane location), and its yaw angle. The interception of the detected lane edges provides a lane vanishing point, and yaw angle can be calculated simply based on the displacement of this vanishing point from the camera axis and the image plane distance. Lane location is simply based on the lateral displacement of the vanishing point from any horizontal cut through the lane edges. The boat lane position and yaw are currently fed what is essentially a stripped down marine auto-pilot system. Currently, only the lane location is used in a PID controller of a rudder actuator with integrator anti-windup to deal with saturation of the rudder angle. Low Kp and Kd values decrease unnecessarily fast return to lane centrelines and response to noise, and limiters can be used to avoid lane departure and disqualification. Yaw is not used as a control input, as cross-winds and currents can cause a straight course with considerable yaw or crab angle. Mapping of the controller with rudder angle “overall effectiveness” has not been finalized - very large rudder angles stall and have decreased turning moments, but at less extreme angles the increased rudder drag slows the boat and upsets boat balance. The full system has many features similar to automotive lane-assist systems, but with the added constraints of the lane markers, camera positioning, control response and noise increasing the challenge.Keywords: auto-pilot, lane-assist, marine, optical, rowing
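The rudder control loop described above (PID on lateral lane offset with integrator anti-windup against rudder saturation) can be sketched as follows. The gains, limits, and time step are illustrative placeholders; anti-windup is implemented here as conditional integration, one of several common schemes.

```python
# Minimal PID rudder controller with conditional-integration anti-windup.
class RudderPID:
    def __init__(self, kp, ki, kd, rudder_limit_deg, dt):
        self.kp, self.ki, self.kd, self.limit, self.dt = kp, ki, kd, rudder_limit_deg, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, lane_offset_m):
        error = -lane_offset_m                      # steer back toward the lane centre
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error

        unsat = self.kp * error + self.ki * self.integral + self.kd * derivative
        rudder = max(-self.limit, min(self.limit, unsat))

        # anti-windup: only accumulate the integral while the rudder is not saturated
        if rudder == unsat:
            self.integral += error * self.dt
        return rudder

ctrl = RudderPID(kp=2.0, ki=0.1, kd=0.5, rudder_limit_deg=10.0, dt=0.1)
print(ctrl.update(lane_offset_m=1.5))   # boat 1.5 m off centre -> rudder command (deg)
```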
Procedia PDF Downloads 132
77 Modeling and Performance Evaluation of an Urban Corridor under Mixed Traffic Flow Condition
Authors: Kavitha Madhu, Karthik K. Srinivasan, R. Sivanandan
Abstract:
Indian traffic can be considered as mixed and heterogeneous due to the presence of various types of vehicles that operate with weak lane discipline. Consequently, vehicles can position themselves anywhere in the traffic stream depending on availability of gaps. The choice of lateral positioning is an important component in representing and characterizing mixed traffic. The field data provides evidence that the trajectory of vehicles in Indian urban roads have significantly varying longitudinal and lateral components. Further, the notion of headway which is widely used for homogeneous traffic simulation is not well defined in conditions lacking lane discipline. From field data it is clear that following is not strict as in homogeneous and lane disciplined conditions and neighbouring vehicles ahead of a given vehicle and those adjacent to it could also influence the subject vehicles choice of position, speed and acceleration. Given these empirical features, the suitability of using headway distributions to characterize mixed traffic in Indian cities is questionable, and needs to be modified appropriately. To address these issues, this paper attempts to analyze the time gap distribution between consecutive vehicles (in a time-sense) crossing a section of roadway. More specifically, to characterize the complex interactions noted above, the influence of composition, manoeuvre types, and lateral placement characteristics on time gap distribution is quantified in this paper. The developed model is used for evaluating various performance measures such as link speed, midblock delay and intersection delay which further helps to characterise the vehicular fuel consumption and emission on urban roads of India. Identifying and analyzing exact interactions between various classes of vehicles in the traffic stream is essential for increasing the accuracy and realism of microscopic traffic flow modelling. In this regard, this study aims to develop and analyze time gap distribution models and quantify it by lead lag pair, manoeuvre type and lateral position characteristics in heterogeneous non-lane based traffic. Once the modelling scheme is developed, this can be used for estimating the vehicle kilometres travelled for the entire traffic system which helps to determine the vehicular fuel consumption and emission. The approach to this objective involves: data collection, statistical modelling and parameter estimation, simulation using calibrated time-gap distribution and its validation, empirical analysis of simulation result and associated traffic flow parameters, and application to analyze illustrative traffic policies. In particular, video graphic methods are used for data extraction from urban mid-block sections in Chennai, where the data comprises of vehicle type, vehicle position (both longitudinal and lateral), speed and time gap. Statistical tests are carried out to compare the simulated data with the actual data and the model performance is evaluated. The effect of integration of above mentioned factors in vehicle generation is studied by comparing the performance measures like density, speed, flow, capacity, area occupancy etc under various traffic conditions and policies. The implications of the quantified distributions and simulation model for estimating the PCU (Passenger Car Units), capacity and level of service of the system are also discussed.Keywords: lateral movement, mixed traffic condition, simulation modeling, vehicle following models
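Characterising time gaps by lead-lag vehicle-class pair can be sketched as fitting a parametric distribution to the observed gaps for one pair and then sampling from it to drive vehicle generation in simulation. The lognormal choice and the gap values below are illustrative assumptions, not the field observations from Chennai.

```python
# Fit a lognormal distribution to time gaps for one lead-lag class pair and
# check the fit with a Kolmogorov-Smirnov test; then sample gaps for simulation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
gaps_car_behind_bike = rng.lognormal(mean=0.2, sigma=0.6, size=500)  # seconds (synthetic)

shape, loc, scale = stats.lognorm.fit(gaps_car_behind_bike, floc=0)
ks = stats.kstest(gaps_car_behind_bike, "lognorm", args=(shape, loc, scale))
print(f"lognormal fit: sigma={shape:.2f}, median={scale:.2f}s, KS p-value={ks.pvalue:.2f}")

# Sampling from the fitted distribution, per class pair and lateral position bin,
# would then drive vehicle generation in the simulation.
sampled_gaps = stats.lognorm.rvs(shape, loc, scale, size=10, random_state=3)
print(np.round(sampled_gaps, 2))
```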
Procedia PDF Downloads 342
76 Effect of Climate Change on Rainfall Induced Failures for Embankment Slopes in Timor-Leste
Authors: Kuo Chieh Chao, Thishani Amarathunga, Sangam Shrestha
Abstract:
Rainfall induced slope failures are one of the most damaging and disastrous natural hazards, and they occur frequently around the world. This type of sliding mainly occurs in the zone above the groundwater level in silty/sandy soils. When rainwater begins to infiltrate into the vadose zone of the soil, the negative pore-water pressure tends to decrease and reduce the shear strength of the soil material. Climate change has resulted in excessive and unpredictable rainfall all around the world, resulting in landslides with dire consequences for human lives and infrastructure. Such problems could be overcome by examining in detail the causes of such slope failures and recommending effective repair plans for vulnerable locations that take future climatic change into account. The area selected for this study is located in the road rehabilitation section from Maubara to Mota Ain road in Timor-Leste. Slope failures and cracks occurred there in 2013 and, after repairs, reoccurred in 2017 subsequent to heavy rains. Both observed and future predicted climate data analyses were conducted to understand severe past and future precipitation conditions. Observed climate data were collected from the NOAA global climate data portal, and the CORDEX data portal was used to collect Regional Climate Model (RCM) future predicted climate data. Both observed and RCM data were extracted to location-based data using ArcGIS software. The linear scaling method was used for the bias correction of future data, and the bias-corrected climate data were assigned to GeoStudio software. Precipitation in the wet seasons (December to March) of 2007 to 2013 was higher than in the 2001-2006 period, exceeding the usual monthly average precipitation of 160 mm by nearly 40%. The results of seepage analyses, carried out using the SEEP/W model with observed climate, clearly demonstrated that the pore water pressure within the fill slope increased significantly due to the increase in infiltration during the wet season of 2013. One main RCM was analyzed in order to predict future climate variation under two Representative Concentration Pathways (RCPs). Over the projected period of 76 years from 2014, the amount of precipitation becomes considerably higher under both the RCP 4.5 and RCP 8.5 emission scenarios. Critical pore water pressure conditions during 2014-2090 were used in order to recommend appropriate remediation methods. Results of slope stability analyses indicated that the factor of safety of the fill slopes was reduced from 1.226 to 0.793 from the dry season to the wet season in 2013. Results of future slope stability analyses, obtained using the SLOPE/W model for the RCP emission scenarios, indicate that the use of tieback anchors and geogrids in slope protection could be effective in increasing the stability of slopes to an acceptable level during the wet seasons. Moreover, methods and procedures such as monitoring slopes showing signs of, or susceptible to, movement and installing surface protection could be used to increase the stability of slopes.Keywords: climate change, precipitation, SEEP/W, SLOPE/W, unsaturated soil
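The linear scaling bias correction mentioned above can be sketched as follows. This is a minimal sketch assuming monthly precipitation series in pandas; the function and variable names are illustrative, and the authors' actual workflow fed the bias-corrected series into GeoStudio rather than Python.

```python
# Minimal sketch of linear (monthly) scaling bias correction for RCM precipitation:
# each future monthly value is multiplied by the ratio of observed to simulated
# monthly means over a common baseline period.
import pandas as pd

def linear_scaling(obs, rcm_hist, rcm_future):
    """All inputs are monthly precipitation Series with a DatetimeIndex."""
    # one multiplicative correction factor per calendar month (1..12)
    factors = obs.groupby(obs.index.month).mean() / rcm_hist.groupby(rcm_hist.index.month).mean()
    months = pd.Series(rcm_future.index.month, index=rcm_future.index)
    return rcm_future * months.map(factors)
```

The corrected future series would then be used to define the rainfall boundary condition of the SEEP/W seepage analyses.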
Procedia PDF Downloads 136
75 The Impact of Riparian Alien Plant Removal on Aquatic Invertebrate Communities in the Upper Reaches of Luvuvhu River Catchment, Limpopo Province
Authors: Rifilwe Victor Modiba, Stefan Hendric Foord
Abstract:
Alien invasive plants (IAP’s) have considerable negative impacts on freshwater habitats, and South Africa has implemented an innovative Working for Water (WfW) programme for the systematic removal of these plants aimed at, amongst other objectives, restoring biodiversity and ecosystem services in these threatened habitats. These restoration processes are expensive and have to be evidence-based. In this study, in-stream macroinvertebrate and adult Odonata assemblages were used as indicators of restoration success by quantifying the response of biodiversity metrics for these two groups to the removal of IAP’s in a strategic water resource of South Africa that is extensively invaded. The study consisted of a replicated design that included 45 sampling units, viz. 15 invaded, 15 uninvaded and 15 cleared sites stratified across the upper reaches of six sub-catchments of the Luvuvhu river catchment, Limpopo Province. Cleared sites were only considered if they had received at least two WfW treatments in the last 3 years. The benthic macroinvertebrate and adult Odonate assemblages in each of these sampling units were surveyed between November and March of 2013/2014 and 2014/2015, respectively. Generalized Linear Models (GLMs) with a log link function and Poisson error distribution were fitted, across the three invasion classes (invaded, cleared, and uninvaded), for abundance and for metrics whose residuals were not normally distributed or had unequal variance. Redundancy Analysis (RDA) was done for EPTO genera (Ephemeroptera, Plecoptera, Trichoptera and Odonata) and adult Odonata species abundance. GLMs were also fitted for the abundance of genera and Odonates associated with the RDA environmental factors. Sixty-four benthic macroinvertebrate families, 57 EPTO genera, and 45 adult Odonata species were recorded across all 45 sampling units. There was no significant difference between the SASS5 total score, ASPT, and family richness of the three invasion classes. Although clearing had only a weak positive effect on adult Odonate species richness, it had a positive impact on Dragonfly Biotic Index (DBI) scores. These differences were mainly the result of significantly larger DBI scores in the cleared sites as compared to the invaded sites. Results suggest that water quality is positively impacted by repeated clearing, pointing to the importance of follow-up procedures after initial clearing. Adult Odonate diversity as measured by richness, endemicity, threat and distribution responded positively to all forms of clearing. Clearing had a significant impact on Odonate assemblage structure but did not affect EPTO structure. Variation partitioning showed that 21.8% of the variation in EPTO assemblages can be explained by spatial and environmental variables, while 16% of the variation in Odonate structure was explained by spatial and environmental variables. The response of the diversity metrics to clearing increased in significance at finer taxonomic resolutions, particularly for adult Odonates, whose metrics significantly improved with clearing and whose structure responded to both invasion and clearing. The study recommends the use of the DBI for surveying river health when hydraulic biotopes are poor.Keywords: DBI, evidence-based conservation, EPTO, macroinvertebrates
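A minimal sketch of the type of GLM described above (log link, Poisson errors) fitted to a count metric across the three invasion classes follows; the file and column names are hypothetical, and the original analysis was not necessarily run in Python.

```python
# Minimal sketch: Poisson GLM of a count metric (e.g. adult Odonata richness) against
# invasion status (invaded / cleared / uninvaded). Assumes a hypothetical CSV with one
# row per sampling unit and columns 'richness' and 'status'.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("luvuvhu_sites.csv")
model = smf.glm(
    "richness ~ C(status)",
    data=df,
    family=sm.families.Poisson(),  # Poisson family; the log link is its default
)
print(model.fit().summary())
```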
Procedia PDF Downloads 186
74 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning
Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher
Abstract:
Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and Multiple Sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve (I) mapping magnetic field into magnetic susceptibility and (II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via injection of prior belief. The end result of Process II depends highly on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain via a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640, 0.4mm³), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties and iron concentration. These tissue property values were randomly selected from a probability distribution function derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data, but larger than datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was then used to train data-driven deep learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training as well as real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to directly learn iron concentrations in areas of interest more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the Deep QSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping
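A minimal sketch of a 3D convolutional U-Net-style regressor of the kind described above, mapping an MRI volume to a voxel-wise iron-concentration map, is shown below. The framework (PyTorch), channel widths, depth, and loss are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch: 3D U-Net-style encoder-decoder with skip connections that regresses
# a voxel-wise iron-concentration map from a single-channel MRI volume.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # two 3x3x3 convolutions with ReLU, the basic U-Net building block
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool3d(2)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv3d(base, 1, 1)   # voxel-wise iron concentration

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# toy training step on one synthetic volume (stand-in for the simulated head phantoms)
model = UNet3D()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mri = torch.randn(1, 1, 32, 32, 32)   # synthetic MRI measurement
iron = torch.rand(1, 1, 32, 32, 32)   # ground-truth iron map from the phantom
loss = nn.functional.mse_loss(model(mri), iron)
loss.backward()
opt.step()
```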
Procedia PDF Downloads 138
73 Small Scale Mobile Robot Auto-Parking Using Deep Learning, Image Processing, and Kinematics-Based Target Prediction
Authors: Mingxin Li, Liya Ni
Abstract:
Autonomous parking is a valuable feature applicable to many robotics applications such as tour guide robots, UV sanitizing robots, food delivery robots, and warehouse robots. With auto-parking, the robot is able to park at the charging zone and charge itself without human intervention. Compared to self-driving vehicles, auto-parking is more challenging for a small-scale mobile robot equipped only with a front camera, due to the camera view being limited by the robot’s height and the narrow Field of View (FOV) of the inexpensive camera. In this research, auto-parking of a small-scale mobile robot with a front camera only was achieved in a four-step process: Firstly, transfer learning was performed on AlexNet, a popular pre-trained convolutional neural network (CNN). It was trained with 150 pictures of empty parking slots and 150 pictures of occupied parking slots from the view angle of a small-scale robot. The dataset of images was divided into 70% of the images for training and the remaining 30% for validation. An average success rate of 95% was achieved. Secondly, the image of the detected empty parking space was processed with edge detection, followed by the computation of parametric representations of the boundary lines using the Hough Transform algorithm. Thirdly, the positions of the entrance point and center of the available parking space were predicted based on the robot kinematic model as the robot drove closer to the parking space, because the boundary lines disappeared partially or completely from its camera view due to the height and FOV limitations. The robot used its wheel speeds to compute the positions of the parking space with respect to its changing local frame as it moved along, based on its kinematic model. Lastly, the predicted entrance point of the parking space was used as the reference for the motion control of the robot until it was replaced by the actual center once that became visible again to the robot. The linear and angular velocities of the robot chassis center were computed based on the error between the current chassis center and the reference point. Then the left and right wheel speeds were obtained using inverse kinematics and sent to the motor driver. The above-mentioned four subtasks were all successfully accomplished, with the transfer learning, image processing, and target prediction performed in MATLAB, while the motion control and image capture were conducted on a self-built small-scale differential-drive mobile robot. The small-scale robot employs a Raspberry Pi board, a Pi camera, an L298N dual H-bridge motor driver, a USB power module, a power bank, four wheels, and a chassis. Future research includes three areas: the integration of all four subsystems into one hardware/software platform with an upgrade to an Nvidia Jetson Nano board that provides superior performance for deep learning and image processing; more testing and validation of the identification of available parking space and its boundary lines; and improvement of performance after the hardware/software integration is completed.Keywords: autonomous parking, convolutional neural network, image processing, kinematics-based prediction, transfer learning
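The inverse-kinematics step described above, converting the commanded chassis linear and angular velocities into left and right wheel speeds, can be sketched as follows. The geometry values are illustrative assumptions, and the original implementation was in MATLAB rather than Python.

```python
# Minimal sketch: differential-drive inverse kinematics for the final motion-control step.
# WHEEL_RADIUS and TRACK_WIDTH are illustrative values, not the robot's actual geometry.
WHEEL_RADIUS = 0.03   # wheel radius in metres
TRACK_WIDTH = 0.15    # distance between the left and right wheels in metres

def wheel_speeds(v, omega):
    """v: forward speed of the chassis centre (m/s); omega: yaw rate (rad/s).
    Returns (left, right) wheel angular speeds in rad/s."""
    v_left = v - omega * TRACK_WIDTH / 2.0
    v_right = v + omega * TRACK_WIDTH / 2.0
    return v_left / WHEEL_RADIUS, v_right / WHEEL_RADIUS

# e.g. a simple proportional controller toward the predicted parking-entrance point
# would compute (v, omega) from the position error and call wheel_speeds(v, omega).
```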
Procedia PDF Downloads 133
72 Coil-Over Shock Absorbers Compared to Inherent Material Damping
Authors: Carina Emminger, Umut D. Cakmak, Evrim Burkut, Rene Preuer, Ingrid Graz, Zoltan Major
Abstract:
Damping accompanies us daily in everyday life and is used to protect (e.g., in shoes) and to make our lives more comfortable (damping of unwanted motion) and calm (noise reduction). In general, damping is the absorption of energy, which is either stored in the material (vibration isolation systems) or converted into heat (vibration absorbers). In the latter case, the damping mechanism can be split into active, passive, and semi-active (a combination of active and passive). Active damping is required to enable almost perfect damping over the whole application range and is used, for instance, in sports cars. In contrast, passive damping is a response of the material to external loading. Consequently, the material composition has a huge influence on the damping behavior. For elastomers, the material behavior is inherently viscoelastic as well as temperature- and frequency-dependent. However, passive damping is not adjustable during application. Therefore, it is important to understand the fundamental viscoelastic behavior and the dissipation capability under external loading. The objective of this work is to assess the limitations and applicability of viscoelastic material damping for applications in which coil-over shock absorbers are currently utilized. Coil-over shock absorbers are usually made of various mechanical parts and incorporate fluids within the damper. These shock absorbers are well known and studied in the industry, and, when needed, they can easily be adjusted during their product lifetime. In contrast, dampers made of, ideally, a single material are more resource-efficient, have easier serviceability, and are easier to manufacture. However, they lack adaptability and adjustability in service. Therefore, a case study with a remote-controlled sports car was conducted. The original shock absorbers were redesigned, and the spring-dashpot system was replaced by an elastomer and a thermoplastic elastomer, respectively. Here, five different formulations of elastomers were used, including a pure and an iron-particle-filled thermoplastic poly(urethane) (TPU) and blends of two different poly(dimethyl siloxane) (PDMS) materials. In addition, the TPUs were investigated as full and hollow dampers to investigate the difference between solid and structured material. To obtain comparative results, each material formulation was comprehensively characterized by monotonic uniaxial compression tests, dynamic thermomechanical analysis (DTMA), and rebound resilience. Moreover, the new material-based shock absorbers were compared with spring-dashpot shock absorbers. The shock absorbers were analyzed under monotonic and cyclic loading. In addition, an impact loading was applied to the remote-controlled car to measure the damping properties in operation. A servo-hydraulic high-speed linear actuator was utilized to apply the loads. The acceleration of the car and the displacement of specific measurement points were recorded during testing by a sensor and a high-speed camera, respectively. The results show that elastomers are suitable for damping applications, but they are temperature- and frequency-dependent, which limits the applicability of viscous material dampers. Feasible fields of application may be micromobility, such as bicycles, e-scooters, and e-skateboards. Furthermore, viscous material damping could be used to increase the inherent damping of a whole structure, e.g., in bicycle frames.Keywords: damper structures, material damping, PDMS, TPU
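For reference, the inherent material damping probed by the DTMA measurements above is commonly summarized by the loss factor, the ratio of loss modulus to storage modulus. The relations below are standard viscoelasticity background quoted for context, not results of this study:

```latex
E^{*}(\omega, T) = E'(\omega, T) + i\,E''(\omega, T), \qquad
\tan\delta = \frac{E''}{E'}, \qquad
\Delta W_{\mathrm{cycle}} = \pi\,E''\,\varepsilon_{0}^{2},
```

where ε0 is the strain amplitude and ΔW is the energy dissipated per unit volume per load cycle. The temperature and frequency dependence of E' and E'' is exactly the limitation on in-service adjustability noted in the abstract.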
Procedia PDF Downloads 115
71 Optimized Processing of Neural Sensory Information with Unwanted Artifacts
Authors: John Lachapelle
Abstract:
Introduction: Neural stimulation is increasingly targeted toward treatment of back pain, PTSD, and Parkinson’s disease, and toward sensory perception. Sensory recording during stimulation is important in order to examine the neural response to stimulation. Most neural amplifiers (headstages) focus on noise efficiency factor (NEF). Conversely, neural headstages need to handle artifacts from several sources, including power lines, movement (EMG), and neural stimulation itself. In this work, a layered approach to artifact rejection is used to reduce corruption of the neural ENG signal by 60dBv, resulting in the recovery of sensory signals in rats and primates that would previously not have been possible. Methods: The approach combines analog techniques to reduce and handle unwanted signal amplitudes. The methods include optimized (1) sensory electrode placement, (2) amplifier configuration, and (3) artifact blanking when necessary. The techniques together are like concentric moats protecting a castle; only the wanted neural signal can penetrate. There are two conditions in which the headstage operates: unwanted artifact < 50mV, linear operation; and artifact > 50mV, fast-settle gain-reduction signal limiting (covered in more detail in a separate paper). Unwanted signals at the headstage input: Consider: (a) EMG signals are by nature < 10mV. (b) 60 Hz power line signals may be > 50mV with poor electrode cable conditions; with careful routing, much of this signal is common to both reference and active electrodes and is rejected in the differential amplifier, with < 50mV remaining. (c) An unwanted (to the neural recorder) stimulation signal is attenuated from the stimulation electrode to the sensory electrode. The voltage seen at the sensory electrode can be modeled as Φ_m = I_o/(4πσr). For a 1 mA stimulation signal, with 1 cm spacing between electrodes, the signal is < 20mV at the headstage. Headstage ASIC design: The front-end ASIC is designed to produce < 1% THD at 50mV input, 50 times higher than typical headstage ASICs, with no increase in noise floor. This requires careful balancing of the amplifier stages in the headstage ASIC, as well as consideration of the electrodes’ effect on noise. The ASIC is designed to allow extremely small signal extraction on low-impedance (< 10kohm) electrodes, with the headstage ASIC noise floor configurable to < 700nV/rt-Hz. Smaller high-impedance electrodes (> 100kohm) are typically located closer to neural sources and transduce higher-amplitude signals (> 10uV); the ASIC low-power mode conserves power with 2uV/rt-Hz noise. Findings: The enhanced neural processing ASIC has been compared with a commercial neural recording amplifier IC. Chronically implanted primates at MGH demonstrated commercial neural amplifier saturation as a result of large environmental artifacts. The enhanced artifact suppression headstage ASIC, in the same setup, was able to recover and process the wanted neural signal separately from the suppressed unwanted artifacts. Separately, the enhanced artifact suppression headstage ASIC was able to separate sensory neural signals from unwanted artifacts in mouse-implanted peripheral intrafascicular electrodes. Conclusion: Optimized headstage ASICs allow observation of neural signals in the presence of the large artifacts that will be present in real-life implanted applications, and are targeted toward human implantation in the DARPA HAPTIX program.Keywords: ASIC, biosensors, biomedical signal processing, biomedical sensors
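Plugging representative numbers into the point-source model above gives an order-of-magnitude check of the stated < 20mV artifact level; the tissue conductivity value is an assumption chosen for illustration, not a figure from the paper:

```latex
\Phi_m = \frac{I_o}{4\pi\sigma r}
       = \frac{1\ \mathrm{mA}}{4\pi\,(0.5\ \mathrm{S/m})\,(0.01\ \mathrm{m})}
       \approx 16\ \mathrm{mV},
```

which is consistent with the < 20mV seen at the headstage for 1 mA stimulation at 1 cm electrode spacing.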
Procedia PDF Downloads 330
70 A Computational Investigation of Potential Drugs for Cholesterol Regulation to Treat Alzheimer’s Disease
Authors: Marina Passero, Tianhua Zhai, Zuyi (Jacky) Huang
Abstract:
Alzheimer’s disease has become a major public health issue, as indicated by the increasing number of Americans living with Alzheimer’s disease. After decades of extensive research in Alzheimer’s disease, only seven drugs have been approved by the Food and Drug Administration (FDA) to treat Alzheimer’s disease. Five of these drugs were designed to treat the dementia symptoms, and only two drugs (i.e., Aducanumab and Lecanemab) target the progression of Alzheimer’s disease, especially the accumulation of amyloid-beta plaques. However, controversial comments were raised over the accelerated approvals of both Aducanumab and Lecanemab, especially with concerns about the safety and side effects of these two drugs. There is still an urgent need for further drug discovery to target the biological processes involved in the progression of Alzheimer’s disease. Excessive cholesterol has been found to accumulate in the brains of those with Alzheimer’s disease. Cholesterol can be synthesized in both the blood and the brain, but the majority of biosynthesis in the adult brain takes place in astrocytes and is then transported to the neurons via ApoE. The blood-brain barrier separates cholesterol metabolism in the brain from the rest of the body. Various proteins contribute to the metabolism of cholesterol in the brain, offering potential targets for Alzheimer’s treatment. In the astrocytes, SREBP cleavage-activating protein (SCAP) binds to Sterol Regulatory Element-binding Protein 2 (SREBP2) in order to transport the complex from the endoplasmic reticulum to the Golgi apparatus. Cholesterol is secreted out of the astrocytes by the ATP-Binding Cassette A1 (ABCA1) transporter. Lipoprotein receptors such as triggering receptor expressed on myeloid cells 2 (TREM2) internalize cholesterol into the microglia, while lipoprotein receptors such as low-density lipoprotein receptor-related protein 1 (LRP1) internalize cholesterol into the neuron. Cytochrome P450 Family 46 Subfamily A Member 1 (CYP46A1) converts excess cholesterol to 24S-hydroxycholesterol (24S-OHC). Cholesterol has been shown to have a direct effect on the production of amyloid-beta and tau proteins. The addition of cholesterol to the brain promotes the activity of beta-site amyloid precursor protein cleaving enzyme 1 (BACE1), secretase, and amyloid precursor protein (APP), which all aid in amyloid-beta production. The reduction of cholesterol esters in the brain has been found to reduce phosphorylated tau levels in mice. In this work, a computational pipeline was developed to identify the protein targets involved in cholesterol regulation in the brain and, further, to identify chemical compounds as inhibitors of a selected protein target. Since extensive evidence shows a strong correlation between brain cholesterol regulation and Alzheimer’s disease, a detailed literature review of genes and pathways related to brain cholesterol synthesis and regulation was first conducted in this work. An interaction network was then built for those genes so that the top gene targets could be identified. The involvement of these genes in Alzheimer’s disease progression was discussed, followed by an investigation of existing clinical trials for those targets. A ligand-protein docking program was finally developed to screen 1.5 million chemical compounds for the selected protein target. A machine learning program was developed to evaluate and predict the binding interaction between chemical compounds and the protein target.
The results from this work pave the way for further drug discovery to regulate brain cholesterol to combat Alzheimer’s disease.Keywords: Alzheimer’s disease, drug discovery, ligand-protein docking, gene-network analysis, cholesterol regulation
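A minimal sketch of the machine-learning step described above, predicting binding scores for docked compounds from precomputed descriptors, is given below. The file name, column names, and the choice of a random-forest regressor are hypothetical illustrations, not the pipeline actually built in this work.

```python
# Minimal sketch: learn to predict a ligand-target binding score from precomputed
# molecular descriptors of docked compounds, then evaluate on a held-out split.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("docked_compounds.csv")          # hypothetical: descriptors + docking score
X = df.drop(columns=["compound_id", "binding_score"])
y = df["binding_score"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))   # quick sanity check of predictive power
```

In a full virtual-screening workflow, such a surrogate model could prioritize which of the 1.5 million compounds are worth the more expensive docking evaluation.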
Procedia PDF Downloads 76
69 Best Practices and Recommendations for CFD Simulation of Hydraulic Spool Valves
Authors: Jérémy Philippe, Lucien Baldas, Batoul Attar, Jean-Charles Mare
Abstract:
The proposed communication deals with the research and development of a rotary direct-drive servo valve for aerospace applications. A key challenge of the project is to downsize the electromagnetic torque motor by reducing the torque required to drive the rotary spool. It is intended to optimize the spool and sleeve geometries by combining a Computational Fluid Dynamics (CFD) approach with commercial optimization software. The present communication addresses an important phase of the project, which consists firstly of gaining confidence in the simulation results. It is well known that the force needed to pilot a sliding spool valve comes from several physical effects: hydraulic forces, friction, and the inertia/mass of the moving assembly. Among them, the flow force is usually a major contributor to the steady-state (or Root Mean Square) driving torque. In recent decades, CFD has gradually become a standard simulation tool for studying fluid-structure interactions. However, in the particular case of high-pressure valve design, the authors have found that the calculated overall hydraulic force depends on the parameterization and options used to build and run the CFD model. To address this issue, the authors selected the standard case of the linear spool valve, which is addressed in detail in numerous scientific references (analytical models, experiments, CFD simulations). The first CFD simulations run by the authors showed that the evolution of the equivalent discharge coefficient vs. Reynolds number at the metering orifice corresponds well to the values predicted by the classical analytical models. Conversely, the simulated flow force was found to be quite different from the value calculated analytically. This drove the authors to investigate in detail the influence of the studied domain and the settings of the CFD simulation. It was first shown that the flow recirculates in the inlet and outlet channels if their length is not sufficient relative to their hydraulic diameter. The dead volume on the uncontrolled orifice side also plays a significant role. These examples highlight the influence of the geometry of the fluid domain considered. The second action was to investigate the influence of the type of mesh, the turbulence models and near-wall approaches, and the numerical solver and discretization scheme order. Two approaches were used to determine the overall hydraulic force acting on the moving spool. First, the force was deduced from the momentum balance on a control domain delimited by the valve inlet and outlet and the spool walls. Second, the overall hydraulic force was calculated from the integral of the pressure and shear forces acting at the boundaries of the fluid domain. This underlined the significant contribution of the viscous forces acting on the spool between the inlet and outlet orifices, which are generally not considered in the literature. This also emphasized the influence of the choices made for the implementation of the CFD calculation and results analysis. With the step-by-step process adopted to increase confidence in the CFD simulations, the authors propose a set of best practices and recommendations for the efficient use of CFD to design high-pressure spool valves.Keywords: computational fluid dynamics, hydraulic forces, servovalve, rotary servovalve
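For context, the classical analytical model referred to above expresses the steady-state axial flow force on a sharp-edged spool metering orifice in the textbook form below; the symbols and the roughly 69° jet angle are standard assumptions from the literature, not values reported in this study:

```latex
Q = C_d\, w\, x \sqrt{\frac{2\,\Delta p}{\rho}}, \qquad
F_{\mathrm{flow}} = \rho\, Q\, v_{\mathrm{jet}} \cos\theta
                  = 2\, C_d\, C_v\, w\, x\, \Delta p \cos\theta,
```

with w the orifice area gradient, x the spool opening, C_d and C_v the discharge and velocity coefficients, and θ the jet angle. It is this F_flow that the momentum-balance and surface-integral CFD estimates are compared against.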
Procedia PDF Downloads 45
68 An Investigation on the Suitability of Dual Ion Beam Sputtered GMZO Thin Films: For All Sputtered Buffer-Less Solar Cells
Authors: Vivek Garg, Brajendra S. Sengar, Gaurav Siddharth, Nisheka Anadkat, Amitesh Kumar, Shailendra Kumar, Shaibal Mukherjee
Abstract:
CuInGaSe (CIGSe) is the dominant thin film solar cell technology. The band alignment of the buffer/CIGSe interface is one of the most crucial parameters for solar cell performance. In this article, the valence band offset (VBOff) and conduction band offset (CBOff) values of the Cu(In0.70Ga0.30)Se/1 at.% Ga:Mg0.25Zn0.75O (GMZO) heterojunction, grown by a dual ion beam sputtering (DIBS) system, are calculated to understand the carrier transport mechanism at the heterojunction for the realization of all-sputtered buffer-less solar cells. To determine the valence band offset ∆E_V at the GMZO/CIGSe heterojunction interface, the standard method based on core-level photoemission is utilized, in which ∆E_V is evaluated from common core-level peaks. In our study, the values of the valence band onset (VBOn), obtained by the linear extrapolation method for the GMZO and CIGSe films, are 2.86 and 0.76 eV, respectively. In the UPS spectra, the Se 3d peak is observed at 54.82 and 54.7 eV for the CIGSe film and the GMZO/CIGSe interface, respectively, while the Mg 2p peak is observed at 50.09 and 50.12 eV for the GMZO film and the GMZO/CIGSe interface, respectively. The optical band gaps of CIGSe and GMZO, obtained from absorption spectra procured from spectroscopic ellipsometry, are 1.26 and 3.84 eV, respectively. We investigated the band-offset properties at the GMZO/CIGSe heterojunction to verify the suitability of GMZO for the realization of buffer-less solar cells. The calculated average values of ∆E_V and ∆E_C are estimated to be 2.37 and 0.21 eV, respectively, at room temperature. The calculated positive conduction band offset, termed a spike, at the absorber junction is the required criterion for high-efficiency solar cells, enabling efficient charge extraction from the junction. We therefore conclude that this study confirms that GMZO thin films grown by the dual ion beam sputtering system are suitable candidates for CIGSe-based ultra-thin buffer-less solar cells. Acknowledgment: We are thankful for the DIBS, EDX, and XRD facilities equipped at the Sophisticated Instrument Centre (SIC) at IIT Indore. The authors B.S.S and A.K acknowledge CSIR, and V.G acknowledges UGC, India, for their fellowships. B.S.S is thankful to DST and IUSSTF for the BASE Internship Award. Prof. Shaibal Mukherjee is thankful to DST and IUSSTF for the BASE Fellowship and the MEITY YFRF award. This work is partially supported by DAE BRNS, DST CERI, and the DST-RFBR Project under the India-Russia Programme of Cooperation in Science and Technology. We are thankful to Mukul Gupta for the SIMS facility equipped at UGC-DAE Indore.Keywords: CIGSe, DIBS, GMZO, solar cells, UPS
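The core-level construction and the band-gap bookkeeping described above can be written compactly as follows. The ∆E_V expression is the standard Kraut-type form quoted here as background (sign conventions vary in the literature); the ∆E_C arithmetic uses the values reported in the abstract:

```latex
\Delta E_V = \left(E_{\mathrm{CL}} - E_{\mathrm{VBM}}\right)_{\mathrm{CIGSe}}
           - \left(E_{\mathrm{CL}} - E_{\mathrm{VBM}}\right)_{\mathrm{GMZO}}
           - \Delta E_{\mathrm{CL}}^{\mathrm{interface}}, \qquad
\Delta E_C = E_g^{\mathrm{GMZO}} - E_g^{\mathrm{CIGSe}} - \Delta E_V
           = 3.84 - 1.26 - 2.37 = 0.21\ \mathrm{eV}.
```

The positive ∆E_C of 0.21 eV is the spike-type alignment identified in the abstract as favourable for charge extraction.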
Procedia PDF Downloads 279
67 Production Factor Coefficients Transition through the Lens of State Space Model
Authors: Kanokwan Chancharoenchai
Abstract:
Economic growth can be considered an important element of a country’s development process. For a developing country like Thailand, to ensure continuous growth of the economy, the government usually implements various policies to stimulate economic growth. These may take the form of fiscal, monetary, trade, and other policies. Because of these different aspects, understanding the factors relating to economic growth could allow the government to introduce a proper plan for future economic stimulus schemes. Consequently, this issue has caught the interest of not only policymakers but also academics. This study, therefore, investigates explanatory variables for economic growth in Thailand from 2005 to 2017, a total of 52 quarters. The findings would contribute to the field of economic growth and provide helpful information to policymakers. The investigation is estimated through a production function with a non-linear Cobb-Douglas specification. The rate of growth is indicated by the change of GDP in natural logarithmic form. The relevant factors included in the estimation cover the three traditional means of production and implicit effects, such as human capital, international activity and technological transfer from developed countries. Besides, this investigation takes internal and external instabilities into account, as proxied by the unobserved inflation estimate and the real effective exchange rate (REER) of the Thai baht, respectively. The unobserved inflation series is obtained from an AR(1)-ARCH(1) model, while the unobserved REER of the Thai baht is obtained from a naive OLS-GARCH(1,1) model. According to the empirical results, the AR(|2|) equation, which includes seven significant variables, namely capital stock, labor, imports of capital goods, trade openness, REER uncertainty of the Thai baht, one-period-lagged GDP, and a dummy for the 2009 world financial crisis, presents the most suitable model. The autoregressive model assumes constant coefficients, which could introduce bias. This is not the case for the recursive-coefficient model obtained from the state space framework, which allows coefficients to vary over time. The state space model thus provides the productivity, or effect, of each significant factor in more detail. The state coefficients are estimated based on the AR(|2|) specification, with the exception of the one-period-lagged GDP and the 2009 world financial crisis dummy. The findings indicate that those factors seem to have remained stable through time since the occurrence of the world financial crisis together with the political situation in Thailand. These two events could lower confidence in the Thai economy. Moreover, the state coefficients highlight the sluggish rate of machinery replacement and the rather low technology content of capital goods imported from abroad. The Thai government should apply proactive policies via taxation and specific credit policy to improve technological advancement, for instance. Another interesting piece of evidence is the issue of trade openness, which shows a negative transition effect over the sample period. This could be explained by the loss of price competitiveness to imported goods, especially under the widespread implementation of free trade agreements. The Thai government should carefully handle regulations and the investment incentive policy by focusing on strengthening small and medium enterprises.Keywords: autoregressive model, economic growth, state space model, Thailand
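An illustrative way to write the recursive-coefficient specification described above is a log-linearised Cobb-Douglas measurement equation with random-walk state equations for the coefficients, estimated with the Kalman filter. The exact variable set and error structure used in the study may differ; this is background notation, not the authors' precise model:

```latex
\ln Y_t = \beta_{0,t} + \beta_{K,t}\ln K_t + \beta_{L,t}\ln L_t + \gamma' z_t + \varepsilon_t,
\qquad \beta_{i,t} = \beta_{i,t-1} + \eta_{i,t},
```

where z_t collects the remaining regressors (imports of capital goods, trade openness, REER uncertainty, lagged GDP, and the 2009 crisis dummy) and ε_t, η_{i,t} are Gaussian disturbances; the time paths of the β_{i,t} are the transitioning production factor coefficients the study reports.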
Procedia PDF Downloads 151
66 Effectiveness of an Intervention to Increase Physics Students' STEM Self-Efficacy: Results of a Quasi-Experimental Study
Authors: Stephanie J. Sedberry, William J. Gerace, Ian D. Beatty, Michael J. Kane
Abstract:
Increasing the number of US university students who attain degrees in STEM and enter the STEM workforce is a national priority. Demographic groups vary in their rates of participation in STEM, and the US produces just 10% of the world’s science and engineering degrees (2014 figures). To address these gaps, we have developed and tested a practical, 30-minute, single-session classroom-based intervention to improve students’ self-efficacy and academic performance in university STEM courses. Self-efficacy is a psychosocial construct that strongly correlates with academic success. It is internal and relates to the social, emotional, and psychological aspects of student motivation and performance. A compelling body of research demonstrates that university students’ self-efficacy beliefs are strongly related to their selection of STEM as a major, aspirations for STEM-related careers, and persistence in science. The development of an intervention to increase students’ self-efficacy is motivated by research showing that short, social-psychological interventions in education can lead to large gains in student achievement. Our intervention addresses STEM self-efficacy via two strong, but previously separate, lines of research into attitudinal/affect variables that influence student success. The first is ‘attributional retraining,’ in which students learn to attribute their successes and failures to internal rather than external factors. The second is ‘mindset’ about fixed vs. growable intelligence, in which students learn that the brain remains plastic throughout life and that they can, with conscious effort and attention to thinking skills and strategies, become smarter. Extant interventions for both of these constructs have significantly increased academic performance in the classroom. We developed a 34-item questionnaire (Likert scale) to measure STEM Self-Efficacy, Perceived Academic Control, and Growth Mindset in a university STEM context, and validated it with exploratory factor analysis, Rasch analysis, and multi-trait multi-method comparison to coded interviews. Four iterations of our 42-week research protocol were conducted across two academic years (2017-2018) at three different universities in North Carolina, USA (UNC-G, NC A&T SU, and NCSU) with varied student demographics. We utilized a quasi-experimental prospective multiple-group time series research design with both experimental and control groups, and we are employing linear modeling to estimate the impact of the intervention on Self-Efficacy, Growth-Mindset, Perceived Academic Control, and final course grades (performance measure). Preliminary results indicate statistically significant effects of treatment vs. control on Self-Efficacy, Growth-Mindset, and Perceived Academic Control. Analyses are ongoing and final results are pending. This intervention may have the potential to increase student success in the STEM classroom, and ownership of that success, supporting persistence toward a STEM career. Additionally, we have learned a great deal about the complex components and dynamics of self-efficacy, their link to performance, and the ways they can be impacted to improve students’ academic performance.Keywords: academic performance, affect variables, growth mindset, intervention, perceived academic control, psycho-social variables, self-efficacy, STEM, university classrooms
Procedia PDF Downloads 127
65 Deciphering Information Quality: Unraveling the Impact of Information Distortion in the UK Aerospace Supply Chains
Authors: Jing Jin
Abstract:
The incorporation of artificial intelligence (AI) and machine learning (ML) in aircraft manufacturing and aerospace supply chains leads to the generation of a substantial amount of data among various tiers of suppliers and OEMs. Identifying high-quality information challenges decision-makers. The application of AI/ML models necessitates access to 'high-quality' information to yield desired outputs. However, the process of information sharing introduces complexities, including distortion through various communication channels and biases introduced by both human and AI entities. This phenomenon significantly influences the quality of information, impacting decision-makers engaged in configuring supply chain systems. Traditionally, distorted information is categorized as 'low-quality'; however, this study challenges this perception, positing that distorted information that contributes to stakeholder goals can be deemed high-quality within supply chains. The main aim of this study is to identify and evaluate the dimensions of information quality crucial to the UK aerospace supply chain. Guided by a central research question, "What information quality dimensions are considered when defining information quality in the UK aerospace supply chain?", the study delves into the intricate dynamics of information quality in the aerospace industry. Additionally, the research explores the nuanced impact of information distortion on stakeholders' decision-making processes, addressing the question, "How does the information distortion phenomenon influence stakeholders’ decisions regarding information quality in the UK aerospace supply chain system?" This study employs deductive methodologies rooted in positivism, utilizing a cross-sectional approach and a mono-quantitative method, a questionnaire survey. Data is systematically collected from diverse tiers of supply chain stakeholders, encompassing end-customers, OEMs, Tier 0.5, Tier 1, and Tier 2 suppliers. Employing robust statistical data analysis methods, including mean values, mode values, standard deviation, one-way analysis of variance (ANOVA), and Pearson’s correlation analysis, the study interprets and extracts meaningful insights from the gathered data. Initial analyses challenge conventional notions, revealing that information distortion positively influences the definition of information quality, disrupting the established perception of distorted information as inherently low-quality. Further exploration through correlation analysis unveils the varied perspectives of different stakeholder tiers on the impact of information distortion on specific information quality dimensions. For instance, Tier 2 suppliers demonstrate strong positive correlations between information distortion and dimensions like access security, accuracy, interpretability, and timeliness. Conversely, Tier 1 suppliers emphasise strong negative influences on the security of accessing information and a negligible impact on information timeliness. Tier 0.5 suppliers showcase very strong positive correlations with dimensions like conciseness and completeness, while OEMs exhibit limited interest in considering information distortion within the supply chain. Introducing social network analysis (SNA) provides a structural understanding of the relationships between information distortion and quality dimensions. The moderately high density of the ‘information distortion-by-information quality’ network underscores the interconnected nature of these factors. 
In conclusion, this study offers a nuanced exploration of information quality dimensions in the UK aerospace supply chain, highlighting the significance of individual perspectives across different tiers. The positive influence of information distortion challenges prevailing assumptions, fostering a more nuanced understanding of information's role in the Industry 4.0 landscape.Keywords: information distortion, information quality, supply chain configuration, UK aerospace industry
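A minimal sketch of the tier-level comparisons and correlations named above (one-way ANOVA and Pearson's correlation) is given below; the file and column names are hypothetical, and the study's own analysis was not necessarily scripted this way.

```python
# Minimal sketch: compare a perceived information-quality dimension across supplier tiers
# (one-way ANOVA) and correlate perceived distortion with another dimension (Pearson).
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")          # hypothetical: one row per respondent
groups = [g["accuracy"].values for _, g in df.groupby("tier")]
print(stats.f_oneway(*groups))                    # does perceived accuracy differ by tier?
print(stats.pearsonr(df["distortion"], df["timeliness"]))
```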
Procedia PDF Downloads 67
64 Expanded Polyurethane Foams and Waterborne-Polyurethanes from Vegetable Oils
Authors: A.Cifarelli, L. Boggioni, F. Bertini, L. Magon, M. Pitalieri, S. Losio
Abstract:
Nowadays, growing environmental awareness and the dwindling of fossil resources push the polyurethane (PU) industry towards renewable polymers with a low carbon footprint to replace feedstocks from petroleum sources. The main challenge in this field consists of replacing high-performance products derived from fossil fuels with novel synthetic polymers derived from 'green monomers'. Bio-polyols from plant oils have attracted significant industrial interest and major attention in scientific research due to their availability and biodegradability. Triglycerides rich in unsaturated fatty acids, such as soybean oil (SBO) and linseed oil (ELO), are particularly interesting because their structures and functionalities are tunable by chemical modification in order to obtain polymeric materials with the expected final properties. Unfortunately, their use is still limited by processing or performance problems, because a high functionality as well as a high OH number of the polyols results in an increase in the cross-linking density of the resulting PUs. The main aim of this study is to evaluate soy- and linseed-based polyols as precursors to prepare prepolymers for the production of polyurethane foams (PUFs) or waterborne polyurethanes (WPUs) used as coatings. An effective reaction route is employed for its simplicity and economic impact. Indeed, bio-polyols were synthesized by a two-step method: epoxidation of the double bonds in vegetable oils and solvent-free ring-opening reaction of the oxirane with organic acids. No organic solvents have been used. Acids with different moieties (aliphatic or aromatic) and different lengths of hydrocarbon backbone can be used to customize polyols with different functionalities. The ring-opening reaction requires fine tuning of the experimental conditions (time, temperature, molar ratio of carboxylic acid to epoxy groups) to control the acidity value of the end product as well as the amount of residual starting materials. Besides, a Lewis base catalyst is used to favor the ring-opening reaction of the internal epoxy groups of the epoxidized oil and to minimize the formation of cross-linked structures, in order to achieve less viscous and more processable polyols with narrower polydispersity indices (molecular weights lower than 2000 g/mol). The functionality of the optimized polyols is tuned from 2 to 4 per molecule. The obtained polyols are characterized by means of GPC, NMR (¹H, ¹³C) and FT-IR spectroscopy to evaluate molecular masses, molecular mass distributions, microstructures and linkage pathways. Several polyurethane foams have been prepared by the prepolymer method, blending conventional synthetic polyols with the new bio-polyols from soybean and linseed oils without using organic solvents. The compatibility of such bio-polyols with commercial polyols and diisocyanates is demonstrated. The influence of the bio-polyols on the foam morphology (cellular structure, interconnectivity), density, mechanical and thermal properties has been studied. Moreover, bio-based WPUs have been synthesized by a well-established processing technology. In this synthesis, a portion of the commercial polyols is substituted by the new bio-polyols, and the properties of the coatings on leather substrates have been evaluated to determine coating hardness, abrasion resistance, impact resistance, gloss, chemical resistance, flammability, durability, and adhesive strength.Keywords: bio-polyols, polyurethane foams, solvent free synthesis, waterborne-polyurethanes
Procedia PDF Downloads 132