Search results for: permanent magnet machines
Paper Count: 1274

224 Finding a Redefinition of the Relationship between Rural and Urban Knowledge

Authors: Bianca Maria Rulli, Lenny Valentino Schiaretti

Abstract:

The considerable recent urbanization has increasingly sharpened environmental and social problems all over the world. In recent years, many answers to the alarming trends in modern cities have emerged: a drastic reduction in the rate of growth is becoming essential for future generations, and small-scale economies are considered more adaptive and sustainable. According to the concept of degrowth, cities should consider moving beyond the centralization of urban living by redefining the relationship between rural and urban knowledge; growing food in cities fundamentally contributes to the increase of social and ecological resilience. Through an innovative approach, this research combines the benefits of urban agriculture (increase of biological diversity, shorter and thus more efficient supply chains, food security) with temporary land use; together they stimulate collaborative practices that satisfy the changing needs of communities and stakeholders. The concept proposes a coherent strategy to create a sustainable development of urban spaces, introducing a productive green network to link specific areas in the city. By shifting the current relationship between architecture and landscape, the former process of ground consumption is deeply revised. Temporary modules can be used as concrete tools to create temporary areas of innovation, transforming vacant or marginal spaces into potential laboratories for the development of the city. The only permanent ground traces, such as foundations, are minimized in order to allow future land re-use. The aim is to describe a new mindset regarding the quality of space in the metropolis which allows, in a completely flexible way, bringing the green and urban farming back into the cities. The wide possibilities of the research are analyzed in two different case studies. The first is a regeneration/connection project designed for social housing; the second concerns the use of temporary modules to meet the potential needs of social structures. The intention of the productive green network is to link the different vacant spaces to each other as well as to the entire urban fabric. This also generates a potential improvement of the current situation of underprivileged and disadvantaged persons.

Keywords: degrowth, green network, land use, temporary building, urban farming

Procedia PDF Downloads 482
223 Catalytic Decomposition of Formic Acid into H₂/CO₂ Gas: A Distinct Approach

Authors: Ayman Hijazi, Witold Kwapinski, J. J. Leahy

Abstract:

Finding a sustainable alternative energy to fossil fuel is an urgent need as various environmental challenges arise in the world. Therefore, formic acid (FA) decomposition has been an attractive field that lies at the center of the biomass platform, comprising a potential pool of hydrogen energy that stands as a distinct energy vector. Liquid FA features a considerable volumetric energy density of 6.4 MJ/L and a specific energy density of 5.3 MJ/kg, which places it in a prime position as an energy source for transportation infrastructure. Additionally, the increasing research interest in FA decomposition is driven by the need for in-situ H₂ production, which plays a key role in the hydrogenation reactions of biomass into higher-value components. It is reported elsewhere in the literature that catalytic decomposition of FA is usually performed in poorly designed setups using simple glassware under magnetic stirring, thus demanding further energy investment to retain the used catalyst. Our work suggests an approach that integrates designing a distinct catalyst featuring magnetic properties with a robust setup that minimizes experimental and measurement discrepancies. One of the most prominent active species for the dehydrogenation/hydrogenation of biomass compounds is palladium. Accordingly, we investigate the potential of engrafting palladium metal onto functionalized magnetic nanoparticles as a heterogeneous catalyst to favor the production of CO-free H₂ gas from FA. Using an ordinary magnet to collect the spent catalyst renders core-shell magnetic nanoparticles the backbone of the process. Catalytic experiments were performed in a jacketed batch reactor equipped with an overhead stirrer under an inert medium. Through a distinct approach, FA is charged into the reactor via a high-pressure positive displacement pump at steady-state conditions. The produced gas (H₂+CO₂) was measured by connecting the gas outlet to a measuring system based on the amount of displaced water. The uniqueness of this work lies in designing a very responsive catalyst, pumping a consistent amount of FA into a sealed reactor running at steady-state mild temperatures, continuous gas measurement, and collecting the used catalyst without the need for centrifugation. Catalyst characterization using TEM, XRD, SEM, and a CHN elemental analyzer provided us with details of catalyst preparation and facilitated new venues to alter the nanostructure of the catalyst framework. Consequently, the introduction of amine groups has led to appreciable improvements in terms of dispersion of the doped metals and eventually attaining nearly complete conversion (100%) of FA after 7 hours. The relative importance of the process parameters such as temperature (35-85°C), stirring speed (150-450 rpm), catalyst loading (50-200 mg), and Pd doping ratio (0.75-1.80 wt.%) on gas yield was assessed by a Taguchi design-of-experiment based model. Experimental results showed that operating at a lower temperature range (35-50°C) yielded more gas, while the catalyst loading and Pd doping wt.% were found to be the most significant factors, with P-values of 0.026 and 0.031, respectively.
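
As a rough illustration of the Taguchi-style screening mentioned above, the sketch below computes main effects (mean response per factor level) from a small orthogonal-array-like table. The factor names follow the abstract, but the level combinations and gas yields are invented placeholders, not the authors' data.

```python
# Illustrative sketch only: a Taguchi-style main-effects analysis on
# hypothetical gas-yield data. Factor names follow the abstract; the level
# values and yields below are invented placeholders, not the authors' results.
import pandas as pd

# Hypothetical L9-style experiment table (3 levels per factor, 9 runs).
runs = pd.DataFrame({
    "temperature_C": [35, 35, 35, 60, 60, 60, 85, 85, 85],
    "stirring_rpm":  [150, 300, 450, 150, 300, 450, 150, 300, 450],
    "catalyst_mg":   [50, 125, 200, 125, 200, 50, 200, 50, 125],
    "pd_wt_percent": [0.75, 1.25, 1.80, 1.80, 0.75, 1.25, 1.25, 1.80, 0.75],
    "gas_yield_mL":  [420, 510, 580, 450, 560, 380, 400, 520, 340],  # placeholders
})

# Main effect of each factor = mean response at each of its levels.
for factor in ["temperature_C", "stirring_rpm", "catalyst_mg", "pd_wt_percent"]:
    effect = runs.groupby(factor)["gas_yield_mL"].mean()
    print(f"\nMain effect of {factor}:")
    print(effect)
    # A larger spread between level means flags a more influential factor,
    # which is the kind of ranking the reported P-values express.
    print("range of level means:", effect.max() - effect.min())
```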

Keywords: formic acid decomposition, green catalysis, hydrogen, mesoporous silica, process optimization, nanoparticles

Procedia PDF Downloads 22
222 Adaption of the Design Thinking Method for Production Planning in the Meat Industry Using Machine Learning Algorithms

Authors: Alica Höpken, Hergen Pargmann

Abstract:

The resource-efficient planning of the complex production planning processes in the meat industry and the reduction of food waste are a permanent challenge. The complexity of the production planning process occurs in every part of the supply chain, from agriculture to the end consumer, and arises from long and uncertain planning phases. Uncertainties such as stochastic yields, fluctuations in demand, and resource variability are part of this process. In the meat industry, waste mainly relates to incorrect storage, technical causes in production, or overproduction. The high amount of food waste along the complex supply chain in the meat industry has so far not been reduced by simple solutions. Therefore, resource-efficient production planning by conventional methods is currently only partially feasible. The realization of intelligent, automated production planning is possible in principle through the application of machine learning algorithms, such as those of reinforcement learning. By applying the adapted design thinking method, machine learning methods (especially reinforcement learning algorithms) are used for the complex production planning process in the meat industry; the method thereby concretizes the approach for this application area. A resource-efficient production planning process is made available by adapting the design thinking method. In addition, the complex processes can be planned efficiently using this method, since the standardized approach offers new possibilities for dealing with the complexity and the high time consumption. It represents a tool to support efficient production planning in the meat industry. This paper shows an adaption of the design thinking method to apply reinforcement learning for a resource-efficient production planning process in the meat industry. Subsequently, the steps that are necessary to introduce machine learning algorithms into the production planning of the food industry are determined. This is achieved based on a case study which is part of the research project "REIF - Resource Efficient, Economic and Intelligent Food Chain", supported by the German Federal Ministry for Economic Affairs and Climate Action and the German Aerospace Center. Through this structured approach, significantly better planning results are achieved, which would be too complex or very time consuming to obtain using conventional methods.
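
To make the kind of reinforcement learning formulation referred to above more concrete, the toy sketch below trains a tabular Q-learning agent to choose a daily production quantity under stochastic demand. The states, actions, rewards, and parameter values are invented for illustration only; they are not the REIF project's actual model.

```python
# Toy sketch only: tabular Q-learning for a one-product production planning
# problem with stochastic demand. States, actions, and rewards are invented
# for illustration and are not the REIF project's formulation.
import numpy as np

rng = np.random.default_rng(0)
MAX_STOCK = 10                     # inventory levels 0..10 (states)
ACTIONS = np.arange(0, 6)          # produce 0..5 units per day
alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = np.zeros((MAX_STOCK + 1, len(ACTIONS)))

def step(stock, produce):
    """One simulated day: produce, observe demand, compute reward."""
    demand = rng.integers(0, 6)                    # stochastic demand, 0..5 units
    available = min(stock + produce, MAX_STOCK)    # capacity-limited inventory
    sold = min(available, demand)
    leftover = available - sold                    # unsold stock carried over
    reward = 3.0 * sold - 1.0 * produce - 2.0 * leftover  # margin - cost - spoilage penalty
    return leftover, reward                        # next day's opening stock

stock = 0
for episode in range(20000):
    # epsilon-greedy action selection
    if rng.random() < epsilon:
        a = int(rng.integers(len(ACTIONS)))
    else:
        a = int(np.argmax(Q[stock]))
    next_stock, reward = step(stock, ACTIONS[a])
    # standard Q-learning update
    Q[stock, a] += alpha * (reward + gamma * Q[next_stock].max() - Q[stock, a])
    stock = next_stock

print("Learned production quantity per inventory level:",
      [int(ACTIONS[np.argmax(Q[s])]) for s in range(MAX_STOCK + 1)])
```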

Keywords: change management, design thinking method, machine learning, meat industry, reinforcement learning, resource-efficient production planning

Procedia PDF Downloads 106
221 Simulation Based Analysis of Gear Dynamic Behavior in Presence of Multiple Cracks

Authors: Ahmed Saeed, Sadok Sassi, Mohammad Roshun

Abstract:

Gears are important components with a vital role in many rotating machines. One of the common causes of gear failure is tooth fatigue crack; however, its early detection is still a challenging task. The objective of this study is to develop a numerical model that simulates the effect of tooth cracks on the resulting gear vibrations and consequently permits early fault detection. In contrast to other published papers, this work incorporates the possibility of multiple simultaneous cracks with different depths. As cracks significantly alter the stiffness of the tooth, finite element software is used to determine the stiffness variation with respect to the angular position, for different combinations of crack orientation and depth. A simplified six-degree-of-freedom nonlinear lumped parameter model of a one-stage spur gear system is proposed to study the vibration with and without cracks. The model developed for calculating the stiffness with the crack permitted updating the physical parameters of the second-order equations of motion describing the vibration of the gearbox. The vibration simulation results of the gearbox were obtained using Simulink/Matlab. The effect of one crack at different depth levels was studied thoroughly. The change in the mesh stiffness and the vibration response were found to be consistent with previously published works. In addition, various statistical time-domain parameters were considered. They showed different degrees of sensitivity toward the crack depth. Multiple cracks were also introduced at different locations, and the vibration response along with the statistical parameters was obtained again for a general case of degradation (increase in crack depth, crack number and crack locations). It was found that although some parameters increase in value as the deterioration level increases, they show almost no change or even decrease when the number of cracks increases. Therefore, the use of any statistical parameter could be misleading if not considered in an appropriate way.
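
The paper's model is a six-degree-of-freedom Simulink/Matlab model; the sketch below is only a reduced, hedged illustration of the same idea, a two-degree-of-freedom torsional spur gear pair in Python in which a cracked tooth is mimicked as a periodic local drop in the time-varying mesh stiffness. All parameter values are placeholders, not taken from the paper.

```python
# Minimal sketch (not the paper's model): a reduced 2-DOF torsional spur gear
# pair with time-varying mesh stiffness. A tooth crack is mimicked as a local
# stiffness drop once per pinion revolution. All numbers are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

J1, J2 = 4e-4, 8e-4        # inertias of pinion and gear [kg m^2]
r1, r2 = 0.03, 0.06        # base-circle radii [m]
c = 500.0                  # mesh damping [N s/m]
k_mean, k_var = 2e8, 4e7   # mean and fluctuating mesh stiffness [N/m]
z1, omega1 = 20, 100.0     # pinion tooth count and speed [rad/s]
T_in, T_out = 20.0, 40.0   # input / output torques [N m]
crack_loss = 0.3           # 30% stiffness reduction while the cracked tooth meshes

def mesh_stiffness(t):
    mesh_angle = (z1 * omega1 * t) % (2 * np.pi)        # tooth-mesh cycle
    k = k_mean + k_var * np.sign(np.cos(mesh_angle))    # crude square-wave variation
    rev_angle = (omega1 * t) % (2 * np.pi)
    if rev_angle < 2 * np.pi / z1:                      # cracked tooth engaged
        k *= (1.0 - crack_loss)
    return k

def rhs(t, y):
    th1, w1, th2, w2 = y
    delta = r1 * th1 - r2 * th2          # dynamic transmission error along line of action
    ddelta = r1 * w1 - r2 * w2
    F = mesh_stiffness(t) * delta + c * ddelta
    return [w1, (T_in - F * r1) / J1, w2, (F * r2 - T_out) / J2]

sol = solve_ivp(rhs, (0.0, 0.2), [0, 0, 0, 0], max_step=1e-5)
dte = r1 * sol.y[0] - r2 * sol.y[2]
# RMS is one of the statistical time-domain parameters a crack study would track.
print("RMS of dynamic transmission error:", np.sqrt(np.mean(dte**2)))
```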

Keywords: spur gear, cracked tooth, numerical simulation, time-domain parameters

Procedia PDF Downloads 248
220 Machine Learning in Agriculture: A Brief Review

Authors: Aishi Kundu, Elhan Raza

Abstract:

"Necessity is the mother of invention" - Rapid increase in the global human population has directed the agricultural domain toward machine learning. The basic need of human beings is considered to be food which can be satisfied through farming. Farming is one of the major revenue generators for the Indian economy. Agriculture is not only considered a source of employment but also fulfils humans’ basic needs. So, agriculture is considered to be the source of employment and a pillar of the economy in developing countries like India. This paper provides a brief review of the progress made in implementing Machine Learning in the agricultural sector. Accurate predictions are necessary at the right time to boost production and to aid the timely and systematic distribution of agricultural commodities to make their availability in the market faster and more effective. This paper includes a thorough analysis of various machine learning algorithms applied in different aspects of agriculture (crop management, soil management, water management, yield tracking, livestock management, etc.).Due to climate changes, crop production is affected. Machine learning can analyse the changing patterns and come up with a suitable approach to minimize loss and maximize yield. Machine Learning algorithms/ models (regression, support vector machines, bayesian models, artificial neural networks, decision trees, etc.) are used in smart agriculture to analyze and predict specific outcomes which can be vital in increasing the productivity of the Agricultural Food Industry. It is to demonstrate vividly agricultural works under machine learning to sensor data. Machine Learning is the ongoing technology benefitting farmers to improve gains in agriculture and minimize losses. This paper discusses how the irrigation and farming management systems evolve in real-time efficiently. Artificial Intelligence (AI) enabled programs to emerge with rich apprehension for the support of farmers with an immense examination of data.

Keywords: machine learning, artificial intelligence, crop management, precision farming, smart farming, pre-harvesting, harvesting, post-harvesting

Procedia PDF Downloads 80
219 Role of Imaging in Alzheimer's Disease Trials: Impact on Trial Planning, Patient Recruitment and Retention

Authors: Kohkan Shamsi

Abstract:

Background: MRI and PET are now extensively utilized in Alzheimer's disease (AD) trials for patient eligibility, efficacy assessment, and safety evaluations, but including imaging in AD trials impacts the site selection process, patient recruitment, and patient retention. Methods: PET/MRI are performed at baseline and at multiple follow-up timepoints. This requires prospective site imaging qualification, evaluation of phantom data, training, and continuous monitoring of machines for acquisition of standardized and consistent data. It also requires prospective patient/caregiver training, as patients must go to multiple facilities for imaging examinations. We will share our experience from one of the largest AD programs. Lessons learned: Many neurological diseases have a similar presentation to AD or could confound the assessment of drug therapy. The inclusion of wrong patients has ethical and legal implications, and their data could be excluded from the analysis. The centralized eligibility evaluation read process will be discussed. Amyloid-related imaging abnormalities (ARIA) were observed in amyloid-β trials, and the FDA recommended regular monitoring of ARIA. Our experience in ARIA evaluations in a large phase III study at more than 350 sites will be presented. Efficacy evaluation: MRI is utilized to evaluate various brain volumes. FDG PET or amyloid PET agents have been used in AD trials. We will share our experience with site and central independent reads. Imaging logistics issues that need to be handled in the planning phase will also be discussed, as they can impact patient compliance, thereby increasing missing data and affecting study results. Conclusion: Imaging must be prospectively planned, including standardizing imaging methodologies, the site selection process and the selection of assessment criteria. Training should be transparently conducted and documented. Prospective patient/caregiver awareness of imaging requirements is essential for patient compliance and reduction of missing imaging data.

Keywords: Alzheimer's disease, ARIA, MRI, PET, patient recruitment, retention

Procedia PDF Downloads 100
218 Principal Component Analysis Combined Machine Learning Techniques on Pharmaceutical Samples by Laser Induced Breakdown Spectroscopy

Authors: Kemal Efe Eseller, Göktuğ Yazici

Abstract:

Laser-induced breakdown spectroscopy (LIBS) is a rapid optical atomic emission spectroscopy technique used for material identification and analysis, with the advantages of in-situ analysis, elimination of intensive sample preparation, and micro-destructive properties for the material to be tested. LIBS delivers short laser pulses onto the material in order to create a plasma by exciting the material above a certain threshold. The plasma characteristics, which consist of wavelength values and intensity amplitudes, depend on the material and the experimental environment. In the present work, spectrum profiles of medicine samples were obtained via LIBS. The medicine datasets include two different concentrations for each of two paracetamol-based medicines, namely Aferin and Parafon. The spectrum data of the samples were preprocessed by filling outliers based on quartiles, smoothing the spectra to eliminate noise, and normalizing both the wavelength and intensity axes. Statistical information was obtained, and principal component analysis (PCA) was applied to both the preprocessed and raw datasets. The machine learning models were set up with two different train-test splits, 70% training - 30% test and 80% training - 20% test. Cross-validation was preferred to protect the models against overfitting, since the sample amount is small. The machine learning results for the preprocessed and raw datasets were compared for both splits. This is the first time that all supervised machine learning classification algorithms (decision trees, discriminant analysis, naïve Bayes, support vector machines (SVM), k-NN (k-nearest neighbor), ensemble learning and neural network algorithms) have been applied to LIBS data of paracetamol-based pharmaceutical samples at different concentrations, on both preprocessed and raw datasets, in order to observe the effect of preprocessing.
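
A minimal sketch of the PCA-plus-classifier workflow described above is given below. Synthetic arrays stand in for the LIBS spectra, which are not available here, and the feature counts, class labels, and number of PCA components are assumptions rather than the study's settings.

```python
# Sketch of the PCA + classifier workflow described above, run on synthetic
# spectra since the LIBS measurements themselves are not available here.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_samples, n_wavelengths = 80, 500                 # small sample size, as in the study
X = rng.normal(size=(n_samples, n_wavelengths))    # stand-in for LIBS spectra
y = rng.integers(0, 4, size=n_samples)             # 2 medicines x 2 concentrations

classifiers = {
    "SVM": SVC(kernel="rbf"),
    "k-NN": KNeighborsClassifier(n_neighbors=3),
    "naive Bayes": GaussianNB(),
    "decision tree": DecisionTreeClassifier(),
}

for name, clf in classifiers.items():
    # Scale, reduce dimensionality with PCA, then classify; 5-fold CV guards
    # against overfitting on a small dataset.
    pipe = make_pipeline(StandardScaler(), PCA(n_components=10), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.2f}")
```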

Keywords: machine learning, laser-induced breakdown spectroscopy, medicines, principal component analysis, preprocessing

Procedia PDF Downloads 69
217 Residential Inclusion Strategies for Homeless Immigrants: The Case of Spain

Authors: Raluca Cosmina Budian

Abstract:

The homeless population in Spain, particularly among immigrants, has been a persistent and multifaceted issue. The government has implemented various housing policies over the years to address homelessness, ranging from shelter programs to initiatives promoting permanent housing solutions. However, understanding the effectiveness of these policies requires insight from the very individuals and professionals directly impacted by or involved in their execution. This research sheds light on the national strategies (the 2015-2020 Comprehensive National Strategy for the Homeless and the National Strategy to Combat Homelessness in Spain 2023-2030) aimed at tackling homelessness in Spain, with a focus on the evolving landscape of housing policies and their relationship with the homeless population. We investigate how these strategies have transformed over time and their impact on the inclusion of this vulnerable group. Furthermore, we explore the perspectives of homeless immigrants, distinguishing between those with an extended residency in Spain and those who have arrived more recently (less than two years ago), and between women and men. Additionally, we incorporate insights from 13 interviews with professionals dedicated to serving the homeless population. These insights offer a deeper understanding of the intricacies of current homelessness service provision. Our findings reveal the complex dynamics of providing services to homeless individuals and the importance of aligning these efforts with the broader national strategies for tackling homelessness. Drawing on a comprehensive dataset, we offer a nuanced view of the challenges and successes in implementing inclusive housing policies in the Spanish context. Our research highlights the importance of collaboration between policymakers, service providers and advocates to create a cohesive and effective approach. By fostering such collaboration, we aim to contribute to a more inclusive and comprehensive strategy to address homelessness in Spain, together with possible affordable housing proposals for this vulnerable group. The study not only underscores the importance of tailored approaches but also contributes to the broader discourse on the ability of housing policies to address homelessness and foster integration. We suggest that a more comprehensive approach, considering the unique needs of immigrants and working in collaboration with professionals in the field, is essential for the development of effective strategies to combat homelessness and ensure the right to adequate housing for all.

Keywords: housing, homeless, public policy, Spain

Procedia PDF Downloads 48
216 Factors Associated with Involvement in Physical Activity among Children (Aged 6-18 Years) Training at Excel Soccer Academy in Uganda

Authors: Syrus Zimaze, George Nsimbe, Valley Mugwanya, Matiya Lule, Edgar Watson, Patrick Gwayambadde

Abstract:

Physical inactivity is a growing global epidemic, recognised as a major public health challenge. Globally, alarming rates of cardiovascular disease and obesity are reported among children, with limited interventions. In Sub-Saharan Africa, there is limited information about involvement in physical activity, especially among children aged 6 to 18 years. The aim of this study was to explore factors associated with involvement in physical activity among children in Uganda. Methods: We included all parents with children aged 6 to 18 years training with the Excel Soccer Academy between January 2017 and June 2018. Physical activity was defined as time spent participating in routine soccer training at the academy for more than 30 days. Each child's attendance was recorded, and parents provided demographic and socioeconomic data. Data on predictors of involvement in physical activity were collected using a standardized questionnaire. Descriptive statistics and frequencies were used. Binary logistic regression was used at the multivariable level, adjusting for education, residence, means of transport and access to information technology. Results: Overall, 356 parents were interviewed; boys (318, 89.3%) engaged in physical activity more than girls. The median age was 13 years (IQR: 6-18) among children and 42 years (IQR: 37-49) among parents. The median time spent at the Excel Soccer Academy was 13.4 months (IQR: 4.6-35.7). The majority of the children attended formal education (p < 0.001). Factors associated with involvement in physical activity included: owning a permanent house compared to a rented house (odds ratio [OR]: 2.84, 95% CI: 2.09-3.86, p < 0.0001), owning a car compared to using public transport (OR: 5.64, CI: 4.80-6.63, p < 0.0001), a parent having received formal education compared to non-formal education (OR: 2.93, CI: 2.47-3.46, p < 0.0001) and daily access to information technology (OR: 0.40, CI: 0.25-0.66, p < 0.001). Parents' age and gender were not associated with involvement in physical activity. Conclusions: Socioeconomic factors were positively associated with involvement in physical activity, with boys participating more than girls in soccer activities. More interventions are required geared towards increasing girls' participation in physical activity and targeting children from less privileged homes.
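
The sketch below shows, on synthetic data rather than the study's dataset, how adjusted odds ratios like those reported above are obtained from a multivariable binary logistic regression; the variable names and effect sizes used to simulate the outcome are illustrative assumptions.

```python
# Sketch of a multivariable binary logistic regression yielding adjusted odds
# ratios, fitted on synthetic data (not the study's dataset).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 356
df = pd.DataFrame({
    "permanent_house":  rng.integers(0, 2, n),
    "owns_car":         rng.integers(0, 2, n),
    "formal_education": rng.integers(0, 2, n),
    "daily_it_access":  rng.integers(0, 2, n),
})
# Synthetic outcome loosely reflecting the reported direction of effects.
logit = (-1.0 + 1.0 * df["permanent_house"] + 1.7 * df["owns_car"]
         + 1.1 * df["formal_education"] - 0.9 * df["daily_it_access"])
df["active"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["permanent_house", "owns_car",
                        "formal_education", "daily_it_access"]])
model = sm.Logit(df["active"], X).fit(disp=0)

# Exponentiated coefficients are the adjusted odds ratios with 95% CIs.
odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
    "p": model.pvalues,
})
print(odds_ratios.round(3))
```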

Keywords: physical activity, Sub-Saharan Africa, social economic factors, children

Procedia PDF Downloads 135
215 A Geo DataBase to Investigate the Maximum Distance Error in Quality of Life Studies

Authors: Paolino Di Felice

Abstract:

The background and significance of this study come from papers that have already appeared in the literature, which measured the impact of public services (e.g., hospitals, schools, ...) on citizens' needs satisfaction (one of the dimensions of QOL studies) by calculating the distance between the place where citizens live and the locations of the services on the territory. Those studies assume that a citizen's dwelling coincides with the centroid of the polygon that represents the boundary of the administrative district, within the city, they belong to. Such an assumption "introduces a maximum measurement error equal to the greatest distance between the centroid and the border of the administrative district." The case study reported in this abstract investigates the implications of adopting such an approach at geographical scales larger than the urban one, namely at the three levels of nesting of the Italian administrative units: the (20) regions, the (110) provinces, and the 8,094 municipalities. To carry out this study, it must be decided: a) how to store the huge amount of (spatial and descriptive) input data and b) how to process them. The latter aspect involves: b.1) the design of algorithms to investigate the geometry of the boundaries of the Italian administrative units; b.2) their coding in a programming language; b.3) their execution and, eventually, b.4) archiving the results on permanent storage. The IT solution we implemented is centered around a (PostgreSQL/PostGIS) Geo DataBase structured in terms of three tables that fit the hierarchy of nesting of the Italian administrative units: municipality(id, name, provinceId, istatCode, regionId, geometry), province(id, name, regionId, geometry), region(id, name, geometry). The adoption of DBMS technology allows us to implement steps "a)" and "b)" easily. In particular, step "b)" is simplified dramatically by calling spatial operators and spatial built-in user-defined functions within SQL queries against the Geo DB. The major findings from our experiments can be summarized as follows. The approximation that, on average, results from identifying the residence of the citizens with the centroid of the administrative unit of reference is of a few kilometers (4.9 km) at the municipality level, while it becomes conspicuous at the other two levels (28.9 km and 36.1 km, respectively). Therefore, studies such as those mentioned above can be extended up to the municipal level without affecting the correctness of the interpretation of the results, but not further. The IT framework implemented to carry out the experiments can be replicated for studies referring to the territory of other countries all over the world.
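
The snippet below is a minimal sketch of the kind of query such a Geo DataBase supports: for each municipality, the greatest distance between its centroid and its boundary, computed with PostGIS spatial operators. The table and column names follow the abstract; the connection details and the assumption of a metric (projected) coordinate system are mine, not the paper's.

```python
# Minimal sketch of a Geo DB query: for each municipality, the greatest
# distance between its centroid and its boundary (the maximum measurement
# error discussed above). Table/column names follow the abstract; the DSN
# and the assumption of a metric (projected) CRS are illustrative.
import psycopg2

QUERY = """
SELECT name,
       ST_MaxDistance(ST_Centroid(geometry), ST_Boundary(geometry)) / 1000.0
           AS max_error_km
FROM municipality
ORDER BY max_error_km DESC
LIMIT 10;
"""

conn = psycopg2.connect(dbname="italy_admin", user="postgres")  # hypothetical DSN
try:
    with conn.cursor() as cur:
        cur.execute(QUERY)
        for name, max_error_km in cur.fetchall():
            print(f"{name}: {max_error_km:.1f} km")
finally:
    conn.close()
```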

Keywords: quality of life, distance measurement error, Italian administrative units, spatial database

Procedia PDF Downloads 348
214 Kinetic Energy Recovery System Using Spring

Authors: Mayuresh Thombre, Prajyot Borkar, Mangirish Bhobe

Abstract:

New advancements in technology and the never-satisfied demands of civilization are putting huge pressure on natural fuel resources, and these resources are under constant threat to their sustainability. To get the best out of an automobile, the optimum balance between performance and fuel economy is important. In the present state of the art, only one of these two aspects is typically prioritized during design and development, at the expense of the other, since an increase in fuel economy leads to a decrement in performance and vice versa. In-depth observation of vehicle dynamics shows that a large amount of energy is lost during braking and, likewise, a large amount of fuel is consumed to reclaim the initial state; this leads to lower fuel efficiency for the same performance. Current use of Kinetic Energy Recovery Systems is limited to sports vehicles only because of the higher cost of such systems. They are also temporary in nature, as power can be extracted only during a small time duration, and the use of superior parts leads to high cost, which results in a concentration on performance only while neglecting fuel economy. In this paper, a Kinetic Energy Recovery System that stores power during braking and then uses it while accelerating is discussed. The major storage element in this system is a flat spiral spring that stores energy by compression and torsion. The use of a spring ensures the permanent storage of energy until it is used by the driver, unlike present mechanical regeneration systems in which the stored energy decreases with time and is eventually lost. A combination of internal gears and spur gears is used in order to make the energy release uniform, which leads to safe usage. The system can be used to improve fuel efficiency by assisting in overcoming the vehicle's inertia after braking or by providing instant acceleration whenever required by the driver. The performance characteristics of the system, including response time, mechanical efficiency and overall increase in efficiency, are demonstrated. This technology makes the KERS (Kinetic Energy Recovery System) more flexible and economical, allowing specific applications while at the same time increasing the time frame and ease of usage.
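
As a back-of-envelope illustration of the energy balance behind such a system, the sketch below compares the kinetic energy released in a braking event with the elastic energy a torsion/spiral spring can hold, E = 1/2 k θ². Every number is an illustrative placeholder, not a figure from the paper.

```python
# Back-of-envelope sketch: how much braking energy a flat spiral (torsion)
# spring could absorb. All parameter values are illustrative placeholders,
# not figures from the paper.
m = 1200.0          # vehicle mass [kg]
v1, v2 = 16.7, 5.6  # speeds before/after braking [m/s] (~60 -> 20 km/h)
braking_energy = 0.5 * m * (v1**2 - v2**2)          # kinetic energy released [J]

k_spring = 50.0     # torsional stiffness of the spiral spring [N m / rad]
theta_max = 15.0    # maximum allowable winding angle [rad]
spring_capacity = 0.5 * k_spring * theta_max**2     # E = 1/2 * k * theta^2 [J]

print(f"Energy released in braking: {braking_energy / 1000:.1f} kJ")
print(f"Capacity of one spring:     {spring_capacity / 1000:.1f} kJ")
print(f"Fraction recoverable by one spring: {spring_capacity / braking_energy:.0%}")
```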

Keywords: electric control unit, energy, mechanical KERS, planetary gear system, power, smart braking, spiral spring

Procedia PDF Downloads 180
213 The Practice of Low Flow Anesthesia to Reduce Carbon Footprints Sustainability Project

Authors: Ahmed Eid, Amita Gupta

Abstract:

Background: Medical gases are estimated to contribute 5% of the carbon footprint produced by hospitals; desflurane has the largest impact, but the impact of all volatile agents increases significantly when they are used with an N₂O admixture. Under the Climate Change Act 2008, we must reduce our carbon emissions by 80% of the 1990 baseline by 2050; NHS carbon emissions have been reduced by 18.5% (2007-2017). The NHS Long Term Plan has outlined measures to achieve this objective, including a 2% reduction by transforming anaesthetic practices. Fresh gas flow (FGF) is an important variable that determines the utilization of inhalational agents and can be tightly controlled by the anaesthetist. Aims and objectives: environmental safety; identification of areas of high N₂O and anaesthetic agent use across the St Helier operating theatres; and consideration of improvements to current practice. Methods: Data were collected from St Helier operating theatres and retrieved daily from Care Station 650 anaesthetic machines. 60 cases were included in the sample. The collected data comprised the average flow rate, the amount and type of agent used, the duration and type of surgery, and the total amounts of air, O₂ and N₂O used. The AAGBI anaesthesia impact calculator was used to identify the amount of CO₂ produced and the cost per hour for every patient. Reminder emails to staff emphasized the significance of low-flow anaesthesia, departmental meeting presentations aimed at heightening awareness of LFA, and the distribution of AAGBI calculator QR codes in all theatres enabled the calculation of volatile anaesthetic consumption and CO₂e after each case, facilitating informed environmental impact assessment. Results: A significant reduction in the flow rates used was observed in the second sample; flow rates between 0-1 L were used in 60% of cases, which means a large reduction in the consumption of volatile anaesthetics and in CO₂e. By using LFA we can save money, but most importantly we can make our practice greener and help protect the planet.
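
To illustrate why lowering fresh gas flow cuts CO₂e, the sketch below estimates liquid volatile consumption with the common "3 × FGF × vol%" rule of thumb and converts it to CO₂ equivalents. The density and GWP100 figures are approximate literature values, not taken from this project, and the AAGBI impact calculator should be treated as the reference tool.

```python
# Illustrative sketch only: estimating volatile agent consumption and CO2e
# for one case. The "3 x FGF x vol%" rule of thumb and the density and GWP100
# figures below are approximate literature values (not from this project);
# the AAGBI impact calculator remains the reference.
AGENTS = {
    # agent: (liquid density g/mL, approx GWP100) - approximate values, verify before use
    "sevoflurane": (1.52, 130),
    "isoflurane":  (1.50, 510),
    "desflurane":  (1.47, 2540),
}

def case_co2e_kg(agent, fgf_l_min, vol_percent, duration_h):
    """Rough CO2-equivalent (kg) of the volatile agent used in one case."""
    density, gwp = AGENTS[agent]
    liquid_ml_per_h = 3.0 * fgf_l_min * vol_percent     # rule-of-thumb consumption
    mass_kg = liquid_ml_per_h * duration_h * density / 1000.0
    return mass_kg * gwp

# Same 2-hour case at typical maintenance settings, high flow vs low flow:
print("sevo, 2 L/min FGF:  ", round(case_co2e_kg("sevoflurane", 2.0, 2.0, 2.0), 1), "kg CO2e")
print("sevo, 0.5 L/min FGF:", round(case_co2e_kg("sevoflurane", 0.5, 2.0, 2.0), 1), "kg CO2e")
print("des,  2 L/min FGF:  ", round(case_co2e_kg("desflurane", 2.0, 6.0, 2.0), 1), "kg CO2e")
```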

Keywords: low flow anesthesia, sustainability project, N₂O, CO₂e

Procedia PDF Downloads 40
212 Influence of Machine Resistance Training on Selected Strength Variables among Two Categories of Body Composition

Authors: Hassan Almoslim

Abstract:

Background: Machine resistance training is exercise that uses equipment as the load to strengthen and condition the musculoskeletal system and improve muscle tone. Machine resistance training is easy to use, allows the individual to train with heavier weights without assistance, and is useful for beginners, elderly populations and for targeting specific muscle groups. Purpose: The purpose of this study was to examine the impact of nine weeks of machine resistance training on maximum strength among lean and normal weight male college students. Method: Thirty-six male college students aged between 19 and 21 years from King Fahd University of Petroleum & Minerals participated in the study. The subjects were divided into two equal groups, a Lean Group (LG, n = 18) and a Normal Weight Group (NWG, n = 18). Subjects whose body mass index (BMI) was less than 18.5 kg/m² were considered lean, and those between 18.5 and 24.9 kg/m² normal weight. Both groups performed machine resistance training for nine weeks, twice per week, for 40 min per training session. The strength measurements, chest press, leg press and abdomen exercises, were performed before and after the training period. The 1RM test was used to determine the maximum strength of all subjects. The training program consisted of several resistance machines such as leg press, abdomen, chest press, pulldown, seated row, calf raises, leg extension, leg curls and back extension. The data were analyzed using the independent t-test (to compare mean differences) and the paired t-test. The level of significance was set at 0.05. Results: No change (P > 0.05) was observed in any body composition variable between groups after training. In the chest press, the NWG recorded a significantly greater mean difference than the LG (19.33 ± 7.78 vs. 13.88 ± 5.77 kg, respectively, P < 0.023). In the leg press and abdomen exercises, both groups showed similar mean differences (P > 0.05). When the post-test was compared with the pre-test, the NWG showed significant increases in the chest press by 47% (from 41.16 ± 12.41 to 60.49 ± 11.58 kg, P < 0.001), the abdomen by 34% (from 45.46 ± 6.97 to 61.06 ± 6.45 kg, P < 0.001) and the leg press by 23.6% (from 85.27 ± 15.94 to 105.48 ± 21.59 kg, P < 0.001). The LG also showed significant increases, by 42.6% in the chest press (from 32.58 ± 7.36 to 46.47 ± 8.93 kg, P < 0.001), 28.5% in the abdomen (from 38.50 ± 7.84 to 49.50 ± 7.88 kg, P < 0.001) and 30.8% in the leg press (from 70.2 ± 20.57 to 92.01 ± 22.83 kg, P < 0.001). Conclusion: It was concluded that both lean and normal weight male college students can benefit remarkably from a machine resistance training program.
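
The sketch below reproduces the structure of the statistical comparisons (paired t-test within each group, independent t-test between groups) on synthetic pre/post 1RM values rather than the study's data; the means and spreads are placeholders.

```python
# Sketch of the statistical comparisons described above, using synthetic
# pre/post 1RM values (kg) rather than the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 18
# Hypothetical chest-press 1RM before and after training for the two groups.
nwg_pre = rng.normal(41, 12, n);  nwg_post = nwg_pre + rng.normal(19, 8, n)
lg_pre  = rng.normal(33, 7, n);   lg_post  = lg_pre + rng.normal(14, 6, n)

# Paired t-test: did each group improve from pre-test to post-test?
t_nwg, p_nwg = stats.ttest_rel(nwg_post, nwg_pre)
t_lg,  p_lg  = stats.ttest_rel(lg_post, lg_pre)

# Independent t-test: do the groups' mean gains (post - pre) differ?
t_between, p_between = stats.ttest_ind(nwg_post - nwg_pre, lg_post - lg_pre)

print(f"NWG within-group: t={t_nwg:.2f}, p={p_nwg:.4f}")
print(f"LG  within-group: t={t_lg:.2f}, p={p_lg:.4f}")
print(f"Between-group gain difference: t={t_between:.2f}, p={p_between:.4f}")
```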

Keywords: body composition, lean, machine resistance training, normal weight

Procedia PDF Downloads 333
211 Machine That Provides Mineral Fertilizer Equal to the Soil on the Slopes

Authors: Huseyn Nuraddin Qurbanov

Abstract:

The reliable food supply of the population of the republic is one of the main directions of the state's economic policy. Grain growing, which is the basis of agriculture, is important in this area. In the cultivation of cereals on slopes, the application of equal amounts of mineral fertilizers under the soil before sowing is a very important technological process. The low level of technical equipment in this area prevents producers from providing the country with the necessary quality cereals. Experience in the operation of modern technical means has shown that, at present, there is a need to apply an equal amount of fertilizer under the soil on slopes while fully meeting the agro-technical requirements. No fundamental changes have been made to the industrial machines that place fertilizer under the soil, and unequal application of fertilizers under the soil on slopes has remained common. This leads to the destruction of new seedlings and reduced productivity due to frost intolerance during the winter for plants sown in the fall. In specific climatic conditions, there is an optimal fertilization rate for each agricultural product, and the way fertilizers are applied to the soil is one of the conditions that increase their efficiency in the field. As can be seen, the development of a new technical proposal for fertilizing and ploughing slopes in equal amounts, improving the technological and design parameters, and taking into account the physical and mechanical properties of fertilizers is very important. Taking into account the above-mentioned issues, a combined plough was developed in our laboratory. The combined plough carries out the pre-sowing technological operation in the cultivation of cereals, providing a smooth, equal amount of mineral fertilizer under the soil on slopes. Mathematical models of a smooth spreader that evenly distributes fertilizers in the field have been developed. Diagrams and graphs of the distribution over the eight sections of the smooth spreader were constructed for the inclination angles of the slopes. The percentage of uniform distribution and the productivity in the field were determined by practical and theoretical analysis.

Keywords: combined plough, mineral fertilizer, equal sowing, fertilizer norm, grain-crops, sowing fertilizer

Procedia PDF Downloads 117
210 Predicting Wealth Status of Households Using Ensemble Machine Learning Algorithms

Authors: Habtamu Ayenew Asegie

Abstract:

Wealth, as opposed to income or consumption, implies a more stable and permanent status. Due to natural and human-made difficulties, households' economies can be diminished, and their well-being can fall into trouble. Hence, governments and humanitarian agencies devote considerable resources to poverty and malnutrition reduction efforts. One key factor in the effectiveness of such efforts is the accuracy with which low-income or poor populations can be identified. As a result, this study aims to predict a household's wealth status using ensemble machine learning (ML) algorithms. In this study, the design science research methodology (DSRM) is employed, and four ML algorithms, Random Forest (RF), Adaptive Boosting (AdaBoost), Light Gradient Boosted Machine (LightGBM), and Extreme Gradient Boosting (XGBoost), have been used to train models. The Ethiopian Demographic and Health Survey (EDHS) dataset is accessed for this purpose from the Central Statistical Agency (CSA)'s database. Various data pre-processing techniques were employed, and the model training was conducted using scikit-learn Python library functions. Model evaluation was executed using various metrics such as accuracy, precision, recall, F1-score, the area under the receiver operating characteristic curve (AUC-ROC), and subjective evaluations by domain experts. An optimal subset of hyper-parameters for the algorithms was selected through the grid search function for the best prediction. The RF model performed better than the rest of the algorithms by achieving an accuracy of 96.06% and is better suited as a solution model for our purpose. Following RF, the LightGBM, XGBoost, and AdaBoost algorithms have accuracies of 91.53%, 88.44%, and 58.55%, respectively. The findings suggest that features such as 'Age of household head', 'Total children ever born' in a family, 'Main roof material' of their house, the 'Region' they live in, whether a household uses 'Electricity' or not, and the 'Type of toilet facility' of a household are determinant factors and should be a focal point for economic policymakers. The determinant risk factors, extracted rules, and designed artifact achieved 82.28% in the domain experts' evaluation. Overall, the study shows that ML techniques are effective in predicting the wealth status of households.
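
A compact sketch of the Random Forest plus grid search workflow described above is given below, run on a synthetic stand-in for the EDHS data; the feature set, grid values, and scores are illustrative assumptions, not the study's configuration.

```python
# Sketch of the Random Forest + grid search workflow described above, on a
# synthetic stand-in for the EDHS data (grid values are illustrative).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

X, y = make_classification(n_samples=2000, n_features=15, n_informative=8,
                           random_state=0)          # stand-in for survey features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Hyper-parameter grid selected by cross-validated grid search.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10, 20]},
    cv=5, scoring="accuracy",
)
grid.fit(X_tr, y_tr)
best = grid.best_estimator_

y_pred = best.predict(X_te)
y_prob = best.predict_proba(X_te)[:, 1]
print("best params:", grid.best_params_)
print("accuracy: ", accuracy_score(y_te, y_pred))
print("precision:", precision_score(y_te, y_pred))
print("recall:   ", recall_score(y_te, y_pred))
print("F1:       ", f1_score(y_te, y_pred))
print("AUC-ROC:  ", roc_auc_score(y_te, y_prob))
```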

Keywords: ensemble machine learning, households wealth status, predictive model, wealth status prediction

Procedia PDF Downloads 14
209 Detection and Classification Strabismus Using Convolutional Neural Network and Spatial Image Processing

Authors: Anoop T. R., Otman Basir, Robert F. Hess, Eileen E. Birch, Brooke A. Koritala, Reed M. Jost, Becky Luu, David Stager, Ben Thompson

Abstract:

Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage classifies the type of strabismus. The first stage comprises face detection using a Haar cascade, facial landmark estimation, face alignment, aligned face landmark detection, segmentation of the eye region, and detection of strabismus using a VGG-16 convolutional neural network. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using facial landmarks, the eye region is segmented from the aligned face and fed into the VGG-16 CNN model, which has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies the type of strabismus (exotropia, esotropia, and vertical deviation). If stage 1 detects strabismus, the eye region image is fed into stage 2, which starts with the estimation of pupil center coordinates using a Mask R-CNN deep neural network. Then, the distance between the pupil coordinates and the eye landmarks is calculated, along with the angles that the pupil coordinates make with the horizontal and vertical axes. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. This model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The True Positive Rate (TPR) and False Positive Rate (FPR) of the first stage were 94% and 6%, respectively. The classification stage produced a TPR of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviations, respectively. This method also had an FPR of 5.26%, 5.55%, and 0% for esotropia, exotropia, and vertical deviation, respectively. The addition of one more feature related to the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus due to a wide nasal bridge or skin folds on the nasal side of the eyes).
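
The sketch below illustrates only the stage-2 geometry: the distance and angles between an estimated pupil center and an eye landmark. The pixel coordinates are invented, and the Mask R-CNN pupil detector and landmark model themselves are not reproduced here.

```python
# Sketch of the stage-2 geometry only: distance and angles between an
# estimated pupil center and an eye landmark (e.g., the inner eye corner).
# The coordinates below are invented; the Mask R-CNN pupil detector and the
# landmark estimator are not reproduced here.
import math

def misalignment_features(pupil_xy, landmark_xy):
    """Return distance (px) and angles (deg) w.r.t. the horizontal and vertical axes."""
    dx = pupil_xy[0] - landmark_xy[0]
    dy = pupil_xy[1] - landmark_xy[1]
    distance = math.hypot(dx, dy)
    angle_horizontal = math.degrees(math.atan2(dy, dx))   # measured from the x-axis
    angle_vertical = math.degrees(math.atan2(dx, dy))     # measured from the y-axis
    return distance, angle_horizontal, angle_vertical

# Hypothetical pixel coordinates for the two eyes of one aligned face image.
right_eye = misalignment_features(pupil_xy=(212, 148), landmark_xy=(190, 150))
left_eye  = misalignment_features(pupil_xy=(305, 152), landmark_xy=(330, 150))
print("right eye (distance, angle_h, angle_v):", right_eye)
print("left eye  (distance, angle_h, angle_v):", left_eye)
# Comparing the two eyes' distances and angles characterizes the degree and
# direction of misalignment (e.g., horizontal offsets suggest eso-/exotropia).
```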

Keywords: strabismus, deep neural networks, face detection, facial landmarks, face alignment, segmentation, VGG 16, mask R-CNN, pupil coordinates, angle deviation, horizontal and vertical deviation

Procedia PDF Downloads 59
208 Meniere's Disease and its Prevalence, Symptoms, Risk Factors and Associated Treatment Solutions for this Disease

Authors: Amirreza Razzaghipour Sorkhab

Abstract:

One of the most common disorders among humans is hearing impairment. This paper provides an evidence base that improves understanding of Meniere's disease and highlights the physical and mental health correlates of the disorder. Meniere's disease is more common in the elderly. The term idiopathic endolymphatic hydrops has been attributed to this disease by some in the past. Meniere's disease demonstrates a genetic tendency, and a family history is found in 10% of cases, with an autosomal dominant inheritance pattern. The COCH gene may be one of the hereditary factors contributing to Meniere's disease, and the possibility of a COCH mutation should be considered in patients with Meniere's disease symptoms, since missense mutations in the COCH gene cause autosomal dominant sensorineural hearing loss and vestibular disorder. Meniere's disease is a complex, heterogeneous disorder of the inner ear that is characterized by episodes of vertigo lasting from minutes to hours, fluctuating sensorineural hearing loss, tinnitus, and aural fullness. The existing evidence supports the suggestion that age and sleep disorders are risk factors for Meniere's disease. Many factors have been reported to precipitate the progression of Meniere's disease, including endolymphatic hydrops, immunology, viral infection, inheritance, vestibular migraine, and altered intra-labyrinthine fluid dynamics. There is currently no treatment with a proven helpful effect on hearing levels or on the long-term evolution of the disease; in the early stages, hearing may improve between attacks, but permanent hearing loss occurs in the majority of cases. Current publications have proposed a role for the intratympanic use of medicines, mostly aminoglycosides, for the control of vertigo. More than 85% of patients with Meniere's disease are helped by either changes in lifestyle and medical treatment or minimally invasive surgical procedures such as intratympanic steroid therapy, intratympanic gentamicin therapy, and endolymphatic sac surgery. However, unilateral vestibular extirpation methods (intratympanic gentamicin, vestibular nerve section, or labyrinthectomy) are more predictable but invasive approaches to control the vertigo attacks. Medical therapy aimed at reducing endolymph volume, such as a low-sodium diet and diuretic use, is the typical initial treatment.

Keywords: meniere's disease, endolymphatic hydrops, hearing loss, vertigo, tinnitus, COCH gene

Procedia PDF Downloads 65
207 In-Flight Radiometric Performances Analysis of an Airborne Optical Payload

Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yaokai Liu, Xinhong Wang, Yongsheng Zhou

Abstract:

Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of any remote sensing instrument is essential, but not sufficient to establish a valid in-flight characterization. In this study, with the aid of in situ measurements and the corresponding image of a three-gray-scale permanent artificial target, the in-flight radiometric performance analyses (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), radiometric resolution) of a self-developed short-wave infrared (SWIR) camera are performed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (Li) for the artificial targets are first simulated with in situ measurements (atmospheric parameters and spectral reflectance of the target) and viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line with the formulation L = G × DN + B is fitted by a minimization regression method, and the fitted coefficients, G and B, are the in-flight calibration coefficients. The high point (LH) and the low point (LL) of the dynamic range can then be described as LH = G × DNH + B and LL = B, respectively, where DNH is equal to 2ⁿ − 1 (n is the quantization bit number of the payload). Meanwhile, the sensor's response linearity (δ) is described by the correlation coefficient of the regressed line. The results show that the calibration coefficients (G and B) are 0.0083 W·sr⁻¹m⁻²µm⁻¹ and −3.5 W·sr⁻¹m⁻²µm⁻¹; the low point of the dynamic range is −3.5 W·sr⁻¹m⁻²µm⁻¹ and the high point is 30.5 W·sr⁻¹m⁻²µm⁻¹; the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor's SNR, and the normalized SNR is about 59.6 when the mean value of the radiance is equal to 11.0 W·sr⁻¹m⁻²µm⁻¹; subsequently, the radiometric resolution is calculated to be about 0.1845 W·sr⁻¹m⁻²µm⁻¹. Moreover, in order to validate the results, a comparison of the measured radiance with radiative-transfer-code predictions over four portable artificial targets with reflectances of 20%, 30%, 40%, and 50%, respectively, is performed. It is noted that the relative error of the calibration is within 6.6%.
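
A small numerical sketch of the linear calibration step L = G × DN + B and the derived dynamic range is given below. The three (DN, radiance) pairs are placeholders generated to be consistent with the reported coefficients, and the 12-bit quantization depth is an assumption that merely reproduces the reported high point; none of these are the study's actual measurements.

```python
# Sketch of the linear radiometric calibration described above: fit
# L = G * DN + B to (DN, simulated radiance) pairs and derive the dynamic
# range. The data points and bit depth are placeholders, not measurements.
import numpy as np

G_true, B_true = 0.0083, -3.5                      # reported coefficients, for illustration
dn = np.array([600.0, 1900.0, 3300.0])             # DNs of the three gray levels (assumed)
radiance = G_true * dn + B_true                    # simulated at-sensor radiances

# Least-squares fit of the calibration line.
G, B = np.polyfit(dn, radiance, deg=1)
delta = np.corrcoef(dn, radiance)[0, 1]            # response linearity (correlation coeff.)

n_bits = 12                                        # assumed; n = 12 reproduces the ~30.5 high point
dn_high = 2**n_bits - 1
L_low, L_high = B, G * dn_high + B                 # dynamic range end points

print(f"G = {G:.4f}, B = {B:.2f}, linearity = {delta:.4f}")
print(f"dynamic range: {L_low:.1f} to {L_high:.1f} W sr^-1 m^-2 um^-1")
```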

Keywords: calibration and validation site, SWIR camera, in-flight radiometric calibration, dynamic range, response linearity

Procedia PDF Downloads 253
206 Exploring Causes of Irregular Migration: Evidence from Rural Punjab, India

Authors: Kulwinder Singh

Abstract:

Punjab is one of the major labour-exporting states of India. Every year more than 20,000 youths from Punjab attempt irregular migration; about 84 per cent of irregular migrants are from rural areas and 16 per cent from urban areas. Irregular migration can only be achieved if it is organized through highly efficient international networks spanning the countries of origin, transit, and destination. A good number of Punjabis continue to immigrate into the UK for work through unauthorized means, entering the country on visit visas and overstaying, or getting 'smuggled into' the country with the help of transnational networks of agents. Although efforts are being made by the government to curb irregular migration through The Punjab Prevention of Human Smuggling Rules (2012, 2014) and the Punjab Travel Regulation Act (2012), it still exists in parallel with regular migration. Despite the unprecedented miseries of irregular migrants and the strict laws implemented by the state government to check this phenomenon, 'why do Punjabis migrate abroad irregularly' is the important question to answer. This study addresses this question by comparing irregular migration with regular migration. In other words, this analysis reveals the major causes, specifically economic ones, of irregular migration from rural Punjab. The study is unique in presenting the economics of irregular migration, given that previous studies emphasize the role of sociological and psychological factors. Addressing the question 'why do Punjabis migrate abroad irregularly?', the present study reveals that Punjabis, being far-sighted, undertake irregular migration because, although it is economically non-viable in the short run, it offers lucrative economic gains as the migrant gets older. Despite its considerably higher cost vis-a-vis regular migration, it is a better employment option for irregular migrants, with higher permanent income than local low-paid jobs, for which risking life has become the mindset of rural Punjabis. Although it carries considerably lower economic benefits as compared to regular migration, it provides the opportunity of migrating abroad to less educated, semi-skilled and language-test-ineligible Punjabis who cannot migrate through regular channels. As its positive impacts on source and destination countries are evident, it should perhaps not be restricted; rather, its effective management, through the liberalising of restrictive migration policies by destination nations, can protect the interests of all involved stakeholders.

Keywords: cost, migration, income, irregular, regular, remittances

Procedia PDF Downloads 100
205 Dematerialized Beings in Katherine Dunn's Geek Love: A Corporeal and Ethical Study under Posthumanities

Authors: Anum Javed

Abstract:

This study identifies the dynamic image of the human body that continues its metamorphosis in the virtual field of reality. It calls attention to the ways in which humans co-evolve with other life forms, technology in particular, and strive to establish a realm outside the physical framework of matter. The problem exceeds the area of technological ethics by entering, explicably and explanatorily, the space of literary texts and criticism. A textual analysis of Geek Love (1989) by Katherine Dunn is combined with the posthumanist perspectives of Pramod K. Nayar to trace psycho-somatic changes in man's nature of being. It uncovers the meaning people give to their experiences in this budding social and cultural phenomenon of material representation tied up with personal practices and technological innovations. It also observes an ethical, physical and psychological reassessment of man within the context of technological evolutions. The study indicates the elements that have rendered morphological freedom and new materialism in man's consciousness. Moreover, this work asks what it means to be human in this time of accelerating change, when surgeries, implants, extensions, cloning and robotics have shaped a new sense of being. It attempts to go beyond the individual's body image and explores how objectifying media and culture have influenced people's judgement of others on new material grounds. It further argues for a decentring of the glorified image of man as an independent entity because of his energetic partnership with intelligent machines and external agents. The history of the future progress of technology is also discussed. The methodology adopted is a posthumanist techno-ethical textual analysis. This work calls for a negotiating relationship between man and technology in order to achieve a harmonious and balanced interconnected existence. The study concludes by recommending that an ethical set of codes be cultivated for techno-human habituation. Posthumanism ushers in a strong need to adopt new ethics within the terminology of neo-materialist humanism.

Keywords: corporeality, dematerialism, human ethos, posthumanism

Procedia PDF Downloads 121
204 Smart Automated Furrow Irrigation: A Preliminary Evaluation

Authors: Jasim Uddin, Rod Smith, Malcolm Gillies

Abstract:

Surface irrigation is the most popular irrigation method all over the world. However, two issues, low efficiency and large labour requirements, concern irrigators, given increasing scarcity in recent years. To address these issues, a smart automated furrow irrigation system that can be operated using digital devices such as a smartphone, iPad or computer was conceptualised, and a preliminary evaluation was conducted in this study. The smart automated system is an integration of commercially available software and hardware. It includes real-time surface irrigation optimisation software (SISCO) and Rubicon Water's surface irrigation automation hardware and software. The automated system consists of an automatic water delivery system with 300 mm flexible pipes attached to both sides of a remotely controlled valve to operate the irrigation, a water level sensor to obtain the real-time inflow rate from the measured head in the channel, advance sensors to measure the advance time to particular points of the irrigated field, and a solar-powered telemetry system, including a base station, to connect all the field sensors with the main server. On the basis of the field data, the software (SISCO) optimises the ongoing irrigation, determines the optimum cut-off for the particular irrigation and sends this information to the control valve to stop the irrigation at the cut-off time. The preliminary evaluation showed that the automated surface irrigation worked reasonably well without manual intervention. The evaluation of farmer-managed irrigation events shows the potential to save a significant amount of water and labour. Substantial economic and social benefits are expected in rural industries from adopting this system. The future outcome of this work would be a fully tested, commercial, adaptive real-time furrow irrigation system able to compete with the pressurised alternatives of centre pivot or lateral move machines on capital cost and water and labour savings, but without the massive energy costs.
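
The sketch below is a schematic of the automation loop only: sensor readings feed a cut-off decision that triggers a valve command. It is not the SISCO optimisation algorithm or Rubicon's interface; the sensor functions, the simple volume-balance cut-off rule, and all parameter values are illustrative placeholders, and a simulated clock replaces real telemetry polling.

```python
# Schematic sketch of the automation loop described above. This is NOT the
# SISCO optimisation algorithm or Rubicon's API; the sensor readings, the
# simple volume-balance cut-off rule, and all values are placeholders.
TARGET_DEPTH_MM = 50.0            # desired average application depth (assumed)
FIELD_AREA_M2 = 10_000.0          # irrigated furrow set area (assumed)
STEP_S = 60.0                     # polling interval

def read_inflow_l_per_s(t):
    """Placeholder for converting the measured channel head to an inflow rate."""
    return 30.0

def advance_front_reached(t):
    """Placeholder for the advance sensors reporting the wetting front position."""
    return t > 3 * 3600            # pretend the front arrives after ~3 hours

applied_volume_l, t = 0.0, 0.0
while True:
    applied_volume_l += read_inflow_l_per_s(t) * STEP_S
    applied_depth_mm = applied_volume_l / FIELD_AREA_M2    # 1 L/m^2 = 1 mm
    # A real system would re-run the optimisation with the latest advance
    # data; here cut-off is a simple depth threshold plus an advance check.
    if applied_depth_mm >= TARGET_DEPTH_MM and advance_front_reached(t):
        print(f"cut-off at t = {t/3600:.1f} h, applied depth = {applied_depth_mm:.1f} mm")
        break
    t += STEP_S
```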

Keywords: furrow irrigation, smart automation, infiltration, SISCO, real-time irrigation, adaptive control

Procedia PDF Downloads 423
203 Numerical Board Game for Low-Income Preschoolers

Authors: Gozde Inal Kiziltepe, Ozgun Uyanik

Abstract:

There is growing evidence that socioeconomic (SES)-related differences in mathematical knowledge begin primarily in the early childhood period. Preschoolers from low-income families are likely to perform substantially worse in mathematical knowledge than their counterparts from middle- and higher-income families. The differences are seen across a wide range of skills: recognizing written numerals, counting, adding and subtracting, and comparing numerical magnitudes. Early differences in numerical knowledge have a permanent effect on children's mathematical knowledge in later grades. In this respect, analyzing the effect of a number board game on the number knowledge of 48-60-month-old children from disadvantaged low-income families constitutes the main objective of the study. Participants were 71 preschoolers from a childcare center which served low-income urban families. Children were randomly assigned to the number board condition or to the color board condition. The number board condition included 35 children and the color board condition included 36 children. Both board games were 50 cm long and 30 cm high, had 'The Great Race' written across the top, and included 11 horizontally arranged, different-colored squares of equal size, with the leftmost square labeled 'Start'. The numerical board had the numbers 1-10 in the rightmost 10 squares; the color board had different colors in those squares. Children selected a rabbit or a bear token and on each trial spun a spinner to determine whether the token would move one or two spaces. The number condition spinner had a '1' half and a '2' half; the color condition spinner had colors that matched the colors of the squares on the board. Children met one-on-one with an experimenter for four 15- to 20-min sessions within a 2-week period. In the first and fourth sessions, children were administered identical pretest and posttest measures of numerical knowledge. All children were presented with three numerical tasks and one subtest in the following order: counting, numerical magnitude comparison, numerical identification and the Count Objects - Circle Number Probe subtest of the Early Numeracy Assessment. In addition, the same numerical tasks and subtest were given as a follow-up test four weeks after the post-test administration. The findings showed that there was a meaningful difference between the scores of children who played the color board game and those who played the number board game, in favor of the children who played the number board game.

Keywords: low income, numerical board game, numerical knowledge, preschool education

Procedia PDF Downloads 331
202 Doing Durable Organisational Identity Work in the Transforming World of Work: Meeting the Challenge of Different Workplace Strategies

Authors: Theo Heyns Veldsman, Dieter Veldsman

Abstract:

Organisational Identity (OI) refers to who and what the organisation is, what it stands for and does, and what it aspires to become. OI explores the perspectives of how we see ourselves, are seen by others, and aspire to be seen. It provides the rationale, the ‘why’, for the organisation’s continued existence. The most widely accepted differentiating features of OI are encapsulated in the organisation’s core, distinctive, differentiating, and enduring attributes. OI finds its concrete expression in the organisation’s Purpose, Vision, Strategy, Core Ideology, and Legacy. In the emerging new order infused by hyper-turbulence and hyper-fluidity, the VICCAS world, OI provides a secure anchor and steady reference point for the organisation; this is particularly evident in the growing, widespread focus on Purpose, which is indicative of the organisation’s sense of social citizenship. However, the transforming world of work (TWOW) - particularly the potent mix of ongoing disruptive innovation, the 4th Industrial Revolution, and the gig economy with the totally unpredicted COVID-19 pandemic - has resulted in the adoption of different workplace strategies by organisations in terms of how, where, and when work takes place. Different employment relations (transient to permanent), work locations (on-site to remote), work time arrangements (full-time at work to flexible work schedules), and technology enablement (face-to-face to virtual) now form the basis of the employer/employee relationship. The different workplace strategies, fueled by the demands of the TWOW, pose a substantive challenge to organisations in doing durable OI work that fulfills OI’s critical attributes of core, distinctive, differentiating, and enduring. OI work is contained in the ongoing, reciprocally interdependent stages of sense-breaking, sense-giving, internalisation, enactment, and affirmation. The objective of our paper is to explore how to do durable OI work relative to different workplace strategies in the TWOW. Using a conceptual-theoretical approach from a practice-based orientation, the paper addresses the following topics: it distinguishes different workplace strategies based upon a time/place continuum; explicates, stage-wise, the differential organisational content and process consequences of these strategies for durable OI work; indicates the critical success factors of durable OI work under these differential conditions; recommends guidelines for OI work relative to the TWOW; and points out the ethical implications of all of the above.

Keywords: organisational identity, workplace strategies, new world of work, durable organisational identity work

Procedia PDF Downloads 173
201 Resonant Fluorescence in a Two-Level Atom and the Terahertz Gap

Authors: Nikolai N. Bogolubov, Andrey V. Soldatov

Abstract:

Terahertz radiation occupies a range of frequencies from roughly 100 GHz to approximately 10 THz, just between microwaves and infrared waves. This range of frequencies holds promise for many useful applications in experimental applied physics and technology. At the same time, reliable, simple techniques for the generation, amplification, and modulation of electromagnetic radiation in this range are far from being developed enough to meet the requirements of practical usage, especially in comparison to the level of technological ability already achieved for other domains of the electromagnetic spectrum. This situation of relative underdevelopment of this potentially very important range of the electromagnetic spectrum is known under the name of the 'terahertz gap.' Among other things, technological progress in the terahertz area has been impeded by the lack of compact, low-energy-consumption, easily controlled, and continuously radiating terahertz radiation sources. Therefore, the development of new techniques serving this purpose, as well as of various devices based on them, is of obvious necessity. No doubt, it would be highly advantageous to employ the simplest suitable physical systems as major critical components in these techniques and devices. The purpose of the present research was to show, by means of conventional methods of non-equilibrium statistical mechanics and the theory of open quantum systems, that a thoroughly studied two-level quantum system, also known as a one-electron two-level 'atom', being driven by an external classical monochromatic high-frequency (e.g. laser) field, can radiate continuously at a much lower (e.g. terahertz) frequency in the fluorescent regime if the transition dipole moment operator of this 'atom' possesses permanent, non-equal diagonal matrix elements. This contradicts the conventional assumption routinely made in quantum optics that only the off-diagonal matrix elements are non-zero. The conventional assumption is pertinent to natural atoms and molecules and stems from the spatial inversion symmetry of their eigenstates. At the same time, such an assumption is no longer justified for artificially manufactured quantum systems of reduced dimensionality, such as, for example, quantum dots, which are often nicknamed 'artificial atoms' due to the striking similarity of their optical properties to those of real atoms. Possible ways towards experimental observation and practical implementation of the predicted effect are discussed as well.
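For concreteness, the dipole-operator structure assumed in the abstract can be written out explicitly. The LaTeX snippet below is a minimal sketch in the energy eigenbasis of the two-level 'atom'; the symbols d_11, d_22, and d_12 are introduced here for illustration and are not notation from the paper itself.

```latex
% Dipole moment operator of the two-level 'atom' in its energy eigenbasis
% |1>, |2>. The permanent (diagonal) elements d_11 and d_22 are the ones the
% abstract requires to be non-equal; d_12 is the usual transition element.
\[
\hat{d} \;=\; d_{11}\,|1\rangle\langle 1|
      \;+\; d_{22}\,|2\rangle\langle 2|
      \;+\; d_{12}\,\bigl(|1\rangle\langle 2| + |2\rangle\langle 1|\bigr),
\qquad d_{11}\neq d_{22}.
\]
```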

Keywords: terahertz gap, two-level atom, resonant fluorescence, quantum dot

Procedia PDF Downloads 243
200 Three Issues for Integrating Artificial Intelligence into Legal Reasoning

Authors: Fausto Morais

Abstract:

Artificial intelligence has been widely used in law. Programs are able to classify suits, identify decision-making patterns, predict outcomes, and formalize legal arguments. In Brazil, the artificial intelligence program Victor has been classifying cases according to the Supreme Court’s standards. When these programs perform such tasks, they simulate a kind of legal decision and legal argument, raising doubts about how artificial intelligence can be integrated into legal reasoning. Taking this into account, the following three issues are identified: the problem of hypernormatization, the legal anthropocentrism argument, and artificial legal principles. Hypernormatization can be seen in the Brazilian legal context in the Supreme Court’s usage of the Victor program. This program has generated efficiency and consistency. On the other hand, there is a real risk of over-standardizing factual and normative legal features. Legal clerks and programmers should therefore work together to develop an adequate way to model legal language into computational code. If this is possible, intelligent programs may enact legal decisions in easy cases automatically, and, in this picture, the legal anthropocentrism argument takes place. This argument holds that only human beings should enact legal decisions, because human beings have a conscience, free will, and self-unity. In spite of that, it is possible to argue against the anthropocentrism argument and to show how intelligent programs may work while overcoming human problems such as misleading cognition, emotions, and lack of memory. In this way, intelligent machines could be able to pass legal decisions automatically by classification, as Victor does in Brazil, because they are bound by legal patterns and should not deviate from them. Notwithstanding, artificially intelligent programs can be helpful beyond easy cases. In hard cases, they are able to identify legal standards and legal arguments by using machine learning. For that, a dataset of legal decisions regarding a particular matter must be available, which is a reality in the Brazilian judiciary. Using such a procedure, artificially intelligent programs can support a human decision in hard cases by providing legal standards and arguments based on empirical evidence. Those legal features carry argumentative weight in legal reasoning and should serve as references for judges when they must decide whether to maintain or overcome a legal standard.
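As a toy illustration of the classification step mentioned above (and not of the Victor system itself), the sketch below routes case texts to the standards of prior decisions using a generic supervised text classifier; the dataset fields and labels are hypothetical placeholders.

```python
# Illustrative sketch (not the Victor system): classifying case texts against
# the standards ("themes") of prior decisions with a simple supervised model.
# Training texts and labels are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: decision texts and the legal standard each follows.
train_texts = ["...text of decision 1...", "...text of decision 2..."]
train_labels = ["standard_A", "standard_B"]

classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),  # word/bigram features
    LogisticRegression(max_iter=1000),              # linear classifier over standards
)
classifier.fit(train_texts, train_labels)

# A new suit is routed to the closest existing standard; borderline scores can
# be flagged for human review rather than decided automatically.
print(classifier.predict(["...text of a new suit..."]))
```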

Keywords: artificial intelligence, artificial legal principles, hypernormatization, legal anthropocentrism argument, legal reasoning

Procedia PDF Downloads 121
199 Ytterbium Advantages for Brachytherapy

Authors: S. V. Akulinichev, S. A. Chaushansky, V. I. Derzhiev

Abstract:

High dose rate (HDR) brachytherapy is a method of contact radiotherapy in which a single sealed source with an activity of about 10 Ci is temporarily inserted in the tumor area. The isotopes Ir-192 and, much less frequently, Co-60 are used as the active material for such sources. The other type of brachytherapy, low dose rate (LDR) brachytherapy, involves the insertion of many permanent sources (up to 200) of lower activity. Pulsed dose rate (PDR) brachytherapy can be considered a modification of HDR brachytherapy in which the single source is repeatedly introduced into the tumor region in a pulsed regime over several hours. The PDR source activity is of the order of one Ci, and the isotope Ir-192 is currently used for these sources. PDR brachytherapy is well recommended for the treatment of several tumors since, according to oncologists, it combines the medical benefits of both the HDR and LDR types of brachytherapy. One of the main problems for the progress of PDR brachytherapy is the shielding of the treatment area, since the longer stay of patients in a shielded treatment room is not comfortable for them. The use of Yb-169 as the active source material is a way to resolve the shielding problem for PDR, as well as for HDR, brachytherapy. The isotope Yb-169 has an average photon emission energy of 93 keV and a half-life of 32 days. Compared to iridium and cobalt, this isotope has a significantly lower emission energy and therefore requires much lighter shielding. Moreover, the absorption cross section of different materials has a strong Z-dependence in this photon energy range. For example, the dose distributions of iridium and ytterbium behave quite similarly in water or in the body, but a heavy material such as lead absorbs ytterbium radiation much more strongly than iridium or cobalt radiation. For example, a lead layer of only 2 mm is enough to reduce the ytterbium radiation by a couple of orders of magnitude, but is not enough to protect against iridium radiation. We have created an original facility to produce the starting stable isotope Yb-168 using the laser technology AVLIS. This facility allows the Yb-168 concentration to be raised up to 50% and consumes much less electrical power than alternative electromagnetic enrichment facilities. We have also developed, in cooperation with the Institute of High Pressure Physics of RAS, a new technology for manufacturing high-density ceramic cores of ytterbium oxide. The ceramic density reaches the theoretical limits: 9.1 g/cm3 for the cubic phase of ytterbium oxide and 10 g/cm3 for the monoclinic phase. Source cores made from this ceramic have high mechanical strength and a glassy surface. The use of ceramics allows the source activity to be increased while keeping the external dimensions of the sources fixed.
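The shielding argument above boils down to exponential attenuation through lead at the two average photon energies. The Python sketch below is illustrative only: the mass attenuation coefficients are rough, assumed values (not data from the abstract), and a narrow-beam geometry is assumed.

```python
# Illustrative sketch of the shielding argument: exponential attenuation
# I = I0 * exp(-mu * x) through a lead layer. The mass attenuation coefficients
# are rough, assumed values for the average photon energies (~93 keV for Yb-169,
# ~380 keV for Ir-192), not measured data from the abstract.
from math import exp

LEAD_DENSITY_G_CM3 = 11.35
MU_MASS_CM2_G = {        # assumed approximate mass attenuation coefficients
    "Yb-169 (~93 keV)": 5.5,
    "Ir-192 (~380 keV)": 0.25,
}

def transmission(mu_mass_cm2_g: float, thickness_cm: float) -> float:
    """Fraction of photons transmitted through the lead layer (narrow beam)."""
    mu_linear = mu_mass_cm2_g * LEAD_DENSITY_G_CM3  # 1/cm
    return exp(-mu_linear * thickness_cm)

for source, mu in MU_MASS_CM2_G.items():
    t = transmission(mu, 0.2)  # 2 mm of lead
    print(f"{source}: transmission through 2 mm Pb ≈ {t:.2e}")
```

Under these assumed coefficients, 2 mm of lead suppresses the ytterbium photons by several orders of magnitude while transmitting roughly half of the iridium photons, which is the qualitative point the abstract makes.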

Keywords: brachytherapy, high and pulsed dose rates, radionuclides for therapy, ytterbium sources

Procedia PDF Downloads 465
198 Ecophysiological Features of Acanthosicyos horridus (!Nara) to Survive the Namib Desert

Authors: Jacques M. Berner, Monja Gerber, Gillian L. Maggs-Kolling, Stuart J. Piketh

Abstract:

The enigmatic melon species Acanthosicyos horridus Welw. ex Hook. f., locally known as !nara, is endemic to the hyper-arid Namib Desert, where it thrives in sandy dune areas and dry river banks. The Namib Desert is characterized by extreme weather conditions, which include high temperatures, very low rainfall, and extremely dry air. Plants and animals that have made the Namib Desert their home depend on non-rainfall water inputs, such as fog, dew, and water vapor, for survival. Fog is believed to be the most important non-rainfall water input for most of the coastal Namib Desert and is a lifeline for many Namib plants and animals. It is commonly assumed that the !nara plant is adapted to and dependent upon coastal fog events. The !nara plant shares many comparable adaptive features with other organisms that are known to exploit fog as a source of moisture. These include groove-like structures on the stems and the cone-like structures of the thorns. These structures are believed to be the driving forces behind the directional water flow that allows plants to take advantage of fog events. The !nara-fog interaction was investigated in this study to determine the dependence of !nara on these fog events, as this would illustrate strategies to benefit from non-rainfall water inputs. The direct water uptake capacity of !nara shoots was investigated through absorption tests. Furthermore, the movement and behavior of fluorescent water droplets on a !nara stem were investigated through time-lapse macrophotography. The shoot water potential was measured to investigate the effect of fog on the water status of !nara stems. These tests were used to determine whether the morphology of !nara has evolved to exploit fog as a non-rainfall water input and whether the !nara plant has adapted physiologically in response to fog. Chlorophyll a fluorescence was used to compare the photochemical efficiency of !nara plants on days with fog events to that on non-foggy days. The results indicate that !nara plants do have the ability to take advantage of fog events, as commonly believed. However, the !nara plant did not exhibit visible signs of drought stress, and this, together with the strong shoot water potential, indicates that these plants rely on permanent underground water sources. Chlorophyll a fluorescence data indicated that temperature stress and wind were among the main abiotic factors influencing the plants’ overall vitality.

Keywords: Acanthosicyos horridus, chlorophyll a fluorescence, fog, foliar absorption, !nara

Procedia PDF Downloads 129
197 Improved Computational Efficiency of Machine Learning Algorithm Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK

Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick

Abstract:

The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the occurrence of the disease in late January 2020 in the UK, the number of people confirmed to have acquired the illness has increased tremendously across the country, and the number of individuals affected is considerable. The purpose of this research is to develop a predictive machine learning model that could forecast COVID-19 cases within the UK. This study concentrates on the statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID cases registered, new cases encountered on a daily basis, total deaths registered, and deaths per day due to Coronavirus is collected from the World Health Organisation (WHO). Data preprocessing is carried out to identify any missing values, outliers, or anomalies in the dataset. The data is split in an 8:2 ratio for training and testing purposes to forecast future new COVID cases. Support Vector Machines (SVM), Random Forests, and linear regression algorithms are chosen to study model performance in the prediction of new COVID-19 cases. The statistical performance of the models in predicting new COVID cases is evaluated using metrics such as the R-squared value and the mean squared error. Random Forest outperformed the other two machine learning algorithms, with a training accuracy of 99.47% and a testing accuracy of 98.26% when n=30. The mean squared error obtained for Random Forest is 4.05e11, which is lower than that of the other predictive models used in this study. The experimental analysis shows that the Random Forest algorithm can perform more effectively and efficiently in predicting new COVID cases, which could help the health sector take relevant control measures against the spread of the virus.
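A minimal sketch of the modelling step described above is given below: an 8:2 train/test split and a Random Forest regressor scored with R-squared and mean squared error. The synthetic daily-case series and the reading of n=30 as the number of trees are assumptions standing in for the WHO data and settings used in the study.

```python
# Minimal sketch of the described workflow: 8:2 split, Random Forest regression,
# and evaluation with R-squared and MSE. Data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
days = np.arange(400).reshape(-1, 1)                 # day index as the feature
new_cases = 5000 + 3000 * np.sin(days.ravel() / 30) + rng.normal(0, 300, 400)

X_train, X_test, y_train, y_test = train_test_split(
    days, new_cases, test_size=0.2, shuffle=False    # 8:2 split, time-ordered
)

model = RandomForestRegressor(n_estimators=30, random_state=0)  # assumed n = 30 trees
model.fit(X_train, y_train)
pred = model.predict(X_test)

print("R-squared:", r2_score(y_test, pred))
print("MSE:      ", mean_squared_error(y_test, pred))
```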

Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest

Procedia PDF Downloads 92
196 Mobile and Hot Spot Measurement with Optical Particle Counting Based Dust Monitor EDM264

Authors: V. Ziegler, F. Schneider, M. Pesch

Abstract:

With the EDM264, GRIMM offers a solution for mobile short- and long-term measurements in outdoor areas and at production sites, for research as well as for permanent areal observations at near-reference quality. The model EDM264 features a powerful and robust measuring cell based on the optical particle counting (OPC) principle, with all the advantages that users of GRIMM's portable aerosol spectrometers are used to. The system is embedded in a compact weather-protection housing with all-weather sampling, a heated inlet system, a data logger, and a meteorological sensor. With TSP, PM10, PM4, PM2.5, PM1, and PMcoarse, the EDM264 provides all fine dust fractions in real time, valid for outdoor applications and calculated with the proven GRIMM enviro-algorithm, as well as six additional dust mass fractions (pm10, pm2.5, pm1, inhalable, thoracic, and respirable) for IAQ and workplace measurements. This highly versatile instrument performs real-time monitoring of particle number and particle size and provides information on particle surface distribution as well as dust mass distribution. GRIMM's EDM264 has 31 equidistant size channels, which are PSL traceable. A high-end data logger enables data acquisition and wireless communication via LTE or WLAN, or wired communication via Ethernet. Backup copies of the measurement data are stored directly in the device. The rinsing-air function, which protects the laser and detector in the optical cell, further increases the reliability and long-term stability of the EDM264 under different environmental and climatic conditions. The entire sample volume flow of 1.2 L/min is analyzed in full (100%) in the optical cell, which assures excellent counting efficiency at low and high concentrations and complies with the ISO 21501-1 standard for OPCs. With all these features, the EDM264 is a world-leading dust monitor for precise monitoring of particulate matter and particle number concentration. This highly reliable instrument is an indispensable tool for many users who need to measure aerosol levels and air quality outdoors, on construction sites, or at production facilities.
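To make the step from size-resolved counts to mass fractions concrete, the sketch below shows a generic conversion of OPC channel counts to PM mass concentrations. It is not GRIMM's proprietary enviro-algorithm: the spherical-particle assumption, the effective density, and the channel diameters and counts are all illustrative placeholders.

```python
# Generic illustration (not GRIMM's proprietary enviro-algorithm) of how OPC
# channel counts can be converted to PM mass concentrations: assume spherical
# particles of an assumed effective density, compute the mass per channel from
# the mid-point diameter, and sum the channels below each size cut-off.
import math

PARTICLE_DENSITY_G_CM3 = 1.65        # assumed effective particle density
SAMPLE_FLOW_L_MIN = 1.2              # sample flow of the instrument

def pm_mass_ug_m3(channel_diameters_um, channel_counts_per_min, cutoff_um):
    """Approximate PM mass concentration (µg/m³) from per-channel counts."""
    sampled_m3_per_min = SAMPLE_FLOW_L_MIN / 1000.0
    total_ug = 0.0
    for d_um, counts in zip(channel_diameters_um, channel_counts_per_min):
        if d_um > cutoff_um:
            continue
        volume_cm3 = math.pi / 6.0 * (d_um * 1e-4) ** 3   # sphere, µm -> cm
        mass_ug = volume_cm3 * PARTICLE_DENSITY_G_CM3 * 1e6
        total_ug += mass_ug * counts
    return total_ug / sampled_m3_per_min

# Hypothetical mid-point diameters (µm) and one-minute counts for a few channels
diameters = [0.3, 0.5, 1.0, 2.5, 5.0, 10.0]
counts = [120000, 60000, 15000, 3000, 400, 50]
print("PM2.5 ≈", round(pm_mass_ug_m3(diameters, counts, 2.5), 1), "µg/m³")
print("PM10  ≈", round(pm_mass_ug_m3(diameters, counts, 10.0), 1), "µg/m³")
```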

Keywords: aerosol research, aerial observation, fence line monitoring, wildfire detection

Procedia PDF Downloads 120
195 A Cognitive Training Program in Learning Disability: A Program Evaluation and Follow-Up Study

Authors: Krisztina Bohacs, Klaudia Markus

Abstract:

To the authors’ best knowledge, studies on cognitive program evaluation are lacking, and we are certainly short of programs that prove to have large effect sizes with strong retention results. The purpose of our study was to investigate the effectiveness of a comprehensive cognitive training program, namely BrainRx. This cognitive rehabilitation program targets and remediates seven core cognitive skills and related systems of sub-skills through repeated engagement in game-like mental procedures delivered one-on-one by a clinician and supplemented by digital training. A large sample of children with learning disabilities was given pretest and post-test cognitive assessments. The experimental group completed a twenty-week cognitive training program in a BrainRx center. A matched control group received another twenty-week intervention with Feuerstein’s Instrumental Enrichment programs. A second matched control group did not receive training. For the pre- and post-tests, we used a general intelligence test to assess IQ and a computer-based test battery for assessing cognition across the lifespan. Multiple regression analyses indicated that the experimental BrainRx treatment group had statistically significantly higher outcomes in attention, working memory, processing speed, logic and reasoning, auditory processing, visual processing, and long-term memory compared to the non-treatment control group, with very large effect sizes. With the exception of logic and reasoning, the BrainRx treatment group realized significantly greater gains in six of the seven cognitive measures listed above compared to the Feuerstein control group. Our one-year retention measures showed that retention of all the cognitive training gains was above ninety percent, with the greatest retention in visual processing, auditory processing, and logic and reasoning. The BrainRx program may be an effective tool to establish long-term cognitive changes in students with learning disabilities. Recommendations are made for treatment centers and special education institutions on the cognitive training of students with special needs. The importance of our study is that a targeted, systematic, progressively loaded, and intensive brain training approach may significantly change learning disabilities.
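The analysis described above (multiple regression on post-test outcomes with an effect-size estimate) can be sketched as follows. The data are synthetic placeholders, the gain pattern is an assumption for illustration, and the code is not the study's actual analysis script.

```python
# Illustrative analysis sketch (synthetic data, not the study's dataset):
# a multiple regression of post-test scores on group membership, controlling
# for pre-test scores, plus Cohen's d as an effect-size measure.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60
group = np.repeat([0, 1], n // 2)                  # 0 = control, 1 = treatment
pretest = rng.normal(100, 15, n)
posttest = pretest + 5 + 12 * group + rng.normal(0, 6, n)  # assumed gain pattern

X = sm.add_constant(np.column_stack([group, pretest]))
model = sm.OLS(posttest, X).fit()
print(model.summary())                             # group coefficient and p-value

def cohens_d(a, b):
    """Standardized mean difference with a pooled standard deviation."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd

gain = posttest - pretest
print("Cohen's d for gains:", cohens_d(gain[group == 1], gain[group == 0]))
```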

Keywords: cognitive rehabilitation training, cognitive skills, learning disability, permanent structural cognitive changes

Procedia PDF Downloads 178