Search results for: dynamic marketing capabilities
802 Performance Assessment of Carrier Aggregation-Based Indoor Mobile Networks
Authors: Viktor R. Stoynov, Zlatka V. Valkova-Jarvis
Abstract:
The intelligent management and optimisation of radio resource technologies will lead to a considerable improvement in the overall performance in Next Generation Networks (NGNs). Carrier Aggregation (CA) technology, also known as Spectrum Aggregation, enables more efficient use of the available spectrum by combining multiple Component Carriers (CCs) in a virtual wideband channel. LTE-A (Long Term Evolution–Advanced) CA technology can combine multiple adjacent or separate CCs in the same band or in different bands. In this way, increased data rates and dynamic load balancing can be achieved, resulting in a more reliable and efficient operation of mobile networks and the enabling of high bandwidth mobile services. In this paper, several distinct CA deployment strategies for the utilisation of spectrum bands are compared in indoor-outdoor scenarios, simulated via the recently-developed Realistic Indoor Environment Generator (RIEG). We analyse the performance of the User Equipment (UE) by integrating the average throughput, the level of fairness of radio resource allocation, and other parameters, into one summative assessment termed a Comparative Factor (CF). In addition, comparison of non-CA and CA indoor mobile networks is carried out under different load conditions: varying numbers and positions of UEs. The experimental results demonstrate that the CA technology can improve network performance, especially in the case of indoor scenarios. Additionally, we show that an increase of carrier frequency does not necessarily lead to improved CF values, due to high wall-penetration losses. The performance of users under bad-channel conditions, often located in the periphery of the cells, can be improved by intelligent CA location. Furthermore, a combination of such a deployment and effective radio resource allocation management with respect to user-fairness plays a crucial role in improving the performance of LTE-A networks.
Keywords: comparative factor, carrier aggregation, indoor mobile network, resource allocation
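The abstract does not give the exact definition of the Comparative Factor, so the sketch below is only a minimal illustration of how a summative metric could combine per-UE average throughput with Jain's fairness index; the weighting, normalisation and the numbers are assumptions, not the authors' formula.

```python
import numpy as np

def jain_fairness(throughputs):
    """Jain's fairness index: 1/n <= J <= 1, with J = 1 for a perfectly equal allocation."""
    x = np.asarray(throughputs, dtype=float)
    return x.sum() ** 2 / (len(x) * np.sum(x ** 2))

def comparative_factor(throughputs, w_thr=0.5, w_fair=0.5, thr_ref=100.0):
    """Hypothetical summative metric: weighted mix of normalised mean
    throughput (Mbps, against a reference) and Jain's fairness index."""
    mean_thr = np.mean(throughputs)
    return w_thr * min(mean_thr / thr_ref, 1.0) + w_fair * jain_fairness(throughputs)

# Example: per-UE throughputs (Mbps) without and with carrier aggregation
no_ca = [12, 35, 8, 50, 20]
with_ca = [30, 55, 25, 70, 45]
print(comparative_factor(no_ca), comparative_factor(with_ca))
```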
Procedia PDF Downloads 180
801 A Coupled Model for Two-Phase Simulation of a Heavy Water Pressure Vessel Reactor
Authors: D. Ramajo, S. Corzo, M. Nigro
Abstract:
A multi-dimensional computational fluid dynamics (CFD) two-phase model was developed with the aim of simulating the in-core coolant circuit of a pressurized heavy water reactor (PHWR) of a commercial nuclear power plant (NPP). Because this PHWR is of the Reactor Pressure Vessel (RPV) type, detailed three-dimensional (3D) models of the large reservoirs of the RPV (the upper and lower plenums and the downcomer) were coupled with an in-house finite volume one-dimensional (1D) code in order to model the 451 coolant channels housing the nuclear fuel. In the 1D code, suitable empirical correlations were used to take into account the in-channel distributed (friction) and concentrated (spacer grids, inlet and outlet throttles) pressure losses. A local power distribution at each of the coolant channels was also taken into account. The heat transfer between the coolant and the surrounding moderator was accurately calculated using a two-dimensional theoretical model. The implementation of subcooled boiling and condensation models in the 1D code, along with the use of functions representing the thermal and dynamic properties of the coolant and moderator (heavy water), allows the in-core steam generation under nominal flow conditions to be estimated for a generic fission power distribution. The in-core mass flow distribution results for steady-state nominal conditions are in agreement with design expectations, providing a first assessment of the coupled 1D/3D model. Results for nominal conditions were compared with those obtained with a previous 1D/3D single-phase model, yielding more realistic temperature patterns and allowing low void-fraction values inside the upper plenum to be visualized. It must be mentioned that the current results were obtained by imposing prescribed fission power functions from the literature. The results are therefore shown with the aim of pointing out the potential of the developed model.
Keywords: PHWR, CFD, thermo-hydraulic, two-phase flow
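The specific correlations adopted in the 1D code are not given in the abstract; the sketch below only illustrates, with assumed channel data, the classical form of distributed (Darcy-Weisbach with a Blasius friction factor) and concentrated (loss-coefficient) pressure drops of the kind referred to above.

```python
import math

def channel_pressure_drop(m_dot, rho, mu, D, L, k_local=(0.5, 1.2, 0.8)):
    """Distributed + concentrated pressure losses for one coolant channel.
    m_dot [kg/s], rho [kg/m^3], mu [Pa s], D hydraulic diameter [m], L length [m].
    k_local: assumed loss coefficients (inlet throttle, spacer grids, outlet)."""
    area = math.pi * D ** 2 / 4.0
    v = m_dot / (rho * area)                 # mean coolant velocity
    re = rho * v * D / mu                    # Reynolds number
    f = 0.316 * re ** -0.25                  # Blasius friction factor (turbulent flow)
    dp_friction = f * (L / D) * 0.5 * rho * v ** 2
    dp_concentrated = sum(k_local) * 0.5 * rho * v ** 2
    return dp_friction + dp_concentrated

# Illustrative values only, not actual PHWR channel data
print(channel_pressure_drop(m_dot=24.0, rho=850.0, mu=1.0e-4, D=0.1, L=6.0))
```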
Procedia PDF Downloads 469
800 Enhancement of Long Term Peak Demand Forecast in Peninsular Malaysia Using Hourly Load Profile
Authors: Nazaitul Idya Hamzah, Muhammad Syafiq Mazli, Maszatul Akmar Mustafa
Abstract:
The peak demand forecast is crucial for identifying the future generation plant-up needed in the long-term capacity planning analysis for Peninsular Malaysia, as well as for transmission and distribution network planning activities. Currently, the peak demand forecast (in Megawatts) is derived from the generation forecast by using a load factor assumption. However, a forecast using this method has underperformed due to structural changes in the economy, emerging trends and weather uncertainty. The dynamic changes of these drivers will result in many possible outcomes of peak demand for Peninsular Malaysia. This paper looks into an independent model of peak demand forecasting. The model begins with the selection of driver variables to capture long-term growth. This selection and construction of variables, which include econometric, emerging-trend and energy variables, will have an impact on the peak forecast. The actual framework begins with the development of the system energy and load shape forecasts by using the system's hourly data. The shape forecast represents the system shape assuming all embedded technology and use patterns continue in the future. This is necessary to identify movements in the peak hour or changes in the system load factor. The next step is developing the peak forecast, which involves an iterative process to explore model structures and variables. The final step is combining the system energy, shape and peak forecasts into the hourly system forecast and then modifying it with the forecast adjustments. Forecast adjustments include, among others, sales forecasts for electric vehicles, solar and other adjustments. The framework results in an hourly forecast that captures growth, peak usage and new technologies. The advantage of this approach over the current methodology is that the peaks capture new-technology impacts that change the load shape.
Keywords: hourly load profile, load forecasting, long term peak demand forecasting, peak demand
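As a minimal sketch (with invented numbers, not the Peninsular Malaysia data), the step of combining an annual energy forecast with a normalised hourly load shape into an hourly forecast and a peak estimate could look like this:

```python
import numpy as np

def hourly_forecast(annual_energy_gwh, hourly_shape, adjustments_mw=None):
    """Scale a normalised 8760-hour load shape to a forecast annual energy,
    then apply hourly adjustments (e.g. EV charging, solar) if provided."""
    shape = np.asarray(hourly_shape, dtype=float)
    shape = shape / shape.sum()                      # normalise shape to sum to 1
    hourly_mw = annual_energy_gwh * 1000.0 * shape   # GWh -> MWh per hour = MW
    if adjustments_mw is not None:
        hourly_mw = hourly_mw + np.asarray(adjustments_mw, dtype=float)
    return hourly_mw

# Toy example: flat-ish daily shape with an evening peak, repeated for 365 days
base_shape = np.tile(np.concatenate([np.full(18, 1.0), np.full(6, 1.4)]), 365)
load = hourly_forecast(annual_energy_gwh=130_000, hourly_shape=base_shape)
print("peak demand [MW]:", load.max(), "load factor:", load.mean() / load.max())
```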
Procedia PDF Downloads 173
799 Application of Stochastic Models on the Portuguese Population and Distortion to Workers Compensation Pensioners Experience
Authors: Nkwenti Mbelli Njah
Abstract:
This research was motivated by a project requested by AXA on the topic of pensions payable under the workers' compensation (WC) line of business. There are two types of pensions: the compulsorily recoverable and the not compulsorily recoverable. A pension is compulsorily recoverable for a victim when there is less than 30% disability and the pension amount per year is less than six times the minimal national salary. The law defines that the mathematical provisions for compulsorily recoverable pensions must be calculated by applying the following bases: mortality table TD88/90 and a rate of interest of 5.25% (possibly with a rate of management). Managing pensions which are not compulsorily recoverable is a more complex task because the technical bases are not defined by law and much more complex computations are required. In particular, companies have to predict the amount of payments discounted to reflect the mortality effect for all pensioners (this task is monitored monthly in AXA). The purpose of this research was thus to develop a stochastic model for the future mortality of the workers' compensation pensioners of both the Portuguese market and the AXA portfolio. Not only is past mortality modeled, but projections of future mortality are also made for the general population of Portugal as well as for the two portfolios mentioned earlier. The global model was split into two parts: a stochastic model for population mortality which allows for forecasts, combined with a point estimate from a portfolio mortality model obtained through three different relational models (Cox Proportional, Brass Linear and Workgroup PLT). The one-year death probabilities for ages 0-110 for the period 2013-2113 are obtained for the general population and the portfolios. These probabilities are used to compute different life table functions as well as the not compulsorily recoverable reserves for each of the models required for the pensioners, their spouses and children under 21. The results obtained are compared with the not compulsorily recoverable reserves computed using the static mortality table (TD 73/77) that is currently being used by AXA, to see the impact on this reserve if AXA adopted the dynamic tables.
Keywords: compulsorily recoverable, life table functions, relational models, worker's compensation pensioners
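The Brass-type relational model named above relates a portfolio life table to a standard (population) table on the logit scale; the sketch below is a generic illustration with made-up survival values and assumed parameters, not the AXA portfolio calibration.

```python
import numpy as np

def logit_survival(l_x):
    """Brass logit of the survival function, Y(x) = 0.5 * ln((1 - l(x)) / l(x))."""
    l_x = np.asarray(l_x, dtype=float)
    return 0.5 * np.log((1.0 - l_x) / l_x)

def brass_relational(l_std, alpha, beta):
    """Map a standard life table l_std onto a portfolio table via
    Y_portfolio(x) = alpha + beta * Y_standard(x), then invert the logit."""
    y = alpha + beta * logit_survival(l_std)
    return 1.0 / (1.0 + np.exp(2.0 * y))

# Toy Gompertz-like standard survival curve for ages 60..100 and assumed Brass parameters
ages = np.arange(60, 101)
l_std = np.exp(-(0.005 / 0.09) * (np.exp(0.09 * (ages - 59)) - 1.0))
l_portfolio = brass_relational(l_std, alpha=-0.1, beta=1.05)   # lighter mortality than the standard
q_x = 1.0 - l_portfolio[1:] / l_portfolio[:-1]                 # one-year death probabilities
print(q_x[:5])
```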
Procedia PDF Downloads 164
798 Third Eye: A Hybrid Portrayal of Visuospatial Attention through Eye Tracking Research and Modular Arithmetic
Authors: Shareefa Abdullah Al-Maqtari, Ruzaika Omar Basaree, Rafeah Legino
Abstract:
A pictorial representation of hybrid forms in science-art collaboration has become a crucial issue in the course of exploring a new painting technique development. This is directly related to the reception of an invisible-recognition phenomenology. In the hybrid pictorial representation of invisible-recognition phenomenology, the challenging issue is how to depict the pictorial features of indescribable objects from their mental source, modality and transparency. This paper proposes the hybrid painting technique Demonstrate, Resemble, and Synthesize (DRS) through a combination of the hybrid aspect-recognition representation of understanding pictures, the demonstrative mode, number theory, patterns in the modular arithmetic system, and the coherence theory of visual attention in dynamic scene representation. Multi-method digital gaze data analyses, pattern-modular table operation design, and a rotation parameter were used for the visualization. In the scientific process, eye tracking of video sections was conducted using Tobii T60 remote eye tracking hardware and Tobii Studio analysis software to collect and analyze the eye movements of ten participants watching the video clip of Alexander Paulikevitch's performance 'Tajwal'. Results: we found that fixation count in section one was positively and moderately correlated with that in section two, Pearson's (r = .10, p < .05, 2-tailed), as was fixation duration, Pearson's (r = .10, p < .05, 2-tailed). However, a paired-samples t-test indicates that scores were significantly higher for section one (M = 2.2, SD = .6) than for section two (M = 1.93, SD = .6), t(9) = 2.44, p < .05, d = 0.87. In the visual process, the exported gaze-count data N was used to resemble the hybrid forms of visuospatial attention using the table-mod-analyses operation. The explored hybrid guideline was simply applicable, and it could serve as an alternative approach to the sustainability of contemporary visual arts.
Keywords: science-art collaboration, hybrid forms, pictorial representation, visuospatial attention, modular arithmetic
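The statistics reported above (Pearson correlation, paired-samples t-test, Cohen's d) can be reproduced with standard routines; the sketch below uses made-up per-participant fixation counts purely to show the calculation, not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical fixation counts for 10 participants in two video sections
section1 = np.array([2.9, 2.1, 1.8, 2.5, 2.4, 1.6, 2.3, 2.0, 2.7, 1.7])
section2 = np.array([2.4, 1.9, 1.7, 2.2, 2.1, 1.5, 2.0, 1.8, 2.3, 1.4])

r, p_corr = stats.pearsonr(section1, section2)        # correlation between sections
t, p_ttest = stats.ttest_rel(section1, section2)      # paired-samples t-test
diff = section1 - section2
cohens_d = diff.mean() / diff.std(ddof=1)             # effect size for a paired design

print(f"r = {r:.2f} (p = {p_corr:.3f}), t({len(diff) - 1}) = {t:.2f} "
      f"(p = {p_ttest:.3f}), d = {cohens_d:.2f}")
```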
Procedia PDF Downloads 364
797 Design of Traffic Counting Android Application with Database Management System and Its Comparative Analysis with Traditional Counting Methods
Authors: Muhammad Nouman, Fahad Tiwana, Muhammad Irfan, Mohsin Tiwana
Abstract:
Traffic congestion has been increasing significantly in major metropolitan areas as a result of increased motorization, urbanization, population growth and changes in urban density. Traffic congestion compromises the efficiency of transport infrastructure and causes multiple traffic concerns, including but not limited to increased travel time, safety hazards, air pollution, and fuel consumption. Traffic management has become a serious challenge for federal and provincial governments, as well as for exasperated commuters. Effective, flexible, efficient and user-friendly traffic information/database management systems characterize traffic conditions by making use of traffic counts for storage, processing, and visualization. While emerging data collection technologies continue to proliferate, their accuracy can be guaranteed through the comparison of observed data with manual handheld counts. This paper presents the design of a tablet-based manual traffic counting application and a framework for the development of a traffic database management system for Pakistan. The database management system comprises three components: a traffic counting Android application, an online database, and its visualization using Google Maps. An Oracle relational database was chosen to develop the data structure, whereas structured query language (SQL) was adopted to program the system architecture. The GIS application links the data from the database and projects it onto a dynamic map for visualization of traffic conditions. The traffic counting device and the example of a database application in a real-world problem provided a creative outlet to visualize the uses and advantages of a database management system in real time. Also, traffic count data collected by means of a handheld tablet/mobile application can be used for transportation planning and forecasting.
Keywords: manual count, emerging data sources, traffic information quality, traffic surveillance, traffic counting device, android, data visualization, traffic management
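The paper's actual Oracle schema is not given; as a self-contained illustration of the kind of count table and SQL queries such a framework implies, the sketch below uses Python's built-in sqlite3 module with hypothetical column names.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the project's Oracle database
conn.execute("""
    CREATE TABLE traffic_counts (
        id           INTEGER PRIMARY KEY,
        station_id   TEXT NOT NULL,     -- survey location
        lat          REAL, lon REAL,    -- for projection onto the map
        counted_at   TEXT NOT NULL,     -- ISO timestamp recorded by the tablet
        vehicle_type TEXT NOT NULL,     -- car, bus, truck, motorcycle, ...
        count        INTEGER NOT NULL
    )""")
conn.execute(
    "INSERT INTO traffic_counts (station_id, lat, lon, counted_at, vehicle_type, count) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("ST-01", 33.6844, 73.0479, "2017-05-12T08:00", "car", 412))

# Totals per station, e.g. for the Google Maps visualization layer
for row in conn.execute("SELECT station_id, SUM(count) FROM traffic_counts GROUP BY station_id"):
    print(row)
```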
Procedia PDF Downloads 196
796 Improvements and Implementation Solutions to Reduce the Computational Load for Traffic Situational Awareness with Alerts (TSAA)
Authors: Salvatore Luongo, Carlo Luongo
Abstract:
This paper discusses implementation solutions to reduce the computational load of the Traffic Situational Awareness with Alerts (TSAA) application, based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology. In 2008, there were 23 mid-air collisions involving general aviation fixed-wing aircraft, 6 of which were fatal, leading to 21 fatalities. These collisions occurred during visual meteorological conditions, indicating the limitations of the see-and-avoid concept for mid-air collision avoidance as defined by the Federal Aviation Administration (FAA). Commercial aviation aircraft are already equipped with a collision avoidance system called TCAS, which is based on classic transponder technology. This system dramatically reduced the number of mid-air collisions involving air transport aircraft. In general aviation, the same reduction in mid-air collisions has not occurred, so this reduction is the main objective of the TSAA application. The major difference between the original conflict detection application and TSAA is that conflict detection is focused on preventing loss of separation in en-route environments, whereas TSAA is devoted to reducing the probability of mid-air collision in all phases of flight. The TSAA application increases the flight crew's traffic situation awareness by providing alerts for traffic detected in conflict with ownship, in support of the see-and-avoid responsibility. Considerable effort has been spent in the design process and the code generation in order to maximize efficiency and performance in terms of computational load and memory consumption. The TSAA architecture is divided into two high-level systems: the "Threats database" and the "Conflict detector". The first receives the traffic data from the ADS-B device and stores each target's data history. The Conflict detector module estimates ownship and target trajectories in order to detect possible future losses of separation between ownship and each target. Finally, the alerts are verified by additional conflict verification logic, in order to prevent possible undesirable behaviors of the alert flag. In order to reduce the computational load, a pre-check evaluation module is used. This pre-check is only a computational optimization, so the performance of the conflict detector system is not modified in terms of the number of alerts detected. The pre-check module uses analytical trajectory propagation for both target and ownship. This provides greater accuracy and avoids the step-by-step propagation, which requires a greater computational load. Furthermore, the pre-check permits the exclusion of targets that are certainly not threats, using an analytical and efficient geometrical approach, in order to decrease the computational load for the following modules. This software improvement is not suggested by FAA documents, and so it is the main innovation of this work. The efficiency and efficacy of this enhancement are verified using fast-time and real-time simulations and by execution on a real device in several FAA scenarios. The final implementation also permits FAA software certification in compliance with the DO-178B standard. The computational load reduction allows the installation of the TSAA application also on devices with multiple applications and/or low capacity in terms of available memory and computational capabilities.
Keywords: traffic situation awareness, general aviation, aircraft conflict detection, computational load reduction, implementation solutions, software certification
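The authors' analytical pre-check is not spelled out in the abstract; a common geometrical screen of this kind is a closest-point-of-approach (CPA) test under constant-velocity propagation, sketched below with assumed threshold values.

```python
import numpy as np

def cpa_precheck(own_pos, own_vel, tgt_pos, tgt_vel,
                 sep_threshold_m=1852.0, horizon_s=120.0):
    """Return False if the target can be safely excluded: it never gets closer than
    sep_threshold_m within horizon_s, assuming straight-line trajectories."""
    dp = np.asarray(tgt_pos, float) - np.asarray(own_pos, float)
    dv = np.asarray(tgt_vel, float) - np.asarray(own_vel, float)
    dv2 = float(np.dot(dv, dv))
    t_cpa = 0.0 if dv2 < 1e-9 else -float(np.dot(dp, dv)) / dv2
    t_cpa = min(max(t_cpa, 0.0), horizon_s)        # clamp to the look-ahead window
    d_cpa = np.linalg.norm(dp + t_cpa * dv)
    return d_cpa < sep_threshold_m                 # True -> keep for full conflict detection

# Ownship flying east at 60 m/s, target converging from the north-east
print(cpa_precheck([0, 0], [60, 0], [5000, 4000], [0, -50]))
```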
Procedia PDF Downloads 286
795 Feasibility Study on the Application of Waste Materials for Production of Sustainable Asphalt Mixtures
Authors: Farzaneh Tahmoorian, Bijan Samali, John Yeaman
Abstract:
Road networks have been expanding all over the world during the past few decades to meet the increasing freight volumes created by population growth and industrial development. At the same time, the rate of generation of solid wastes in society is increasing with population growth, technological development, and changes in people's lifestyles. Thus, the management of solid wastes has become an acute problem. Accordingly, there is a need for greater efficiency in the construction and maintenance of road networks and in reducing the overall cost, especially the utilization of natural materials such as aggregates. An efficient means to reduce the construction and maintenance costs of road networks is to replace natural (virgin) materials with secondary, recycled materials. Recycling will also help to reduce pressure on landfills and the demand for extraction of natural virgin materials, thus ensuring sustainability. The application of solid wastes in the asphalt layer reduces not only the environmental issues associated with waste disposal but also the demand for virgin materials, which will subsequently result in sustainability. Therefore, this research aims to investigate the feasibility of the application of some waste materials, such as glass and construction and demolition wastes, as alternative materials in pavement construction, particularly flexible pavements. To this end, various combinations of different waste materials in certain percentages are considered in designing the asphalt mixture. One of the goals of this research is to determine the optimum percentage of all these materials in the mixture. This is done through a series of tests to evaluate the volumetric properties and resilient modulus of the mixture. The information and data collected from these tests are used to select adequate samples for further assessment through advanced tests, such as the triaxial dynamic test and the fatigue test, in order to investigate the asphalt mixture's resistance to permanent deformation and cracking. This paper presents the results of these investigations on the application of waste materials in asphalt mixtures for the production of a sustainable asphalt mix.
Keywords: asphalt, glass, pavement, recycled aggregate, sustainability
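The volumetric evaluation mentioned above typically rests on standard Superpave-style relations; the sketch below computes air voids and voids in mineral aggregate from assumed specific gravities, purely as an illustration of those textbook formulas rather than the study's measurements.

```python
def mix_volumetrics(Gmb, Gmm, Gsb, Ps):
    """Standard asphalt mix volumetrics.
    Gmb: bulk specific gravity of the compacted mix
    Gmm: theoretical maximum specific gravity
    Gsb: bulk specific gravity of the aggregate blend
    Ps : aggregate content, percent by total mass of mix."""
    va = 100.0 * (Gmm - Gmb) / Gmm          # air voids, %
    vma = 100.0 - Gmb * Ps / Gsb            # voids in mineral aggregate, %
    vfa = 100.0 * (vma - va) / vma          # voids filled with asphalt, %
    return va, vma, vfa

# Illustrative values for a mix containing recycled aggregate
print(mix_volumetrics(Gmb=2.38, Gmm=2.48, Gsb=2.65, Ps=95.0))
```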
Procedia PDF Downloads 237
794 Transformation of Aluminum Unstable Oxyhydroxides in Ultrafine α-Al2O3 in Presence of Various Seeds
Authors: T. Kuchukhidze, N. Jalagonia, Z. Phachulia, R. Chedia
Abstract:
Ceramics obtained on the basis of aluminum oxide have a wide application range because of their unique properties, for example, wear resistance, dielectric characteristics, and the ability to operate at high temperatures and in corrosive atmospheres. Low-temperature synthesis of α-Al2O3 is an energy-economical process, and it is relevant for developing technologies of corundum ceramics fabrication. In the present work, the possibilities of low-temperature transformation of oxyhydroxides into α-Al2O3 in the presence of small amounts of rare-earth element compounds (also Th, Re) are discussed. Unstable aluminium oxyhydroxides were obtained by hydrolysis of aluminium isopropoxide, nitrates, sulphate and chloride in an alkaline environment at temperatures of 80-90ºC. β-Al(OH)3 was obtained from aluminium powder by ultrasonic treatment. Drying of the oxyhydroxide sol was conducted in the presence of various types of seeds, whose amount reaches 0.1-0.2% (mass). Neodymium, holmium, thorium, lanthanum, cerium, gadolinium and dysprosium nitrates and rhenium carbonyls were used as seeds and were added to the sol specimens in amounts of 0.1-0.2% (mass) calculated on the metals. Annealing of the obtained gels was carried out at 70-1100ºC for 2 hrs. The same specimen transforms into α-Al2O3 at 1100ºC. At this temperature, in the presence of lanthanum and gadolinium, the transformation takes place by 70-85%. In the presence of thorium, stabilization of the γ- and θ-phases takes place. It is established that thorium inhibits α-phase generation at 1100ºC, while in all other doped specimens the α-phase is generated at lower temperatures (1000-1050ºC). The following devices were used during the work: X-ray diffractometer DRON-3M (Cu-Kα, Ni filter, 2º/min), OXY-GON high-temperature vacuum furnace, electronic scanning microscopes Nikon ECLIPSE LV 150 and NMM-800TRF, Pulverisette 7 premium line planetary mill, SHIMADZU Dynamic Ultra Micro Hardness Tester DUH-211S, and Analysette 12 DynaSizer.
Keywords: α-Alumina, combustion, phase transformation, seeding
Procedia PDF Downloads 395
793 ChatGPT 4.0 Demonstrates Strong Performance in Standardised Medical Licensing Examinations: Insights and Implications for Medical Educators
Authors: K. O'Malley
Abstract:
Background: The emergence and rapid evolution of large language models (LLMs), i.e., models of generative artificial intelligence (AI), have been unprecedented. ChatGPT is one of the most widely used LLM platforms. Using natural language processing technology, it generates customized responses to user prompts, enabling it to mimic human conversation. Responses are generated using predictive modeling of vast swathes of internet text and data and are further refined and reinforced through user feedback. The popularity of LLMs is increasing, with a growing number of students utilizing these platforms for study and revision purposes. Notwithstanding its many novel applications, LLM technology is inherently susceptible to bias and error. This poses a significant challenge in the educational setting, where academic integrity may be undermined. This study aims to evaluate the performance of the latest iteration of ChatGPT (ChatGPT 4.0) in standardized state medical licensing examinations. Methods: A carefully considered search strategy was used to interrogate the PubMed electronic database. The keywords 'ChatGPT' AND 'medical education' OR 'medical school' OR 'medical licensing exam' were used to identify relevant literature. The search included all peer-reviewed literature published in the past five years and was limited to publications in the English language only. Eligibility was ascertained based on the study title and abstract and confirmed by consulting the full-text document. Data were extracted into a Microsoft Excel document for analysis. Results: The search yielded 345 publications that were screened. 225 original articles were identified, of which 11 met the pre-determined criteria for inclusion in a narrative synthesis. These studies included performance assessments in national medical licensing examinations from the United States, United Kingdom, Saudi Arabia, Poland, Taiwan, Japan and Germany. ChatGPT 4.0 achieved scores ranging from 67.1 to 88.6 percent. The mean score across all studies was 82.49 percent (SD = 5.95). In all studies, ChatGPT exceeded the threshold for a passing grade in the corresponding exam. Conclusion: The capabilities of ChatGPT in standardized academic assessment in medicine are robust. While this technology can potentially revolutionize higher education, it also presents several challenges with which educators have not had to contend before. The overall strong performance of ChatGPT, as outlined above, may lend itself to unfair use (such as the plagiarism of deliverable coursework) and pose unforeseen ethical challenges (arising from algorithmic bias). Conversely, it highlights potential pitfalls if users assume LLM-generated content to be entirely accurate. In the aforementioned studies, ChatGPT exhibits a margin of error between 11.4 and 32.9 percent, which resonates strongly with concerns regarding the quality and veracity of LLM-generated content. It is imperative to highlight these limitations, particularly to students in the early stages of their education, who are less likely to possess the requisite insight or knowledge to recognize errors, inaccuracies or false information. Educators must inform themselves of these emerging challenges to effectively address them and mitigate potential disruption in academic fora.
Keywords: artificial intelligence, ChatGPT, generative AI, large language models, licensing exam, medical education, medicine, university
Procedia PDF Downloads 34
792 Parametric Evaluation for the Optimization of Gastric Emptying Protocols Used in Health Care Institutions
Authors: Yakubu Adamu
Abstract:
The aim of this research was to assess the factors contributing to the need for optimisation of the gastric emptying protocols used in nuclear medicine and molecular imaging (SNMMI) procedures. The objective is to suggest whether optimisation is possible and to provide supporting evidence for the current imaging protocols of the gastric emptying examination used in nuclear medicine. The research involved selected patient studies with 30 dynamic series, processed using ImageJ; from these, the clearance half-time and the retention fraction were calculated for the 60 x 1-minute, 5-minute and 10-minute protocols and other sampling intervals. The study IDs were classified by gastric emptying clearance half-time into normal, abnormal fast, and abnormal slow categories. In the normal category, which represents 50% of the total gastric emptying image IDs processed, the clearance half-time was within the range of 49.5 to 86.6 minutes of the mean counts. Under the abnormal fast category, representing 30% of the total gastric emptying image IDs processed, the clearance half-time fell between 21 and 43.3 minutes of the mean counts, while the abnormal slow category, representing 20%, had clearance half-times within the range of 138.6 to 138.6 minutes of the mean counts. The results indicated that the retention fraction values calculated from the 1, 5, and 10-minute sampling curves and the measured retention fraction values from the sampling curves of the study IDs showed a normal retention fraction of <60% and decreased exponentially with time, as evidenced by low retention fraction ratios of <10% after 4 hours. Thus, the categories did not change, suggesting that these values could feasibly be used instead of having to acquire actual images. The findings suggest that the current gastric emptying protocol can be optimized by acquiring fewer images. The study recommends that gastric emptying studies be performed with imaging at a minimum of 0, 1, 2, and 4 hours after meal ingestion.
Keywords: gastric emptying, retention fraction, clearance halftime, optimisation, protocol
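As a minimal sketch of the quantities discussed above, assuming a simple mono-exponential emptying model (a common simplification; solid-meal curves often include a lag phase), the retention fraction and clearance half-time relate as follows:

```python
import numpy as np

def retention_fraction(t_min, half_time_min):
    """Fraction of the meal remaining at time t for mono-exponential emptying."""
    return np.exp(-np.log(2.0) * np.asarray(t_min, float) / half_time_min)

def half_time_from_counts(times_min, counts):
    """Estimate clearance half-time by a log-linear fit of decay-corrected counts."""
    slope, _ = np.polyfit(np.asarray(times_min, float), np.log(counts), 1)
    return -np.log(2.0) / slope

# Illustrative sampling at 0, 60, 120, 240 min for a half-time of ~85 min
t = np.array([0, 60, 120, 240])
counts = 1000.0 * retention_fraction(t, 85.0)
print("fitted T1/2 [min]:", round(half_time_from_counts(t, counts), 1))
print("retention at 4 h [%]:", round(100 * retention_fraction(240, 85.0), 1))
```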
Procedia PDF Downloads 10
791 Reinforcement Learning For Agile CNC Manufacturing: Optimizing Configurations And Sequencing
Authors: Huan Ting Liao
Abstract:
In a typical manufacturing environment, computer numerical control (CNC) machining is essential for automating production through precise computer-controlled tool operations, significantly enhancing efficiency and ensuring consistent product quality. However, traditional CNC production lines often rely on manual loading and unloading, limiting operational efficiency and scalability. Although automated loading systems have been developed, they frequently lack sufficient intelligence and configuration efficiency, requiring extensive setup adjustments for different products and impacting overall productivity. This research addresses the job shop scheduling problem (JSSP) in CNC machining environments, aiming to minimize total completion time (makespan) and maximize CNC machine utilization. We propose a novel approach using reinforcement learning (RL), specifically the Q-learning algorithm, to optimize scheduling decisions. The study simulates the JSSP, incorporating robotic arm operations, machine processing times, and work order demand allocation to determine optimal processing sequences. The Q-learning algorithm enhances machine utilization by dynamically balancing workloads across CNC machines, adapting to varying job demands and machine states. This approach offers robust solutions for complex manufacturing environments by automating decision-making processes for job assignments. Additionally, we evaluate various layout configurations to identify the most efficient setup. By integrating RL-based scheduling optimization with layout analysis, this research aims to provide a comprehensive solution for improving manufacturing efficiency and productivity in CNC-based job shops. The proposed method's adaptability and automation potential promise significant advancements in tackling dynamic manufacturing challenges.
Keywords: job shop scheduling problem, reinforcement learning, operations sequence, layout optimization, q-learning
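The abstract does not give the exact state/action encoding or reward, so the sketch below is only a generic tabular Q-learning update of the kind referenced, applied to a toy dispatching choice (which machine receives the next job); the states, reward and parameters are assumptions.

```python
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.95, 0.1        # learning rate, discount, exploration
q_table = defaultdict(float)                   # Q[(state, action)] -> value

def choose_action(state, actions):
    """Epsilon-greedy selection over the available machine assignments."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table[(state, a)])

def q_update(state, action, reward, next_state, next_actions):
    """Standard Q-learning temporal-difference update."""
    best_next = max((q_table[(next_state, a)] for a in next_actions), default=0.0)
    td_target = reward + gamma * best_next
    q_table[(state, action)] += alpha * (td_target - q_table[(state, action)])

# Toy step: state = tuple of machine queue lengths, action = machine index
state, actions = (2, 0, 1), [0, 1, 2]
a = choose_action(state, actions)
q_update(state, a, reward=-1.0, next_state=(2, 1, 1), next_actions=actions)  # reward penalises makespan growth
print(dict(q_table))
```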
Procedia PDF Downloads 26
790 Verification and Validation of Simulated Process Models of KALBR-SIM Training Simulator
Authors: T. Jayanthi, K. Velusamy, H. Seetha, S. A. V. Satya Murty
Abstract:
Verification and validation of simulated process models is the most important phase of the simulator life cycle. Evaluation of simulated process models based on verification and validation techniques checks the closeness of each component model (in a simulated network) to the real system/process with respect to dynamic behaviour under steady-state and transient conditions. The process of verification and validation helps to qualify the process simulator for its intended purpose, whether for providing comprehensive training or for design verification. In general, model verification is carried out by comparison of simulated component characteristics with the original requirements to ensure that each step in the model development process completely incorporates all the design requirements. Validation testing is performed by comparing the simulated process parameters with the actual plant process parameters, either in standalone mode or in integrated mode. A full-scope replica operator training simulator for the Prototype Fast Breeder Reactor (PFBR), named KALBR-SIM (Kalpakkam Breeder Reactor Simulator), has been developed at IGCAR, Kalpakkam, INDIA, wherein the main participants are engineers/experts belonging to the modeling team and the process design and instrumentation and control design teams. This paper discusses the verification and validation process in general, the evaluation procedure adopted for the PFBR operator training simulator, the methodology followed for verifying the models, the reference documents and standards used, etc. It details the importance of internal validation by design experts, subsequent validation by an external agency consisting of experts from various fields, model improvement by tuning based on experts' comments, final qualification of the simulator for its intended purpose, and the difficulties faced while coordinating the various activities.
Keywords: Verification and Validation (V&V), Prototype Fast Breeder Reactor (PFBR), Kalpakkam Breeder Reactor Simulator (KALBR-SIM), steady state, transient state
Procedia PDF Downloads 266
789 A Tool Tuning Approximation Method: Exploration of the System Dynamics and Its Impact on Milling Stability When Amending Tool Stickout
Authors: Nikolai Bertelsen, Robert A. Alphinas, Klaus B. Orskov
Abstract:
The shortest possible tool stickout has been the traditional go-to approach, with expectations of increased stability and productivity. However, experimental studies at the Danish Advanced Manufacturing Research Center (DAMRC) have proven that for some tool stickout lengths there exist local productivity optimums when utilizing Stability Lobe Diagrams for chatter avoidance. This contradicts traditional logic and the best practices taught to machinists. This paper explores the vibrational characteristics and behaviour of a milling system over the tool stickout length. The experimental investigation has been conducted by tap testing multiple endmills while varying the tool stickout length. For each length, the modal parameters have been recorded and mapped to visualize behavioural tendencies. Furthermore, the paper explores the correlation between the modal parameters and the Stability Lobe Diagram to outline the influence and importance of each parameter in a multi-mode system. The insights are conceptualized into a tool tuning approximation solution. It builds on an almost linear change in the natural frequencies when amending tool stickout, which results in changed positions of the chatter-free stability lobes. Furthermore, if the natural frequencies of two modes become too close, the dynamic absorber effect phenomenon sets in. This phenomenon increases the critical stable depth of cut, allowing for a more stable milling process. Validation tests of the tool tuning approximation solution have shown varying success. This outlines the need for further research on the boundary conditions of the solution, in order to understand under which conditions the tool tuning approximation solution is applicable. If the conditions are defined, the conceptualized tool tuning approximation solution outlines an approach for quickly and roughly approximating tool stickouts, with the potential for increased stiffness and optimized productivity.
Keywords: milling, modal parameters, stability lobes, tap testing, tool tuning
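The paper's own multi-mode treatment is not reproduced here; the sketch below only illustrates, for a single mode with assumed modal parameters and cutting coefficient, the classical regenerative-chatter estimate of the critical depth of cut from a frequency response function, which is the kind of relation a Stability Lobe Diagram is built from.

```python
import numpy as np

def critical_depth_of_cut(freq_hz, fn_hz, k_n_per_m, zeta, ks_n_per_m2):
    """Single-mode sketch: tool-tip FRF and the classical limiting depth of cut
    a_lim = -1 / (2 * Ks * Re(G)), valid only where Re(G) < 0."""
    r = np.asarray(freq_hz, float) / fn_hz
    G = 1.0 / (k_n_per_m * (1.0 - r**2 + 2j * zeta * r))   # receptance [m/N]
    re_g = G.real
    a_lim = np.full_like(re_g, np.nan)
    mask = re_g < 0.0                                      # only a negative real part can cause chatter
    a_lim[mask] = -1.0 / (2.0 * ks_n_per_m2 * re_g[mask])
    return a_lim

# Assumed modal parameters (e.g. from a tap test) and specific cutting force coefficient
f = np.linspace(200, 2000, 2000)
a = critical_depth_of_cut(f, fn_hz=900.0, k_n_per_m=2.0e7, zeta=0.03, ks_n_per_m2=6.0e8)
print("minimum stable depth of cut [mm]:", round(np.nanmin(a) * 1000, 3))
```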
Procedia PDF Downloads 157
788 Aero-Hydrodynamic Model for a Floating Offshore Wind Turbine
Authors: Beatrice Fenu, Francesco Niosi, Giovanni Bracco, Giuliana Mattiazzo
Abstract:
In recent years, Europe has seen a great development of renewable energy, with a view to reducing polluting emissions and transitioning to cleaner forms of energy, as established by the European Green New Deal. Wind energy has come to cover almost 15% of European electricity needs and is constantly growing. In particular, far-offshore wind turbines are attractive from the point of view of exploiting high-speed winds and high wind availability. Considering offshore wind turbine siting, which combines resource analysis, bathymetry, environmental regulations and maritime traffic, and considering the influence of waves on the stability of the platform, the hydrodynamic characteristics of the platform become fundamental for the evaluation of the performance of the turbine, especially for the pitch motion. Many platform geometries have been studied and used in the last few years. Their concepts are based upon different considerations, such as hydrostatic stability, material, cost and mooring system. A new method to reach a high-performance substructure for different kinds of wind turbines is proposed. The system, which considers the substructure, mooring and wind turbine, is implemented in OrcaFlex, and the simulations are performed considering several sea states and wind speeds. An external dynamic library is implemented for the turbine control system. The study shows a comparison among different substructures and the new concepts developed. In order to validate the model, CFD simulations will be performed by means of STAR-CCM+, and a comparison between rigid and elastic body models of the blades and tower will be carried out. A global model will be built to predict the productivity of the floating turbine according to siting, resources, substructure and mooring. The Levelized Cost of Electricity (LCOE) of the system is estimated, giving a complete overview of the advantages of floating offshore wind turbine plants. Different case studies will be presented.
Keywords: aero-hydrodynamic model, computational fluid dynamics, floating offshore wind, siting, verification, and validation
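The LCOE estimate mentioned above follows the standard discounted-cost definition; the sketch below applies it with made-up capital, operating and energy figures, purely to show the calculation rather than the study's inputs.

```python
def lcoe(capex, opex_per_year, energy_mwh_per_year, lifetime_years, discount_rate):
    """Levelized Cost of Electricity = discounted lifetime costs / discounted lifetime energy."""
    disc_costs = capex + sum(opex_per_year / (1 + discount_rate) ** t
                             for t in range(1, lifetime_years + 1))
    disc_energy = sum(energy_mwh_per_year / (1 + discount_rate) ** t
                      for t in range(1, lifetime_years + 1))
    return disc_costs / disc_energy   # currency per MWh

# Illustrative figures for a single floating unit (not the paper's case studies)
print(round(lcoe(capex=40e6, opex_per_year=1.5e6,
                 energy_mwh_per_year=35_000, lifetime_years=25,
                 discount_rate=0.07), 1), "EUR/MWh")
```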
Procedia PDF Downloads 215
787 Aerial Photogrammetry-Based Techniques to Rebuild the 30-Year Landform Changes of a Landslide-Dominated Watershed in Taiwan
Authors: Yichin Chen
Abstract:
Taiwan is an island characterized by active tectonics and high erosion rates. Monitoring the dynamic landscape of Taiwan is an important issue for disaster mitigation, geomorphological research, and watershed management. Long-term, high-spatiotemporal-resolution landform data are essential for quantifying and simulating geomorphological processes and for developing warning systems. Recently, advances in unmanned aerial vehicle (UAV) and computational photogrammetry technology have provided an effective way to rebuild and monitor topography changes at high spatio-temporal resolution. This study rebuilds the 30 years of landform change in the Aiyuzi watershed over 1986-2017 by using aerial photogrammetry-based techniques. The Aiyuzi watershed, located in central Taiwan with an area of 3.99 km², is known for its frequent landslide and debris flow disasters. This study took aerial photos using a UAV and collected multi-temporal historical stereo photographs taken by the Aerial Survey Office of Taiwan's Forestry Bureau. To rebuild the orthoimages and digital surface models (DSMs), Pix4DMapper, a photogrammetry software, was used. Furthermore, to control model accuracy, a set of ground control points was surveyed using eGPS. The results show that the generated DSMs have ground sampling distances (GSD) of ~10 cm and ~0.3 cm from the UAV and historical photographs, respectively, and a vertical error of ~1 m. By comparing the DSMs, many deep-seated landslides (with depths over 20 m) are found to have occurred upstream in the Aiyuzi watershed. Even though a large amount of sediment is delivered from the landslides, the steep main channel has sufficient capacity to transport sediment out of the channel and to erode the river bed to ~20 m in depth. Most sediment is transported to the outlet of the watershed and deposited in the downstream channel. This case study shows that UAV and photogrammetry technology are useful for effective monitoring of topography change.
Keywords: aerial photogrammetry, landslide, landform change, Taiwan
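A DSM-differencing step of the kind described above (often called a DEM of Difference) can be sketched with NumPy as below; the arrays, cell size and threshold are placeholders, not the Aiyuzi data.

```python
import numpy as np

def dem_of_difference(dsm_new, dsm_old, cell_size_m, min_change_m=1.0):
    """Elevation change grid and erosion/deposition volumes between two DSMs.
    min_change_m screens out differences below the vertical error (~1 m here)."""
    dz = np.asarray(dsm_new, float) - np.asarray(dsm_old, float)
    dz[np.abs(dz) < min_change_m] = 0.0
    cell_area = cell_size_m ** 2
    erosion_m3 = -dz[dz < 0].sum() * cell_area
    deposition_m3 = dz[dz > 0].sum() * cell_area
    return dz, erosion_m3, deposition_m3

# Tiny synthetic grids standing in for the 1986 and 2017 DSMs
old = np.array([[120.0, 121.0], [119.5, 118.0]])
new = np.array([[ 98.0, 120.8], [119.6, 121.5]])   # landslide scar and a downstream deposit
dz, ero, dep = dem_of_difference(new, old, cell_size_m=10.0)
print(dz, "eroded m^3:", ero, "deposited m^3:", dep)
```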
Procedia PDF Downloads 157
786 Renovate to nZEB of an Existing Building in the Mediterranean Area: Analysis of the Use of Renewable Energy Sources for the HVAC System
Authors: M. Baratieri, M. Beccali, S. Corradino, B. Di Pietra, C. La Grassa, F. Monteleone, G. Morosinotto, G. Puglisi
Abstract:
The energy renovation of existing buildings represents an important opportunity to increase decarbonization and the sustainability of urban environments. In this context, the work carried out has the objective of demonstrating the technical and economic feasibility of an energy renovation of a public office building located on the island of Lampedusa in the Mediterranean Sea. By applying the Italian transpositions of European Directives 2010/31/EU and 2009/28/EC, the building's energy requirement has been reduced from the current 111.7 kWh/m² to 16.4 kWh/m². The result achieved classifies the building as an nZEB (nearly Zero Energy Building) according to the Italian national definition. The analysis was carried out using, in parallel, quasi-stationary software normally used in the professional field and a dynamic simulation model often used in the academic world. The proposed interventions cover the components of the building's envelope, the heating-cooling system and the supply of energy from renewable sources. On these latter points, the analysis focused on assessing two aspects that affect the supply of renewable energy. The first concerns the use of advanced logic control systems for the air conditioning units in order to increase photovoltaic self-consumption. With these adjustments, a considerable increase in photovoltaic self-consumption and a decrease in the electricity exported to the island's electricity grid have been obtained. The second point concerns the evaluation of the building's energy classification considering the real efficiency of the heating-cooling plant. Energy plants normally have a lower operational efficiency than the design value for multiple reasons; the decrease in the energy classification of the building due to this factor has been quantified. This study represents an important example for the evaluation of the best interventions for the energy renovation of buildings in the Mediterranean climate and a good description of the correct methodology to evaluate the resulting improvements.
Keywords: heat pumps, HVAC systems, nZEB renovation, renewable energy sources
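Photovoltaic self-consumption, as discussed above, is commonly quantified as the share of PV generation used on site; the short sketch below computes it from hourly profiles (the numbers are invented for illustration).

```python
import numpy as np

def pv_self_consumption(load_kw, pv_kw):
    """Share of PV production consumed on site and share of load covered by PV."""
    load = np.asarray(load_kw, float)
    pv = np.asarray(pv_kw, float)
    used_on_site = np.minimum(load, pv).sum()
    self_consumption = used_on_site / pv.sum()     # fraction of PV not exported to the grid
    self_sufficiency = used_on_site / load.sum()   # fraction of the load met by PV
    return self_consumption, self_sufficiency

# Hypothetical 6-hour midday window for the office building
load = [8, 9, 11, 12, 10, 9]      # kW, partly shifted by the HVAC control logic
pv   = [4, 9, 14, 15, 12, 6]      # kW
print(pv_self_consumption(load, pv))
```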
Procedia PDF Downloads 453
785 Psychological Capital and Intention for Self-Employment among Students in HEIs: A Multi-group Analysis Approach
Authors: Ugur Choban, Aruzhan Zhaksylyk, Assylbek Nurgabdeshov
Abstract:
In recent years, there has been an increasing understanding of the value of encouraging entrepreneurial attitudes in university students. This is motivated by the belief that stimulating entrepreneurship not only promotes economic growth but also fosters innovation. This study examines the complex link between psychological capital and entrepreneurial intention among university students, addressing critical gaps, with a specific emphasis on how contextual factors like academic support and past business experience impact this dynamic. Using a quantitative research method, data were gathered from a broad sample of 300 university students drawn from several faculties. The study used a questionnaire that included the Psychological Capital Questionnaire (PCQ) to assess psychological capital and a validated scale for entrepreneurial intention, as well as binary measures of academic support and prior entrepreneurial experience. Statistical investigations, including multigroup analyses performed with SmartPLS software, provided interesting insights into the effect of contextual factors on the relationship between psychological capital and entrepreneurial intention. The findings highlight that psychological capital had a strong favorable influence on university students' entrepreneurial inclinations. Furthermore, the study found that academic support enhances the influence of psychological capital on entrepreneurial intentions, emphasizing the significance of institutional backing in fostering entrepreneurial mindsets. In addition, students with prior entrepreneurial experience had a stronger propensity for entrepreneurship, showing a synergistic link between psychological capital and entrepreneurial background. These findings have both theoretical and practical implications. By explaining the mechanisms by which psychological capital promotes entrepreneurial intentions, the study contributes to the establishment of focused entrepreneurship education programs and support activities that are suited to student requirements. Policymakers may use these findings to create policies that encourage student entrepreneurship, ultimately encouraging economic development and innovation.
Keywords: academic support, entrepreneurial intentions, higher education institutions, psychological capital, prior entrepreneurial experience
Procedia PDF Downloads 58
784 Colloid-Based Biodetection at Aqueous Electrical Interfaces Using Fluidic Dielectrophoresis
Authors: Francesca Crivellari, Nicholas Mavrogiannis, Zachary Gagnon
Abstract:
Portable diagnostic methods have become increasingly important for a number of different purposes: point-of-care screening in developing nations, environmental contamination studies, bio/chemical warfare agent detection, and end-user use for commercial health monitoring. The cheapest and most portable methods currently available are paper-based: lateral flow and dipstick methods are widely available in drug stores for use in pregnancy detection and blood glucose monitoring. These tests are successful because they are cheap to produce, easy to use, and require minimally invasive sampling. While adequate for their intended uses, in the realm of blood-borne pathogens and numerous cancers, these paper-based methods become unreliable, as they lack the nM/pM sensitivity currently achieved by clinical diagnostic methods. Clinical diagnostics, however, utilize techniques involving surface plasmon resonance (SPR) and enzyme-linked immunosorbent assays (ELISAs), which are expensive and unfeasible in terms of portability. To develop a better, competitive biosensor, we must reduce the cost of one, or increase the sensitivity of the other. Electric fields are commonly utilized in microfluidic devices to manipulate particles, biomolecules, and cells. Applications in this area, however, are primarily limited to interfaces formed between immiscible phases. Miscible, liquid-liquid interfaces are common in microfluidic devices, and are easily reproduced with simple geometries. Here, we demonstrate the use of electric fields at liquid-liquid electrical interfaces, known as fluidic dielectrophoresis (fDEP), for biodetection in a microfluidic device. In this work, we apply an AC electric field across concurrent laminar streams with differing conductivities and permittivities to polarize the interface and induce a discernible, near-immediate, frequency-dependent interfacial tilt. We design this aqueous electrical interface, which becomes the biosensing "substrate," to be intelligent: it "moves" only when a target of interest is present. This motion requires neither labels nor expensive electrical equipment, so the biosensor is inexpensive and portable, yet still capable of sensitive detection. Nanoparticles, due to their high surface-area-to-volume ratio, are often incorporated to enhance the detection capabilities of schemes like SPR and fluorimetric assays. Most studies currently investigate binding at an immobilized solid-liquid or solid-gas interface, where particles are adsorbed onto a planar surface, functionalized with a receptor to create a reactive substrate, and subsequently flushed with a fluid or gas containing the relevant analyte. These typically involve many preparation and rinsing steps, and are susceptible to surface fouling. Our microfluidic device continuously flows and renews the "substrate", and is thus not subject to fouling. In this work, we demonstrate the ability to electrokinetically detect biomolecules binding to functionalized nanoparticles at liquid-liquid interfaces using fDEP. In biotin-streptavidin experiments, we report binding detection limits on the order of 1-10 pM, without amplifying signals or concentrating samples. We also demonstrate the ability to detect this interfacial motion, and thus the presence of binding, using impedance spectroscopy, allowing this scheme to become non-optical, in addition to being label-free.
Keywords: biodetection, dielectrophoresis, microfluidics, nanoparticles
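fDEP at a liquid-liquid interface is driven by the frequency-dependent mismatch in conductivity and permittivity of the two co-flowing streams; as a related, well-known illustration of that frequency dependence, the sketch below evaluates the classical Clausius-Mossotti factor used in particle dielectrophoresis (an analogy, not the authors' interfacial model), with assumed fluid and particle properties.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def clausius_mossotti(freq_hz, eps_p, sigma_p, eps_m, sigma_m):
    """Real part of the Clausius-Mossotti factor for a sphere in a medium.
    Complex permittivity: eps* = eps0 * eps_r - j * sigma / omega."""
    w = 2.0 * np.pi * np.asarray(freq_hz, float)
    ep = EPS0 * eps_p - 1j * sigma_p / w
    em = EPS0 * eps_m - 1j * sigma_m / w
    return ((ep - em) / (ep + 2.0 * em)).real

# Assumed values: polystyrene-like particle in a low-conductivity buffer
f = np.logspace(3, 8, 6)
print(np.round(clausius_mossotti(f, eps_p=2.55, sigma_p=2e-3,
                                 eps_m=78.0, sigma_m=1e-4), 3))
```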
Procedia PDF Downloads 388
783 Best Combination of Design Parameters for Buildings with Buckling-Restrained Braces
Authors: Ángel de J. López-Pérez, Sonia E. Ruiz, Vanessa A. Segovia
Abstract:
The vulnerability of buildings to seismic activity has been studied extensively since the middle of the last century. As a solution to the structural and non-structural damage caused by intense ground motions, several seismic energy dissipating devices, such as buckling-restrained braces (BRB), have been proposed. BRB have been shown to be effective in concentrating a large portion of the energy transmitted to the structure by the seismic ground motion. A design approach for buildings with BRB elements, based on a seismic displacement-based formulation, has recently been proposed by the co-authors of this paper. It is a practical and easy design method which simplifies the work of structural engineers. The method is used here for the design of the structure-BRB damper system. The objective of the present study is to extend and apply a methodology to find the best combination of design parameters for multiple-degree-of-freedom (MDOF) structural frame-BRB systems, taking into account simultaneously: 1) initial costs and 2) an adequate engineering demand parameter. The design parameters considered here are the stiffness ratio (α = Kframe/Ktotal) and the strength ratio (γ = Vdamper/Vtotal), where K represents structural stiffness, V represents structural strength, and the subscripts "frame", "damper" and "total" represent the structure without dampers, the BRB dampers and the total frame-damper system, respectively. The selection of the best combination of design parameters α and γ is based on an initial cost analysis and on the structural dynamic response of the frame-damper system. The methodology is applied to a 12-story, 5-bay steel building with BRB located on the intermediate soil of Mexico City. The best combination of design parameters α and γ is found for the building with BRB under study.
Keywords: best combination of design parameters, BRB, buildings with energy dissipating devices, buckling-restrained braces, initial costs
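The authors' cost and demand models are not given in the abstract; the sketch below only shows, with invented surrogate functions, how a grid over the stiffness ratio α = Kframe/Ktotal and the strength ratio γ = Vdamper/Vtotal could be screened against a weighted combination of normalised initial cost and an engineering demand parameter.

```python
import numpy as np

def select_alpha_gamma(cost_fn, demand_fn, w_cost=0.5, w_demand=0.5, n=21):
    """Exhaustive screen over (alpha, gamma) in (0, 1); lower score is better."""
    alphas = np.linspace(0.05, 0.95, n)
    gammas = np.linspace(0.05, 0.95, n)
    best = None
    for a in alphas:
        for g in gammas:
            score = w_cost * cost_fn(a, g) + w_demand * demand_fn(a, g)
            if best is None or score < best[0]:
                best = (score, a, g)
    return best

# Placeholder surrogates (NOT the paper's models): cost grows with the BRB strength share,
# drift demand falls as the braces take a larger share of stiffness and strength.
cost = lambda a, g: 0.6 + 0.4 * g
drift = lambda a, g: 1.0 / (0.3 + 0.7 * (1 - a) + g)
print(select_alpha_gamma(cost, drift))
```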
Procedia PDF Downloads 258
782 A Molecular Dynamic Simulation Study to Explore the Role of Chain Length in Predicting Useful Characteristic Properties of Commodity and Engineering Polymers
Authors: Lokesh Soni, Sushanta Kumar Sethi, Gaurav Manik
Abstract:
This work attempts to use molecular simulations to create equilibrated structures of a range of commercially used polymers. Equilibrated structures generated for polyvinyl acetate (isotactic), polyvinyl alcohol (atactic), polystyrene, polyethylene, polyamide 66, poly(dimethyl siloxane), polycarbonate, poly(ethylene oxide), polyamide 12, natural rubber, polyurethane, polycarbonate (bisphenol-A) and poly(ethylene terephthalate) are employed to estimate the chain length that correctly predicts the chain parameters and properties. Further, the equilibrated structures are used to predict properties such as density, solubility parameter, cohesive energy density, surface energy, and the Flory-Huggins interaction parameter. The simulated densities for polyvinyl acetate, polyvinyl alcohol, polystyrene, polypropylene, and polycarbonate are 1.15 g/cm³, 1.125 g/cm³, 1.02 g/cm³, 0.84 g/cm³ and 1.223 g/cm³, respectively, and are found to be in good agreement with the available literature estimates. However, the critical numbers of repeating units, i.e. the degrees of polymerization after which the solubility parameter showed saturation, were 15, 20, 25, 10 and 20, respectively. This also indicates that the properties that dictate the miscibility of two or more polymers in their blends are strongly dependent on the chosen polymer and its characteristic properties. An attempt has been made to correlate such properties with polymer properties like the Kuhn length, the free volume and the energy term, which play a vital role in predicting the mentioned properties. These results help us to screen and propose a useful library which may be used by research groups in estimating polymer properties using molecular simulations of chains with the predicted critical lengths. The library should obviate the need for researchers to spend effort in finding the critical chain length needed for simulating the mentioned polymer properties.
Keywords: Kuhn length, Flory Huggins interaction parameter, cohesive energy density, free volume
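The solubility parameter and Flory-Huggins interaction parameter quoted above follow standard definitions; the sketch below applies them (δ = sqrt(CED), χ12 ≈ Vref(δ1 - δ2)²/RT) with illustrative numbers rather than the simulated values.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def solubility_parameter(ced_j_per_m3):
    """Hildebrand solubility parameter from cohesive energy density, in MPa^0.5."""
    return math.sqrt(ced_j_per_m3) / 1000.0   # (J/m^3)^0.5 = Pa^0.5 -> MPa^0.5

def flory_huggins_chi(delta1_mpa05, delta2_mpa05, v_ref_cm3_per_mol=100.0, temp_k=298.15):
    """Enthalpic estimate of the interaction parameter between two polymers."""
    d2 = (delta1_mpa05 - delta2_mpa05) ** 2        # MPa = J/cm^3
    return v_ref_cm3_per_mol * d2 / (R * temp_k)

# Example: a CED of ~3.3e8 J/m^3 gives delta ~18.2 MPa^0.5 (typical of many polymers)
d1 = solubility_parameter(3.3e8)
d2 = solubility_parameter(3.9e8)
print(round(d1, 1), round(d2, 1), "chi12 =", round(flory_huggins_chi(d1, d2), 3))
```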
Procedia PDF Downloads 195
781 Development of Power System Stability by Reactive Power Planning in Wind Power Plants with Doubly Fed Induction Generators
Authors: Mohammad Hossein Mohammadi Sanjani, Ashknaz Oraee, Oriol Gomis Bellmunt, Vinicius Albernaz Lacerda Freitas
Abstract:
The use of distributed and renewable sources in power systems has grown significantly in recent years. One of the most popular sources is wind farms, which have grown massively. However, when wind farms are connected to the grid, they can cause problems such as reduced voltage stability, frequency fluctuations and reduced dynamic stability. Variable-speed (asynchronous) generators are used due to the uncontrollability of wind speed, especially Doubly Fed Induction Generators (DFIG). The most important disadvantage of DFIGs is their sensitivity to voltage drops. In the case of faults, a large volume of reactive power is induced; therefore, the use of FACTS devices such as SVC and STATCOM is suitable for improving the system output performance. They increase the capacity of lines and also help the network ride through fault conditions. In this paper, in addition to modeling the reactive power control system in a DFIG with its converter, FACTS devices have been used in a DFIG wind turbine to improve the stability of a power system containing two synchronous sources. Optimal control systems have been designed for the FACTS devices employed in order to minimize the fluctuations caused by system disturbances. For this purpose, a suitable method is proposed for the selection of the nine parameters of the MPSH phase lead-lag compensators of the reactive power compensators. The design algorithm is formulated as an optimization problem searching for the optimal parameters of the controller. Simulation results show that the proposed controller improves the stability of the network and damps the fluctuations at the desired speed.
Keywords: renewable energy sources, wind power plant optimization, stability, reactive power compensator, doubly fed induction generator, optimal control, genetic algorithm
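The keywords point to a genetic algorithm for tuning the nine compensator parameters; the sketch below is a bare-bones real-coded GA minimising a placeholder objective (a stand-in for a damping index such as the ITAE of the oscillations), not the paper's actual fitness function or parameter bounds.

```python
import random

N_PARAMS, POP, GENS = 9, 30, 50
BOUNDS = (0.0, 10.0)                      # assumed identical bounds for all parameters

def fitness(params):
    """Placeholder objective: pretend the best damping occurs at a known point."""
    target = [5.0] * N_PARAMS
    return sum((p - t) ** 2 for p, t in zip(params, target))

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) < fitness(b) else b

def crossover_mutate(p1, p2, pm=0.1):
    child = [random.choice(pair) for pair in zip(p1, p2)]            # uniform crossover
    return [min(max(g + random.gauss(0, 0.3), BOUNDS[0]), BOUNDS[1])
            if random.random() < pm else g for g in child]           # Gaussian mutation

population = [[random.uniform(*BOUNDS) for _ in range(N_PARAMS)] for _ in range(POP)]
for _ in range(GENS):
    population = [crossover_mutate(tournament(population), tournament(population))
                  for _ in range(POP)]
best = min(population, key=fitness)
print([round(g, 2) for g in best], round(fitness(best), 3))
```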
Procedia PDF Downloads 96
780 Effect of Light Spectra, Light Intensity, and HRT on the Co-Production of Phycoerythrin and Exopolysaccharides from Porphyridium marinum
Authors: Rosaria Tizzani, Tomas Morosinotto, Fabrizio Bezzo, Eleonora Sforza
Abstract:
The red microalga Porphyridium marinum CCAP 13807/10 has the potential to produce a broad range of commercially valuable chemicals such as PhycoErythrin (PE) and sulphated ExoPolySaccharides (EPS). Multiple abiotic factors influence the growth of Porphyridium sp., e.g. the wavelength of the light source and the cultivation strategy (one or two steps; batch, semi-continuous or continuous regime). The microalga of interest is cultivated in a two-step system. First, the culture grows photoautotrophically in a controlled bioreactor with pH-dependent CO2 injection, temperature monitoring, and remote control of light intensity and LED wavelength, operated in semicontinuous mode. In the second step, the harvested biomass is subjected to mixotrophic conditions to enhance further growth. Preliminary tests have been performed to define the suitable media, salinity, pH, and organic carbon substrate to obtain the highest biomass productivity. Dynamic light and operational conditions (e.g. HRT) are evaluated to achieve high biomass production, high PE accumulation in the biomass, and high EPS release into the medium. Porphyridium marinum is able to adapt its photosynthetic apparatus chromatically in order to exploit the full light spectrum efficiently. The effects of specific narrow LED wavelengths (white W, red R, green G, blue B) and of combinations of LEDs (WR, WB, WG, BR, BG, RG) are identified to understand the phenomenon of chromatic adaptation under photoautotrophic conditions. The effects of light intensity, residence time, and light quality are investigated to define optimal operational strategies for full-scale commercial applications. Production of biomass, phycobiliproteins, PE, EPS, EPS sulfate content, EPS composition, chlorophyll-a, and pigment content are monitored to determine the effect of LED wavelength on the cultivation of Porphyridium marinum, in order to optimize the production of these multiple, highly valuable bioproducts of commercial interest.
Keywords: red microalgae, LED, exopolysaccharide, phycoerythrin
Procedia PDF Downloads 108779 Tripeptide Inhibitor: The Simplest Aminogenic PEGylated Drug against Amyloid Beta Peptide Fibrillation
Authors: Sutapa Som Chaudhury, Chitrangada Das Mukhopadhyay
Abstract:
Alzheimer’s disease has been a well-known form of dementia since its discovery in 1906. Current Food and Drug Administration-approved medications, e.g. cholinesterase inhibitors and memantine, offer modest symptomatic relief but do not play any role in disease modification or recovery. In the last three decades, many small molecules, chaperones, synthetic peptides, and partial β-secretase enzyme blockers have been tested for the development of a drug against Alzheimer’s, though none passed phase 3 clinical trials. Here, in this study, we designed a PEGylated, aminogenic, tripeptidic polymer with two different molecular weights, based on the aggregation-prone amino acid sequence 17-20 of amyloid beta (Aβ) 1-42. Being conjugated with poly-ethylene glycol (PEG), which self-assembles into hydrophilic nanoparticles, these PEGylated tripeptides constitute a very good drug delivery system that crosses the blood-brain barrier while the peptide remains protected from proteolytic degradation and non-specific protein interactions. Moreover, being completely aminogenic, they would not cause any side effects. These peptide inhibitors were evaluated for their effectiveness against Aβ42 fibrillation at the early stage of oligomer-to-fibril formation, as well as for preformed fibril clearance, via the Thioflavin T (ThT) assay, dynamic light scattering analyses, atomic force microscopy, and scanning electron microscopy. The inhibitors proved to be safe at a concentration as high as 20 µM in the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) dye reduction assay. Moreover, SH-SY5Y neuroblastoma cells showed greater survivability when treated with the inhibitors following Aβ42 fibril and oligomer exposure, as compared with control neuroblastoma cells treated with Aβ42 fibrils and/or oligomers alone. These results make the peptidic inhibitors promising compounds for the discovery of alternative medications for Alzheimer’s disease.Keywords: Alzheimer’s disease, alternative medication, amyloid beta, PEGylated peptide
Procedia PDF Downloads 209778 Disaster Response Training Simulator Based on Augmented Reality, Virtual Reality, and MPEG-DASH
Authors: Sunho Seo, Younghwan Shin, Jong-Hong Park, Sooeun Song, Junsung Kim, Jusik Yun, Yongkyun Kim, Jong-Moon Chung
Abstract:
In order to effectively cope with large and complex disasters, disaster response training is needed. Recently, disaster response training led by the ROK (Republic of Korea) government has been implemented through a four-year R&D project, which has several functions similar to those of the HSEEP (Homeland Security Exercise and Evaluation Program) of the United States, but also several distinct features. Due to the unpredictability and diversity of disasters, existing training methods have many limitations in providing experience in the efficient use of disaster incident response and recovery resources. The challenge is always to be as efficient and effective as possible with the limited human and material/physical resources available, given the time and environmental circumstances. To enable repeated training under diverse scenarios, a combined AR (Augmented Reality) and VR (Virtual Reality) simulator is under development. Unlike existing disaster response training, simulator-based training (which allows simultaneous multi-user training via remote login) is free from time and space constraints and can be repeated with different combinations of functions and disaster situations. There are related systems, such as the ADMS (Advanced Disaster Management Simulator) developed by ETC simulation and the HLS2 (Homeland Security Simulation System) developed by ELBIT system. However, the ROK government needs a simulator custom-made for the country's environment and disaster types, one that also combines the latest information and communication technologies, including AR, VR, and MPEG-DASH (Moving Picture Experts Group - Dynamic Adaptive Streaming over HTTP) technology. In this paper, a new disaster response training simulator is proposed to overcome the limitations of existing training systems and adapted to actual disaster situations in the ROK, and several of its technical features are described.Keywords: augmented reality, emergency response training simulator, MPEG-DASH, virtual reality
Procedia PDF Downloads 303777 How to Motivate a Child to Lose Weight When He Is Not Aware That Overweight Is a Real Problem: «KeepHealthyKids» Study Perspectives
Authors: Daria Druzhinenko-Silhan, Patrick Schmoll
Abstract:
Childhood obesity is one of the most important problems in the domain of health care. Over the last two decades, a real epidemic of this non-infectious disease has been observed. Its consequences are severe: cardiovascular disease, diabetes, arthrosis, etc. (WHO, 2012). The «Keep Healthy Kids» study aims to create a new system for accompanying the treatment of childhood obesity based on new technologies such as mobile applications and serious video games. We carried out a support study aimed at understanding motivation, psychological dynamics, and the family's impact on the weight-loss process in childhood. Sample: 65 children from 7 to 10 years old followed by a specialized Care Center in France. Methodology: we used an innovative approach based on quantitative and qualitative methods of data collection. The present proposal focuses on data collected from medical files. We are also carrying out individual assessments (still ongoing) aimed at understanding the psychological profiles of obese children and their family dynamics. Results: only 16.9% of the children themselves asked for medical accompaniment of their obesity. The most frequently reported reason for coming to the Care Center was teasing by peers (46.2%); the second was appearance or looks (40%). In the self-evaluation questionnaire, the self-image of these children was mostly described as rather good (46.2%) or good (28.2%); most children evaluated their well-being as rather good (29.7%) or good (51.4%). In interviews, children tended not to recall why they had come to the Care Center. Discussion: these results allow us to hypothesize that children suffering from overweight or obesity are not clearly aware of why they must lose weight. It was rather the peer environment that pointed out the problem of overweight to them, so the motivation to lose weight is mostly supported by the environment. We suppose that this is a 'weak point' of their motivation, and that it can be overcome using serious video games supporting physical activity, which can shift the motivation from 'losing weight to look better to others' to 'having fun and feeling better'.Keywords: childhood obesity, motivation, weight-loss, serious video-game
Procedia PDF Downloads 310776 Influence of Organizational Culture on Frequency of Disputes in Commercial Projects in Egypt: A Contractor’s Perspective
Authors: Omneya N. Mekhaimer, Elkhayam M. Dorra, A. Samer Ezeldin
Abstract:
Over recent decades, studies on organizational culture have gained global attention in the business management literature, where it has been established that the cultural factors embedded in an organization have an implicit yet significant influence on the organization’s success. Unlike other industries, the construction industry is widely known to operate in a dynamic and adversarial manner; given its unique characteristics, the level of disputes in the industry has grown throughout the years. In the late 1990s, the International Council for Research and Innovation in Building and Construction (CIB) created a Task Group (TG-23), which later evolved in 2006 into Working Commission W112, with the strategic objective of promoting research into the role and impact of culture in the construction industry worldwide. To that end, this paper aims to study the influence of organizational culture in the contractor’s organization on the frequency of disputes between the owner and the contractor in commercial projects based in Egypt. This objective is achieved using a quantitative approach through a survey questionnaire to explore the dominant cultural attributes that exist in the contractor’s organization, based on the Competing Values Framework (CVF) theory, which classifies organizational culture into four main cultural types: (1) clan, (2) adhocracy, (3) market, and (4) hierarchy. Accordingly, the collected data are statistically analyzed using the Statistical Package for Social Sciences (SPSS 28) software, whereby a correlation analysis using Pearson correlation is carried out to assess the relationship between these variables and its statistical significance using the p-value. The results show that organizational culture attributes influence the frequency of disputes: market culture is identified as the most dominant organizational culture currently practiced in the contractor’s organization, which consequently contributes to increasing the frequency of disputes in commercial projects. These findings suggest that alternative management practices should be adopted rather than the existing ones, with the aim of minimizing dispute occurrence.Keywords: construction projects, correlation analysis, disputes, Egypt, organizational culture
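As a minimal sketch of the correlation analysis described above (here in Python rather than SPSS), the snippet below computes a Pearson coefficient and its p-value on hypothetical Likert-scale scores; the variable names and data are illustrative assumptions, not the study's dataset.

import numpy as np
from scipy.stats import pearsonr

# Hypothetical survey scores (1-5 Likert scale): strength of market culture in a
# contractor's organization and the reported frequency of disputes. The numbers
# are illustrative only, not the study's data.
market_culture = np.array([4, 5, 3, 4, 5, 2, 4, 3, 5, 4])
dispute_freq   = np.array([3, 5, 2, 4, 4, 1, 3, 2, 5, 4])

r, p_value = pearsonr(market_culture, dispute_freq)
print(f"Pearson r = {r:.3f}, p-value = {p_value:.4f}")
# A positive r with p < 0.05 would be read as a statistically significant
# association between market-culture strength and dispute frequency.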
Procedia PDF Downloads 109775 Effect of Geometric Imperfections on the Vibration Response of Hexagonal Lattices
Authors: P. Caimmi, E. Bele, A. Abolfathi
Abstract:
Lattice materials are cellular structures composed of a periodic network of beams. They offer high weight-specific mechanical properties and lend themselves to numerous weight-sensitive applications. The periodic internal structure responds to external vibrations through characteristic frequency bandgaps, making these materials suitable for the reduction of noise and vibration. However, the deviation from architectural homogeneity, due to, e.g., manufacturing imperfections, has a strong influence on the mechanical properties and vibration response of these materials. In this work, we present results on the influence of geometric imperfections on the vibration response of hexagonal lattices. Three classes of geometrical variables are used: the characteristics of the architecture (relative density, ligament length/cell size ratio), imperfection type (degree of non-periodicity, cracks, hard inclusions) and defect morphology (size, distribution). Test specimens with controlled size and distribution of imperfections are manufactured through selective laser sintering. The Frequency Response Functions (FRFs) in the form of accelerance are measured, and the modal shapes are captured through a high-speed camera. The finite element method is used to provide insights on the extension of these results to semi-infinite lattices. An updating procedure is conducted to increase the reliability of numerical simulation results compared to experimental measurements. This is achieved by updating the boundary conditions and material stiffness. Variations in FRFs of periodic structures due to changes in the relative density of the constituent unit cell are analysed. The effects of geometric imperfections on the dynamic response of periodic structures are investigated. The findings can be used to open up the opportunity for tailoring these lattice materials to achieve optimal amplitude attenuations at specific frequency ranges.Keywords: lattice architectures, geometric imperfections, vibration attenuation, experimental modal analysis
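As a minimal sketch of how an accelerance FRF can be estimated from measured force and acceleration signals, the snippet below applies the standard H1 estimator to a synthetic single-degree-of-freedom response; the signal model and all parameters are assumptions for illustration, not the authors' experimental setup or data.

import numpy as np
from scipy.signal import csd, welch

fs = 2048.0                                     # assumed sampling frequency [Hz]
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(1)
force = rng.normal(size=t.size)                 # broadband random excitation [N]

# Synthetic acceleration response of a unit-mass SDOF oscillator
# (natural frequency 120 Hz, 2% damping), integrated with semi-implicit Euler.
wn, zeta = 2 * np.pi * 120.0, 0.02
acc = np.zeros_like(force)
x = v = 0.0
for i, f in enumerate(force):
    a = f - 2 * zeta * wn * v - wn**2 * x
    v += a / fs
    x += v / fs
    acc[i] = a

# H1 estimator of the accelerance FRF: H1(f) = S_fa(f) / S_ff(f)
f_axis, S_fa = csd(force, acc, fs=fs, nperseg=4096)
_,      S_ff = welch(force, fs=fs, nperseg=4096)
accelerance = S_fa / S_ff                       # complex, units (m/s^2)/N

peak = f_axis[np.argmax(np.abs(accelerance))]
print(f"resonance estimated at ~{peak:.1f} Hz")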
Procedia PDF Downloads 122774 Digital Adoption of Sales Support Tools for Farmers: A Technology Organization Environment Framework Analysis
Authors: Sylvie Michel, François Cocula
Abstract:
Digital agriculture is an approach that exploits information and communication technologies. These encompass data acquisition tools like mobile applications, satellites, sensors, connected devices, and smartphones. Additionally, it involves transfer and storage technologies such as 3G/4G coverage, low-bandwidth terrestrial or satellite networks, and cloud-based systems. Furthermore, embedded or remote processing technologies, including drones and robots for process automation, along with high-speed communication networks accessible through supercomputers, are integral components of this approach. While farm-level adoption studies regarding digital agricultural technologies have emerged in recent years, they remain relatively limited in comparison to other agricultural practices. To bridge this gap, this study delves into understanding farmers' intention to adopt digital tools, employing the technology-organization-environment (TOE) framework. A qualitative research design encompassed fifteen semi-structured interviews conducted with key stakeholders both prior to and following the 2020-2021 COVID-19 lockdowns in France. Subsequently, the interview transcripts underwent thorough thematic content analysis, and the data and verbatim quotes were triangulated for validation. A coding process aimed to systematically organize the data, ensuring an orderly and structured classification. Our research extends its contribution by delineating sub-dimensions within each primary dimension. A total of nine sub-dimensions were identified, categorized as follows: perceived usefulness for communication, perceived usefulness for productivity, and perceived ease of use constitute the first dimension; technological resources, financial resources, and human capabilities constitute the second dimension, while market pressure, institutional pressure, and the COVID-19 situation constitute the third dimension. Furthermore, this analysis enriches the TOE framework by incorporating entrepreneurial orientation as a moderating variable. Managerial orientation emerges as a pivotal factor influencing adoption intention, with producers acknowledging the significance of utilizing digital sales support tools to combat "greenwashing" and elevate their overall brand image. Specifically, it illustrates that producers recognize the potential of digital tools in time-saving and streamlining sales processes, leading to heightened productivity. Moreover, it highlights that the intent to adopt digital sales support tools is influenced by a market mimicry effect. Additionally, it demonstrates a negative association between the intent to adopt these tools and the pressure exerted by institutional partners. Finally, this research establishes a positive link between the intent to adopt digital sales support tools and economic fluctuations, notably during the COVID-19 pandemic. The adoption of sales support tools in agriculture is a multifaceted challenge encompassing three dimensions and nine sub-dimensions. The research delves into the adoption of digital farming technologies at the farm level through the TOE framework. This analysis provides significant insights beneficial for policymakers, stakeholders, and farmers. These insights are instrumental in making informed decisions to facilitate a successful digital transition in agriculture, effectively addressing sector-specific challenges.Keywords: adoption, digital agriculture, e-commerce, TOE framework
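As a small illustration of how the dimensions and sub-dimensions identified above can be organized for coding, the sketch below arranges them in a simple lookup structure; the function and variable names are hypothetical, and the categories merely restate those listed in the abstract.

# The three TOE dimensions and the nine sub-dimensions reported in the study,
# organised as a simple lookup structure for classifying coded interview themes.
TOE_FRAMEWORK = {
    "technology": [
        "perceived usefulness for communication",
        "perceived usefulness for productivity",
        "perceived ease of use",
    ],
    "organization": [
        "technological resources",
        "financial resources",
        "human capabilities",
    ],
    "environment": [
        "market pressure",
        "institutional pressure",
        "COVID-19 situation",
    ],
}

def classify(code: str) -> str:
    """Return the TOE dimension a coded interview theme belongs to."""
    for dimension, sub_dimensions in TOE_FRAMEWORK.items():
        if code in sub_dimensions:
            return dimension
    return "unclassified"

print(classify("market pressure"))   # -> environment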
Procedia PDF Downloads 61773 A New Direction of Urban Regeneration: Form-Based Urban Reconstruction through the Idea of Bricolage
Authors: Hyejin Song, Jin Baek
Abstract:
Based on the idea of bricolage, namely that a new meaning beyond that of the individual objects can be created through the combination and juxtaposition of various objets, this study seeks a way of morphologically recomposing urban space through the combination and juxtaposition of existing urban fabric and new fabric, and suggests this idea as a new direction for urban regeneration. This study sets the concept of bricolage as a philosophical ground for interpreting the contemporary urban situation. In this concept, urban objects such as buildings from various zeitgeists are positively considered as potential textures which can construct a meaningful context. Seoul, as a city with a long history that has experienced colonization and development, shows a dynamic urban structure full of various objects from various periods. However, in contrast with successful plazas and streets in Europe, the objects in Seoul do not form a meaningful context as public space due to thoughtless development. This study defines this situation as 'disorganized fabric'. Following the concept of bricolage, to find a way for those scattered existing objects to be organized into the context of a meaningful public space, this study first examines cases of successful public space through morphological analysis. Secondly, this study carefully explores urban space in Seoul and draws figure-ground diagrams to grasp the form of the current urban fabric made up of various urban objects. As a result of this exploration, many urban spaces, from Myeong-dong, one of the most vibrant commercial districts in Seoul, to declining residential areas, are judged to have potential fabric which can become a meaningful context with only small adjustments of the relationships between existing objects. This study also confirmed that, by inserting a new object with consideration of the form of the existing fabric, it is possible to give an existing void that has been broken into several parts a new context as a plaza. This study defines this as form-based urban reconstruction through the idea of bricolage and suggests that it could be one of the philosophical grounds of successful urban regeneration.Keywords: adjustment of relationship between existing objets, bricolage, morphological analysis of urban fabric, urban regeneration, urban reconstruction
Procedia PDF Downloads 320