Search results for: innovative business model
11326 The Islamic Advertising Standardisation Revisited of Food Products
Authors: Nurzahidah Haji Jaapar, Anis Husna Abdul Halim, Mohd Faiz Mohamed Yusof, Mohd Dani Muhamad, Sharifah Fadylawaty Syed Abdullah
Abstract:
The growing Muslim population is recognised for its significantly increasing purchasing power in the market. The realm of trade and business has embedded religious values as new market segments emerge offering food products that meet the needs and demands of Muslim consumers. With the emergence of this new market in the food industry, advertising is charged with all sorts of negative effects, including the promotion of controversial, unsafe and harmful products, wasteful spending, and the exploitation of women and children. Therefore, this research examines advertising standardisation as discussed in earlier eras against current practices in the market. The paper is based on a content analysis of the literature. The results show a wide gap between earlier practice and its implementation in the era of Industry 4.0, in which the food industry relies on digital advertising. Thus, this paper recognises the differences between the two eras and is significant in determining best practices in advertising that follow Islamic principles.
Keywords: Islamic advertising, unethical advertising, ethical advertising, Islamic principles
Procedia PDF Downloads 150
11325 Climate-Smart Agriculture Technologies and Determinants of Farmers’ Adoption Decisions in the Great Rift Valley of Ethiopia
Authors: Theodrose Sisay, Kindie Tesfaye, Mengistu Ketema, Nigussie Dechassa, Mezegebu Getnet
Abstract:
Agriculture is highly vulnerable to the effects of climate change and contributes to anthropogenic greenhouse gas (GHG) emissions in the atmosphere. By lowering emissions and adjusting to the change, the sector can also help to mitigate climate change. Utilizing Climate-Smart Agriculture (CSA) technologies that can sustainably boost productivity, improve resilience, and lower GHG emissions is therefore crucial. This study sought to identify the CSA technologies used by farmers and to assess adoption levels and the factors that influence them. A cross-sectional survey was carried out to gather information from 384 smallholder farmers in the Great Rift Valley (GRV) of Ethiopia. Data were analysed using percentages, the chi-square test, the t-test, and a multivariate probit model. Results showed that crop diversification, agroforestry, and integrated soil fertility management were the most widely practiced technologies. The chi-square and t-test results confirmed significant differences between adopters and non-adopters: households with older heads, higher incomes, greater credit access, knowledge of the climate, better training and education, larger farms, and more frequent interactions with extension specialists were positively and significantly associated with adoption of CSA technologies. The model results showed that age, sex, and education of the household head, farmland size, livestock ownership, income, access to credit, climate information, training, and extension contact influenced the selection of CSA technologies. Therefore, effective action must be taken to remove barriers to the adoption of CSA technologies, and taking these adoption factors into account in policy and practice is anticipated to support smallholder farmers in adapting to climate change while lowering emissions.
Keywords: climate change, climate-smart agriculture, smallholder farmers, multivariate probit model
Procedia PDF Downloads 127
11324 How to Motivate a Child to Lose Weight When He Is Not Aware That the Overweight Is a Real Problem: «KeepHealthyKids», Study Perspectives
Authors: Daria Druzhinenko-Silhan, Patrick Schmoll
Abstract:
Childhood obesity is one of the important problems in the domain of health care. Over the last two decades, a real epidemic of this non-infectious illness has been observed, with severe consequences: cardiovascular disease, diabetes, arthrosis, etc. (OMS, 2012). The 'KeepHealthyKids' study aims to create a new system for accompanying childhood obesity based on new technologies such as mobile applications or serious video games. We conducted a support study that aims to understand motivations, psychological dynamics and the family's impact on the weight-loss process in childhood. Sample: 65 children from 7 to 10 years old accompanied by a specialised Care Center in France. Methodology: we proceed with an innovative approach based on quantitative and qualitative methods of data collection. We focus our proposal on data collected from medical files. We are also carrying out individual assessments (still ongoing) that aim to understand the psychological profiles of obese children and their family dynamics. Results: only 16.9% of the children themselves asked for medical accompaniment of their obesity. The most important reason for coming to the Care Center was teasing by peers (46.2%); the second was appearance or look (40%). The self-image of these children in the self-evaluation questionnaire was described mostly as rather good (46.2%) or good (28.2%); most children evaluated their well-being as rather good (29.7%) or good (51.4%). In interviews, children tended not to recall why they came to the Care Center. Discussion: these results allow us to hypothesise that children suffering from overweight or obesity are not clearly aware of why they must lose weight. It was rather the peer environment that pointed out the problem of overweight to them, so the motivation to lose weight is mostly supported by the environment. We suppose that this is a weak point of their motivation and that it can be overcome using serious video games supporting physical activity, which can shift the motivation from 'losing weight to be seen better by others' to 'having fun and feeling better'.
Keywords: childhood obesity, motivation, weight-loss, serious video-game
Procedia PDF Downloads 310
11323 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models
Authors: V. Mantey, N. Findlay, I. Maddox
Abstract:
The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to manually examine. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in populated regions. The primary challenge with detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials will be difficult to separate due to a similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that training models until they are overfitting the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires fewer input data. The model developed for this study has also been fine-tuned using existing, open-source, building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, and so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality check the detected buildings. This has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.Keywords: building detection, disaster relief, mask-RCNN, satellite mapping
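A minimal sketch of the deliberate-overfitting strategy described above, assuming a torchvision-style Mask R-CNN (the keywords name Mask-RCNN) and a small regional dataset of building footprints; this is an illustrative reconstruction rather than the authors' production code, and the dataset class, class count and hyperparameters are assumed:

```python
# Hypothetical sketch: deliberately overfitting a pre-trained Mask R-CNN on a
# small, building-only dataset, assuming a torchvision-style Dataset that
# yields (image, target) pairs containing boxes and masks.
import torch
from torch.utils.data import DataLoader
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_building_detector(num_classes: int = 2):  # background + building
    model = maskrcnn_resnet50_fpn(weights="DEFAULT")  # generalized pre-trained model
    in_feat = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
    mask_feat = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(mask_feat, 256, num_classes)
    return model

def finetune_until_overfit(model, dataset, epochs: int = 60, lr: float = 5e-4):
    """Keep training on the small regional sample; no validation-based early
    stopping, so the model intentionally specialises (overfits) to the region."""
    loader = DataLoader(dataset, batch_size=2, shuffle=True,
                        collate_fn=lambda batch: tuple(zip(*batch)))
    optim = torch.optim.SGD([p for p in model.parameters() if p.requires_grad],
                            lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for images, targets in loader:
            losses = model(list(images), list(targets))  # dict of detection losses
            loss = sum(losses.values())
            optim.zero_grad()
            loss.backward()
            optim.step()
    return model
```

The key design choice is the absence of a validation-based stopping rule: training continues on the small target-region sample so that the model specialises to local building appearance, which the abstract reports can outperform a more generalized model in densely built-up areas.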
Procedia PDF Downloads 169
11322 “Post-Industrial” Journalism as a Creative Industry
Authors: Lynette Sheridan Burns, Benjamin J. Matthews
Abstract:
The context of post-industrial journalism is one in which the material circumstances of mechanical publication have been displaced by digital technologies, increasing the distance between the orthodoxy of the newsroom and the culture of journalistic writing. Content is, with growing frequency, created for delivery via the internet, publication on web-based ‘platforms’ and consumption on screen media. In this environment, the question is not ‘who is a journalist?’ but ‘what is journalism?’ today. The changes bring into sharp relief new distinctions between journalistic work and journalistic labor, providing a key insight into the current transition between the industrial journalism of the 20th century, and the post-industrial journalism of the present. In the 20th century, the work of journalists and journalistic labor went hand-in-hand as most journalists were employees of news organizations, whilst in the 21st century evidence of a decoupling of ‘acts of journalism’ (work) and journalistic employment (labor) is beginning to appear. This 'decoupling' of the work and labor that underpins journalism practice is far reaching in its implications, not least for institutional structures. Under these conditions we are witnessing the emergence of expanded ‘entrepreneurial’ journalism, based on smaller, more independent and agile - if less stable - enterprise constructs that are a feature of creative industries. Entrepreneurial journalism is realized in a range of organizational forms from social enterprise, through to profit driven start-ups and hybrids of the two. In all instances, however, the primary motif of the organization is an ideological definition of journalism. An example is the Scoop Foundation for Public Interest Journalism in New Zealand, which owns and operates Scoop Publishing Limited, a not for profit company and social enterprise that publishes an independent news site that claims to have over 500,000 monthly users. Our paper demonstrates that this journalistic work meets the ideological definition of journalism; conducted within the creative industries using an innovative organizational structure that offers a new, viable post-industrial future for journalism.Keywords: creative industries, digital communication, journalism, post industrial
Procedia PDF Downloads 280
11321 CTHTC: A Convolution-Backed Transformer Architecture for Temporal Knowledge Graph Embedding with Periodicity Recognition
Authors: Xinyuan Chen, Mohd Nizam Husen, Zhongmei Zhou, Gongde Guo, Wei Gao
Abstract:
Temporal Knowledge Graph Completion (TKGC) has attracted increasing attention for its enormous value; however, existing models lack capabilities to capture both local interactions and global dependencies simultaneously with evolutionary dynamics, while the latest achievements in convolutions and Transformers haven't been employed in this area. What’s more, periodic patterns in TKGs haven’t been fully explored either. To this end, a multi-stage hybrid architecture with convolution-backed Transformers is introduced in TKGC tasks for the first time combining the Hawkes process to model evolving event sequences in a continuous-time domain. In addition, the seasonal-trend decomposition is adopted to identify periodic patterns. Experiments on six public datasets are conducted to verify model effectiveness against state-of-the-art (SOTA) methods. An extensive ablation study is carried out accordingly to evaluate architecture variants as well as the contributions of independent components in addition, paving the way for further potential exploitation. Besides complexity analysis, input sensitivity and safety challenges are also thoroughly discussed for comprehensiveness with novel methods.Keywords: temporal knowledge graph completion, convolution, transformer, Hawkes process, periodicity
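The periodicity-recognition step based on seasonal-trend decomposition can be illustrated with a short, hedged sketch; it assumes event occurrences for a relation have been aggregated into a regular count series and uses the STL implementation in statsmodels, which may differ from the decomposition variant actually used in the paper:

```python
# Illustrative sketch (assumed, not from the paper): detecting a periodic
# component in a temporal-KG event series with seasonal-trend decomposition.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

def periodic_strength(event_counts: pd.Series, period: int) -> float:
    """Decompose the series and return the share of variance explained by the
    seasonal component; values near 1 indicate a strong periodic pattern."""
    result = STL(event_counts, period=period, robust=True).fit()
    resid_var = np.var(result.resid)
    seas_resid_var = np.var(result.seasonal + result.resid)
    return max(0.0, 1.0 - resid_var / seas_resid_var)

# Toy usage: daily event counts with a weekly cycle plus noise.
rng = np.random.default_rng(0)
t = np.arange(365)
counts = 10 + 4 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 1, t.size)
series = pd.Series(counts, index=pd.date_range("2020-01-01", periods=t.size, freq="D"))
print(f"weekly periodic strength: {periodic_strength(series, period=7):.2f}")
```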
Procedia PDF Downloads 78
11320 The Moderating Role of Payment Platform Applications’ Relationship with Increasing Purchase Intention Among Customers in Kuwait - Unified Theory of Acceptance and Sustainable Use of Technology Model
Authors: Ahmad Alsaber
Abstract:
This paper aims to understand the intermediary role of the payment platform applications by analyzing the various factors that can influence the desirability of utilizing said payment services in Kuwait, as well as to determine the effect of the presence of different types of payment platforms on the variables of the “Unified Theory of Acceptance and Use of Technology” (UTAUT) model. The UTAUT model's findings will provide an important understanding of the moderating role of payment platform mobile applications. This study will explore the influence of payment platform mobile applications on customer purchase intentions in Kuwait by employing a quantitative survey of 200 local customers. Questions will cover their usage of payment platforms, purchase intent, and overall satisfaction. The information gathered is then analyzed using descriptive statistics and correlation analysis in order to gain insights. The research hopes to provide greater insight into the effect of mobile payment platforms on customer purchase intentions in Kuwait. This research will provide important implications to marketers and customer service providers, informing their strategies and initiatives, as well as offer recommendations to payment platform providers on how to improve customer satisfaction and security. The study results suggest that the likelihood of a purchase is affected by performance expectancy, effort expectancy, social influence, risk, and trust. The purpose of this research is to understand the advancements in the different variables that Kuwaiti customers consider while dealing with mobile banking applications. With the implementation of stronger security measures, progressively more payment platform applications are being utilized in the Kuwaiti marketplace, making them more desirable with their accessibility and usability. With the development of the Kuwaiti digital economy, it is expected that mobile banking will have a greater impact on banking transactions and services in the future.Keywords: purchase intention, UTAUT, performance expectancy, social influence, risk, trust
Procedia PDF Downloads 117
11319 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach
Authors: Utkarsh A. Mishra, Ankit Bansal
Abstract:
At high temperature, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integrodifferential equation of radiative transfer is a complex process, more when the effect of participating medium and wavelength properties are taken into consideration. Although a generic formulation of such radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between simplicity and accuracy of the problem. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple, yet powerful technique, to solve radiative transfer problems in complicated geometries with arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation, and on the other hand, increases the computational cost. The participating media -generally a gas, such as CO₂, CO, and H₂O- present complex emission and absorption spectra. To model the emission/absorption accurately with random numbers requires a weighted sampling as different sections of the spectrum carries different importance. Importance sampling (IS) was implemented to sample random photon of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement to uniform random numbers is using deterministic, quasi-random sequences. Halton, Sobol, and Faure Low-Discrepancy Sequences are used in this study. They possess better space-filling performance than the uniform random number generator and gives rise to a low variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computation costs of the PMC simulation. A one-dimensional plane-parallel slab problem with participating media was formulated. The history of some randomly sampled photon bundles is recorded to train an Artificial Neural Network (ANN), back-propagation model. The flux was calculated using the standard quasi PMC and was considered to be the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical and PMC model with the Line by Line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance as well a faster rate of convergence was observed in the case of the QMC method over the standard PMC method. However, the results obtained with the ANN method resulted in greater variance (around 25-28%) as compared to the other cases. There is a great scope of machine learning models to help in further reduction of computation cost once trained successfully. Multiple ways of selecting the input data as well as various architectures will be tried such that the concerned environment can be fully addressed to the ANN model. Better results can be achieved in this unexplored domain.Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks
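A minimal sketch of the variance comparison between pseudo-random Monte Carlo and a scrambled Sobol low-discrepancy sequence, using a toy one-dimensional integrand as a stand-in for spectral sampling rather than the paper's actual photon Monte Carlo model:

```python
# Hedged sketch: comparing a plain Monte Carlo estimator with a quasi-Monte
# Carlo estimator built on a scrambled Sobol low-discrepancy sequence.
import numpy as np
from scipy.stats import qmc

def integrand(x):
    # toy "spectral" weight on [0, 1], standing in for an emission spectrum
    return np.exp(-4.0 * x) * np.sin(8.0 * np.pi * x) ** 2

def mc_estimate(n, rng):
    return integrand(rng.random(n)).mean()

def qmc_estimate(n, seed):
    sobol = qmc.Sobol(d=1, scramble=True, seed=seed)
    return integrand(sobol.random(n).ravel()).mean()

n = 2 ** 12        # power of two, as recommended for Sobol points
reps = 50
rng = np.random.default_rng(0)
mc_vals = [mc_estimate(n, rng) for _ in range(reps)]
qmc_vals = [qmc_estimate(n, seed) for seed in range(reps)]
print(f"pseudo-random MC variance: {np.var(mc_vals):.3e}")
print(f"Sobol QMC variance:        {np.var(qmc_vals):.3e}")  # typically much lower
```

The lower spread of the QMC estimates across repetitions mirrors the reduced variance and faster convergence reported for the quasi-random sequences in the study.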
Procedia PDF Downloads 223
11318 Analytical Characterization of TiO2-Based Nanocoatings for the Protection and Preservation of Architectural Calcareous Stone Monuments
Authors: Sayed M. Ahmed, Sawsan S. Darwish, Mahmoud A. Adam, Nagib A. Elmarzugi, Mohammad A. Al-Dosari, Nadia A. Al-Mouallimi
Abstract:
Historical stone surfaces and architectural heritage, especially those located in open areas, may undergo unwanted changes due to exposure to many physical and chemical deterioration factors; air pollution, soluble salts, relative humidity/temperature, and biodeterioration are the main causes of decay of stone building materials. The development and application of self-cleaning treatments on historical and architectural stone surfaces could be a significant improvement in the conservation, protection, and maintenance of cultural heritage. Nanometric titanium dioxide has become a promising photocatalytic material owing to its ability to catalyze the complete degradation of many organic contaminants; it represents an appealing way to create self-cleaning surfaces, thus limiting maintenance costs, and to promote the degradation of polluting agents. The obtained nano-TiO2 coatings were applied on travertine (marble and limestone are often used in historical and monumental buildings). The efficacy of the treatments was evaluated after coating and artificial thermal aging through capillary water absorption and ultraviolet-light exposure, in order to assess the photo-induced and hydrophobic effects of the coated surface, while the surface morphology before and after treatment was examined by scanning electron microscopy (SEM). The changes in molecular structure occurring in treated samples were studied spectroscopically by FTIR-ATR, and colorimetric measurements were performed to evaluate the optical appearance. Taken together, the results show that coating with TiO2 nanoparticles is an innovative method that enhanced the durability of stone surfaces toward UV aging and improved their resistance to relative humidity and temperature; the photo-induced self-cleaning effects are well evident, and there was no alteration of the original features.
Keywords: architectural calcareous stone monuments, coating, photocatalysis TiO2, self-cleaning, thermal aging
Procedia PDF Downloads 254
11317 Ta(l)king Pictures: Development of an Educational Program (SELVEs) for Adolescents Combining Social-Emotional Learning and Photography Taking
Authors: Adi Gielgun-Katz, Alina S. Rusu
Abstract:
In the last two decades, education systems worldwide have integrated new pedagogical methods and strategies in lesson plans, such as innovative technologies, social-emotional learning (SEL), gamification, mixed learning, multiple literacies, and many others. Visual language, such as photographs, is known to transcend cultures and languages, and it is commonly used by youth to express positions and affective states in social networks. Therefore, visual language needs more educational attention as a linguistic and communicative component that can create connectedness among the students and their teachers. Nowadays, when SEL is gaining more and more space and meaning in the area of academic improvement in relation to social well-being, and taking and sharing pictures is part of the everyday life of the majority of people, it becomes natural to add the visual language to SEL approach as a reinforcement strategy for connecting education to the contemporary culture and language of the youth. This article presents a program conducted in a high school class in Israel, which combines the five SEL with photography techniques, i.e., Social-Emotional Learning Visual Empowerments (SELVEs) program (experimental group). Another class of students from the same institution represents the control group, which is participating in the SEL program without the photography component. The SEL component of the programs addresses skills such as: troubleshooting, uncertainty, personal strengths and collaboration, accepting others, control of impulses, communication, self-perception, and conflict resolution. The aim of the study is to examine the effects of programs on the level of the five SEL aspects in the two groups of high school students: Self-Awareness, Social Awareness, Self-Management, Responsible Decision Making, and Relationship Skills. The study presents a quantitative assessment of the SEL programs’ impact on the students. The main hypothesis is that the students’ questionnaires' analysis will reveal a better understanding and improvement of the five aspects of the SEL in the group of students involved in the photography-enhanced SEL program.Keywords: social-emotional learning, photography, education program, adolescents
Procedia PDF Downloads 85
11316 A Robust Optimization Method for Service Quality Improvement in Health Care Systems under Budget Uncertainty
Authors: H. Ashrafi, S. Ebrahimi, H. Kamalzadeh
Abstract:
With growing business competition, it is important for healthcare providers to improve their service quality. To improve the service quality of a clinic, four important dimensions are defined: tangibles, responsiveness, empathy, and reliability. Moreover, there are several service stages in hospitals, such as financial screening and examination. One of the most challenging limitations on improving service quality is the budget, which strongly affects service quality. In this paper, we present an approach to address budget uncertainty and provide guidelines for service resource allocation. The proposed service quality improvement approach can be applied to multistage service processes to improve service quality while controlling costs. A multi-objective function based on the importance of each area and dimension is defined to link operational variables to the service quality dimensions. The results demonstrate that the approach is not ultra-conservative and reflects actual conditions well. Moreover, it is shown that different strategies can affect the number of employees required in different stages.
Keywords: allocation, budget uncertainty, healthcare resource, service quality assessment, robust optimization
Procedia PDF Downloads 185
11315 Stress Analysis of a Pressurizer in a Pressurized Water Reactor Using Finite Element Method
Authors: Tanvir Hasan, Minhaz Uddin, Anwar Sadat Anik
Abstract:
A pressurizer is a safety-related reactor component that maintains the reactor operating pressure to guarantee safety. Its structure is usually made of material with high thermal and pressure resistance. The mechanical integrity of these components should be maintained in all working settings, from transient to severe accident conditions. The goal of this study is to examine the structural integrity and stress of the pressurizer in order to ensure its design integrity under transient conditions. For this, the finite element method (FEM) was used to analyze the mechanical stress on the pressurizer components. The ANSYS MECHANICAL tool was used to analyze a 3D model of the pressurizer. The material for the body and safety relief nozzle was selected as low alloy steel, i.e., SA-508 Gr.3 Cl.2. The model was imported into ANSYS WORKBENCH and run under the following boundary conditions: internal pressure 17.2 MPa, inside radius 1348 mm, shell thickness 127 mm, and ratio of outside to inside radius 1.059. The theoretical calculation was done using the standard formulas, and the results were then compared with the simulated results. When simulated at design conditions, the findings revealed that the pressurizer stress analysis fully satisfied the ASME standards.
Keywords: pressurizer, stress analysis, finite element method, nuclear reactor
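The theoretical hand calculation mentioned in the abstract is commonly performed with Lamé's thick-walled cylinder equations; the following worked example uses the stated internal pressure and radius ratio and is an assumed illustration, not necessarily the exact formulas applied by the authors.

\[
\sigma_{\theta}(r)=\frac{p\,r_i^{2}}{r_o^{2}-r_i^{2}}\left(1+\frac{r_o^{2}}{r^{2}}\right),\qquad
\sigma_{r}(r)=\frac{p\,r_i^{2}}{r_o^{2}-r_i^{2}}\left(1-\frac{r_o^{2}}{r^{2}}\right)
\]
\[
\sigma_{\theta,\max}=\sigma_{\theta}(r_i)=p\,\frac{k^{2}+1}{k^{2}-1},\qquad k=\frac{r_o}{r_i}=1.059
\;\Rightarrow\;
\sigma_{\theta,\max}\approx 17.2\ \text{MPa}\times\frac{1.059^{2}+1}{1.059^{2}-1}\approx 300\ \text{MPa},\qquad
\sigma_{r}(r_i)=-p=-17.2\ \text{MPa}.
\]

The maximum hoop stress at the inner surface would then be compared against the ASME allowable stress for SA-508 Gr.3 Cl.2 at the design temperature.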
Procedia PDF Downloads 158
11314 Effect of Green Coffee Bean Extract on Gentamicin Induced Acute Renal Failure in Rats
Authors: Amina Unis, Samah S. El Basateeny, Noha A. H. Nassef
Abstract:
Introduction: Acute Renal Failure (ARF) is one of the most common problems encountered in hospitalized critically ill patients. In recent years, great effort has been focused on the introduction of herbal medicine as a novel therapeutic agent for the prevention of ARF. Hence, the current study was designed to investigate the effect of Green Coffee Bean Extract (GCBE) on gentamicin-induced ARF in rats. Methods: The study was conducted on 60 male rats divided into six equal groups. Group 1 served as the normal control group, and GCBE was administered for 7 days at a dose of 20 mg/kg/day in group 2 and 40 mg/kg/day in group 3 to test the effect of GCBE on normal kidneys. ARF was induced by a daily intraperitoneal injection of gentamicin (80 mg/kg) for 7 days in group 4 (model group), group 5 (GCBE 20 mg/kg/day) and group 6 (GCBE 40 mg/kg/day). All rats were sacrificed after 7 days, and blood was withdrawn for kidney function tests. Kidneys were removed for determination of renal oxidative stress markers and histopathological examination. Results: Rats that received oral GCBE for 7 days without induction of ARF showed no significant change in any of the assessed parameters in comparison to the normal control group, while rats in the groups that received oral GCBE for 7 days with induction of ARF showed a significant improvement in kidney function tests (decreases in serum urea, serum creatinine, and blood urea nitrogen) when compared to the ARF model group. Moreover, there was significant amelioration in renal oxidative stress markers (renal malondialdehyde, renal superoxide dismutase) and renal histopathological changes in the GCBE-treated groups with induction of ARF when compared to the ARF model group. The most significant improvement was reported in the group where GCBE was administered for 7 days at a dose of 40 mg/kg/day along with induction of ARF. Conclusion: GCBE has a potential role in ameliorating the renal damage involved in ARF, mostly through its antioxidant effect.
Keywords: green coffee bean extract, gentamicin, acute renal failure, pharmacology
Procedia PDF Downloads 292
11313 AI-Enhanced Self-Regulated Learning: Proposing a Comprehensive Model with 'Studium' to Meet a Student-Centric Perspective
Authors: Smita Singh
Abstract:
Objective: The Faculty of Chemistry Education at Humboldt University has developed ‘Studium’, a web application designed to enhance long-term self-regulated learning (SRL) and academic achievement. Leveraging advanced generative AI, ‘Studium’ offers a dynamic and adaptive educational experience tailored to individual learning preferences and languages. The application includes evolving tools for personalized notetaking from preferred sources, customizable presentation capabilities, and AI-assisted guidance from academic documents or textbooks. It also features workflow automation and seamless integration with collaborative platforms like Miro, powered by AI. This study aims to propose a model that combines generative AI with traditional features and customization options, empowering students to create personalized learning environments that effectively address the challenges of SRL. Method: The study included graduate and undergraduate students from diverse subject streams, with 15 participants each from Germany and India, ensuring a diverse educational background. An exploratory design was employed using a speed-dating method with enactment, in which different scenario sessions were created to allow participants to experience various features of ‘Studium’. Each session lasted 50 minutes, providing an in-depth exploration of the platform's capabilities. Participants interacted with Studium’s features via Zoom conferencing and then took part in semi-structured interviews lasting 10-15 minutes to provide deeper insights into the effectiveness of ‘Studium’. Additionally, online questionnaire surveys were conducted before and after the session to gather feedback and evaluate satisfaction with self-regulated learning (SRL) after using ‘Studium’. The response rate of this survey was 100%. Results: The findings indicate that students widely acknowledged the positive impact of ‘Studium’ on their learning experience, particularly its adaptability and intuitive design, and they expressed a desire for more tools like ‘Studium’ to support self-regulated learning in the future. The application significantly fostered students' independence in organizing information and planning study workflows, which in turn enhanced their confidence in mastering complex concepts. Additionally, ‘Studium’ promoted strategic decision-making and helped students overcome various learning challenges, reinforcing their self-regulation, organization, and motivation skills. Conclusion: This proposed model emphasizes the need for effective integration of personalized AI tools into active learning and SRL environments. By addressing key research questions, our framework aims to demonstrate how AI-assisted platforms like ‘Studium’ can facilitate deeper understanding, maintain student motivation, and support the achievement of academic goals. Thus, our ideal model for AI-assisted educational platforms provides a strategic approach to enhance students' learning experiences and promote their development as self-regulated learners.
Keywords: self-regulated learning (SRL), generative AI, AI-assisted educational platforms
Procedia PDF Downloads 29
11312 Towards the Management of Cybersecurity Threats in Organisations
Authors: O. A. Ajigini, E. N. Mwim
Abstract:
Cybersecurity is the protection of computers, programs, networks, and data from attack, damage, unauthorised, unintended access, change, or destruction. Organisations collect, process and store their confidential and sensitive information on computers and transmit this data across networks to other computers. Moreover, the advent of internet technologies has led to various cyberattacks resulting in dangerous consequences for organisations. Therefore, with the increase in the volume and sophistication of cyberattacks, there is a need to develop models and make recommendations for the management of cybersecurity threats in organisations. This paper reports on various threats that cause malicious damage to organisations in cyberspace and provides measures on how these threats can be eliminated or reduced. The paper explores various aspects of protection measures against cybersecurity threats such as handling of sensitive data, network security, protection of information assets and cybersecurity awareness. The paper posits a model and recommendations on how to manage cybersecurity threats in organisations effectively. The model and the recommendations can then be utilised by organisations to manage the threats affecting their cyberspace. The paper provides valuable information to assist organisations in managing their cybersecurity threats and hence protect their computers, programs, networks and data in cyberspace. The paper aims to assist organisations to protect their information assets and data from cyberthreats as part of the contributions toward community engagement.Keywords: confidential information, cyberattacks, cybersecurity, cyberspace, sensitive information
Procedia PDF Downloads 259
11311 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire
Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan
Abstract:
Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being a flammable material, it has relatively high fire resistance. Everyday engineering practice around the world is based on an outdated design of timber structures considering standard fire exposure, while modern principles of performance-based design enable the use of advanced non-standard fire curves. In Europe, the standard for fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, which is the residual cross-section of uncharred timber reduced additionally by a so-called zero strength layer. In the case of standard fire exposure, Eurocode 5 gives a fixed value for the zero strength layer, i.e. 7 mm, while for non-standard parametric fires no additional comments or recommendations for the zero strength layer are given. Thus, designers often apply the adopted 7 mm rule to parametric fire exposure as well. Since the latest scientific evidence suggests that the proposed value of the zero strength layer can be on the unsafe side for standard fire exposure, its use in the case of a parametric fire is also highly questionable, and more numerical and experimental research in this field is needed. Therefore, the purpose of the presented study is to use advanced calculation methods to investigate the thickness of the zero strength layer and the parametric charring rates used in the effective cross-section method in the case of parametric fire. Parametric studies are carried out on a simple solid timber beam that is exposed to a large number of parametric fire curves. The zero strength layer and charring rates are determined based on numerical simulations performed with a recently developed advanced two-step computational model. The first step comprises a hygro-thermal model, which predicts the temperature, moisture and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire load is determined. The mechanical model is based on Reissner's kinematically exact beam model and accounts for the membrane, shear and flexural deformations of the beam. Furthermore, materially non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, according to Eurocode 5, assumed to occur at a fixed temperature of around 300°C. Based on the performed study and observations, improved charring rates and a new thickness of the zero strength layer in the case of parametric fires are determined. Thus, the reduced cross-section method is substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero strength layer thickness and key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and experimental research in the future.
Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer
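As background to the reduced cross-section method discussed above, EN 1995-1-2 defines the effective charring depth in standard notation as follows (quoted here as general background, not as a result of the paper):

\[
d_{\mathrm{ef}}=d_{\mathrm{char},n}+k_{0}\,d_{0},\qquad d_{\mathrm{char},n}=\beta_{n}\,t,\qquad d_{0}=7\ \text{mm},
\]

where \(\beta_{n}\) is the notional charring rate, \(t\) the fire exposure time, and \(k_{0}\) grows linearly from 0 to 1.0 over the first 20 minutes of standard fire exposure. The study asks whether \(d_{0}=7\ \text{mm}\) and the standard charring rates remain appropriate when the exposure follows a parametric rather than the standard fire curve.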
Procedia PDF Downloads 168
11310 Educational Deprivation and Their Determinants in India: Evidence from National Sample Survey
Authors: Mukesh Ranjan
Abstract:
A probit model is applied to the micro data of the NSS 71st round on education to understand access to education following the passage of the Right to Education Act, 2009 in India. The empirical analysis shows that, at the all-India level, the mean age of enrollment in school is 5.5 years, the drop-out age is around 14 years (having studied up to class 7), and around 60 percent of females never get enrolled in any school in their lifetime. Nearly 20 percent of children in Bihar have never seen school, and surprisingly, relatively developed states like Gujarat, Maharashtra, Karnataka, Kerala and Tamil Nadu have more than one-third of their children, and Andhra Pradesh, West Bengal and Orissa half of their children, educationally wasted. The relative contribution to educational wastage is highest for Bengal (10%), while UP contributes the most to educational non-enrollment in the country (30%). Educational wastage is more likely to increase with age. Marriage is a factor that hinders access to education. Muslims are educationally more deprived than Hindus. Larger families and rich households are less likely to be educationally deprived. The major reasons for drop-out up to age 9 were lack of interest in education and financial constraints; between ages 10-12, lack of interest and inability to cope with studies; and after age 12, financial constraints, marriage and other household reasons.
Keywords: probit model, educational wastage, educational non-enrollment, educational deprivation
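As a hedged illustration of the modelling step (not the authors' code or data), a binary probit of school enrollment on household covariates can be fitted with statsmodels roughly as follows; the variable names and synthetic data are assumed for the example:

```python
# Assumed sketch: probit model of school enrollment on household covariates,
# using synthetic placeholder data in place of the NSS 71st round micro data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "age": rng.integers(5, 18, n),
    "female": rng.integers(0, 2, n),
    "household_size": rng.integers(2, 10, n),
    "log_expenditure": rng.normal(7.5, 0.6, n),  # proxy for household income
})
# Placeholder outcome: 1 = ever enrolled, generated from a toy latent index.
latent = -1.0 + 0.3 * df["log_expenditure"] - 0.2 * df["female"] + rng.normal(0, 1, n)
df["enrolled"] = (latent > 0).astype(int)

X = sm.add_constant(df[["age", "female", "household_size", "log_expenditure"]])
probit_fit = sm.Probit(df["enrolled"], X).fit(disp=False)
print(probit_fit.summary())
print(probit_fit.get_margeff().summary())  # average marginal effects
```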
Procedia PDF Downloads 305
11309 Geotechnical Characterization of Residual Soil for Deterministic Landslide Assessment
Authors: Vera Karla S. Caingles, Glen A. Lorenzo
Abstract:
Soil, as the main material of landslides, plays a vital role in landslide assessment. An efficient and accurate method of assessment is significantly important to prevent damage to property and loss of lives. The study has two phases: to establish an empirical correlation between residual soil thickness and slope angle, and to investigate the geotechnical characteristics of the residual soil. A Digital Elevation Model (DEM) in a Geographic Information System (GIS) was used to establish the slope map and to plan sampling points for the field investigation. Physical and index property tests were undertaken on 20 soil samples obtained from an area with Pliocene-Pleistocene geology and varying slope angles in Kibawe, Bukidnon. The regression analysis shows that the best-fitting model describing the soil thickness-slope angle relationship is an exponential function. The physical property results revealed that the soils contain a high percentage of clay and silt, ranging from 41% to 99.52%. Based on the index property test results, the soil exhibits a high degree of plasticity and expansiveness but is not collapsible. This compendium is intended to serve as primary data for slope stability analysis and deterministic landslide assessment.
Keywords: collapsibility, correlation, expansiveness, landslide, plasticity
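The exponential soil-thickness versus slope-angle relationship mentioned above can be fitted, for example, by a log-linear least-squares regression; the sketch below uses made-up sample values purely to illustrate the procedure, not the study's measurements:

```python
# Illustrative sketch with fabricated example values (not the study's data):
# fitting thickness = a * exp(b * slope_angle) via linear regression on log(thickness).
import numpy as np

slope_deg = np.array([5, 10, 15, 20, 25, 30, 35, 40])              # slope angle (degrees)
thickness_m = np.array([6.1, 4.8, 3.9, 3.1, 2.4, 1.9, 1.5, 1.2])   # residual soil thickness (m)

b, ln_a = np.polyfit(slope_deg, np.log(thickness_m), deg=1)
a = np.exp(ln_a)

predicted = a * np.exp(b * slope_deg)
ss_res = np.sum((thickness_m - predicted) ** 2)
ss_tot = np.sum((thickness_m - thickness_m.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(f"thickness ≈ {a:.2f} * exp({b:.3f} * slope_angle), R² = {r2:.3f}")
```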
Procedia PDF Downloads 160
11308 Reliability-Based Condition Assessment of Offshore Wind Turbines Using SHM Data
Authors: Caglayan Hizal, Hasan Emre Demirci, Engin Aktas, Alper Sezer
Abstract:
Offshore wind turbines consist of a long slender tower with a heavy fixed mass at the top of the tower (nacelle), together with a heavy rotating mass (blades and hub). Throughout their service life, they are subjected to environmental loads, including wind and wave loads. This study presents a three-stage methodology for reliability-based condition assessment of offshore wind turbines against seismic, wave and wind induced effects, considering soil-structure interaction. In this context, the failure criteria are taken as serviceability limits of a monopile supporting an offshore wind turbine: (a) the horizontal displacement at the pile head should not exceed 0.2 m, and (b) the rotation at the pile head should not exceed 0.5°. A Bayesian system identification framework is adapted to the classical reliability analysis procedure. Using this framework, a reliability assessment can be applied directly to the updated finite element model without performing time-consuming methods. For numerical verification, simulation data from the finite element model of a real offshore wind-turbine structure are investigated using the three-stage methodology.
Keywords: offshore wind turbines, SHM, reliability assessment, soil-structure interaction
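As a much-simplified, hedged illustration of checking the two serviceability limits above (not the paper's Bayesian or finite element workflow), a crude Monte Carlo estimate of the exceedance probability could look like this, with the pile-head response distributions assumed purely for the example:

```python
# Simplified sketch with assumed response distributions (not the paper's model):
# Monte Carlo estimate of the probability that either serviceability limit
# (0.2 m pile-head displacement, 0.5 deg pile-head rotation) is exceeded.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 200_000

# Assumed lognormal pile-head responses under the considered load combination.
displacement = rng.lognormal(mean=np.log(0.12), sigma=0.30, size=n)  # metres
rotation = rng.lognormal(mean=np.log(0.28), sigma=0.35, size=n)      # degrees

failure = (displacement > 0.2) | (rotation > 0.5)  # serviceability violation
pf = failure.mean()
beta = -norm.ppf(pf)  # corresponding reliability index

print(f"probability of serviceability failure ≈ {pf:.4f}, reliability index β ≈ {beta:.2f}")
```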
Procedia PDF Downloads 532
11307 Seismic Response of Belt Truss System in Regular RC Frame Structure at the Different Positions of the Storey
Authors: Mohd Raish Ansari, Tauheed Alam Khan
Abstract:
This research paper is a comparative study of the belt truss in a regular RC frame structure placed at different storey positions. The method used in this research is the response spectrum method with the help of the ETABS software; there are six models in this paper with a belt truss. The Indian standard codes used in this work are IS 456:2000, IS 800:2007, IS 875 Part-1, and IS 1893 Part-1:2016. The cross-section of the belt truss is an I-section made of mild steel. The basic model in this research paper is the same throughout; only the position of the belt truss changes, and the dimensions of the belt truss remain constant for all models. The plan area of all models is 24.5 meters x 28 meters, and the model is G+20, where the height of the ground floor is 3.5 meters and all other floor heights are a constant 3.0 meters. This comparative research work selected some important seismic parameters to check the stability of all models; the parameters are base shear, fundamental period, storey overturning moment, and maximum storey displacement.
Keywords: belt truss, RC frames structure, ETABS, response spectrum analysis, special moment resisting frame
Procedia PDF Downloads 93
11306 Digital Architectural Practice as a Challenge for Digital Architectural Technology Elements in the Era of Digital Design
Authors: Ling Liyun
Abstract:
In the field of contemporary architecture, complex forms of architectural works continue to emerge in the world, along with some new terminology emerged: digital architecture, parametric design, algorithm generation, building information modeling, CNC construction and so on. Architects gradually mastered the new skills of mathematical logic in the form of exploration, virtual simulation, and the entire design and coordination in the construction process. Digital construction technology has a greater degree in controlling construction, and ensure its accuracy, creating a series of new construction techniques. As a result, the use of digital technology is an improvement and expansion of the practice of digital architecture design revolution. We worked by reading and analyzing information about the digital architecture development process, a large number of cases, as well as architectural design and construction as a whole process. Thus current developments were introduced and discussed in our paper, such as architectural discourse, design theory, digital design models and techniques, material selecting, as well as artificial intelligence space design. Our paper also pays attention to the representative three cases of digital design and construction experiment at great length in detail to expound high-informatization, high-reliability intelligence, and high-technique in constructing a humane space to cope with the rapid development of urbanization. We concluded that the opportunities and challenges of the shift existed in architectural paradigms, such as the cooperation methods, theories, models, technologies and techniques which were currently employed in digital design research and digital praxis. We also find out that the innovative use of space can gradually change the way people learn, talk, and control information. The past two decades, digital technology radically breaks the technology constraints of industrial technical products, digests the publicity on a particular architectural style (era doctrine). People should not adapt to the machine, but in turn, it’s better to make the machine work for users.Keywords: artificial intelligence, collaboration, digital architecture, digital design theory, material selection, space construction
Procedia PDF Downloads 136
11305 Price Compensation Mechanism with Unmet Demand for Public-Private Partnership Projects
Abstract:
Public-private partnership (PPP), as an innovative way for the private sector to provide infrastructure, is being widely used throughout the world. Compared with the traditional mode, PPP is adopted largely for its merits of relieving public budget constraints and improving infrastructure supply efficiency by involving private funds. However, PPP projects are characterized by large scale, high investment, long payback periods, and long concession periods. These characteristics make PPP projects full of risks. One of the most important risks faced by the private sector is demand risk, because many factors affect real demand. If real demand is far lower than forecast demand, the private sector gets into serious trouble because operating revenue is its main means of recouping the investment and obtaining profit. Therefore, it is important to study how the government should compensate the private sector when demand risk materialises in order to achieve Pareto improvement. This research focuses on the price compensation mechanism, an ex-post compensation mechanism, and analyzes, by mathematical modeling, its impact on the private sector's payoff and on consumer surplus for PPP toll road projects. The research first investigates whether price compensation mechanisms can achieve Pareto improvement and, if so, then explores the boundary conditions for this mechanism. The results show that the price compensation mechanism can realize Pareto improvement under certain conditions. In particular, for the price compensation mechanism to accomplish Pareto improvement, the renegotiation costs of the government and the private sector should be lower than a certain threshold, which is determined by the marginal operating cost and the distortionary cost of the tax. In addition, the compensation percentage should match the price cut of the private investor when demand drops. This research aims to provide theoretical support for the government when determining the compensation scope under the price compensation mechanism. Moreover, some policy implications can also be drawn from the analysis for better risk-sharing and the sustainability of PPP projects.
Keywords: infrastructure, price compensation mechanism, public-private partnership, renegotiation
Procedia PDF Downloads 179
11304 Analysis of Road Network Vulnerability Due to Merapi Volcano Eruption
Authors: Imam Muthohar, Budi Hartono, Sigit Priyanto, Hardiansyah Hardiansyah
Abstract:
The eruption of Merapi Volcano in Yogyakarta, Indonesia in 2010 caused many casualties due to minimal preparedness in facing the disaster. Increasing the capacity of the population to respond and evacuate to safe places therefore becomes very important to minimize casualties. The regional government, through the Regional Disaster Management Agency, has divided the disaster-prone area into three parts, namely ring 1 at a distance of 10 km, ring 2 at a distance of 15 km and ring 3 at a distance of 20 km from the center of Mount Merapi. The success of an evacuation is fully supported by road network infrastructure as the means of rescue in an emergency. This research attempts to model the evacuation process based on the rise in evacuees in ring 1, expanded to ring 2 and finally expanded to ring 3. The model was developed using the SATURN (Simulation and Assignment of Traffic to Urban Road Networks) program, version 11.3.12W, involving 140 centroids, 449 buffer nodes, and 851 links across the Yogyakarta Special Region, and was aimed at a preliminary identification of road networks considered vulnerable to the disaster. Vulnerability was identified on the basis of changes in road network performance, in the form of flows and travel times, over the coverage of ring 1, ring 2, ring 3, Sleman outside the rings, Yogyakarta City, Bantul, Kulon Progo, and Gunung Kidul. The results indicated that performance increased in the road networks in the areas of ring 2, ring 3, and Sleman outside the rings. The performance of the road network in ring 1 started to increase when the evacuation was expanded to ring 2 and ring 3. Meanwhile, the performance of the road networks in Yogyakarta City, Bantul, Kulon Progo, and Gunung Kidul simultaneously decreased during the evacuation period when the evacuation areas were expanded. The preliminary identification of vulnerability determined that the road networks in ring 1, ring 2, ring 3 and Sleman outside the rings are considered vulnerable during an evacuation following a Mount Merapi eruption. Therefore, it is necessary to pay a great deal of attention in order to face disasters that may potentially occur at any time.
Keywords: model, evacuation, SATURN, vulnerability
Procedia PDF Downloads 170
11303 A Shift-Share Analysis: Manufacturing Employment Specialisation at uMhlathuze Local Municipality, South Africa
Authors: Mlondi Ndovela
Abstract:
Globally, manufacturing employment has been declining, and the South African manufacturing sector is experiencing the same trend. Despite the commonality between the global and South African manufacturing trends, it is understood that local areas make distinct contributions to the provincial/national economy. Therefore, the growth or decline of a particular manufacturing division in one local area may not be evident in another area, since economic performance varies from region to region. In view of the above, the study employed the Esteban-Marquillas model of shift-share analysis (SSA) to conduct an empirical analysis of manufacturing employment performance at uMhlathuze Local Municipality in the KwaZulu-Natal province. The study set out two objectives: to quantify uMhlathuze manufacturing jobs that are attributable to provincial manufacturing employment trends, and to identify which manufacturing divisions are growing or declining in terms of employment. To achieve these objectives, the study sampled manufacturing employment data from 2010 to 2017, and these data were categorised into ten manufacturing divisions. Furthermore, the Esteban-Marquillas model calculated manufacturing employment in terms of two effects, namely the provincial growth effect (PGE) and the industrial mix effect (IME). The results show that even though the uMhlathuze manufacturing sector has a positive PGE (+230), the municipality performed poorly in terms of IME (-291). A further analysis included other economic sectors of the municipality to draw an employment performance comparison, and the study found that agriculture; construction; trade, catering and accommodation; and transport, storage and communication performed well above the manufacturing sector in terms of PGE (+826) and IME (+532). This suggests that the uMhlathuze manufacturing sector is not necessarily declining; however, other economic sectors are growing faster and larger, thereby reducing the employment share of the manufacturing sector. To promote manufacturing growth from a policy standpoint, the government could create favourable macroeconomic policies, such as import substitution policies, and support labour-intensive manufacturing divisions. As a result, these macroeconomic policies can help to protect local manufacturing firms and stimulate the growth of manufacturing employment.
Keywords: allocation effect, Esteban-Marquillas model, manufacturing employment, regional competitive effect, shift-share analysis
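For readers unfamiliar with the decomposition, a small, hedged sketch of an Esteban-Marquillas shift-share calculation for one industry in one municipality is given below; the employment figures are invented placeholders, and the formulas follow the standard textbook form of the model (growth effect, industry mix, competitive effect on homothetic employment, and allocation effect), which may differ in detail from the authors' exact implementation:

```python
# Hedged sketch of the Esteban-Marquillas shift-share decomposition for a single
# industry in a local area; all employment figures below are invented placeholders.
from dataclasses import dataclass

@dataclass
class ShiftShare:
    growth_effect: float   # local jobs attributable to overall provincial growth
    industry_mix: float    # effect of specialising in fast/slow growing industries
    competitive: float     # local competitiveness, evaluated on homothetic employment
    allocation: float      # Esteban-Marquillas allocation (specialisation) effect

def esteban_marquillas(e_local_i, e_local_total, E_prov_i, E_prov_total,
                       g_local_i, G_prov_i, G_prov_total) -> ShiftShare:
    """e/E are base-year employment levels; g/G are growth rates over the period."""
    homothetic = e_local_total * (E_prov_i / E_prov_total)  # local employment if the
                                                            # area mirrored the province
    return ShiftShare(
        growth_effect=e_local_i * G_prov_total,
        industry_mix=e_local_i * (G_prov_i - G_prov_total),
        competitive=homothetic * (g_local_i - G_prov_i),
        allocation=(e_local_i - homothetic) * (g_local_i - G_prov_i),
    )

# Toy example: local manufacturing employment vs. provincial benchmarks, 2010-2017.
result = esteban_marquillas(
    e_local_i=12_000, e_local_total=80_000,
    E_prov_i=350_000, E_prov_total=2_600_000,
    g_local_i=-0.04, G_prov_i=-0.01, G_prov_total=0.06,
)
print(result)
```

The four components sum to the actual local employment change in the industry, which is what allows the local growth/decline to be attributed to provincial trends, industry mix, and local competitiveness.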
Procedia PDF Downloads 141
11302 Selection of Designs in Ordinal Regression Models under Linear Predictor Misspecification
Authors: Ishapathik Das
Abstract:
The purpose of this article is to find a method of comparing designs for ordinal regression models using quantile dispersion graphs in the presence of linear predictor misspecification. The true relationship between response variable and the corresponding control variables are usually unknown. Experimenter assumes certain form of the linear predictor of the ordinal regression models. The assumed form of the linear predictor may not be correct always. Thus, the maximum likelihood estimates (MLE) of the unknown parameters of the model may be biased due to misspecification of the linear predictor. In this article, the uncertainty in the linear predictor is represented by an unknown function. An algorithm is provided to estimate the unknown function at the design points where observations are available. The unknown function is estimated at all points in the design region using multivariate parametric kriging. The comparison of the designs are based on a scalar valued function of the mean squared error of prediction (MSEP) matrix, which incorporates both variance and bias of the prediction caused by the misspecification in the linear predictor. The designs are compared using quantile dispersion graphs approach. The graphs also visually depict the robustness of the designs on the changes in the parameter values. Numerical examples are presented to illustrate the proposed methodology.Keywords: model misspecification, multivariate kriging, multivariate logistic link, ordinal response models, quantile dispersion graphs
Procedia PDF Downloads 393
11301 Streamflow Modeling Using the PyTOPKAPI Model with Remotely Sensed Rainfall Data: A Case Study of Gilgel Ghibe Catchment, Ethiopia
Authors: Zeinu Ahmed Rabba, Derek D Stretch
Abstract:
Remote sensing contributes valuable information to streamflow estimates. Usually, streamflow is directly measured through ground-based hydrological monitoring stations. However, in many developing countries like Ethiopia, ground-based hydrological monitoring networks are either sparse or nonexistent, which limits the ability to manage water resources and hampers early flood-warning systems. In such cases, satellite remote sensing is an alternative means to acquire such information. This paper discusses the application of remotely sensed rainfall data for streamflow modeling in the Gilgel Ghibe basin in Ethiopia. Ten years (2001-2010) of two satellite-based precipitation products (SBPP), TRMM and WaterBase, were used. These products were combined with the PyTOPKAPI hydrological model to generate daily streamflows. The results were compared with streamflow observations at the Gilgel Ghibe Nr. Assendabo gauging station using four statistical tools (Bias, R², NS and RMSE). The statistical analysis indicates that the bias-adjusted SBPPs agree well with gauged rainfall compared to the bias-unadjusted ones. The SBPPs with no bias adjustment tend to overestimate (high Bias and high RMSE) the extreme precipitation events and the corresponding simulated streamflow outputs, particularly during the wet months (June-September), and to underestimate the streamflow prediction over a few dry months (January and February). This shows that bias adjustment can be important for improving the performance of the SBPPs in streamflow forecasting. We further conclude that the general streamflow patterns were well captured at daily time scales when using SBPPs after bias adjustment. However, the overall results demonstrate that the streamflow simulated using gauged rainfall is superior to that obtained from remotely sensed rainfall products, including bias-adjusted ones.
Keywords: Ethiopia, PyTOPKAPI model, remote sensing, streamflow, Tropical Rainfall Measuring Mission (TRMM), WaterBase
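As a hedged illustration of the four evaluation statistics named above (Bias, R², Nash-Sutcliffe efficiency and RMSE), a minimal implementation for comparing simulated and observed daily flows might look as follows; the arrays shown are placeholders, not the Gilgel Ghibe data:

```python
# Minimal sketch (placeholder data): the four goodness-of-fit statistics used to
# compare simulated and observed daily streamflow.
import numpy as np

def evaluation_metrics(obs: np.ndarray, sim: np.ndarray) -> dict:
    residual = sim - obs
    bias = residual.mean()
    rmse = np.sqrt(np.mean(residual ** 2))
    # Nash-Sutcliffe efficiency: 1 is perfect, 0 means no better than the mean flow.
    nse = 1.0 - np.sum(residual ** 2) / np.sum((obs - obs.mean()) ** 2)
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    return {"Bias": bias, "RMSE": rmse, "NSE": nse, "R2": r2}

# Placeholder example: ten days of observed vs. simulated flow (m³/s).
observed = np.array([12.1, 14.3, 18.9, 25.4, 30.2, 27.8, 22.5, 19.0, 16.4, 13.7])
simulated = np.array([11.5, 15.0, 17.8, 26.9, 28.7, 29.1, 21.0, 18.2, 17.5, 12.9])
print(evaluation_metrics(observed, simulated))
```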
Procedia PDF Downloads 28611300 Requirements Engineering via Controlling Actors Definition for the Organizations of European Critical Infrastructure
Authors: Jiri F. Urbanek, Jiri Barta, Oldrich Svoboda, Jiri J. Urbanek
Abstract:
The organizations of European and Czech critical infrastructure have a specific position, mission, characteristics, and behaviour within European Union and Czech state/business environments, with specific requirements regarding regional and global security. They must respect national security policy and global rules, requirements, and standards in all their internal and external processes of supply-customer chains and networks. Controlling is a generalized capability to maintain control over situational policy. The aim of this paper is to introduce controlling as a new and necessary process attribute that provides a critical infrastructure organization with the capability, and the benefit, of meeting its commitments regarding the effectiveness of the quality management system in satisfying customer/user requirements, as well as the continual improvement of the organization's overall process performance and efficiency, and its societal security, via continual planning improvement using DYVELOP modelling.Keywords: added value, DYVELOP, controlling, environments, process approach
Procedia PDF Downloads 41211299 Adsorptive Desulfurization of Tire Pyrolytic Oil Using Cu(I)–Y Zeolite via π-Complexation
Authors: Moshe Mello, Hilary Rutto, Tumisang Seodigeng
Abstract:
The accelerating requirement to reach 0% sulfur content in liquid fuels demands that researchers seek efficient alternative technologies to address this challenge. In the current study, the adsorption capabilities of modified Cu(I)-Y zeolite were tested for the removal of organosulfur compounds (OSC) present in tire pyrolytic oil (TPO). The π-complexation-based adsorbent was obtained by ion-exchanging Y-zeolite with the Cu+ cation using liquid-phase ion exchange (LPIE). Preparation of the adsorbent involved, first, ion exchange of Na-Y zeolite with a 0.5 M Cu(NO3)2 aqueous solution for 48 hours, followed by reduction of Cu2+ to Cu+. Batch studies of TPO, in comparison with a model diesel comprising sulfur compounds such as thiophene (TH), benzothiophene (BTH), dibenzothiophene (DBT), and 4,6-dimethyldibenzothiophene (4,6-DMDBT), showed that modified Cu(I)-Y zeolite is an effective adsorbent for the removal of OSC from liquid fuels. The effects of several operating conditions, such as adsorbent dosage, reaction time, and temperature, were studied to optimize the process. For the model diesel fuel, the selectivity for adsorption of sulfur compounds followed the order 4,6-DMDBT > DBT > BTH > TH. The results were interpreted using molecular orbital theory and calculations. Langmuir and Freundlich isotherms were used to describe the adsorption of the reaction mixture. The Cu(I)-Y zeolite is fully regenerable; regeneration is achieved by a simple procedure of blowing air through the adsorbent at 350 °C, followed by reactivation at 450 °C in a helium-rich atmosphere.Keywords: adsorption, desulfurization, TPO, zeolite
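A minimal sketch of how the Langmuir and Freundlich isotherms mentioned in the abstract are typically fitted to equilibrium adsorption data; the equilibrium concentrations and uptakes below are invented for illustration and are not the study's measurements.

```python
# Illustrative sketch: fitting Langmuir and Freundlich isotherms to
# hypothetical equilibrium data (Ce in mg/L, qe in mg sulfur per g adsorbent).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, kl):
    return qmax * kl * ce / (1.0 + kl * ce)

def freundlich(ce, kf, n):
    return kf * ce ** (1.0 / n)

ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])   # equilibrium concentration
qe = np.array([1.9, 3.2, 4.8, 6.1, 7.0, 7.5])          # equilibrium uptake

(qmax, kl), _ = curve_fit(langmuir, ce, qe, p0=[8.0, 0.05])
(kf, n), _ = curve_fit(freundlich, ce, qe, p0=[1.0, 2.0])

print(f"Langmuir:   qmax={qmax:.2f}, KL={kl:.3f}")
print(f"Freundlich: KF={kf:.2f}, n={n:.2f}")
```

Comparing the residuals (or R²) of the two fits is the usual way to decide which isotherm better represents the adsorption of the mixture.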
Procedia PDF Downloads 23411298 Investigating the Effect of Refinancing on Financial Behaviour of Energy Efficiency Projects
Authors: Zohreh Soltani, Seyedmohammadhossein Hosseinian
Abstract:
Reduction of energy consumption in built infrastructure, through the installation of energy-efficient technologies, is a major approach to achieving sustainability. In practice, the viability of energy efficiency projects depends strongly on cost recovery and profitability. These projects are prone to failure if the actual cost savings do not repay the project cost in a timely manner. In such cases, refinancing, if implemented wisely, can be a way to benefit from the long-term returns of the project. However, very little is known about the effect of refinancing options on the financial performance of energy efficiency projects. To fill this gap, the present study investigates the financial behavior of energy efficiency projects with a focus on refinancing options such as Leveraged Loans. A System Dynamics (SD) model is introduced, and its application is presented using data from an actual case study. The case study results indicate that while high interest rates at start-up make the use of a Leveraged Loan inevitable, refinancing can rescue the project and bring about profitability. This paper also presents managerial implications of refinancing energy efficiency projects based on the case-study analysis. The results of this study support the implementation of financially viable energy efficiency projects, so that communities can benefit widely from their environmental advantages.Keywords: energy efficiency projects, leveraged loan, refinancing, sustainability
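A minimal sketch, not the paper's SD model: a month-by-month simulation of a project repaid from its own cost savings, with an optional refinancing of a high-interest start-up loan to a cheaper rate partway through. All figures (project cost, savings, rates, refinancing month) are hypothetical assumptions made for illustration.

```python
# Illustrative sketch: effect of refinancing on the payback time of an
# energy efficiency project financed by debt and repaid from cost savings.

def payback_month(project_cost=1_000_000, monthly_savings=12_000,
                  start_rate=0.14, refinance_month=None, new_rate=0.07,
                  horizon_months=360):
    """Return the month in which the debt is fully repaid, or None if never."""
    balance = project_cost                       # outstanding debt (stock)
    for m in range(1, horizon_months + 1):
        rate = new_rate if refinance_month and m >= refinance_month else start_rate
        balance *= 1 + rate / 12                 # interest accrual (inflow to the stock)
        balance -= monthly_savings               # savings service the debt (outflow)
        if balance <= 0:
            return m
    return None

print("No refinancing:       ", payback_month())
print("Refinance at month 24:", payback_month(refinance_month=24))
```

With these placeholder numbers, switching to the lower rate roughly halves the payback period or better, which is the qualitative mechanism the abstract describes; a full SD model would add feedback loops such as savings degradation and variable energy prices.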
Procedia PDF Downloads 39311297 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications
Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini
Abstract:
This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular the errors, uncertainties, and constraints imposed by the mission, the spacecraft, and the onboard processing capabilities. Space mission errors and uncertainties are summarized in categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. The constraints are classified into two categories: physical and geometric constraints. Last, real-time implementation capability is discussed with regard to the required computation time and the impact of sensor and actuator errors, based on Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios are also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key elements of MPC design are discussed: the prediction model, the constraint formulation, and the objective cost function. The prediction models can be linear time-invariant or time-varying, depending on whether the orbit geometry is circular or elliptic. Input and output constraints can both be expressed as linear inequalities of the same form. Moreover, recent convexification techniques for non-convex geometric constraints (e.g., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are formulated in a mathematical framework and explained accordingly. Third, because MPC implementation relies on solving constrained optimization problems in real time, computational aspects are also examined. In particular, high-speed implementation capabilities and HIL challenges are presented for representative space avionics. This covers an analysis of future space processors as well as the requirements that sensors and actuators place on the HIL experiment outputs. HIL tests are investigated for kinematic and dynamic test cases, using robotic arms and floating robots, respectively. Finally, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with the conjecture that the MPC paradigm is a promising framework at the crossroads of space applications, and that it could be further advanced based on the challenges mentioned throughout the paper and the gaps that remain unaddressed.Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy
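A minimal sketch of the generic constrained MPC formulation the abstract refers to (prediction model, constraints, quadratic cost), not the authors' implementation: a single MPC step for a one-axis double-integrator translation model with an input bound, solved as a convex program. The dynamics, horizon, weights, and thrust bound are placeholder assumptions, and orbital (Clohessy-Wiltshire) terms are deliberately omitted.

```python
# Illustrative sketch: one step of linear MPC for a rendezvous-style
# position regulation problem with an input (thrust acceleration) bound.
import numpy as np
import cvxpy as cp

dt, N = 1.0, 20                                   # sample time [s], prediction horizon
A = np.array([[1, dt], [0, 1]])                   # 1-axis double-integrator dynamics
B = np.array([[0.5 * dt**2], [dt]])
Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])    # state / input weights
u_max = 0.05                                      # thrust acceleration bound [m/s^2]

x0 = np.array([100.0, 0.0])                       # 100 m range, zero relative velocity

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost, constraints = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= u_max]

cp.Problem(cp.Minimize(cost), constraints).solve()
print("first control move:", u.value[:, 0])       # apply it, then re-solve at the next step
```

In the receding-horizon scheme only the first control move is applied before the problem is re-solved with new measurements, which is why the real-time computation budget discussed in the abstract matters.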
Procedia PDF Downloads 110