Search results for: integration step
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5369

4199 EU Innovative Economic Priorities, Contemporary Problems and Challenges of Its Formation

Authors: Gechbaia Badri

Abstract:

The paper discusses how, in today's world of economic globalization, the development of an innovative economy has become one of the central issues of economic integration. The article analyzes trends in the development of the innovation economy in the EU, presents the main problems and results of forming an innovative economy, and examines the development of the economy's innovative potential. The author argues that the development of an innovative economic space will help the European economy recover from the financial and economic crisis that has unfolded in recent years.

Keywords: European Union, innovative system, innovative development, innovations

Procedia PDF Downloads 302
4198 Inviscid Steady Flow Simulation Around a Wing Configuration Using MB_CNS

Authors: Muhammad Umar Kiani, Muhammad Shahbaz, Hassan Akbar

Abstract:

Simulation of a high-speed, inviscid, steady, ideal air flow around a 2D/axisymmetric body was carried out by the use of the mb_cns code. mb_cns is a program for the time-integration of the Navier-Stokes equations for two-dimensional compressible flows on a multiple-block structured mesh. The flow geometry may be either planar or axisymmetric, and multiply-connected domains can be modeled by patching together several blocks. The main simulation code is accompanied by a set of pre- and post-processing programs. The pre-processing programs scriptit and mb_prep start with a short script describing the geometry, initial flow state and boundary conditions and produce a discretized version of the initial flow state. The main flow simulation program (or solver, as it is sometimes called) is mb_cns. It takes the files prepared by scriptit and mb_prep, integrates the discrete form of the gas flow equations in time and writes the evolved flow data to a set of output files. This output data may consist of the flow state (over the whole domain) at a number of instants in time. After integration in time, the post-processing programs mb_post and mb_cont can be used to reformat the flow state data and produce GIF or PostScript plots of flow quantities such as pressure, temperature and Mach number. The current problem is an example of supersonic inviscid flow. The flow domain for the current problem (a strake-configuration wing) is discretized by a structured grid, and a finite-volume approach is used to discretize the conservation equations. The flow field is recorded as cell-average values at cell centers, and explicit time stepping is used to update conserved quantities. MUSCL-type interpolation and one of three flux calculation methods (Riemann solver, AUSMDV flux splitting and the Equilibrium Flux Method, EFM) are used to calculate inviscid fluxes across cell faces.
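
To make the finite-volume update described above concrete, the sketch below advances cell-averaged conserved quantities of a one-dimensional inviscid flow by one explicit time step. It is a minimal illustration, not the mb_cns source: it assumes a simple Rusanov (local Lax-Friedrichs) flux in place of the Riemann solver, AUSMDV or EFM options, and first-order reconstruction instead of MUSCL-type interpolation.

```python
import numpy as np

gamma = 1.4  # ratio of specific heats for air

def euler_flux(U):
    """Inviscid flux of the 1D Euler equations for conserved state U = [rho, rho*u, E]."""
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

def rusanov_flux(UL, UR):
    """Simple local Lax-Friedrichs flux across a cell face (stand-in for a Riemann solver)."""
    def max_wave_speed(U):
        rho, mom, E = U
        u = mom / rho
        p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
        return abs(u) + np.sqrt(gamma * p / rho)
    s = max(max_wave_speed(UL), max_wave_speed(UR))
    return 0.5 * (euler_flux(UL) + euler_flux(UR)) - 0.5 * s * (UR - UL)

def explicit_step(U, dx, dt):
    """Advance cell-average conserved quantities (3 x n_cells array) by one explicit step."""
    U_new = U.copy()
    # first-order reconstruction: face states are simply the adjacent cell averages
    for i in range(1, U.shape[1] - 1):
        F_left = rusanov_flux(U[:, i - 1], U[:, i])
        F_right = rusanov_flux(U[:, i], U[:, i + 1])
        U_new[:, i] = U[:, i] - dt / dx * (F_right - F_left)
    return U_new
```

A production code such as mb_cns additionally handles multi-block 2D/axisymmetric grids and higher-order face reconstruction; the sketch only shows the core conservative update of cell averages.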

Keywords: steady flow simulation, processing programs, simulation code, inviscid flux

Procedia PDF Downloads 424
4197 VaR Estimation Using the Informational Content of Futures Traded Volume

Authors: Amel Oueslati, Olfa Benouda

Abstract:

A new Value at Risk (VaR) estimation approach is proposed and investigated. The well-known two-stage GARCH-EVT approach uses conditional volatility to generate one-step-ahead forecasts of VaR. Using daily data for twelve stocks that compose the Dow Jones Industrial Average (DJIA) index, this paper incorporates trading volume in the first-stage volatility estimation. Afterwards, the forecasting ability of this conditional volatility for VaR estimation is compared to that of a basic volatility model that does not consider any trading component. The results are significant and bring out the importance of trading volume in the VaR measure.
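
A minimal sketch of the first-stage volatility recursion with traded volume added as an exogenous term is shown below. The GARCH(1,1)-with-volume specification, the parameter values and the simulated data are illustrative assumptions rather than the authors' fitted model; in practice the parameters would be estimated by maximum likelihood and the standardized residuals passed to the EVT stage to obtain the one-step-ahead VaR quantile.

```python
import numpy as np

def garch_volume_variance(returns, volume, omega, alpha, beta, delta):
    """Conditional variance recursion of a GARCH(1,1) model augmented with lagged volume:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1} + delta * v_{t-1}."""
    sigma2 = np.empty_like(returns)
    sigma2[0] = np.var(returns)               # initialize with the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = (omega
                     + alpha * returns[t - 1] ** 2
                     + beta * sigma2[t - 1]
                     + delta * volume[t - 1])
    return sigma2

# Illustrative use with simulated data (parameter values are placeholders, not estimates).
rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, 1000)              # daily returns
v = rng.lognormal(0.0, 0.5, 1000)            # normalized traded volume
sigma2 = garch_volume_variance(r, v, omega=1e-6, alpha=0.08, beta=0.90, delta=1e-6)
z = r / np.sqrt(sigma2)                      # standardized residuals for the EVT stage
```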

Keywords: Garch-EVT, value at risk, volume, volatility

Procedia PDF Downloads 280
4196 Combination of Unmanned Aerial Vehicle and Terrestrial Laser Scanner Data for Citrus Yield Estimation

Authors: Mohammed Hmimou, Khalid Amediaz, Imane Sebari, Nabil Bounajma

Abstract:

Annual crop production is one of the most important macroeconomic indicators for the majority of countries around the world. This information is valuable, especially for exporting countries which need a yield estimation before harvest in order to correctly plan the supply chain. When it comes to estimating agricultural yield, especially for arboriculture, conventional methods are mostly applied. In the case of the citrus industry, sale before harvest is widely practiced, which requires an estimation of the production while the fruit is still on the tree. However, the conventional method, based on sampling surveys of some trees within the field, is still used to perform yield estimation, and the success of this process mainly depends on the expertise of the ‘estimator agent’. The present study aims to propose a methodology based on the combination of unmanned aerial vehicle (UAV) images and terrestrial laser scanner (TLS) point clouds to estimate citrus production. During data acquisition, fixed-wing and rotary-wing drones, as well as a terrestrial laser scanner, were tested. After that, a pre-processing step was performed in order to generate the point cloud and digital surface model. At the processing stage, a machine vision workflow was implemented to extract points corresponding to fruits from the whole tree point cloud, cluster them into fruits, and model them geometrically in a 3D space. By linking the resulting geometric properties to the fruit weight, the yield can be estimated, and the statistical distribution of fruit size can be generated. This latter property, which is information required by citrus-importing countries, cannot be estimated before harvest using the conventional method. Since the terrestrial laser scanner is static, data gathering using this technology can be performed over only some trees. Integration of drone data was therefore considered in order to estimate the yield over a whole orchard. To achieve that, features derived from the drone digital surface model were linked to the yield estimated by laser scanner for some trees in order to build a regression model that predicts the yield of a tree given its features. Several missions were carried out to collect drone and laser scanner data within citrus orchards of different varieties by testing several data acquisition parameters (flight height, image overlap, flight mission plan). The accuracy of the results obtained by the proposed methodology, in comparison to the yield estimation results of the conventional method, varies from 65% to 94%, depending mainly on the phenological stage of the studied citrus variety during the data acquisition mission. The proposed approach demonstrates its strong potential for early estimation of citrus production and the possibility of its extension to other fruit trees.
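
The fruit-extraction and orchard-extrapolation steps could be prototyped along the following lines. The sketch assumes the fruit points have already been separated from the rest of the tree point cloud by the machine vision workflow mentioned above, and it uses DBSCAN clustering, a crude spherical fit and a linear regression as illustrative stand-ins for the authors' actual algorithms; the density factor and clustering parameters are placeholders.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.linear_model import LinearRegression

def estimate_tree_yield(fruit_points, grams_per_mm3=8e-4, eps=0.04, min_samples=15):
    """Cluster fruit points (N x 3, metres) into individual fruits, model each as a sphere,
    and convert the summed volume to a weight estimate (the density factor is illustrative)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(fruit_points)
    diameters_mm, total_weight_g = [], 0.0
    for k in set(labels) - {-1}:                      # -1 marks noise points
        pts = fruit_points[labels == k]
        radius_m = pts.std(axis=0).mean() * 2.0       # crude spherical radius estimate
        d_mm = 2.0 * radius_m * 1000.0
        diameters_mm.append(d_mm)
        total_weight_g += grams_per_mm3 * (4.0 / 3.0) * np.pi * (d_mm / 2.0) ** 3
    return total_weight_g, np.array(diameters_mm)     # yield plus fruit-size distribution

# Extension to the whole orchard: regress TLS-estimated tree yields on drone DSM features
# (e.g., crown area, crown height, crown volume) and predict the remaining trees.
def orchard_yield_model(dsm_features, tls_yields):
    return LinearRegression().fit(dsm_features, tls_yields)
```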

Keywords: citrus, digital surface model, point cloud, terrestrial laser scanner, UAV, yield estimation, 3D modeling

Procedia PDF Downloads 138
4195 Electromyography Analysis during Walking and Seated Stepping in the Elderly

Authors: P. Y. Chiang, Y. H. Chen, Y. J. Lin, C. C. Chang, W. C. Hsu

Abstract:

The number of elderly people in the world population is increasing, and so is the rate of falls among this growing group of older people. Decreasing muscle strength and an increasing risk of falling are associated with the ageing process. Because the effects of seated stepping training on walking performance in the elderly remain unclear, the main purpose of the proposed study is to perform an electromyography analysis during walking and seated stepping in the elderly. Four surface EMG electrodes were attached over the lower limb muscles, namely the vastus lateralis (VL) and gastrocnemius (GT) of both sides. Before the test, the maximal voluntary contraction (MVC) of each muscle was obtained using manual muscle testing. The raw analog EMG signals were digitized with a sampling frequency of 2000 Hz. The signals were full-wave rectified and the linear envelopes were calculated. The stepping motion cycle was separated into two phases by the stepping timing (ST) and the pedal return timing (PRT). ST refers to the time when the pedal marker reached the highest height, indicating that the contralateral leg was about to release the pedal. PRT refers to the time when the pedal marker reached the lowest height, indicating that the contralateral leg was about to step on the pedal. We assumed that ST played the same role as initial contact during walking, and PRT that of toe-off. The period from ST to the next PRT was called the pushing phase (PP), during which the leg starts to step against resistance; we compared this phase with the stance phase in level walking. The period from PRT to the next ST was called the returning phase (RP), during which the leg moves without any resistance; we compared this phase with the swing phase in level walking. VL and GT muscle activation showed similar patterns on both sides. This ability may transfer to the muscle activity needed during the loading response, mid-stance and terminal swing phases. Users needed to make more effort in stepping compared with walking at similar timing; thus, strengthening the VL and GT may help improve walking endurance and efficiency in the elderly.
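
A minimal sketch of the EMG conditioning described above (digitization at 2000 Hz, full-wave rectification and linear envelope computation) is given below; the filter orders and cut-off frequencies are typical textbook values assumed for illustration, not those reported by the authors.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def linear_envelope(emg, fs=2000, lowcut=20.0, highcut=450.0, env_cut=6.0):
    """Band-pass, full-wave rectify and low-pass an EMG signal to obtain its linear envelope.
    Cut-off frequencies are illustrative assumptions, not the study's actual settings."""
    b, a = butter(4, [lowcut / (fs / 2), highcut / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, emg)
    rectified = np.abs(filtered)                       # full-wave rectification
    b_env, a_env = butter(4, env_cut / (fs / 2), btype="low")
    return filtfilt(b_env, a_env, rectified)

def percent_mvc(envelope, mvc_envelope):
    """Normalize the envelope to the maximal voluntary contraction (MVC) trial."""
    return 100.0 * envelope / np.max(mvc_envelope)
```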

Keywords: elderly, electromyography, seated stepping, walking

Procedia PDF Downloads 216
4194 Culture and Internationalization: A Study About Brazilian Companies in Bolivia

Authors: Renato Dias Baptista

Abstract:

The purpose of this paper is to analyze the elements of the cultural dimension in the internationalization process of Brazilian companies in Bolivia. The paper is based on research on two major Brazilian transnational companies which have plants in Bolivia. To achieve the objectives, the interconnecting characteristics of culture in the process of productive internationalization were analyzed, aiming to highlight culture as a guiding element in relation to the premises of Brazilian leadership in the integration and development of the continent. The analysis seeks to give relevance to the culture of a country and its relationship with internationalization.

Keywords: interculturalism, transnational, internationalization, organizational development

Procedia PDF Downloads 298
4193 Effectiveness with Respect to Time-To-Market and the Impacts of Late-Stage Design Changes in Rapid Development Life Cycles

Authors: Parth Shah

Abstract:

The author examines the recent trend in which business organizations are significantly reducing their development cycle times to stay competitive in today's global marketplace. The author proposes a rapid systems engineering framework to address late design changes and allow for flexibility (i.e., to react to unexpected or late changes and their impacts) during the product development cycle using a systems engineering approach. A systems engineering approach is crucial in today's product development to deliver complex products into the marketplace. Design changes can occur due to shortened timelines and also based on initial consumer feedback once a product or service is in the marketplace. The ability to react to change and address customer expectations in a responsive and cost-efficient manner is crucial for any organization to succeed. Past literature, research, and methods such as concurrent development, simultaneous engineering, knowledge management, component sharing, rapid product integration, tailored systems engineering processes, and studies on reducing product development cycles all suggest a research gap exists in specifically addressing late design changes due to the shortening of life cycle environments in increasingly competitive markets. The author's research suggests that 1) product development cycle times are now measured in months instead of years, 2) more and more products have interdependent systems and environments that are fast-paced and resource critical, 3) product obsolescence is higher and more organizations are releasing products and services frequently, and 4) increasingly competitive markets are leading to customization based on consumer feedback. The author will quantify effectiveness with respect to success factors such as time-to-market, return on investment, life cycle time, and flexibility in late design changes by complexity of product or service, number of late changes, and ability to react to and reduce late design changes.

Keywords: product development, rapid systems engineering, scalability, systems engineering, systems integration, systems life cycle

Procedia PDF Downloads 202
4192 The Increasing Trend in Research Among Orthopedic Residency Applicants is Significant to Matching: A Retrospective Analysis

Authors: Nickolas A. Stewart, Donald C. Hefelfinger, Garrett V. Brittain, Timothy C. Frommeyer, Adrienne Stolfi

Abstract:

Orthopedic surgery is currently considered one of the most competitive specialties that medical students can apply to for residency training. As evidenced by increasing United States Medical Licensing Examination (USMLE) scores, overall grades, and publication, presentation, and abstract numbers, this specialty is becoming increasingly competitive. The recent change of USMLE Step 1 scores to pass/fail has resulted in additional challenges for medical students planning to apply for orthopedic residency. Until now, these scores have been a tool used by residency programs to screen applicants as an initial factor in determining the strength of their application. With USMLE Step 1 converting to a pass/fail grading criterion, the question remains as to what will take its place on the ERAS application. The primary objective of this study is to determine the trends in the number of research projects, abstracts, presentations, and publications among orthopedic residency applicants. Secondly, this study seeks to determine whether there is a relationship between the number of research projects, abstracts, presentations, and publications and match rates. The researchers utilized the National Resident Matching Program's Charting Outcomes in the Match between 2007 and 2022 to identify mean publication and research project numbers of allopathic and osteopathic US orthopedic surgery senior applicants. A paired t-test was performed between the mean numbers of publications and research projects of matched and unmatched applicants. Additionally, simple linear regressions within matched and unmatched applicants were used to determine the association between year and the number of abstracts, presentations, and publications, and the number of research projects. To determine whether the increase in the number of abstracts, presentations, and publications, and the number of research projects differs significantly between matched and unmatched applicants, an analysis of covariance was used with an interaction term added to the model, which tests for the difference between the slopes of the two groups. The data show that from 2007 to 2022, the average number of research publications increased from 3 to 16.5 for matched orthopedic surgery applicants. The paired t-test yielded a significant p-value of 0.006 for the number of research publications between matched and unmatched applicants. In conclusion, the average number of publications for orthopedic surgery applicants has significantly increased for matched and unmatched applicants from 2007 to 2022. Moreover, this increase has accelerated in recent years, as evidenced by an increase of only 1.5 publications from 2007 to 2011 versus 5.0 publications from 2018 to 2022. The number of abstracts, presentations, and publications is a significant factor in an applicant's likelihood of successfully matching into an orthopedic residency program. With USMLE Step 1 being converted to pass/fail, the researchers expect students and program directors to place increased importance on additional factors that can help applicants stand out. This study demonstrates that research will be a primary component in stratifying future orthopedic surgery applicants. In addition, this suggests the average number of research publications will continue to rise at an accelerating rate. Further study is required to determine whether this growth is sustainable.
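
The statistical comparison described above (paired t-test, within-group linear regressions, and an analysis of covariance with an interaction term testing the difference in slopes) could be reproduced along the following lines; the file name and column names are hypothetical placeholders for the Charting Outcomes data, not the authors' actual files.

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical layout: one row per year and applicant group, with columns
# 'year', 'publications' (mean abstracts/presentations/publications) and 'matched' (0/1).
df = pd.read_csv("charting_outcomes_2007_2022.csv")   # placeholder file name

# Paired t-test between matched and unmatched yearly means.
matched = df[df.matched == 1].sort_values("year").publications
unmatched = df[df.matched == 0].sort_values("year").publications
t, p = stats.ttest_rel(matched, unmatched)

# Simple linear regressions of publications on year within each group.
trend_matched = smf.ols("publications ~ year", data=df[df.matched == 1]).fit()
trend_unmatched = smf.ols("publications ~ year", data=df[df.matched == 0]).fit()

# Analysis of covariance with an interaction term: the 'year:matched' coefficient tests
# whether the slopes (rates of increase) differ between matched and unmatched applicants.
ancova = smf.ols("publications ~ year * matched", data=df).fit()
print(ancova.summary())
```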

Keywords: publications, orthopedic surgery, research, residency applications

Procedia PDF Downloads 128
4191 Effect of Semantic Relational Cues in Action Memory Performance over School Ages

Authors: Farzaneh Badinlou, Reza Kormi-Nouri, Monika Knopf, Kamal Kharazi

Abstract:

Research into long-term memory has demonstrated that the richness of knowledge-based cues in memory tasks improves the retrieval process, which in turn influences learning and memory performance. The present research investigated the idea that adding cues connected to knowledge can affect memory performance in the context of action memory in children. In action memory studies, participants are instructed to learn a series of verb–object phrases through verbal learning and experience-based learning (learning by doing and learning by observation). It is well established that executing action phrases is a more memorable way to learn than verbally repeating the phrases, a finding called the enactment effect. In the present study, a total of 410 students from four grade groups—2nd, 4th, 6th, and 8th—participated. During the study, participants listened to verbal action phrases (VTs), performed the phrases (SPTs: subject-performed tasks), and observed the experimenter perform the phrases (EPTs: experimenter-performed tasks). During the test phase, a cued recall test was administered. Semantic relational cues (i.e., well-integrated vs. poorly integrated items) were manipulated in the present study. Specifically, the participants were presented with two lists of action phrases: one with high semantic integration between verb and noun, e.g., “write with the pen”, and one with low semantic integration between verb and noun, e.g., “pick up the glass”. Results revealed that experience-based learning produced better results than verbal learning for both well-integrated and poorly integrated items, though manipulations of semantic relational cues can moderate the enactment effect. In addition, children of all grade groups performed better for well-integrated than for poorly integrated items, with an advantage for older children. The results are discussed in relation to the effect of knowledge-based information in facilitating the retrieval process in children.

Keywords: action memory, enactment effect, knowledge-based cues, school-aged children, semantic relational cues

Procedia PDF Downloads 273
4190 Understanding the Fundamental Driver of Semiconductor Radiation Tolerance with Experiment and Theory

Authors: Julie V. Logan, Preston T. Webster, Kevin B. Woller, Christian P. Morath, Michael P. Short

Abstract:

Semiconductors, as the base of critical electronic systems, are exposed to damaging radiation while operating in space, nuclear reactors, and particle accelerator environments. What innate property allows some semiconductors to sustain little damage while others accumulate defects rapidly with dose is, at present, poorly understood. This limits the extent to which radiation tolerance can be implemented as a design criterion. To address this problem of determining the driver of semiconductor radiation tolerance, the first step is to generate a dataset of the relative radiation tolerance of a large range of semiconductors (exposed to the same radiation damage and characterized in the same way). To accomplish this, Rutherford backscatter channeling experiments are used to compare the displaced lattice atom buildup in InAs, InP, GaP, GaN, ZnO, MgO, and Si as a function of step-wise alpha particle dose. With this experimental information on radiation-induced incorporation of interstitial defects in hand, hybrid density functional theory electron densities (and their derived quantities) are calculated, and their gradient and Laplacian are evaluated to obtain key fundamental information about the interactions in each material. It is shown that simple, undifferentiated values (which are typically used to describe bond strength) are insufficient to predict radiation tolerance. Instead, the curvature of the electron density at bond critical points provides a measure of radiation tolerance consistent with the experimental results obtained. This curvature and associated forces surrounding bond critical points disfavors localization of displaced lattice atoms at these points, favoring their diffusion toward perfect lattice positions. With this criterion to predict radiation tolerance, simple density functional theory simulations can be conducted on potential new materials to gain insight into how they may operate in demanding high radiation environments.

Keywords: density functional theory, GaN, GaP, InAs, InP, MgO, radiation tolerance, rutherford backscatter channeling

Procedia PDF Downloads 170
4189 Pros and Cons of Agriculture Investment in Gambella Region, Ethiopia

Authors: Azeb Degife

Abstract:

Over the past few years, the volume of international investment in agricultural land has increased globally. In recent times, the Ethiopian government has used agricultural investment as one of the most important and effective strategies for economic growth, food security and poverty reduction in rural areas. Since the mid-2000s, the government has awarded millions of hectares of the most fertile land to rich countries and some of the world's wealthiest people to export various kinds of crops, often in long-term leases and at bargain prices. This study focuses on the pros and cons of large-scale agriculture investment in the Gambella region, Ethiopia. The main results were generated from both primary and secondary data sources. Primary data were obtained through interviews, direct observation and focus group discussions (FGDs). The secondary data were obtained from published documents and reports from governmental and non-governmental institutions. The findings of the study demonstrated that agriculture investment has advantages on the socio-economic side and disadvantages on the socio-environmental side. The main benefits of agriculture investment in the region are infrastructural development and the generation of employment for the local people. Further, the Ethiopian government also generates foreign currency from the agriculture investment opportunities. On the other hand, the Gambella people are strongly tied to the land and the rivers that run through the region. However, large-scale agricultural investment by foreign and local investors on an industrial scale now deprives people of their livelihoods and the natural resources of the region. Generally, the negative effects of agriculture investment include increasing food insecurity and the displacement of smallholder farmers and pastoralists. Moreover, agriculture investment has strong adverse environmental impacts on natural resources such as land, water, forests and biodiversity. Therefore, Ethiopian government strategy needs to focus on an integrated approach and sustainable agricultural growth.

Keywords: agriculture investment, cons, displacement, Gambella, integration approach, pros, socio-economic, socio-environmental

Procedia PDF Downloads 334
4188 Bridging Healthcare Information Systems and Customer Relationship Management for Effective Pandemic Response

Authors: Sharda Kumari

Abstract:

As the Covid-19 pandemic continues to leave its mark on the global business landscape, companies have had to adapt to new realities and find ways to sustain their operations amid social distancing measures, government restrictions, and heightened public health concerns. This unprecedented situation has placed considerable stress on both employees and employers, underscoring the need for innovative approaches to manage the risks associated with Covid-19 transmission in the workplace. In response to these challenges, the pandemic has accelerated the adoption of digital technologies, with an increasing preference for remote interactions and virtual collaboration. Customer relationship management (CRM) systems have risen to prominence as a vital resource for organizations navigating the post-pandemic world, providing a range of benefits that include acquiring new customers, generating insightful consumer data, enhancing customer relationships, and growing market share. In the context of pandemic management, CRM systems offer three primary advantages: (1) integration features that streamline operations and reduce the need for multiple, costly software systems; (2) worldwide accessibility from any internet-enabled device, facilitating efficient remote workforce management during a pandemic; and (3) the capacity for rapid adaptation to changing business conditions, given that most CRM platforms boast a wide array of remotely deployable business growth solutions, a critical attribute when dealing with a dispersed workforce in a pandemic-impacted environment. These advantages highlight the pivotal role of CRM systems in helping organizations remain resilient and adaptive in the face of ongoing global challenges.

Keywords: healthcare, CRM, customer relationship management, customer experience, digital transformation, pandemic response, patient monitoring, patient management, healthcare automation, electronic health record, patient billing, healthcare information systems, remote workforce, virtual collaboration, resilience, adaptable business models, integration features, CRM in healthcare, telehealth, pandemic management

Procedia PDF Downloads 99
4187 New Machine Learning Optimization Approach Based on Input Variables Disposition Applied for Time Series Prediction

Authors: Hervice Roméo Fogno Fotsoa, Germaine Djuidje Kenmoe, Claude Vidal Aloyem Kazé

Abstract:

One of the main applications of machine learning is the prediction of time series. However, more accurate prediction requires a more optimal machine learning model. Several optimization techniques have been developed, but without considering the disposition (arrangement) of the system's input variables. Thus, this work aims to present a new machine learning architecture optimization technique based on the optimal disposition of the input variables. The validations are done on the prediction of wind time series, using data collected in Cameroon. The number of possible dispositions with four input variables is determined, i.e., twenty-four. Each of the dispositions is used to perform the prediction, with the main criteria being the training and prediction performances. The results obtained from a static architecture and a dynamic architecture of neural networks have shown that these performances are a function of the input variable disposition, and that this dependence differs between the two architectures. This analysis revealed that it is necessary to take the input variable disposition into account for the development of a more optimal neural network model. Thus, a new neural network training algorithm is proposed by introducing the search for the optimal input variable disposition into the traditional back-propagation algorithm. The results of the application of this new optimization approach on the two single neural network architectures are compared step by step with the previously obtained results. Moreover, this proposed approach is validated in a collaborative optimization method with a single-objective optimization technique, i.e., genetic algorithm back-propagation neural networks. From these comparisons, it is concluded that each proposed model outperforms its traditional counterpart in terms of training and prediction performance, showing that the proposed optimization approach can be useful in improving the accuracy of machine-learning-based time series prediction.
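
The exhaustive search over input variable dispositions can be illustrated with the short sketch below. It evaluates all 4! = 24 orderings of four input variables and keeps the one with the lowest validation error; the use of scikit-learn's MLPRegressor and the hyperparameters are illustrative assumptions, not the authors' static/dynamic architectures or their modified back-propagation algorithm.

```python
import numpy as np
from itertools import permutations
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def best_input_disposition(X_train, y_train, X_val, y_val):
    """Evaluate all orderings (dispositions) of the input variables and keep the one
    giving the lowest validation error. With four inputs there are 4! = 24 dispositions."""
    n_vars = X_train.shape[1]
    best_order, best_err = None, np.inf
    for order in permutations(range(n_vars)):
        idx = list(order)
        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
        model.fit(X_train[:, idx], y_train)
        err = mean_squared_error(y_val, model.predict(X_val[:, idx]))
        if err < best_err:
            best_order, best_err = order, err
    return best_order, best_err   # optimal disposition and its validation MSE
```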

Keywords: input variable disposition, machine learning, optimization, performance, time series prediction

Procedia PDF Downloads 104
4186 Integral Image-Based Differential Filters

Authors: Kohei Inoue, Kenji Hara, Kiichi Urahama

Abstract:

We describe a relationship between integral images and differential images. First, we derive a simple difference filter from the conventional integral image. In the derivation, we show that an integral image and the corresponding differential image are related to each other by a system of simultaneous linear equations in which the numbers of unknowns and equations are equal; therefore, both integration and differentiation can be carried out by solving these equations. We applied this relationship to an image fusion problem and experimentally verified the effectiveness of the proposed method.
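
A minimal numerical illustration of the stated relationship is given below: the integral image is obtained by cumulative summation, and the original image is recovered exactly by second-order finite differences, i.e., by solving the underlying linear system in closed form. It is only a sketch of the integration/differentiation duality, not the authors' difference filter or fusion method.

```python
import numpy as np

def integral_image(img):
    """Cumulative sums along both axes (the conventional integral image)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def recover_by_differencing(I):
    """Inverse operation: second-order finite differences of the integral image
    return the original image, showing that integration and differentiation are
    linked by a pair of invertible linear systems."""
    padded = np.pad(I, ((1, 0), (1, 0)), mode="constant")
    return padded[1:, 1:] - padded[:-1, 1:] - padded[1:, :-1] + padded[:-1, :-1]

img = np.arange(12.0).reshape(3, 4)
assert np.allclose(recover_by_differencing(integral_image(img)), img)
```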

Keywords: integral images, differential images, differential filters, image fusion

Procedia PDF Downloads 503
4185 Epistemological and Ethical Dimensions of Current Concepts of Human Resilience in the Neurosciences

Authors: Norbert W. Paul

Abstract:

For a number of years, scientific interest in human resilience has been rapidly increasing, especially in psychology and, more recently and highly visibly, in neurobiological research. Concepts of resilience are regularly discussed in the light of liminal experiences and existential challenges in human life. Resilience research provides both explanatory models and strategies to promote or foster human resilience. Surprisingly, these approaches have so far attracted little attention in philosophy in general and in ethics in particular. This is even more astonishing given the fact that the neurosciences as such have been and still are of major interest to philosophy and ethics and even brought about the specialized field of neuroethics, which, however, has not yet been concerned with concepts of resilience. As a result of the little attention given to the topic of resilience, the whole concept has to date remained philosophically under-theorized. This abstinence of ethics and philosophy from resilience research is lamentable because resilience as a concept, as well as resilience interventions based on neurobiological findings, undoubtedly poses philosophical, social and ethical questions. In this paper, we will argue that particular notions of resilience cross the sometimes fine line between maintaining a person’s mental health despite the impact of severe psychological or physical adverse events and ethically more debatable discourses of enhancement. While we neither argue for nor against enhancement, nor re-interpret resilience research and interventions by subsuming them under strategies of psychological and/or neuro-enhancement, we encourage those who see social or ethical problems with enhancement technologies to also take a closer look at resilience and the related neurobiological concepts. We will proceed in three steps. In our first step, we will describe the concept of resilience in general and its neurobiological study in particular. Here, we will point out some important differences in the way ‘resilience’ is conceptualized and how neurobiological research understands resilience. In what follows, we will try to show that a one-sided concept of resilience – as it is often presented in neurobiological research on resilience – does pose social and ethical problems. Secondly, we will identify and explore the social and ethical challenges of (neurobiological) enhancement. In the final step of this paper, we will argue that a one-sided reading of resilience can be understood as a latent form of enhancement in transition and poses ethical questions similar to those discussed in relation to other approaches to the biomedical enhancement of humans.

Keywords: resilience, neurosciences, epistemology, bioethics

Procedia PDF Downloads 157
4184 Climate Change Adaptation: Methodologies and Tools to Define Resilience Scenarios for Existing Buildings in Mediterranean Urban Areas

Authors: Francesca Nicolosi, Teresa Cosola

Abstract:

Climate changes in Mediterranean areas, such as the increase of average seasonal temperatures, the urban heat island phenomenon, the intensification of solar radiation and extreme weather threats, cause disruptive events, so that climate adaptation has become a pressing issue. Due to the strategic role that the built heritage holds in terms of environmental impact and energy waste, and due to its potential, it is necessary to assess the vulnerability and the adaptive capacity of existing buildings to climate change in order to define different mitigation scenarios. The aim of this research work is to define an optimized and integrated methodology for the assessment of resilience levels and adaptation scenarios for existing buildings in Mediterranean urban areas. Moreover, the study of resilience indicators allows us to define the environmental and energy performance of buildings in order to identify design and technological solutions for the improvement of the building and the potential of its urban area. The methodology identifies different phases step by step, starting from the detailed study of the characteristic elements of the urban system: climatic, natural, human, typological and functional components are analyzed in terms of their critical factors and their potential. Through the identification of the main perturbing factors and the degree of vulnerability of the system to the risks linked to climate change, it is possible to define mitigation and adaptation scenarios. These can differ according to the typological, functional and constructive features of the analyzed system, are divided into categories of intervention, and are characterized by different analysis levels (from the single building to the urban area). The use of software simulations makes it possible to obtain information on the overall behavior of the building and the urban system, to generate predictive models for medium- and long-term environmental and energy retrofit, and to carry out a comparative study of the identified mitigation scenarios. The studied methodology is validated on a case study.

Keywords: climate impact mitigation, energy efficiency, existing building heritage, resilience

Procedia PDF Downloads 236
4183 Correlates of Cost Effectiveness Analysis of Rating Scale and Psycho-Productive Multiple Choice Test for Assessing Students' Performance in Rice Production in Secondary Schools in Ebonyi State, Nigeria

Authors: Ogbonnaya Elom, Francis N. Azunku, Ogochukwu Onah

Abstract:

This study was carried out to determine the correlates of a cost-effectiveness analysis of a rating scale and a psycho-productive multiple choice test for assessing students’ performance in rice production. Four research questions were developed and answered, while one hypothesis was formulated and tested. Survey and correlation designs were adopted. The population of the study was 20,783, made up of 20,511 senior secondary (SSII) students and 272 teachers of agricultural science from 221 public secondary schools. Two schools, each with one intact class of 30 students, were purposively selected as the sample based on certain criteria. Four sets of instruments were used for data collection. One of the instruments, the rating scale, was subjected to face and content validation, while the other three were subjected to face validation only. The Cronbach alpha technique was utilized to determine the internal consistency of the rating scale items, which yielded a coefficient of 0.82, while the Kuder-Richardson (K-R 20) formula was used to determine the stability of the psycho-productive multiple choice test items, which yielded a coefficient of 0.80. Data collection followed a step-by-step approach. Data collected were analyzed using percentages, weighted means and the sign test to answer the research questions, while the hypothesis was tested using the Spearman rank-order correlation and the t-test statistic. Findings of the study revealed, among other things, that the psycho-productive multiple choice test is more effective than the rating scale when the former is applied to the two groups of students. It was recommended, among other things, that external examination bodies integrate the use of psycho-productive multiple choice tests into their examination policy and direct secondary schools to comply with it.
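
For reference, the two reliability coefficients mentioned above can be computed from an examinees-by-items score matrix as sketched below; this is a generic illustration of the Cronbach alpha and Kuder-Richardson (K-R 20) formulas, not the authors' analysis code.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (examinees x items) score matrix, e.g., rating-scale items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

def kr20(binary_scores):
    """Kuder-Richardson formula 20 for dichotomously scored (0/1) multiple-choice items."""
    X = np.asarray(binary_scores, dtype=float)
    k = X.shape[1]
    p = X.mean(axis=0)                 # proportion answering each item correctly
    q = 1.0 - p
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)
```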

Keywords: correlates, cost-effectiveness, psycho-productive multiple-choice scale, rating scale

Procedia PDF Downloads 135
4182 Deep Learning Framework for Predicting Bus Travel Times with Multiple Bus Routes: A Single-Step Multi-Station Forecasting Approach

Authors: Muhammad Ahnaf Zahin, Yaw Adu-Gyamfi

Abstract:

Bus transit is a crucial component of transportation networks, especially in urban areas. Any intelligent transportation system must have accurate real-time information on bus travel times since it minimizes waiting times for passengers at different stations along a route, improves service reliability, and significantly optimizes travel patterns. Bus agencies must enhance the quality of their information service to serve their passengers better and draw in more travelers, since people waiting at bus stops are frequently anxious about when the bus will arrive at their starting point and when it will reach their destination. To address this issue, different models have been developed recently for predicting bus travel times, but most of them are focused on smaller road networks due to their relatively subpar performance on vast networks in high-density urban areas. This paper develops a deep learning-based architecture using a single-step multi-station forecasting approach to predict average bus travel times for numerous routes, stops, and trips on a large-scale network using heterogeneous bus transit data collected from the GTFS database. Over one week, data were gathered from multiple bus routes in Saint Louis, Missouri. In this study, a Gated Recurrent Unit (GRU) neural network was used to predict the mean vehicle travel times for different hours of the day for multiple stations along multiple routes. The number of historical time steps and the prediction horizon were set to 5 and 1, respectively, which means that five hours of historical average travel time data were used to predict the average travel time for the following hour. The spatial and temporal information and the historical average travel times were extracted from the dataset as model input parameters. As adjacency matrices for the spatial input parameters, the station distances and sequence numbers were used, and the time of day (hour) was considered for the temporal inputs. Other inputs, including volatility information such as the standard deviation and variance of journey durations, were also included in the model to make it more robust. The model's performance was evaluated based on a metric called mean absolute percentage error (MAPE). The observed prediction errors for various routes, trips, and stations remained consistent throughout the day. The results showed that the developed model could predict travel times more accurately during peak traffic hours, with a MAPE of around 14%, and performed less accurately during the latter part of the day. In the context of a complicated transportation network in high-density urban areas, the model showed its applicability for real-time travel time prediction of public transportation and ensured high-quality predictions.
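
A minimal sketch of the single-step GRU forecaster described above is given below, using PyTorch with 5 historical time steps and a one-hour horizon; the number of input features, the hidden size and the synthetic data are illustrative assumptions, not the authors' full architecture or dataset.

```python
import torch
import torch.nn as nn

class TravelTimeGRU(nn.Module):
    """Single-step forecaster: 5 historical hours of features per station -> next hour's
    average travel time. Feature count and hidden size are illustrative assumptions."""
    def __init__(self, n_features=6, hidden_size=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):               # x: (batch, 5 time steps, n_features)
        _, h = self.gru(x)              # h: (1, batch, hidden_size)
        return self.head(h[-1]).squeeze(-1)

def mape(y_true, y_pred):
    """Mean absolute percentage error, the evaluation metric used in the study."""
    return 100.0 * torch.mean(torch.abs((y_true - y_pred) / y_true))

model = TravelTimeGRU()
x = torch.randn(32, 5, 6)               # batch of 32 synthetic station/route samples
y = torch.rand(32) * 30 + 5             # synthetic travel times (minutes)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
print(float(mape(y, model(x).detach())))
```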

Keywords: gated recurrent unit, mean absolute percentage error, single-step forecasting, travel time prediction.

Procedia PDF Downloads 68
4181 The Role of Electronic Banking Technology in the Modernization of Algerian Banking System

Authors: Azzi Mohammed Amin

Abstract:

In the last decade, Algeria has undertaken a wide range of economic reforms covering different areas, among them reforms of the banking system. This was done mainly through the implementation of regulations that facilitate the shift to a market economy and guarantee integration into the global economy. Perhaps the most important new idea that has emerged in this area is the possibility of integrating so-called e-banking. Based on the above, this study tries to highlight the significant role of electronic banking services, as a novel trend, in the modernization and development of Algerian banks.

Keywords: banking technology, Internet banks, modernization of banks, virtual banks

Procedia PDF Downloads 434
4180 Advancing Aviation: A Multidisciplinary Approach to Innovation, Management, and Technology Integration in the 21st Century

Authors: Fatih Frank Alparslan

Abstract:

The aviation industry is at a crucial turning point due to modern technologies, environmental concerns, and changing ways of transporting people and goods globally. The paper examines these challenges and opportunities comprehensively. It emphasizes the role of innovative management and advanced technology in shaping the future of air travel. This study begins with an overview of the current state of the aviation industry, identifying key areas where innovation and technology could be highly beneficial. It explores the latest advancements in airplane design, propulsion, and materials. These technological advancements are shown to enhance aircraft performance and environmental sustainability. The paper also discusses the use of artificial intelligence and machine learning in improving air traffic control, enhancing safety, and making flight operations more efficient. The management of these technologies is critically important. Therefore, the research delves into necessary changes in organization, culture, and operations to support innovation. It proposes a management approach that aligns with these modern technologies, underlining the importance of forward-thinking leaders who collaborate across disciplines and embrace innovative ideas. The paper addresses challenges in adopting these innovations, such as regulatory barriers, the need for industry-wide standards, and the impact of technological changes on jobs and society. It recommends that governments, aviation businesses, and educational institutions collaborate to address these challenges effectively, paving the way for a more innovative and eco-friendly aviation industry. In conclusion, the paper argues that the future of aviation relies on integrating new management practices with innovative technologies. It urges a collective effort to push beyond current capabilities, envisioning an aviation industry that is safer, more efficient, and environmentally responsible. By adopting a broad approach, this research contributes to the ongoing discussion about resolving the complex issues facing today's aviation sector, offering insights and guidance to prepare for future advancements.

Keywords: aviation innovation, technology integration, environmental sustainability, management strategies, multidisciplinary approach

Procedia PDF Downloads 46
4179 Microgrid Design Under Optimal Control With Batch Reinforcement Learning

Authors: Valentin Père, Mathieu Milhé, Fabien Baillon, Jean-Louis Dirion

Abstract:

Microgrids offer potential solutions to meet the need for local grid stability and to increase the autonomy of isolated networks through the integration of intermittent renewable energy production and storage facilities. In such a context, sizing production and storage for a given network is a complex task, highly dependent on input data such as the power load profile and renewable resource availability. This work aims at developing an operating cost computation methodology for different microgrid designs based on the use of deep reinforcement learning (RL) algorithms to tackle the optimal operation problem in stochastic environments. RL is a data-based sequential decision control method based on Markov decision processes that enables the consideration of random variables for control at a chosen time scale. Agents trained via RL constitute a promising class of Energy Management Systems (EMS) for the operation of microgrids with energy storage. Microgrid sizing (or design) is generally performed by minimizing investment costs and the operational costs arising from the EMS behavior. The latter might include economic aspects (power purchase, facility aging), social aspects (load curtailment), and ecological aspects (carbon emissions). Sizing variables are related to major constraints on the optimal operation of the network by the EMS. In this work, an islanded-mode microgrid is considered. Renewable generation is provided by photovoltaic panels; an electrochemical battery ensures short-term electricity storage. The controllable unit is a hydrogen tank that is used as a long-term storage unit. The proposed approach focuses on the transfer of agent learning for the near-optimal operating cost approximation with deep RL for each microgrid size. Like most data-based algorithms, the training step in RL requires considerable computation time. The objective of this work is thus to study the potential of Batch-Constrained Q-learning (BCQ) for the optimal sizing of microgrids and especially to reduce the computation time of operating cost estimation for several microgrid configurations. BCQ is an offline RL algorithm that is known to be data efficient and can learn better policies than online RL algorithms on the same buffer. The general idea is to use the learned policies of agents trained in similar environments to constitute a buffer. The latter is used to train BCQ, and thus agent learning can be performed without updates during interaction sampling. A comparison between online RL and the presented method is performed based on the score per environment and on the computation time.
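
To illustrate the batch-constrained idea in its simplest form, the sketch below implements a tabular variant of BCQ in which the Bellman maximization is restricted to actions that are sufficiently represented in the fixed batch; the empirical action-frequency threshold stands in for the generative model of the full algorithm, and this toy illustration is not the authors' deep-RL implementation for microgrid EMS.

```python
import numpy as np

def batch_constrained_q(transitions, n_states, n_actions,
                        gamma=0.99, alpha=0.1, tau=0.3, sweeps=200):
    """Tabular sketch of Batch-Constrained Q-learning: the max in the Bellman backup only
    considers actions whose relative frequency in the batch exceeds a threshold, so the
    agent never bootstraps from actions unsupported by the offline data."""
    Q = np.zeros((n_states, n_actions))
    counts = np.zeros((n_states, n_actions))
    for s, a, r, s2 in transitions:
        counts[s, a] += 1
    freq = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    allowed = freq >= tau * freq.max(axis=1, keepdims=True)   # batch-supported actions
    for _ in range(sweeps):
        for s, a, r, s2 in transitions:            # offline: only replay the fixed batch
            q_next = np.where(allowed[s2], Q[s2], -np.inf)
            target = r + gamma * (q_next.max() if np.isfinite(q_next).any() else 0.0)
            Q[s, a] += alpha * (target - Q[s, a])
    return Q
```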

Keywords: batch-constrained reinforcement learning, control, design, optimal

Procedia PDF Downloads 118
4178 'Innovations among People' in Selected Social Economy Enterprises in Poland

Authors: Hanna Kroczak

Abstract:

In Poland, the system of social and professional reintegration of people at risk of social exclusion is, in fact, based on the activity of social economy enterprises. Playing this significant role, these entities have to cope with various problems related to the necessity of succeeding on the open market, their location in peripheral (especially rural) areas, or the “socialist heritage” in social and economic relations, which is certainly not favorable to implementing the idea of activation policy. One of the main objectives of the project entitled “Innovation among people. The analysis of the innovations creation and implementation in companies and social economy enterprises operating in Poland” was to investigate the innovativeness of Polish social economy entities as a possible way for them to be prosperous (the project was funded by a Polish National Science Centre grant under decision DEC-2013/11/B/HS4/00691). The ethnographic research in this matter was conducted in 2015 in two parts: six three-day studies using participant observation and individual in-depth interview (IDI) techniques (in three social cooperatives and three social integration centres) and two one-month shadowings (in one social cooperative and one social integration centre). Enterprises were selected from various provinces in Poland on the basis of data from previous computer-assisted telephone interviewing (CATI) research, in which they declared that innovation management is a central element of their strategy. The ethnographic study revealed that they do indeed create innovations, mainly social and organisational ones – although not always, and not all employees are aware of it. Moreover, it turned out that wherever the research was conducted, researchers found similar enablers of the innovation creation process, such as a “charismatic leader”, true passion and commitment not dependent on the money earned, or the building of local institutional networks, as well as similar threats, e.g. under-staffed offices or the heavy bureaucracy of some institutions. The primary conclusion for the studied entities is that being innovative is not only a challenge and, at the same time, an opportunity for well-being, but even a necessity, something deeply rooted in their specific organisational structures. Explanations and illustrations of the statements above will be presented in the proposed paper.

Keywords: ethnographic research, innovation, Polish social economy, professional reintegration, social economy enterprises, social reintegration

Procedia PDF Downloads 205
4177 Multi-omics Integrative Analysis with Genome-Scale Metabolic Model Simulation Reveals Reaction Essentiality data in Human Astrocytes Under the Lipotoxic Effect of Palmitic Acid

Authors: Janneth Gonzalez, Andres Pinzon Velasco, Maria Angarita, Nicolas Mendoza

Abstract:

Astrocytes play an important role in various processes in the brain, including pathological conditions such as neurodegenerative diseases. Recent studies have shown that the increase in saturated fatty acids such as palmitic acid (PA) triggers pro-inflammatory pathways in the brain. The use of synthetic neurosteroids such as tibolone has demonstrated neuro-protective mechanisms. However, there are few studies on the neuro-protective mechanisms of tibolone, especially at the systemic (omic) level. In this study, we performed the integration of multi-omic data (transcriptome and proteome) into a human astrocyte genome-scale metabolic model to study the astrocytic response during palmitate treatment. We evaluated metabolic fluxes in three scenarios (healthy, inflammation induced by PA, and tibolone treatment under PA inflammation). We also used control theory to identify those reactions that control the astrocytic system. Our results suggest that PA generates a modulation of central and secondary metabolism, showing a change in energy source use through inhibition of the folate cycle and fatty acid β-oxidation and upregulation of ketone body formation. We found 25 metabolic switches under PA-mediated cellular regulation, 9 of which were critical only in the inflammatory scenario but not in the protective tibolone one. Within these reactions, inhibitory, total, and directional coupling profiles were key findings, playing a fundamental role in the (de)regulation of metabolic pathways that increase neurotoxicity and representing potential treatment targets. Finally, this study's framework facilitates the understanding of metabolic regulation strategies, and it can be used for in silico exploration of the mechanisms of astrocytic cell regulation, guiding more complex future experimental work on neurodegenerative diseases.
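
The flux simulation and reaction essentiality screen described above could be sketched with COBRApy as below; the model file name, the reaction identifiers and the bound-tightening rule standing in for the omics integration step are hypothetical placeholders, not the authors' actual pipeline.

```python
from cobra.io import read_sbml_model
from cobra.flux_analysis import single_reaction_deletion

# Hypothetical file name; the astrocyte genome-scale model would be supplied separately.
model = read_sbml_model("astrocyte_GEM.xml")

# Omics integration (sketch): tighten the bounds of reactions whose associated genes are
# lowly expressed under palmitic acid treatment. 'low_expression_reactions' is assumed to
# come from mapping transcriptome/proteome data onto the model's gene-reaction rules.
low_expression_reactions = ["FOLATE_CYCLE_RXN", "FAO_RXN"]     # placeholder identifiers
for rxn_id in low_expression_reactions:
    if rxn_id in {r.id for r in model.reactions}:
        rxn = model.reactions.get_by_id(rxn_id)
        rxn.lower_bound, rxn.upper_bound = -0.1, 0.1

# Flux balance analysis and reaction essentiality screen for the constrained (PA) scenario.
solution = model.optimize()
essentiality = single_reaction_deletion(model)
print(solution.objective_value)
print(essentiality.head())
```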

Keywords: astrocytes, data integration, palmitic acid, computational model, multi-omics, control theory

Procedia PDF Downloads 115
4176 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface

Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto

Abstract:

Motor imagery (MI) based brain-computer interfaces (BCI) use event-related (de)synchronization (ERS/ERD), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and noise in EEG measurements, methods based on band-pass filters defined over a specific frequency band (e.g., 8–30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variations of the filtered signal and extract features that define the imagined motion. The effectiveness of CSP depends on the subject's discriminative frequency, and approaches based on the decomposition of the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and to make them more efficient without compromising classification accuracy. The proposal is based on the representation of EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The structure of the SBCSP involves dividing the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, which are processed in parallel, each by a CSP filter and an LDA classifier. A Bayesian meta-classifier is then used to represent the LDA outputs of each sub-band as scores and organize them into a single vector, which is then used as a training vector for a global SVM classifier. Initially, the public EEG data set IIa of the BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that the resulting FFT matrix, in addition to being more compact (its dimension is 68% smaller than that of the original signal), maintains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in the computational cost relative to filtering methods based on IIR filters, confirming the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach improves the overall system classification rate significantly compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement of more than 10% and the computational cost reduction demonstrate the potential of the FFT for EEG signal filtering in MI-based BCIs implementing SBCSP. Tests with other data sets are currently being performed to reinforce these conclusions.
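
The FFT-based replacement of the IIR filtering stage can be illustrated as below: a single FFT of each EEG epoch yields a compact coefficient matrix restricted to the 0–40 Hz band, which is then sliced into sub-bands that could feed the parallel CSP+LDA pipelines. The sampling rate, epoch length and sub-band layout are illustrative assumptions, not the exact configuration used by the authors.

```python
import numpy as np

def fft_subbands(epoch, fs=250, band=(0, 40), n_subbands=33):
    """Decompose one EEG epoch (channels x samples) into frequency bins with a single FFT,
    returning a compact (channels x retained bins) coefficient matrix plus per-sub-band
    coefficient slices that can feed parallel CSP+LDA pipelines."""
    n = epoch.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    coeffs = np.fft.rfft(epoch, axis=1)
    keep = (freqs >= band[0]) & (freqs <= band[1])       # dimensionality reduction
    coeffs, freqs = coeffs[:, keep], freqs[keep]
    edges = np.linspace(band[0], band[1], n_subbands + 1)
    subbands = [coeffs[:, (freqs >= lo) & (freqs < hi)]
                for lo, hi in zip(edges[:-1], edges[1:])]
    return coeffs, subbands

epoch = np.random.randn(22, 500)                          # 22 channels, 2 s at 250 Hz
full_matrix, subband_slices = fft_subbands(epoch)
```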

Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns

Procedia PDF Downloads 127
4175 Cost-Effective Mechatronic Gaming Device for Post-Stroke Hand Rehabilitation

Authors: A. Raj Kumar, S. Bilaloglu

Abstract:

Stroke is a leading cause of adult disability worldwide. We depend on our hands for our activities of daily living(ADL). Although many patients regain the ability to walk, they continue to experience long-term hand motor impairments. As the number of individuals with young stroke is increasing, there is a critical need for effective approaches for rehabilitation of hand function post-stroke. Motor relearning for dexterity requires task-specific kinesthetic, tactile and visual feedback. However, when a stroke results in both sensory and motor impairment, it becomes difficult to ascertain when and what type of sensory substitutions can facilitate motor relearning. In an ideal situation, real-time task-specific data on the ability to learn and data-driven feedback to assist such learning will greatly assist rehabilitation for dexterity. We have found that kinesthetic and tactile information from the unaffected hand can assist patients re-learn the use of optimal fingertip forces during a grasp and lift task. Measurement of fingertip grip force (GF), load forces (LF), their corresponding rates (GFR and LFR), and other metrics can be used to gauge the impairment level and progress during learning. Currently ATI mini force-torque sensors are used in research settings to measure and compute the LF, GF, and their rates while grasping objects of different weights and textures. Use of the ATI sensor is cost prohibitive for deployment in clinical or at-home rehabilitation. A cost effective mechatronic device is developed to quantify GF, LF, and their rates for stroke rehabilitation purposes using off-the-shelf components such as load cells, flexi-force sensors, and an Arduino UNO microcontroller. A salient feature of the device is its integration with an interactive gaming environment to render a highly engaging user experience. This paper elaborates the integration of kinesthetic and tactile sensing through computation of LF, GF and their corresponding rates in real time, information processing, and interactive interfacing through augmented reality for visual feedback.
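
A minimal sketch of how grip force (GF), load force (LF) and their rates (GFR, LFR) might be computed from the sampled sensor signals is given below; the sampling rate and the summary metrics are illustrative assumptions, not the device's firmware or the clinical outcome measures.

```python
import numpy as np

def grasp_metrics(grip_force, load_force, fs=200.0):
    """Compute grip-force and load-force rates from sampled fingertip forces and return a
    few summary values; the sampling rate and chosen summaries are illustrative."""
    t = np.arange(len(grip_force)) / fs
    gfr = np.gradient(grip_force, t)        # dGF/dt
    lfr = np.gradient(load_force, t)        # dLF/dt
    return {
        "peak_GF": float(np.max(grip_force)),
        "peak_GFR": float(np.max(gfr)),
        "peak_LFR": float(np.max(lfr)),
        "grip_to_load_ratio": float(np.max(grip_force) / max(np.max(load_force), 1e-9)),
    }
```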

Keywords: feedback, gaming, kinesthetic, rehabilitation, tactile

Procedia PDF Downloads 238
4174 An Inquiry on Imaging of Soft Tissues in Micro-Computed Tomography

Authors: Matej Patzelt, Jana Mrzilkova, Jan Dudak, Frantisek Krejci, Jan Zemlicka, Zdenek Wurst, Petr Zach, Vladimir Musil

Abstract:

Introduction: Micro-CT is widely used for the examination of bone structures and teeth. On the other hand, visualization of soft tissues is still limited. The goal of our study was to develop a methodology for imaging soft tissue samples in micro-CT. Methodology: We used organs of rats and mice. We either prepared the organs and fixed them in contrast solution, or cannulated and injected the blood vessels for imaging of the vascular system. First, we scanned native specimens; then we created corrosive specimens using resins. In the next step, we injected the vascular system with either the AuroVist or the Exitron contrast agent. In the next step, we focused on increasing soft tissue contrast. We scanned samples fixed in Lugol solution, in pure ethanol and in formaldehyde solution. All methods used were afterwards compared. Results: Native specimens did not provide sufficient tissue contrast in any of the organs. Corrosive samples of the bloodstream provided great contrast and detail; on the other hand, it was necessary to destroy the organ. Another possibility examined was injection of the AuroVist contrast, which leads to great bloodstream contrast. Injection of the Exitron contrast agent did not provide as great a contrast as AuroVist. The soft tissues (kidney, heart, lungs, brain, and liver) were best visualized after fixation in ethanol. This type of fixation showed the best results in all studied tissues. Lugol solution gave great results in muscle tissue. Fixation in formaldehyde solution showed a contrast quality in the tissues similar to that of ethanol. Conclusion: Before imaging, we first need to determine which structures of the soft tissues we want to visualize. In the case of the bloodstream, AuroVist and corrosive specimens were best. Muscle tissue is best visualized with Lugol solution. In the case of organs containing cavities, such as kidneys or the brain, ethanol fixation was the best approach.

Keywords: experimental imaging, fixation, micro-CT, soft tissues

Procedia PDF Downloads 320
4173 Clinch Process Simulation Using Diffuse Elements

Authors: Benzegaou Ali, Brani Benabderrahmane

Abstract:

This work describes a numerical study of the TOX clinching process using diffuse elements. A computer code named SEMA (Static Explicit Method Analysis) was developed to simulate the clinch joining process. The FE code is based on an updated Lagrangian scheme, and the solution method is based on an explicit static approach. The integration of the elasto-plastic behavior law is carried out with the algorithm of Simo and Taylor. The tools are represented by plane facets.
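
For concreteness, the sketch below shows a radial-return stress update of the kind associated with the Simo and Taylor algorithm mentioned above, written here for small-strain J2 plasticity with linear isotropic hardening. The material parameters and the small-strain setting are illustrative assumptions and do not reproduce the SEMA implementation.

```python
import numpy as np

# Illustrative radial-return (return-mapping) update for small-strain J2
# plasticity with linear isotropic hardening, in the spirit of the algorithm
# of Simo and Taylor cited above. Parameters are placeholders for illustration.

E, nu = 210e3, 0.3                 # Young's modulus [MPa], Poisson ratio
mu = E / (2 * (1 + nu))            # shear modulus
kappa = E / (3 * (1 - 2 * nu))     # bulk modulus
sigma_y0, H = 250.0, 1000.0        # initial yield stress, hardening modulus [MPa]

def radial_return(eps, eps_p, alpha):
    """One stress update: total strain eps, plastic strain eps_p, hardening alpha."""
    I = np.eye(3)
    eps_e = eps - eps_p                          # trial elastic strain
    eps_e_dev = eps_e - np.trace(eps_e) / 3 * I
    s_trial = 2 * mu * eps_e_dev                 # deviatoric trial stress
    p = kappa * np.trace(eps_e)                  # volumetric response stays elastic

    norm_s = np.sqrt(np.tensordot(s_trial, s_trial))
    f_trial = norm_s - np.sqrt(2.0 / 3.0) * (sigma_y0 + H * alpha)

    if f_trial <= 0.0:                           # elastic step
        return s_trial + p * I, eps_p, alpha

    # Plastic step: return to the yield surface along the radial direction.
    dgamma = f_trial / (2 * mu + 2.0 / 3.0 * H)
    n = s_trial / norm_s
    s = s_trial - 2 * mu * dgamma * n
    eps_p_new = eps_p + dgamma * n
    alpha_new = alpha + np.sqrt(2.0 / 3.0) * dgamma
    return s + p * I, eps_p_new, alpha_new

# Example: a uniaxial strain increment beyond the elastic limit.
eps = np.diag([0.004, 0.0, 0.0])
sigma, eps_p, alpha = radial_return(eps, np.zeros((3, 3)), 0.0)
print("sigma_xx = %.1f MPa, accumulated plastic strain = %.5f" % (sigma[0, 0], alpha))
```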

Keywords: diffuse elements, numerical simulation, clinching, contact, large deformation

Procedia PDF Downloads 357
4172 Density Determination of Liquid Niobium by Means of Ohmic Pulse-Heating for Critical Point Estimation

Authors: Matthias Leitner, Gernot Pottlacher

Abstract:

Experimental determination of critical point data like critical temperature, critical pressure, critical volume and critical compressibility of high-melting metals such as niobium is very rare due to the outstanding experimental difficulties in reaching the necessary extreme temperature and pressure regimes. Experimental techniques to achieve such extreme conditions could be diamond anvil devices, two-stage gas guns or metal samples hit by explosively accelerated flyers. Electrical pulse-heating under increased pressures would be another choice. This technique heats thin wire samples of 0.5 mm diameter and 40 mm length from room temperature to melting and then further to the end of the stable phase, the spinodal line, within several microseconds. When crossing the spinodal line, the sample explodes and reaches the gaseous phase. In our laboratory, pulse-heating experiments can be performed under variation of the ambient pressure from 1 to 5000 bar and allow a direct determination of critical point data for low-melting, but not for high-melting metals. However, the critical point can also be estimated by extrapolating the liquid phase density according to theoretical models. A reasonable prerequisite for the extrapolation is the existence of data that cover as much as possible of the liquid phase and at the same time exhibit small uncertainties. Ohmic pulse-heating was therefore applied to determine thermal volume expansion, and from that the density of niobium over the entire liquid phase. As a first step, experiments under ambient pressure were performed. The second step will be to perform experiments under high-pressure conditions. During the heating process, shadow images of the expanding sample wire were captured at a frame rate of 4 × 10⁵ fps to monitor the radial expansion as a function of time. Simultaneously, the sample radiance was measured with a pyrometer operating at a mean effective wavelength of 652 nm. To increase the accuracy of temperature deduction, spectral emittance in the liquid phase is also taken into account. Due to the high heating rates of about 2 × 10⁸ K/s, longitudinal expansion of the wire is inhibited, which implies an increased radial expansion. As a consequence, measuring the temperature-dependent radial expansion is sufficient to deduce density as a function of temperature. This is accomplished by evaluating the full widths at half maximum of the cup-shaped intensity profiles that are calculated from each shadow image of the expanding wire. Relating these diameters to the diameter obtained before the pulse-heating start, the temperature-dependent volume expansion is calculated. With the help of the known room-temperature density, volume expansion is then converted into density data. The so-obtained liquid density behavior is compared to existing literature data and provides another independent source of experimental data. In this work, the newly determined off-critical liquid phase density was, in a second step, utilized as input data for the estimation of niobium’s critical point. The approach used heuristically takes into account the crossover from mean-field to Ising behavior, as well as the non-linearity of the phase diagram’s diameter.
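
Since longitudinal expansion is inhibited, density follows directly from the squared ratio of the initial to the measured wire diameter, ρ(T) = ρ₀ · (d₀/d(T))². The sketch below illustrates this conversion; the diameters are placeholder values, not measured data.

```python
import numpy as np

# Minimal sketch of the density evaluation described above: with longitudinal
# expansion inhibited, the volume scales with the square of the diameter, so
# rho(T) = rho_0 * (d_0 / d(T))**2. The diameters below are placeholder values.

RHO_0 = 8570.0          # room-temperature density of niobium [kg/m^3]
D_0 = 0.5e-3            # wire diameter before pulse-heating onset [m]

def density_from_diameter(d):
    """Density as a function of the FWHM-derived wire diameter d(T)."""
    return RHO_0 * (D_0 / np.asarray(d)) ** 2

# Example: hypothetical diameters extracted from shadow-image FWHM evaluation.
d_measured = np.array([0.50e-3, 0.53e-3, 0.57e-3, 0.62e-3])
print(density_from_diameter(d_measured))   # [kg/m^3], decreasing as the wire expands
```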

Keywords: critical point data, density, liquid metals, niobium, ohmic pulse-heating, volume expansion

Procedia PDF Downloads 216
4171 Shifting Paradigms for Micro, Small, and Medium Enterprises in the Global Construction Market: The Crucial Roles of Technology and Sustainability

Authors: Sohrab Donyavi

Abstract:

The global construction market is experiencing significant shifts, particularly for micro, small, and medium enterprises (MSMEs), driven by the dual imperatives of technological advancement and sustainability. MSMEs play a crucial role in the construction industry, often being the backbone of economic development and fostering entrepreneurial skills. However, their dominance has also led to industry fragmentation and challenges such as technological lag and declining profit margins, which threaten their global competitiveness. This paper explores the integration of technology and sustainability in reshaping the paradigms for MSMEs in the construction sector. The adoption of advanced technologies, such as building information modeling (BIM) and AI, is pivotal for promoting sustainable construction practices. These tools enable MSMEs to design and construct environmentally responsible buildings, thereby contributing to the industry's sustainability goals. The research highlights that achieving sustainability in construction involves significant efforts in conservation, recycling, and the development of new materials and technologies. This approach aligns with the broader goal of integrating economic, environmental, and social aims into firm objectives to create long-term value while ensuring the protection of natural resources for future generations. Critical factors for implementing sustainability-oriented innovation (SOI) practices in MSMEs include top management support, government initiatives, and financial resources. These factors are essential for fostering an environment conducive to innovation and sustainability. Furthermore, the empowerment of MSMEs through improved governance, market-oriented programs, sustainable productivity growth, and access to financing is vital. In developing regions like Indonesia, these strategies are crucial for enabling MSMEs to thrive in the face of globalization. The tendency of large firms to grow larger with the help of technology and globalization has led to the emergence of a high-technology oligopoly, posing a significant challenge to traditional construction practices. This shift necessitates that MSMEs adapt by leveraging technology and embracing sustainable practices to remain competitive. The research underscores the importance of integrating technology and sustainability not only as a competitive strategy but also as a means to contribute to the global effort of environmental conservation and sustainable development. This paper concludes that the successful integration of technology and sustainability in MSMEs requires a multifaceted approach. It involves the adoption of advanced technological tools, strong support from top management, proactive government policies, and access to financial resources. By addressing these factors, MSMEs can overcome the challenges of industry fragmentation, technological lag, and declining profit margins. Ultimately, this integration will enable MSMEs to play a pivotal role in driving the construction industry towards a more sustainable and technologically advanced future. The findings and recommendations are based on a comprehensive case study utilizing semi-structured interviews, observations, questionnaires, and document reviews.

Keywords: MSMEs, construction, technology, sustainability, innovation

Procedia PDF Downloads 30
4170 Advancements in Mathematical Modeling and Optimization for Control, Signal Processing, and Energy Systems

Authors: Zahid Ullah, Atlas Khan

Abstract:

This abstract focuses on the advancements in mathematical modeling and optimization techniques that play a crucial role in enhancing the efficiency, reliability, and performance of control, signal processing, and energy systems. In this era of rapidly evolving technology, mathematical modeling and optimization offer powerful tools to tackle the complex challenges faced by control, signal processing, and energy systems. This abstract presents the latest research and developments in mathematical methodologies, encompassing areas such as control theory, system identification, signal processing algorithms, and energy optimization. The abstract highlights the interdisciplinary nature of mathematical modeling and optimization, showcasing their applications in a wide range of domains, including power systems, communication networks, industrial automation, and renewable energy. It explores key mathematical techniques, such as linear and nonlinear programming, convex optimization, stochastic modeling, and numerical algorithms, that enable the design, analysis, and optimization of complex control and signal processing systems. Furthermore, the abstract emphasizes the importance of addressing real-world challenges in control, signal processing, and energy systems through innovative mathematical approaches. It discusses the integration of mathematical models with data-driven approaches, machine learning, and artificial intelligence to enhance system performance, adaptability, and decision-making capabilities. The abstract also underscores the significance of bridging the gap between theoretical advancements and practical applications. It recognizes the need for practical implementation of mathematical models and optimization algorithms in real-world systems, considering factors such as scalability, computational efficiency, and robustness. In summary, this abstract showcases the advancements in mathematical modeling and optimization techniques for control, signal processing, and energy systems. It highlights the interdisciplinary nature of these techniques, their applications across various domains, and their potential to address real-world challenges. The abstract emphasizes the importance of practical implementation and integration with emerging technologies to drive innovation and improve the performance of control, signal processing, and energy systems.

Keywords: mathematical modeling, optimization, control systems, signal processing, energy systems, interdisciplinary applications, system identification, numerical algorithms

Procedia PDF Downloads 109