Search results for: regression models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9040

1960 Development and Evaluation of Naringenin Nanosuspension to Improve Antioxidant Potential

Authors: Md. Shadab, Mariyam N. Nashid, Venkata Srikanth Meka, Thiagarajan Madheswaran

Abstract:

Naringenin (NAR) is a naturally occurring plant flavonoid, found predominantly in citrus fruits, that possesses a wide range of pharmacological properties including antioxidant, anti-inflammatory, cholesterol-lowering, and anticarcinogenic activities. However, despite the therapeutic potential of naringenin shown in a number of animal models, its clinical development has been hindered by its low aqueous solubility, slow dissolution rate, and inefficient transport across biological membranes, resulting in low bioavailability. Naringenin nanosuspensions were produced with Tween® 80 as a stabilizer by high-pressure homogenization. The nanosuspensions were characterized with regard to size (photon correlation spectroscopy, PCS), size distribution, charge (zeta potential measurements), morphology, short-term physical stability, dissolution profile, and antioxidant potential. A nanocrystal PCS size of about 500 nm was obtained after 20 homogenization cycles at 1500 bar. The short-term stability was assessed by storage of the nanosuspensions at 4 °C, room temperature, and 40 °C. Results showed that the naringenin nanosuspension was physically unstable, with large fluctuations in particle size and zeta potential after 30 days. The naringenin nanosuspension demonstrated higher drug dissolution (97.90%) than naringenin powder (62.76%) after 120 minutes of testing. The nanosuspension also showed increased antioxidant activity compared to naringenin powder, with DPPH radical scavenging activities of 49.17% and 31.45%, respectively, at the lowest DPPH concentration.
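The abstract does not state the scavenging calculation; the reported percentages are consistent with the standard DPPH formula, sketched below in Python. The absorbance values are hypothetical, chosen only so that the output reproduces the reported figures.

```python
# Standard DPPH radical scavenging formula; absorbances are hypothetical.
def dpph_scavenging(a_control: float, a_sample: float) -> float:
    """Percent DPPH radical scavenging activity."""
    return (a_control - a_sample) / a_control * 100.0

a_control = 0.820                          # absorbance of the DPPH blank at 517 nm (assumed)
print(dpph_scavenging(a_control, 0.417))   # nanosuspension -> ~49.2 %
print(dpph_scavenging(a_control, 0.562))   # raw powder     -> ~31.5 %
```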

Keywords: bioavailability, naringenin, nanosuspension, oral delivery

Procedia PDF Downloads 319
1959 Evaluation of the Weight-Based and Fat-Based Indices in Relation to Basal Metabolic Rate-to-Weight Ratio

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Basal metabolic rate is questioned as a risk factor for weight gain. The relations between basal metabolic rate and body composition have not yet been clarified, and the impact of fat mass on basal metabolic rate is also uncertain. Within this context, indices based upon total body mass as well as total body fat mass are available. In this study, the aim is to investigate the potential clinical utility of these indices in the adult population. A total of 287 individuals, aged 18 to 79 years, were included in the study. Based upon body mass index values, 10 underweight, 88 normal, 88 overweight, 81 obese, and 20 morbidly obese individuals participated. Anthropometric measurements including height (m) and weight (kg) were performed. Body mass index, diagnostic obesity notation model assessment index I, diagnostic obesity notation model assessment index II, and the basal metabolic rate-to-weight ratio were calculated. Total body fat mass (kg), fat percent (%), basal metabolic rate, metabolic age, visceral adiposity, fat mass of the upper and lower extremities and trunk, and obesity degree were measured with a TANITA body composition monitor using bioelectrical impedance analysis technology. Statistical evaluations were performed with SPSS for Windows, Version 16.0. Scatterplots of individual measurements were drawn for the correlated parameters, and linear regression lines were displayed. The statistical significance level was accepted as p < 0.05. Strong correlations were obtained between body mass index and diagnostic obesity notation model assessment indices I and II (p < 0.001). A much stronger correlation was detected between basal metabolic rate and diagnostic obesity notation model assessment index I than between basal metabolic rate and body mass index (p < 0.001). Upon consideration of the associations between the basal metabolic rate-to-weight ratio and these three indices, the best association was observed with diagnostic obesity notation model assessment index II. In a similar manner, this index was highly correlated with fat percent (p < 0.001). Independently of the indices, a strong correlation was found between fat percent and the basal metabolic rate-to-weight ratio (p < 0.001). Visceral adiposity was much more strongly correlated with metabolic age than with chronological age (p < 0.001). In conclusion, all three indices were associated with metabolic age, but not with chronological age. Diagnostic obesity notation model assessment index II values were highly correlated with body mass index values across all ranges, from underweight to morbid obesity. This index is the best in terms of its association with the basal metabolic rate-to-weight ratio, which can be interpreted as basal metabolic rate per unit of weight.
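A minimal sketch of the reported statistical workflow (the study itself used SPSS 16.0): Pearson correlations and a fitted regression line, here in Python with a hypothetical data file and column names.

```python
import pandas as pd
from scipy.stats import pearsonr, linregress

# Hypothetical export of the 287 records; file and column names are assumptions.
df = pd.read_csv("body_composition.csv")

r, p = pearsonr(df["bmr_to_weight"], df["fat_percent"])
print(f"BMR/weight vs fat %: r = {r:.3f}, p = {p:.4g}")    # reported as p < 0.001

fit = linregress(df["fat_percent"], df["bmr_to_weight"])   # regression line for the scatterplot
print(f"y = {fit.slope:.4f} x + {fit.intercept:.4f}")
```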

Keywords: basal metabolic rate, body mass index, children, diagnostic obesity notation model assessment index, obesity

Procedia PDF Downloads 139
1958 Modeling the Human Harbor: An Equity Project in New York City, New York USA

Authors: Lauren B. Birney

Abstract:

The envisioned long-term outcome of this three-year research and implementation plan is for 1) teachers and students to design and build their own computational models of real-world environmental-human health phenomena occurring within the context of the “Human Harbor” and 2) project researchers to evaluate the degree to which these integrated Computer Science (CS) education experiences in New York City (NYC) public school classrooms (PreK-12) impact students’ computational-technical skill development, job readiness, career motivations, and measurable abilities to understand, articulate, and solve the underlying phenomena at the center of their models. This effort builds on the partnership’s successes over the past eight years in developing a benchmark model of restoration-based Science, Technology, Engineering, and Math (STEM) education for urban public schools and achieving relatively broad-based implementation in the nation’s largest public school system. The Billion Oyster Project Curriculum and Community Enterprise for Restoration Science (BOP-CCERS STEM + Computing) curriculum, teacher professional development, and community engagement programs have reached more than 200 educators and 11,000 students at 124 schools, with 84 waterfront locations and Out-of-School Time (OST) programs. The BOP-CCERS Partnership is poised to develop a more refined focus on integrating computer science across the STEM domains; teaching industry-aligned computational methods and tools; and explicitly preparing students from the city’s most under-resourced and underrepresented communities for upwardly mobile careers in NYC’s ever-expanding “digital economy,” in which jobs require computational thinking and an increasing percentage require discrete computer science technical skills. Project objectives include the following: 1. Computational Thinking (CT) Integration: integrate computational thinking core practices across the existing middle/high school BOP-CCERS STEM curriculum as a means of scaffolding toward long-term computer science and computational modeling outcomes. 2. Data Science and Data Analytics: enable researchers to perform interviews with teachers, students, community members, partners, stakeholders, and STEM industry professionals; collaborative analysis and data collection were also performed. As a centerpiece, the BOP-CCERS partnership will expand to include a dedicated computer science education partner: the New York City Department of Education (NYCDOE) Computer Science for All (CS4ALL) NYC will serve as the dedicated Computer Science (CS) lead, advising the consortium on integration and curriculum development and working in tandem with it. The BOP-CCERS Model™ also validates that, with the appropriate application of technical infrastructure, intensive teacher professional development, and curricular scaffolding, socially connected science learning can be mainstreamed in the nation’s largest urban public school system. This is evidenced and substantiated in the initial phases of BOP-CCERS™. The BOP-CCERS™ student curriculum and teacher professional development have been implemented in approximately 24% of NYC public middle schools, reaching more than 250 educators and 11,000 students directly. BOP-CCERS™ is a fully scalable and transferable educational model, adaptable to all American school districts.
In all settings of the proposed Phase IV initiative, the primary beneficiary group will be NYC public school students who live in high-poverty neighborhoods and are traditionally underrepresented in the STEM fields, including African Americans, Latinos, English language learners, and children from economically disadvantaged households. In particular, BOP-CCERS Phase IV will explicitly prepare underrepresented students for skilled positions within New York City’s expanding digital economy, computer science, computational information systems, and innovative technology sectors.

Keywords: computer science, data science, equity, diversity and inclusion, STEM education

Procedia PDF Downloads 44
1957 Optimization of Bifurcation Performance on Pneumatic Branched Networks in Next Generation Soft Robots

Authors: Van-Thanh Ho, Hyoungsoon Lee, Jaiyoung Ryu

Abstract:

Efficient pressure distribution within soft robotic systems, specifically to the pneumatic artificial muscle (PAM) regions, is essential to minimize energy consumption. This optimization involves adjusting reservoir pressure, pipe diameter, and branching network layout to reduce flow speed and pressure drop while enhancing flow efficiency. The outcome of this optimization is a lightweight power source and reduced mechanical impedance, enabling extended wear and movement. To achieve this, a branching network system was created by combining pipe components and intricate cross-sectional area variations, employing the principle of minimal work based on a complete virtual human exosuit. The results indicate that modifying the cross-sectional area of the branching network, gradually decreasing it, reduces velocity and enhances momentum compensation, preventing flow disturbances at separation regions. These optimized designs achieve uniform velocity distribution (uniformity index > 94%) prior to entering the connection pipe, with a pressure drop of less than 5%. The design must also consider the length-to-diameter ratio for fluid dynamic performance and production cost. This approach can be utilized to create a comprehensive PAM system, integrating well-designed tube networks and complex pneumatic models.
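The minimal-work principle invoked above is commonly expressed through Murray's law, under which the cube of the parent radius equals the sum of the cubes of the daughter radii at a bifurcation. The sketch below is a Python illustration with an assumed parent diameter, not a value from the study.

```python
# Murray's law sizing for a symmetric bifurcation (illustrative values).
def daughter_diameter(parent_d: float, n_daughters: int = 2) -> float:
    """Daughter diameter for n equal branches under Murray's law."""
    return parent_d / n_daughters ** (1.0 / 3.0)

d_parent = 10.0                          # mm, assumed supply-pipe diameter
d_child = daughter_diameter(d_parent)    # ~7.94 mm for a two-way split
print(f"daughter diameter: {d_child:.2f} mm")

# Total cross-sectional area grows by n^(1/3) per generation, so the mean
# velocity drops for a fixed volumetric flow rate.
area_ratio = 2 * d_child**2 / d_parent**2
print(f"area ratio (children/parent): {area_ratio:.3f}")   # ~1.26
```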

Keywords: pneumatic artificial muscles, pipe networks, pressure drop, compressible turbulent flow, uniformity flow, Murray's law

Procedia PDF Downloads 58
1956 Scalable UI Test Automation for Large-Scale Web Applications

Authors: Kuniaki Kudo, Raviraj Solanki, Kaushal Patel, Yash Virani

Abstract:

This research concerns optimizing UI test automation for large-scale web applications. The test target application is the HHAexchange homecare management web application, which seamlessly connects providers, state Medicaid programs, managed care organizations (MCOs), and caregivers through one platform with large-scale functionality. This study focuses on user interface automation testing for that web application. The quality assurance team must execute many manual user interface test cases during the development process to confirm that there are no regression bugs. The team automated 346 test cases, and the UI automation test execution time was over 17 hours. The business requirement was to reduce the execution time in order to release high-quality products quickly, and the quality assurance automation team modernized the test automation framework to optimize the execution time. The base of the web UI automation test environment is Selenium, and the test code is written in Python. Adopting a compiled language for test code leads to an inefficient flow when introducing scalability into a traditional test automation environment, so a scripting language was adopted in order to introduce scalability efficiently. The scalability implementation relies mainly on AWS serverless technology, specifically Elastic Container Service. Scalability here means the ability to automatically provision the computers that run the test automation and to increase or decrease their number, so that test cases can run in parallel and execution time decreases dramatically. Introducing scalable test automation is also about more than reducing execution time: some challenging bugs, such as race conditions, may be detected precisely because test cases can be executed at the same time. If API and unit tests are implemented, test strategies can be adopted more efficiently alongside this scalability testing. However, in web applications, API and unit testing cannot, as a practical matter, cover 100% of functional testing, since they do not reach front-end code. This study applied a scalable UI automation testing strategy to the large-scale homecare management system and confirmed both the reduction of test case execution time and the detection of a challenging bug. The paper first describes the detailed architecture of the scalable test automation environment, then reports the actual reduction in execution time and an example of challenging issue detection.
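A minimal sketch of one such test in the stack described (Selenium with Python), written so that each test owns its own browser session and can therefore be sharded across containers, for example one pytest worker per ECS task. The URL and element IDs are hypothetical, not taken from the HHAexchange application.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_page_title():
    options = webdriver.ChromeOptions()
    options.add_argument("--headless")               # needed inside a container
    driver = webdriver.Chrome(options=options)
    try:
        driver.get("https://example.com/login")      # hypothetical URL
        driver.find_element(By.ID, "username").send_keys("qa_user")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()
        assert "Dashboard" in driver.title           # hypothetical expected title
    finally:
        driver.quit()
```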

Keywords: AWS, Elastic Container Service, scalability, serverless, UI automation test

Procedia PDF Downloads 86
1955 Islamic Education System: Implementation of Curriculum Kuttab Al-Fatih Semarang

Authors: Basyir Yaman, Fades Br. Gultom

Abstract:

The picture and pattern of Islamic education in the Prophet's period in Mecca and Medina is a part of history that we need to bring back. The basic education institution of that period was called the Kuttab. Kuttab (or Maktab) comes from the word kataba, which means to write. The Kuttab, popular in the Prophet's period, aimed to resolve illiteracy in the Arab community. In Indonesia, this institution has 25 branches; one of them is located in Semarang (i.e., Kuttab Al-Fatih). Kuttab Al-Fatih, as a non-formal institution of Islamic education, is reserved for children aged 5-12 years. Its independently designed curriculum is a distinctive feature that distinguishes the Kuttab Al-Fatih curriculum from the formal institutional curriculum in Indonesia. The curriculum covers faith and the Qur'an. Kuttab Al-Fatih has been licensed as a Community Activity Learning Center under the direct supervision and guidance of the National Education Department. Here, we focus on describing the implementation of the Kuttab Al-Fatih Semarang curriculum (i.e., faith and al-Qur'an). After that, we determine the relevance of the Kuttab Al-Fatih education system to the formal education system in Indonesia. This research uses literature review and qualitative field research methods. We obtained the data from the head of Kuttab Al-Fatih Semarang, the deputy head for curriculum, the faith coordinator, the al-Qur'an coordinator, as well as the guardians of learners and the learners themselves. The result of this research is an account of the relevance of the education system in Kuttab Al-Fatih Semarang to the formal education system in Indonesia. Kuttab Al-Fatih Semarang emphasizes character building through a curriculum designed for this purpose and combines thematic learning models in modules.

Keywords: Islamic education system, implementation of curriculum, Kuttab Al-Fatih Semarang, formal education system, Indonesia

Procedia PDF Downloads 322
1954 Presenting a Model in the Analysis of Supply Chain Management Components by Using Statistical Distribution Functions

Authors: Ramin Rostamkhani, Thurasamy Ramayah

Abstract:

One of the most important topics for today's industrial organizations is the challenging issue of supply chain management. In this field, scientists and researchers have published numerous practical articles and models, especially in the last decade. This research considers the data modeling of supply chain management components using well-known statistical distribution functions, which, to the best of our knowledge, has not been published before. In an analytical process, describing different aspects of the fitted functions, including the probability density, cumulative distribution, reliability, and failure functions, can identify the suitable statistical distribution function for each component of supply chain management, which can then be applied to predict the future behavior of the relevant component. Providing a model that adapts the best statistical distribution function to each supply chain management component would be a significant advance in understanding the behavior of supply chain elements in today's industrial organizations. The final step is to demonstrate the results of the proposed model by presenting the process capability indices before and after its implementation, and to verify the approach through the relevant assessment. The introduced approach can save the time and cost required to achieve organizational goals and can increase added value in the organization.
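A minimal sketch of the core idea under assumed data: fit a candidate distribution to observations of one supply chain component (say, supplier lead times) and derive the four functions named above. The Weibull choice and the numbers are illustrative, not the paper's result.

```python
import numpy as np
from scipy import stats

lead_times = np.array([4.1, 5.3, 6.8, 5.9, 7.2, 4.7, 6.1, 5.5, 6.4, 5.0])  # days, assumed

shape, loc, scale = stats.weibull_min.fit(lead_times, floc=0)   # candidate distribution
dist = stats.weibull_min(shape, loc, scale)

t = 6.0                                  # days
pdf = dist.pdf(t)                        # probability density function
cdf = dist.cdf(t)                        # cumulative distribution function
reliability = dist.sf(t)                 # reliability (survival) function, 1 - CDF
failure_rate = pdf / reliability         # failure (hazard) function
print(pdf, cdf, reliability, failure_rate)
```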

Keywords: analyzing, process capability indices, statistical distribution functions, supply chain management components

Procedia PDF Downloads 79
1953 Co-Alignment of Comfort and Energy Saving Objectives for U.S. Office Buildings and Restaurants

Authors: Lourdes Gutierrez, Eric Williams

Abstract:

Post-occupancy research shows that only 11% of commercial buildings meet the ASHRAE thermal comfort standard. Many buildings are too warm in winter and/or too cool in summer, wasting energy and not providing comfort. In this paper, potential energy savings in U.S. offices and restaurants are estimated for thermostat settings calculated according to the updated ASHRAE 55-2013 comfort model, which accounts for outdoor temperature and clothing choice in different climate zones. eQUEST building models are calibrated to reproduce aggregate energy consumption as reported in the U.S. Commercial Building Energy Consumption Survey. Changes in energy consumption due to the new settings are analyzed for 14 cities in different climate zones, and the results are then extrapolated to estimate potential national savings. It is found that, depending on the climate zone, each degree increase in the summer setting saves 0.6 to 1.0% of total building electricity consumption, and each degree the winter setting is lowered saves 1.2% to 8.7% of total building natural gas consumption. With the new thermostat settings, national savings are 2.5% of the total energy consumed in all office buildings and restaurants, summing to 69.6 million GJ annually, comparable to the total U.S. solar PV generation in 2015. The goals of improved comfort and energy/economic savings are thus co-aligned, raising the importance of thermostat management as an energy efficiency strategy.
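To make the extrapolation concrete, the per-degree figures above can be applied to a building's baseline consumption; a minimal arithmetic sketch with hypothetical baseline numbers and the low end of each reported range:

```python
baseline_elec_kwh = 1_000_000   # annual building electricity, assumed
baseline_gas_gj = 5_000         # annual building natural gas, assumed

summer_increase = 2             # degrees the summer setting is raised
winter_decrease = 1             # degrees the winter setting is lowered

elec_saved = baseline_elec_kwh * 0.006 * summer_increase   # 0.6 %/degree, low end
gas_saved = baseline_gas_gj * 0.012 * winter_decrease      # 1.2 %/degree, low end
print(f"electricity saved: {elec_saved:,.0f} kWh; gas saved: {gas_saved:,.0f} GJ")
```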

Keywords: energy savings quantifications, commercial building stocks, dynamic clothing insulation model, operation-focused interventions, energy management, thermal comfort, thermostat settings

Procedia PDF Downloads 298
1952 Sentiment Analysis of Chinese Microblog Comments: Comparison between Support Vector Machine and Long Short-Term Memory

Authors: Xu Jiaqiao

Abstract:

Text sentiment analysis is an important branch of natural language processing, widely used in public opinion analysis and web content recommendation. At present, mainstream sentiment analysis methods fall into three categories: methods based on a sentiment dictionary, on traditional machine learning, and on deep learning. This paper analyzes and compares the advantages and disadvantages of the SVM method from traditional machine learning and the Long Short-Term Memory (LSTM) method from deep learning in the field of Chinese sentiment analysis, using Chinese comments on Sina Weibo as the dataset. Firstly, this paper classifies and labels the original comment dataset obtained by a web crawler, and then uses Jieba word segmentation to segment the original dataset and remove stop words. After that, text feature vectors are extracted and document word vectors are built to facilitate model training. Finally, the SVM and LSTM models are trained respectively. The accuracy of the LSTM model is 85.80%, while the accuracy of the SVM is 91.07%; at the same time, the LSTM needs only 2.57 seconds to run, while the SVM model needs 6.06 seconds. This paper therefore concludes that, compared with the SVM model, the LSTM model is worse in accuracy but faster in processing speed.
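A condensed sketch of the SVM branch of this pipeline: Jieba segmentation, TF-IDF features, then a linear SVM. The two comments and labels are invented placeholders, not the Weibo dataset.

```python
import jieba
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

comments = ["这部电影太好看了", "服务态度很差，不会再来"]   # placeholder comments
labels = [1, 0]                                             # 1 = positive, 0 = negative

def segment(text):
    return jieba.lcut(text)        # Chinese word segmentation

vectorizer = TfidfVectorizer(tokenizer=segment, token_pattern=None)
features = vectorizer.fit_transform(comments)

clf = LinearSVC()
clf.fit(features, labels)
print(clf.predict(vectorizer.transform(["真的很棒"])))      # expect the positive class
```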

Keywords: sentiment analysis, support vector machine, long short-term memory, Chinese microblog comments

Procedia PDF Downloads 78
1951 Assessment of Water Availability and Quality in the Climate Change Context in Urban Areas

Authors: Rose-Michelle Smith, Musandji Fuamba, Salomon Salumu

Abstract:

Water is vital for life. Access to drinking water and sanitation is one of the Sustainable Development Goals (specifically the sixth) approved by United Nations Member States in September 2015. Various water-related problems have been identified: insufficient fresh water, inequitable distribution of water resources, poor water management in certain places on the planet, water-borne diseases due to poor water quality, and the negative impacts of climate change on water. One of the major challenges in the world is finding ways to ensure that people and the environment have enough water resources to sustain and support their existence. Thus, this research project aims to develop a tool to assess the availability, quality, and needs of water in current and future situations with regard to climate change. The tool was tested using threshold values for three regions in three countries: the Metropolitan Community of Montreal (Canada), the Normandie Region (France), and the North Department (Haiti). The WEAP software was used to evaluate the available quantity of water resources. For water quality, two models were applied: the Canadian Council of Ministers of the Environment (CCME) index and the Malaysian Water Quality Index (WQI). Preliminary results showed that water needs could be estimated at 155, 308, and 644 m³ per capita in 2023 for Normandie, Cap-Haitien, and the CMM, respectively. The Water Quality Index varied from one country to another. Other simulations of water availability and quality are still in progress. This tool will be very useful for decision-making on future water-use projects; it will make it possible to estimate whether the available resources can satisfy the needs.
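The CCME index referenced above combines three factors (scope, frequency, and amplitude of guideline exceedances). A minimal Python sketch follows; the formula is the published CCME WQI form, but every input value here is a hypothetical monitoring record.

```python
import math

def ccme_wqi(n_variables, n_failed_variables, n_tests, n_failed_tests, excursions):
    f1 = 100.0 * n_failed_variables / n_variables   # scope
    f2 = 100.0 * n_failed_tests / n_tests           # frequency
    nse = sum(excursions) / n_tests                 # normalized sum of excursions
    f3 = nse / (0.01 * nse + 0.01)                  # amplitude
    return 100.0 - math.sqrt(f1**2 + f2**2 + f3**2) / 1.732

# Hypothetical record: 8 variables with 2 failing, 96 tests with 10 failing.
print(ccme_wqi(8, 2, 96, 10, excursions=[0.4, 0.9, 1.5, 0.2, 0.6, 1.1, 0.3, 0.8, 0.5, 0.7]))
```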

Keywords: climate change, water needs, balance sheet, water quality

Procedia PDF Downloads 58
1950 Modeling Battery Degradation for Electric Buses: Assessment of Lifespan Reduction from In-Depot Charging

Authors: Anaissia Franca, Julian Fernandez, Curran Crawford, Ned Djilali

Abstract:

A methodology to estimate the state of charge (SOC) of battery electric buses, including degradation effects, for a given driving cycle is presented to support long-term techno-economic analysis integrating electric buses and charging infrastructure. The degradation mechanisms, characterized by both capacity and power fade over time, are modeled using an electrochemical model for Li-ion batteries. Iterative changes in the negative electrode film resistance and the decrease in available lithium as a function of utilization are simulated for every cycle. The cycles are formulated to follow typical transit bus driving patterns. The power and capacity decay resulting from the degradation model are introduced as inputs to a longitudinal chassis dynamics analysis that calculates the power consumption of the bus for a given driving cycle to find the state of charge of the battery as a function of time. The method is applied to an in-depot charging scenario, in which the bus is charged exclusively at the depot, overnight and to its full capacity. This scenario is run both with and without degradation effects over time to illustrate the significant impact of degradation mechanisms on bus performance in feasibility studies for a fleet of electric buses. The impact of battery degradation on battery lifetime is also assessed. The modeling tool can further be used to optimize component sizing and charging locations for electric bus deployment projects.
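A highly simplified sketch of the coupling described above: capacity fades a little every cycle, and the driving-cycle energy demand is then drawn from the reduced capacity to obtain SOC over the day. The fade rate and power profile are assumed placeholders, not outputs of the electrochemical model.

```python
import numpy as np

capacity_kwh = 300.0            # nominal pack energy, assumed
fade_per_cycle = 0.005 / 100    # fractional capacity loss per charge cycle, assumed

for cycle in range(2000):       # ~2000 overnight in-depot charge cycles
    capacity_kwh *= 1.0 - fade_per_cycle

dt_h = 1.0 / 60.0                        # one-minute steps
power_kw = np.full(8 * 60, 25.0)         # assumed 8 h route at 25 kW average draw

soc = 1.0 - np.cumsum(power_kw) * dt_h / capacity_kwh   # bus leaves fully charged
print(f"capacity after 2000 cycles: {capacity_kwh:.1f} kWh, end-of-day SOC: {soc[-1]:.2f}")
```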

Keywords: battery electric bus, E-bus, in-depot charging, lithium-ion battery, battery degradation, capacity fade, power fade, electric vehicle, SEI, electrochemical models

Procedia PDF Downloads 311
1949 The Impact of AI on Consumers’ Morality: Empirical Evidence

Authors: Mingxia Zhu, Matthew Tingchi Liu

Abstract:

AI is gradually growing in the market thanks to its efficiency and accuracy, influencing people's perceptions, attitudes, and even consequent behaviors. The current study extends prior research by focusing on AI's impact on consumers' morality. First, study 1 tested individuals' beliefs about the moral perception of AI and of humans, and people's attribution of moral worth to AI and to humans. Moral perception refers to a computational system an entity maintains to detect and identify moral violations, while moral worth here denotes whether individuals regard an entity as worthy of moral treatment. To identify the effect of AI on consumers' morality, two studies were employed: study 1 is a within-subjects survey, while study 2 is an experimental study. In study 1, one hundred and forty participants were recruited through an online survey company in China (M_age = 27.31 years, SD = 7.12 years; 65% female). The participants were asked to assign moral perception and moral worth to AI and to humans. A paired-samples t-test reveals that people generally regard humans as having higher moral perception (M_Human = 6.03, SD = .86) than AI (M_AI = 2.79, SD = 1.19; t(139) = 27.07, p < .001; Cohen's d = 1.41). In addition, another paired-samples t-test showed that people attributed higher moral worth to human personnel (M_Human = 6.39, SD = .56) than to AIs (M_AI = 5.43, SD = .85; t(139) = 12.96, p < .001; d = .88). In study 2, two hundred valid samples were recruited from a survey company in China (M_age = 27.87 years, SD = 6.68 years; 55% female), and the participants were randomly assigned to two conditions (AI vs. human). After viewing the stimuli, participants were informed that an insurance company would determine the price purely based on their declaration. Their open-ended answers were coded into ethical, honest behavior and unethical, dishonest behavior following the design of prior literature. A chi-square analysis revealed that 64% of the participants lied to the AI insurance inspector, while 42% of participants deliberately reported lower mileage to the human inspector (χ^2 (1) = 9.71, p = .002). Similarly, logistic regression results suggested that people were significantly more likely to report a fraudulent answer when facing an AI (β = .89, odds ratio = 2.45, Wald = 9.56, p = .002). It is thus demonstrated that people are more likely to behave unethically in front of non-human agents, such as AI agents, than in front of humans. The research findings shed light on new practical ethical issues in human-AI interaction and underline the important role of human employees in service delivery in the new era of AI.
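The reported test can be reproduced with scipy under one assumption: an even 100/100 split of the 200 participants across the two conditions, which the abstract does not state but which recovers χ²(1) = 9.71 exactly.

```python
from scipy.stats import chi2_contingency

#                lied  honest
table = [[64, 36],     # AI inspector condition (64 % lied), assumed n = 100
         [42, 58]]     # human inspector condition (42 % lied), assumed n = 100

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")   # chi2(1) = 9.71, p = .002
```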

Keywords: AI agent, consumer morality, ethical behavior, human-AI interaction

Procedia PDF Downloads 67
1948 Hybrid Thresholding Lifting Dual Tree Complex Wavelet Transform with Wiener Filter for Quality Assurance of Medical Image

Authors: Hilal Naimi, Amelbahahouda Adamou-Mitiche, Lahcene Mitiche

Abstract:

Image denoising has been a central problem in the area of medical imaging. The main challenge in image denoising is to preserve data-carrying structures such as surfaces and edges in order to achieve good visual quality. Different algorithms with different denoising performances have been proposed in previous decades. More recently, models based on deep learning have shown great promise in outperforming all traditional approaches. However, these techniques are limited by the need for large training sets and high computational costs. This research proposes a denoising approach based on the LDTCWT (Lifting Dual Tree Complex Wavelet Transform) using hybrid thresholding with a Wiener filter to enhance image quality. The LDTCWT is a lifting reformulation of the wavelet transform that produces complex coefficients by employing a dual tree of lifting wavelet filters to obtain the real and imaginary parts. This allows the transform to achieve approximate shift invariance and directionally selective filters while reducing computation time (properties lacking in the classical wavelet transform). To develop this approach, a hybrid thresholding function is modeled by integrating the Wiener filter into the thresholding function.
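A simplified sketch of the hybrid idea, with one substitution made explicit: a standard single-tree DWT from PyWavelets stands in for the authors' LDTCWT, and the detail subbands are soft-thresholded and then Wiener-filtered.

```python
import numpy as np
import pywt
from scipy.signal import wiener

def denoise(image: np.ndarray, wavelet: str = "db2") -> np.ndarray:
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    sigma = np.median(np.abs(cD)) / 0.6745            # noise estimate from the HH band
    thr = sigma * np.sqrt(2.0 * np.log(image.size))   # universal threshold
    details = [wiener(pywt.threshold(c, thr, mode="soft")) for c in (cH, cV, cD)]
    return pywt.idwt2((cA, tuple(details)), wavelet)

noisy = np.random.rand(128, 128) + np.random.normal(0.0, 0.1, (128, 128))
print(denoise(noisy).shape)
```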

Keywords: lifting wavelet transform, image denoising, dual tree complex wavelet transform, wavelet shrinkage, Wiener filter

Procedia PDF Downloads 150
1947 Exploration of Correlation between Design Principles and Elements with the Visual Aesthetic in Residential Interiors

Authors: Ikra Khan, Reenu Singh

Abstract:

Composition is essential when designing the interiors of residential spaces; the ability to apply design principles and design elements in a unique style is another key skill. This research report explores how the visual aesthetic within a space is achieved through the use of design principles and design elements while maintaining a signature style. It also observes the relationship between design styles and the compositions achieved by implementing these principles. Information collected from books and the internet helped in understanding how a composition can be achieved in residential interiors by using design principles and design elements as tools, and in determining how the authentic representation of design ideas makes one's work exceptional. A questionnaire survey was also conducted to understand the impact of a visually aesthetic residential interior in a signature style on the lifestyle of the individuals residing in it. The findings denote a pattern in the application of design principles and design elements: individual principles and elements, or combinations of them, are used to achieve an aesthetically pleasing composition. This was supported by creating CAD illustrations of two residential projects with different approaches and design styles. The illustrations include mood boards, 3D models, and sectional elevations as rendered views, showing the concept design and its translation through these mediums. A direct relation is observed between the application of design principles and design elements and the achievement of visually aesthetic residential interiors suited to an individual's taste. These practices can also be applied when designing bespoke commercial and industrial interiors suited to specific aesthetic and functional needs.

Keywords: composition, design principles, elements, interiors, residential spaces

Procedia PDF Downloads 90
1946 DEM-Based Surface Deformation in Jhelum Valley: Insights from River Profile Analysis

Authors: Syed Amer Mahmood, Rao Mansor Ali Khan

Abstract:

This study deals with the remote sensing analysis of tectonic deformation and its implications for understanding the regional uplift conditions in the lower Jhelum and eastern Potwar. Identifying and mapping active structures is important in order to assess seismic hazards and to understand the Quaternary deformation of the region. Digital elevation models (DEMs) provide an opportunity to quantify land surface geometry in terms of elevation and its derivatives. Tectonic movement along faults is often reflected in characteristic geomorphological features such as elevation, stream offsets, slope breaks, and the contributing drainage area. River profile analysis in this region using the SRTM digital elevation model gives information about the tectonic influence on the local drainage network. The steepness and concavity indices were calculated from the power-law scaling relation between channel slope and drainage area under steady-state conditions. An uplift rate map, showing uplift rates in mm/year, was prepared after careful analysis of the local drainage network. The active faults in the region control the local drainages, and the deflection of stream channels is further evidence of recent fault activity. The results show variable relative uplift conditions along the MBT and the Riasi fault and represent a striking example of the recency of uplift, as well as of the influence of active tectonics on the evolution of young orogens.
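The slope-area analysis mentioned above rests on the steady-state stream power relation S = k_s A^(-θ), so a log-log regression of channel slope against drainage area yields the concavity (θ) and steepness (k_s) indices. The arrays below are hypothetical profile values, not the study's DEM extractions.

```python
import numpy as np

area_m2 = np.array([1e6, 5e6, 2e7, 8e7, 3e8, 1e9])             # drainage area, assumed
slope = np.array([0.20, 0.095, 0.048, 0.026, 0.013, 0.0075])   # channel slope, assumed

coeffs = np.polyfit(np.log10(area_m2), np.log10(slope), 1)     # log-log linear fit
theta = -coeffs[0]        # concavity index
ks = 10 ** coeffs[1]      # steepness index
print(f"theta = {theta:.2f}, k_s = {ks:.3g}")
```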

Keywords: quaternary deformation, SRTM DEM, geomorphometric indices, active tectonics and MBT

Procedia PDF Downloads 340
1945 A Comparative Study on Behavior Among Different Types of Shear Connectors Using Finite Element Analysis

Authors: Mohd Tahseen Islam Talukder, Sheikh Adnan Enam, Latifa Akter Lithi, Soebur Rahman

Abstract:

Composite structures have made significant advances in construction applications during the last few decades. Composite structures combine structural steel shapes and reinforced concrete through shear connectors, taking advantage of each material's unique properties. Significant research has been conducted on the behavior and shear capacity of different types of connectors. Moreover, AISC 360-16, the “Specification for Structural Steel Buildings,” provides a formula for the shear capacity of channel shear connectors. This research compares the behavior of C-type (channel) and L-type (angle) shear connectors using finite element analysis. Experimental results from the published literature are used to validate the finite element models. A 3-D finite element model (FEM) was built in ABAQUS 2017 to investigate the non-linear behavior and the ultimate load-carrying capacity of the connectors in push-out tests. Changes in connector dimensions were analyzed with this non-linear model in parametric investigations. The parametric study shows that increasing the length of the shear connector by 10 mm increases its shear strength by 21%, and increasing the height by 10 mm increases the shear capacity by 13%. Raising the specimen thickness by 1 mm resulted in a 2% increase in shear capacity; however, the shear capacity of channel connectors was reduced by 21% when the thickness was increased by 2 mm.

Keywords: finite element method, channel shear connector, angle shear connector, ABAQUS, composite structure, shear connector, parametric study, ultimate shear capacity, push-out test

Procedia PDF Downloads 106
1944 Using Wearable Device with Neural Network to Classify Severity of Sleep Disorder

Authors: Ru-Yin Yang, Chi Wu, Cheng-Yu Tsai, Yin-Tzu Lin, Wen-Te Liu

Abstract:

Background: Sleep-disordered breathing (SDB) is a condition characterized by recurrent episodes of airway obstruction leading to intermittent hypoxia and sleep fragmentation. However, the procedures for examining SDB severity remain complicated and costly. Objective: The objective of this study is to establish a simplified examination method for SDB using a respiratory impedance pattern sensor combined with signal processing and a machine learning model. Methodologies: We recorded heart rate variability by electrocardiogram and the respiratory pattern by impedance. After polysomnography (PSG) was performed and SDB was diagnosed by the apnea-hypopnea index (AHI), we calculated the episodes of absent flow and the arousal index (AI) from the device records. Subjects were divided into training and testing groups. A neural network was used to establish a prediction model to classify the severity of SDB from the AI, the episodes, and body profiles. Performance was evaluated by classification in the testing group compared with PSG. Results: We enrolled 66 subjects (male/female: 37/29; age: 49.9 ± 13.2) diagnosed with SDB at a sleep center in Taipei City, Taiwan, from 2015 to 2016. The accuracy from the confusion matrix on the test group for the neural network is 71.94%. Conclusion: Based on these models, we established a prediction model for SDB by means of the wearable sensor. With more cases coming in and further training, this system may be used to rapidly and automatically screen the risk of SDB in the future.
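A minimal sketch of the classification step under assumed feature names: arousal index, absent-flow episodes, and body profile predicting the severity class learned from PSG labels. The random arrays stand in for the 66-subject dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(66, 4))       # [AI, episodes, age, BMI], assumed features
y = rng.integers(0, 3, size=66)    # severity class from PSG AHI, assumed labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2%}")   # the study reports 71.94% on its data
```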

Keywords: sleep-disordered breathing, apnea-hypopnea index, body parameters, neural network

Procedia PDF Downloads 130
1943 Comparison of the Factor of Safety and Strength Reduction Factor Values from Slope Stability Analysis of a Large Open Pit

Authors: James Killian, Sarah Cox

Abstract:

Stability criteria are the means by which the results of geotechnical analyses are conveyed and by which sensitivities and risk assessments are performed. Historically, the primary stability criterion for slope design has been the Factor of Safety (FOS) obtained from a limit equilibrium calculation. Increasingly, the value derived from Strength Reduction Factor (SRF) analysis is being used instead. The purpose of this work was to study in detail the relationship between SRF values produced by a numerical modeling technique and the traditional FOS values produced by limit equilibrium (LEM) analyses. The study utilized a model of a 3000-foot-high slope with a 45-degree slope angle, assuming a perfectly plastic Mohr-Coulomb constitutive model with the high cohesion and friction angle values typical of a large hard rock mine slope. A number of variables affecting the SRF value in a numerical analysis were tested, including zone size, in-situ stress, tensile strength, and dilation angle. This paper demonstrates that in most cases SRF values are lower than the corresponding LEM FOS values. Modeled zone size has the greatest effect on the estimated SRF value, which can vary by as much as 15% to the downside compared to the FOS. For consistency when using SRF as a stability criterion, the authors suggest that numerical model zones should not be smaller than about 1% of the overall problem slope height and should not be larger than 2% (for the 3000-foot slope studied here, roughly 30 to 60 feet). Future work could include investigations of the effect of anisotropic strength assumptions or advanced constitutive models.

Keywords: FOS, SRF, LEM, comparison

Procedia PDF Downloads 285
1942 Multi-Stream Graph Attention Network for Recommendation with Knowledge Graph

Authors: Zhifei Hu, Feng Xia

Abstract:

In recent years, graph neural networks have been widely used in knowledge graph recommendation. Existing recommendation methods based on graph neural networks extract information from the knowledge graph through entities and relations, which may not be efficient. In order to better surface useful entity information for the current recommendation task, we propose an end-to-end neural network model based on a multi-stream graph attention mechanism (MSGAT), which can effectively integrate the knowledge graph into the recommendation system by evaluating the importance of entities from both the user and the item perspectives. Specifically, we use an attention mechanism from the user's perspective to distill the neighborhood information of the predicted item in the knowledge graph, enhancing the user-side information on items and generating the feature representation of the predicted item. Since the user's historically clicked items reflect the user's interest distribution, we propose a multi-stream attention mechanism that, based on the user's preference for entities and relationships and on the similarity between the items to be predicted and entities, aggregates the neighborhood entity information of the user's historically clicked items in the knowledge graph and generates the user's feature representation. We evaluate our model on three real recommendation datasets: MovieLens-1M (ML-1M), LFM-1B 2015 (LFM-1B), and Amazon-Book (AZ-book). Experimental results show that, compared with state-of-the-art models, the proposed model better captures the entity information in the knowledge graph, which demonstrates its validity and accuracy.
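A single-stream toy version of the aggregation described above: a user embedding attends over a predicted item's neighboring entity embeddings in the knowledge graph, and the attention-weighted sum enriches the item representation. Dimensions and embeddings are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
user = rng.normal(size=d)               # user embedding
neighbors = rng.normal(size=(5, d))     # entity neighbors of the predicted item

scores = neighbors @ user               # relevance of each entity to this user
weights = np.exp(scores - scores.max())
weights /= weights.sum()                # softmax attention weights

item_repr = weights @ neighbors         # attention-weighted entity aggregation
print(weights.round(3), item_repr.shape)
```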

Keywords: graph attention network, knowledge graph, recommendation, information propagation

Procedia PDF Downloads 101
1941 The Psychometric Properties of the Team Climate Inventory Scale: A Validation Study in Jordan’s Collectivist Society

Authors: Suhair Mereish

Abstract:

This research examines the climate for innovation in organisations, with the aim of validating the psychometric properties of the Team Climate Inventory (TCI-14) for Jordan's collectivist society. The innovativeness of teams may be improved or obstructed by the climate within the team. Further, personal factors are considered an important element influencing the climate for innovation. Accordingly, measuring employees' personality traits using the Big Five Inventory (BFI-44) could provide insights that aid in understanding how to improve innovation. Studying the climate for innovation and its associations with personality traits is thus valuable, considering the insights it could offer on employee performance, job satisfaction, and well-being. Essentially, the Team Climate Inventory instrument has never been tested in Jordan's collectivist society. Accordingly, in order to address the existing gap in the literature as a whole and, more specifically, in Jordan, it is essential to investigate its factorial structure and reliability in this particular context. It is also important to explore whether the factorial structure of the Team Climate Inventory in Jordan's collectivist society is similar to or different from what has been found in individualistic ones. Lastly, it is pivotal to examine whether there are associations between the Team Climate Inventory and the personality traits of Jordanian employees. The quantitative study was carried out among Jordanian employees of two of the top 20 companies in Jordan, a shipping and logistics company (N=473) and a telecommunications company (N=219). To generalise the findings, this was followed by collecting data from the general population of the country (N=399). The participants completed the Team Climate Inventory. Confirmatory factor analyses and reliability tests were conducted to confirm the factorial structure, validity, and reliability of the inventory. Findings showed that the four-factor structure of the Team Climate Inventory in Jordan is similar to that found in Western cultures; the four-factor structure was confirmed with good fit indices and reliability values. Moreover, for climate for innovation, regression analysis identified agreeableness (positive) and neuroticism (negative) from the Big Five Inventory as significant predictors. This study contributes to knowledge in several ways: first, by examining the reliability and factorial structure in a Jordanian collectivist context rather than a Western individualistic one; second, by comparing the Team Climate Inventory structure in Jordan with findings for the Team Climate Inventory from Western individualistic societies; and third, by studying its relationships with personality traits in that country. Furthermore, the findings will assist practitioners in the field of organisational psychology and development in improving the climate for innovation for employees working in organisations in Jordan. The results are also expected to provide recommendations to professionals in the business psychology sector regarding the characteristics of employees who hold positive and negative perceptions of the workplace climate.

Keywords: big five inventory, climate for innovation, collectivism, individualism, Jordan, team climate inventory

Procedia PDF Downloads 48
1940 The Influence of Strengthening on the Fundamental Frequency and Stiffness of a Confined Masonry Wall with an Opening for a Window

Authors: Emin Z. Mahmud

Abstract:

Shaking table tests were planned in order to deepen the understanding of the behavior of confined masonry structures with and without openings. The tests were carried out in the laboratory of the Institute of Earthquake Engineering and Engineering Seismology (IZIIS) in Skopje. The specimens were tested separately on the shaking table with uniaxial, in-plane excitation. After testing, the samples were strengthened with GFRP (glass fiber reinforced plastic) and re-tested. This paper presents the observations from a series of shaking-table tests performed on a full-scale (1:1) confined masonry wall model with an opening for a window – specimens CMWuS (before strengthening) and CMWS (after strengthening). Frequency and stiffness changes before and after the GFRP strengthening are analyzed. Defining the dynamic properties of the models was the first step of the experimental testing, providing important information about the achieved stiffness (natural frequencies) of the model. The natural frequency was determined in the Y direction of the model by resonant frequency search tests. It is important to mention that both specimens, CMWuS and CMWS, were subjected to the same effects. The initial frequency of the undamaged model CMWuS is 18.79 Hz, while at the end of the testing the frequency had decreased to 12.96 Hz. This reflects the reduction of the initial stiffness of the model due to damage, especially in the masonry and the tie-beam to tie-column connection. After strengthening the damaged wall, the natural frequency increased to 14.67 Hz, highlighting the beneficial effect of the strengthening. After completion of the dynamic testing of CMWS, the natural frequency was reduced to 10.75 Hz.
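Since the mass is essentially unchanged between tests, the measured frequencies translate directly into stiffness ratios via f = (1/2π)√(k/m); a small Python check of the figures above:

```python
# Stiffness ratio equals the square of the frequency ratio at constant mass.
f0, f_damaged, f_strengthened, f_final = 18.79, 12.96, 14.67, 10.75   # Hz, from the tests

for label, f in [("damaged", f_damaged),
                 ("after GFRP strengthening", f_strengthened),
                 ("after re-testing", f_final)]:
    print(f"{label}: k/k0 = {(f / f0) ** 2:.2f}")
# damaged ~0.48, strengthened ~0.61, after re-testing ~0.33 of the initial stiffness
```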

Keywords: behaviour of masonry structures, Eurocode, frequency, masonry, shaking table test, strengthening

Procedia PDF Downloads 109
1939 Simulation of the Collimator Plug Design for Prompt-Gamma Activation Analysis in the IEA-R1 Nuclear Reactor

Authors: Carlos G. Santos, Frederico A. Genezini, A. P. Dos Santos, H. Yorivaz, P. T. D. Siqueira

Abstract:

Prompt-gamma activation analysis (PGAA) is a valuable technique for investigating the elemental composition of various samples. However, the installation of a PGAA system entails specific conditions, such as filtering the neutron beam according to the target and providing adequate shielding for both users and detectors. These requirements incur substantial costs, exceeding $100,000 including manpower. A cost-effective alternative is to leverage an existing neutron beam facility to create a hybrid system integrating PGAA and neutron tomography (NT). The IEA-R1 nuclear reactor at IPEN/USP possesses an NT facility with suitable conditions for adapting and implementing a PGAA device: the facility offers a slightly colder thermal flux and already provides shielding for user protection. The key additional requirement is the design of detector shielding to mitigate the high gamma-ray background and safeguard the HPGe detector from neutron-induced damage. This study employs Monte Carlo simulations with the MCNP6 code to optimize the collimator plug for PGAA within the IEA-R1 NT facility. Three collimator models are proposed and simulated to assess their effectiveness in shielding the gamma and neutron radiation from nuclear fission. The aim is to achieve a focused prompt-gamma signal while shielding ambient gamma radiation. The simulation results indicate that one of the proposed designs is particularly suitable for the PGAA-NT hybrid system.

Keywords: MCNP6.1, neutron, prompt-gamma ray, prompt-gamma activation analysis

Procedia PDF Downloads 51
1938 Proposing an Improved Managerial-Based Business Process Framework

Authors: Alireza Nikravanshallmani, Jamshid Dehmeshki, Mojtaba Ahmadi

Abstract:

Modeling business processes with BPMN (Business Process Modeling Notation) helps analysts and managers to understand business processes and identify their shortcomings. These models provide a context for making rational decisions about organizing business process activities in an understandable manner. The purpose of this paper is to provide a framework for a better understanding of business processes and their problems by reducing the cognitive load of the displayed information for audiences at different managerial levels, while keeping the essential information they need. For this reason, we integrate business process diagrams across the different managerial levels to develop a framework that improves the performance of business process management (BPM) projects. The proposed framework is entitled ‘Business process improvement framework based on managerial levels (BPIML)’. This framework determines the types of business process diagrams (BPD), based on BPMN, appropriate to the objectives and tasks of the various managerial levels of organizations and to their roles in BPM projects, and thereby provides the necessary support for making decisions about business processes. The framework was evaluated with a case study in a real business process improvement project to demonstrate its superiority over the conventional method. A questionnaire consisting of 10 questions on a Likert scale was designed and given to the participants (managers at the three managerial levels of Bank Refah Kargaran). The results of the questionnaire indicate that the proposed framework supports correct and timely decisions by increasing the clarity and transparency of the business processes, which led to success in BPM projects.

Keywords: business process management (BPM), business process modeling, business process reengineering (BPR), business process optimizing, BPMN

Procedia PDF Downloads 440
1937 Application of Reliability Methods for Concrete Dams

Authors: Mustapha Kamel Mihoubi, Mohamed Essadik Kerkar

Abstract:

Probabilistic risk analysis models are used to provide a better understanding of the reliability and structural failure of works, including when calculating the stability of large structures subjected to a major risk in the event of an accident or breakdown. This work studies the probability of failure of concrete dams through the application of reliability analysis methods used in engineering; in our case, level 2 methods via the study of the limit state. Hence, the probability of failure is estimated by analytical methods of the first-order reliability method (FORM) and second-order reliability method (SORM) type. By way of comparison, a level 3 method was also used, which generates a full analysis of the problem and involves the integration of the probability density function of the random variables, extended over the safety domain, using the Monte Carlo simulation method. Taking into account the change in stress under the load combinations acting on the dam (normal, exceptional, and extreme), the calculations provided acceptable failure probability values that largely corroborate the theory. In fact, the probability of failure tends to increase with increasing load intensity, causing a significant decrease in strength; shear forces then induce sliding that threatens the reliability of the structure with intolerable failure probability values, especially in the case of increased uplift under a hypothetical failure of the drainage system.
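A level 3 toy example in the spirit described above: Monte Carlo sampling of random shear strength parameters for the sliding limit state of a gravity dam section. Every number is an illustrative assumption, not data from the studied dam.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

tan_phi = rng.normal(0.70, 0.10, n)     # friction coefficient (assumed)
cohesion = rng.normal(300.0, 60.0, n)   # kPa on the sliding plane (assumed)
area = 50.0                             # m^2 per metre of dam (assumed)
weight, uplift, thrust = 30_000.0, 8_000.0, 20_000.0   # kN, assumed loads

resisting = cohesion * area + (weight - uplift) * tan_phi
g = resisting - thrust                  # limit state: g < 0 means sliding failure
print(f"P_f = {np.mean(g < 0.0):.2e}")
```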

Keywords: dam, failure, limit-state, Monte Carlo, reliability, probability, simulation, sliding, Taylor

Procedia PDF Downloads 313
1936 Blockchain: Institutional and Technological Disruptions in the Public Sector

Authors: Maria Florencia Ferrer, Saulo Fabiano Amancio-Vieira

Abstract:

The use of blockchain in the public sector is a present reality, no longer merely the future of disruptive institutional and technological models. There are still some cultural barriers and some resistance to the proper use of its potential. This research aims to present the strengths and weaknesses of using a public-permissioned, distributed network in the context of the public sector. To that end, bibliographical/documentary research was conducted to identify the main aspects of the studied platform, focusing on the main demands of the public sector. The platform analyzed was LACChain, a global alliance composed of different actors in the blockchain environment, led by the Innovation Laboratory of the Inter-American Development Bank Group (IDB Lab), for the development of the blockchain ecosystem in Latin America and the Caribbean. LACChain provides blockchain infrastructure, a distributed ledger technology (DLT). The platform rests on two main pillars: community and infrastructure. It is organized as a consortium for the management and administration of an infrastructure classified as public, following the ISO typologies (ISO/TC 307). It is, therefore, a network open to any participant who agrees with the established rules, which are limited to being identified and complying with the regulations. Among the benefits are: a public network (open to all), decentralization, low transaction costs, greater publicity of transactions, and a reduction of corruption in public contracts and acts, in addition to improved transparency for the population in general. It is also noteworthy that the platform is not based on cryptocurrency and is not anonymous; that is, it can be regulated. It is concluded that the use of ledger platforms such as LACChain can give public agents greater security in the process of migrating their informational applications.

Keywords: blockchain, LACChain, public sector, technological disruptions

Procedia PDF Downloads 161
1935 Bioactivities and Phytochemical Studies of Petroleum Ether Extract of Pleiogynium timorense Bark

Authors: Gehan F. Abdel Raoof, Ataa A. Said, Khaled Y. Mohamed, Hala M. Mohammed

Abstract:

Pleiogynium timorense (DC.) Leenh. is one of the therapeutically active plants belonging to the family Anacardiaceae. The bark of Pleiogynium timorense needs further study to investigate its phytochemical composition and biological activities. This work was carried out to investigate the chemical composition of the petroleum ether extract of Pleiogynium timorense bark and to evaluate its analgesic and anti-inflammatory activities. The unsaponifiable matter and fatty acid methyl esters were analyzed by gas chromatography-mass spectrometry (GC-MS). Analgesic and anti-inflammatory activities were evaluated using the acetic acid-induced writhing test and the carrageenan-induced hind paw oedema model in rats, respectively. Twenty-one compounds were identified in the unsaponifiable fraction, representing 92.54% of the total peak area; the major compounds were 1-heptene (35.32%), butylated hydroxytoluene (19.42%), and phytol (12.53%). Fifteen compounds were identified in the fatty acid methyl ester fraction, representing 94.15% of the total identified peak area; the major compounds were 9-octadecenoic acid methyl ester (35.34%) and 9,12-octadecadienoic acid methyl ester (29.32%). Moreover, the petroleum ether extract showed a significant, dose-dependent reduction in pain and inflammation. This study aims to be a first step toward the use of the petroleum ether extract of Pleiogynium timorense bark as an analgesic and anti-inflammatory drug.

Keywords: analgesic, anti-inflammatory, bark, petroleum ether extract, Pleiogynium timorense

Procedia PDF Downloads 157
1934 A Comparative Study of Various Control Methods for Rendezvous of a Satellite Couple

Authors: Hasan Basaran, Emre Unal

Abstract:

Formation flying of satellites is a mission concept that involves relative position keeping among different satellites in a constellation. In this study, different control algorithms are compared with one another in terms of ΔV (velocity increment) and tracking error. Various control methods covering continuous and impulsive approaches are implemented and tested for satellites flying in low Earth orbit. Feedback linearization, sliding mode control, and model predictive control are designed and compared with an impulsive feedback law based on mean orbital elements. The feedback linearization and sliding mode control approaches share an identical mathematical model that includes second-order Earth oblateness effects. The model predictive control, on the other hand, does not include any perturbations and assumes a circular chief orbit. The comparison is carried out for four different initial errors and evaluated in terms of velocity increment, root-mean-square error, maximum steady-state error, and settling time. It was observed that the impulsive law consumed the least ΔV while producing the highest maximum steady-state error. The continuous control laws, however, consumed higher velocity increments and produced lower tracking errors. Finally, an inversely proportional relationship between tracking error and velocity increment was established.
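The continuous approaches can be illustrated with a reduced example. The sketch below uses the simpler Clohessy-Wiltshire relative dynamics (circular chief, no oblateness; the paper's models additionally include second-order J2 effects) with feedback linearization plus a PD law, and tallies ΔV; the gains and initial error are assumed.

```python
import numpy as np

n = 0.0011                 # rad/s, mean motion of a low Earth orbit (assumed)
kp, kd = 1e-4, 2e-2        # PD gains, critically damped (assumed)
dt, steps = 1.0, 6000      # 1 s Euler steps, 100 min horizon

state = np.array([500.0, -300.0, 200.0, 0.0, 0.0, 0.0])   # relative [x y z vx vy vz]
delta_v = 0.0
for _ in range(steps):
    x, y, z, vx, vy, vz = state
    f = np.array([3*n**2*x + 2*n*vy, -2*n*vx, -n**2*z])    # CW accelerations
    u = -f - kp * state[:3] - kd * state[3:]               # cancel dynamics + PD
    state = state + dt * np.concatenate([state[3:], f + u])
    delta_v += np.linalg.norm(u) * dt
print(f"dV = {delta_v:.2f} m/s, final error = {np.linalg.norm(state[:3]):.2f} m")
```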

Keywords: chief-deputy satellites, feedback linearization, follower-leader satellites, formation flight, fuel consumption, model predictive control, rendezvous, sliding mode

Procedia PDF Downloads 88
1933 Maternal Health Care Mirage: A Study of Maternal Health Care Utilization for Young Married Muslim Women in India

Authors: Saradiya Mukherjee

Abstract:

Background: Indian Muslims, compared to their counterparts in other religions, generally do not fare well on many yardsticks of socio-economic progress, and the same is true of maternal health care utilization. Owing to a low age at marriage, a large share of childbirths in India is ascribed to young (15-24 years) Muslim mothers, which raises serious concerns about the maternal health care of young married Muslim women (YMMW). A thorough search of the past literature on Muslim women's health and health care reveals that studies in India have mainly focused on religious differences in fertility levels and contraceptive use, while research on the determinants of maternal health care utilization among Muslim women is lacking. Data and Methods: Retrieving data from the National Family Health Survey-3 (2005-06), this study assesses the level of utilization of, and the factors affecting, three key maternal health indicators (full ANC, safe delivery, and PNC) among YMMW (15-24 years) in India. The choice of key socio-economic and demographic variables taken as independent or predictor variables was guided by the existing literature, particularly for India. Bivariate analysis and the chi-square test were applied, and the variables found to be significant were then included in a binary logistic regression. Results: The findings reveal abysmally low levels of utilization for all three indicators included in the study, i.e., full ANC, safe delivery, and PNC. Mother's education, mass media exposure, women's autonomy, birth order, economic status, wanted status of the child, and region of residence were found to be significant variables affecting maternal health care utilization among YMMW. Multivariate analysis reveals that lack of mass media exposure, lower autonomy, lower education, a poor economic background, higher birth order, and unintended pregnancy are among the reasons behind low maternal health care utilization. Conclusion: Considering the low level of safe maternal health care utilization and its proximate determinants among YMMW, the study suggests that educating Muslim girls, promoting family planning use, involving the media, and fostering collaboration between religious leaders and the health care system could be important policy-level interventions to address the unmet need for maternity services among YMMW.
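As a concrete illustration of the modelling step described above, the short Python sketch below fits a binary logistic regression of one maternal health indicator on a few socio-economic predictors and reports odds ratios. The column names and the synthetic data are hypothetical stand-ins, not the NFHS-3 variables or the study's results.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "safe_delivery": rng.integers(0, 2, n),      # outcome: 1 = institutional/skilled birth
    "mother_edu_years": rng.integers(0, 13, n),  # years of schooling
    "media_exposure": rng.integers(0, 2, n),     # 1 = any regular mass media exposure
    "wealth_quintile": rng.integers(1, 6, n),    # 1 = poorest ... 5 = richest
    "birth_order": rng.integers(1, 6, n),
})

# Fit the logit model on the variables retained after the bivariate screening.
model = smf.logit(
    "safe_delivery ~ mother_edu_years + media_exposure + wealth_quintile + birth_order",
    data=df,
).fit(disp=False)
print(np.exp(model.params))   # exponentiated coefficients = odds ratios per predictor

An odds ratio above 1 for, say, media_exposure would indicate that exposed women have higher odds of a safe delivery, holding the other predictors constant.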

Keywords: young Muslim women, religion, socio-economic condition, antenatal care, delivery, post natal care

Procedia PDF Downloads 323
1932 EECS: Reimagining the Future of Technology Education through Electrical Engineering and Computer Science Integration

Authors: Yousef Sharrab, Dimah Al-Fraihat, Monther Tarawneh, Aysh Alhroob, Ala’ Khalifeh, Nabil Sarhan

Abstract:

This paper explores the evolution of Electrical Engineering (EE) and Computer Science (CS) education in higher education, examining the feasibility of unifying them into a single Electrical Engineering and Computer Science (EECS) program for the technology industry. It delves into the historical reasons for their separation and underscores the need for integration. Emerging technologies such as AI, virtual reality, IoT, cloud computing, and cybersecurity demand an integrated EE and CS program to enhance students' understanding. The study evaluates curriculum integration models, drawing on prior research and case studies, and demonstrates how integration can provide students with a comprehensive knowledge base for industry demands. Successful integration, however, requires addressing administrative and pedagogical challenges. For academic institutions considering merging EE and CS programs, the paper offers guidance, advocating a flexible curriculum encompassing foundational courses and specialized tracks in computer engineering, software engineering, bioinformatics, information systems, data science, AI, robotics, IoT, virtual reality, cybersecurity, and cloud computing. Elective courses are emphasized to keep pace with technological advancements. Implementing this integrated approach can prepare students for success in the technology industry and for the challenges of a technologically advanced society reliant on both EE and CS principles.

Keywords: electrical engineering, computer science, EECS, curriculum integration of EE and CS

Procedia PDF Downloads 47
1931 Research on the Efficiency and Driving Elements of Manufacturing Transformation and Upgrading in the Context of Digitization

Authors: Chen Zhang, Qiang Wang

Abstract:

With the rapid development of the new generation of digital technologies, industries across the board are creating ever more value with them, accelerating digital transformation throughout the economy. The economic form of human society has evolved with technological progress, and against this backdrop the transformation and upgrading of the manufacturing industry in terms of quality, efficiency, and energy use has become a top priority. In the context of digitalization, this paper analyzes the transformation and upgrading efficiency of the manufacturing industry and evaluates the impact of its driving factors, which is of substantial theoretical and practical significance. The paper uses qualitative research methods, the entropy weighting method, data envelopment analysis (DEA), and econometric models to explore the transformation and upgrading efficiency of manufacturing enterprises and its driving factors. The study shows that the transformation and upgrading efficiency of the manufacturing industry has increased steadily, and that regions rich in natural and social resources provide certain resources for transformation and upgrading. Scientific and technological innovation capacity has improved, but there is still much room for progress in translating scientific and technological achievements into practice. Most manufacturing industries pay increasing attention to green manufacturing and sustainable development. Finally, based on the problems identified, the paper puts forward suggestions for improving infrastructure construction, developing enterprises' technological innovation capacity, and promoting green production and sustainable development.
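For concreteness, the Python sketch below shows one of the named steps, the entropy weighting method: given a matrix of regions by positive "benefit-type" indicators, it derives objective indicator weights from information entropy and forms a composite transformation-and-upgrading score. The data are random placeholders under assumed indicator orientations, not the study's indicators or results.

import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(1.0, 10.0, size=(30, 5))   # 30 regions x 5 benefit-type indicators (synthetic)

# Min-max normalize each indicator column so larger values are better.
Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
Z = Z + 1e-12                              # avoid log(0) at the column minimum

P = Z / Z.sum(axis=0)                      # each region's share of each indicator
m = X.shape[0]
e = -(P * np.log(P)).sum(axis=0) / np.log(m)   # information entropy per indicator
w = (1.0 - e) / (1.0 - e).sum()            # lower entropy (more dispersion) -> higher weight
print("indicator weights:", np.round(w, 3))

score = Z @ w                              # composite score per region
print("top region index:", int(np.argmax(score)))

In a full pipeline, such composite indicators would feed the DEA efficiency measurement and the econometric models of the driving factors.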

Keywords: digitization, manufacturing firms, transformation and upgrading, efficiency, driving factors

Procedia PDF Downloads 53