Search results for: energy performance certificate EPBD
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19431

9471 A Rational Intelligent Agent to Promote Metacognition in a Situation of Text Comprehension

Authors: Anass Hsissi, Hakim Allali, Abdelmajid Hajami

Abstract:

This article presents the results of doctoral research that aims to integrate the metacognitive dimension into the design of computer-based environments for human learning (ILE). We conducted a detailed study on the relationship between metacognitive processes and learning, specifically their positive impact on the performance of learners in the area of reading comprehension. Our contribution is to implement support methods, using an intelligent agent based on the BDI paradigm, to provide intelligent and reliable assistance to struggling readers and to encourage self-regulation and a conscious, rational use of their metacognitive abilities.

Keywords: metacognition, text comprehension, EIAH, autoregulation, BDI agent

Procedia PDF Downloads 323
9470 Impact of Sports and Entertainment Marketing Strategies on the Professional Practices of Sports Managers in Nigeria

Authors: Ibraheem Musa Oluwatoyin, Olawuni Adisa, Abdulraheem Yinusa Owolabi

Abstract:

Nigeria's sports industry has grown, but ineffective management, inadequate marketing, and limited stakeholder engagement hinder progress. Effective marketing strategies are crucial, yet empirical research on their impact on Nigerian sports managers is scarce. This study investigates the impact of sports and entertainment marketing strategies on the professional practices of sports managers in Nigeria, employing a quantitative research design grounded in the Theory of Planned Behavior. The target population comprises 1,108 sports managers across various organizations in Nigeria, with a stratified random sample of 301 participants, ensuring representativeness based on organizational type (sports commissions/councils) and geographical zones. Data was collected using a structured questionnaire, which included sections on demographic information, the evaluation of marketing strategies, and their impact on decision-making, operational efficiency, stakeholder engagement, and performance. The questionnaire items were adapted from validated scales in marketing and sports management literature, achieving a Cronbach’s alpha of 0.85, indicating high internal consistency. Data collection occurred over eight weeks through both online and face-to-face surveys, ensuring ethical compliance with informed consent and data anonymization. Descriptive and inferential statistical methods, including Pearson Product Moment Correlation (PPMC), were employed for data analysis. The PPMC analyses revealed statistically significant relationships between digital platform marketing (r = 0.63, p = 0.000), sports marketing experience (r = 0.51, p = 0.000), and producing engaging sports content (r = 0.61, p = 0.000) with professional practices. These results suggest that digital platform marketing, sports marketing experience, and the creation of engaging content significantly enhance the effectiveness and performance of sports managers in Nigeria. The study contributes valuable insights for stakeholders in Nigeria’s sports industry, providing actionable recommendations for improving sports management practices through strategic marketing approaches.
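
As an illustration of the PPMC analysis described above, the snippet below computes a Pearson correlation with scipy; the scores and variable names are synthetic stand-ins, not the study's questionnaire data, and the effect size is an arbitrary choice.

```python
# Minimal sketch of a Pearson Product Moment Correlation (PPMC) analysis.
# Data and construct names are hypothetical, not the study's instrument.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 301  # sample size reported in the abstract

# Hypothetical Likert-style construct scores
digital_platform = rng.integers(1, 6, n).astype(float)
professional_practice = 0.6 * digital_platform + rng.normal(0, 1, n)

r, p = pearsonr(digital_platform, professional_practice)
print(f"PPMC: r = {r:.2f}, p = {p:.3f}")
```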

Keywords: professional practice, digital platform, sports marketing experience, producing engaging sports content

Procedia PDF Downloads 11
9469 An Environmental Method for Renovation of Sewer Systems in Building Structures

Authors: Parastou Kharazmi

Abstract:

Degradation of building materials, particularly pipelines, causes environmental damage during renovation or replacement, disturbs people living in the buildings, is time-consuming and, last but not least, is very costly. Rehabilitation with composite materials is a solution for renovating degraded pipelines in residential buildings and other structures that is less costly, faster and causes less damage to the environment. This study provides a brief overview of the state of the technology, methods, and materials being used in Nordic and some other European countries, together with an investigation of the performance of relined pipes after they have been in working condition. The investigation was carried out through different laboratory analyses as well as numerous field inspections.

Keywords: buildings, pipeline, rehabilitation, polymer materials

Procedia PDF Downloads 243
9468 Synthesis and Application of an Organic Dye in Nanostructured Solar Cell Devices

Authors: M. Hoseinnezhad, K. Gharanjig

Abstract:

Two organic dyes comprising carbazole as the electron donor and cyanoacetic acid moieties as the electron acceptor were synthesized. The organic dyes were prepared by standard reactions from carbazole as the starting material. To this end, carbazole was reacted with bromobenzene, then oxidized and reacted with cyanoacetic acid. The obtained organic dyes were purified and characterized using differential scanning calorimetry (DSC), Fourier transform infrared spectroscopy (FT-IR), proton nuclear magnetic resonance (¹H NMR), carbon nuclear magnetic resonance (¹³C NMR) and elemental analysis. The influence of the heteroatom in the carbazole donor and of cyano substitution on the acid acceptor is evidenced by spectral, electrochemical and photovoltaic experiments. Finally, the light-fastness properties of the organic dyes were investigated.

Keywords: dye-sensitized solar cells, indoline dye, nanostructure, oxidation potential, solar energy

Procedia PDF Downloads 197
9467 Effect of Magnetic Field on Unsteady MHD Poiseuille Flow of a Third Grade Fluid Under Exponentially Decaying Pressure Gradient with Ohmic Heating

Authors: O. W. Lawal, L. O. Ahmed, Y. K. Ali

Abstract:

The unsteady MHD Poiseuille flow of a third grade fluid between two parallel horizontal nonconducting porous plates is studied with heat transfer. The two plates are fixed but maintained at different constant temperatures, with Joule and viscous dissipation taken into consideration. The fluid motion is produced by a sudden uniform exponentially decaying pressure gradient and an external uniform magnetic field that is perpendicular to the plates. The momentum and energy equations governing the flow are solved numerically using the Maple program. The effects of the magnetic field and third grade fluid parameters on the velocity and temperature profiles are examined through several graphs.

Keywords: exponential decaying pressure gradient, MHD flow, Poiseuille flow, third grade fluid

Procedia PDF Downloads 486
9466 Investigation of a Single Feedstock Particle during Pyrolysis in Fluidized Bed Reactors via X-Ray Imaging Technique

Authors: Stefano Iannello, Massimiliano Materazzi

Abstract:

Fluidized bed reactor technologies are one of the most valuable pathways for thermochemical conversions of biogenic fuels due to their good operating flexibility. Nevertheless, there are still issues related to the mixing and separation of heterogeneous phases during operation with highly volatile feedstocks, including biomass and waste. At high temperatures, the volatile content of the feedstock is released in the form of so-called endogenous bubbles, which generally exert a “lift” effect on the particle itself by dragging it up to the bed surface. This phenomenon leads to a high release of volatile matter into the freeboard and limited mass and heat transfer with particles of the bed inventory. The aim of this work is to get a better understanding of the behaviour of a single reacting particle in a hot fluidized bed reactor during the devolatilization stage. The analysis has been undertaken at different fluidization regimes and temperatures to closely mirror the operating conditions of waste-to-energy processes. Beech wood and polypropylene particles were used to resemble the biomass and plastic fractions present in waste materials, respectively. The non-invasive X-ray technique was coupled with particle tracking algorithms to characterize the motion of a single feedstock particle during devolatilization with high resolution. A high-energy X-ray beam passes through the vessel where absorption occurs, depending on the distribution and amount of solids and fluids along the beam path. A high-speed video camera is synchronised to the beam and provides frame-by-frame imaging of the flow patterns of fluids and solids within the fluidized bed up to 72 fps (frames per second). A comprehensive mathematical model has been developed in order to validate the experimental results. Beech wood and polypropylene particles have shown very different dynamic behaviour during the pyrolysis stage. When the feedstock is fed from the bottom, the plastic material tends to spend more time within the bed than the biomass. This behaviour can be attributed to the presence of the endogenous bubbles, whose drag effect is more pronounced during the devolatilization of biomass, resulting in a lower residence time of the particle within the bed. At the typical operating temperatures of thermochemical conversions, the synthetic polymer softens and melts, and the bed particles attach to its outer surface, generating a wet plastic-sand agglomerate. Consequently, this additional layer of sand may hinder the rapid evolution of volatiles in the form of endogenous bubbles, and therefore results in a weaker drag effect acting on the feedstock itself. Information about the mixing and segregation of solid feedstock is of prime importance for the design and development of more efficient industrial-scale operations.

Keywords: fluidized bed, pyrolysis, waste feedstock, X-ray

Procedia PDF Downloads 177
9465 Comparing the Apparent Error Rate of Gender Specifying from Human Skeletal Remains by Using Classification and Cluster Methods

Authors: Jularat Chumnaul

Abstract:

In forensic science, corpses from various homicides differ; some are complete and some incomplete, depending on the cause of death or form of homicide. For example, some corpses are cut into pieces, some are camouflaged by being dumped into a river, some are buried, some are burned to destroy the evidence, and so on. If a corpse is incomplete, personal identification can be difficult because some tissues and bones are destroyed. To specify the gender of a corpse from skeletal remains, the most precise method is DNA identification. However, this method is costly and takes longer, so other identification techniques are used instead. The first widely used technique is considering the features of the bones. In general, evidence from the corpse, such as pieces of bone, especially the skull and pelvis, can be used to identify gender. To use this technique, forensic scientists require observation skills in order to classify the difference between male and female bones. Although this technique is uncomplicated, saves time and cost, and allows forensic scientists to determine gender fairly accurately (apparently an accuracy rate of 90% or more), the crucial disadvantage is that only some positions of the skeleton can be used to specify gender, such as the supraorbital ridge, nuchal crest, temporal lobe, mandible, and chin. Therefore, the skeletal remains to be used have to be complete. The other technique widely used for gender specification in forensic science and archeology is skeletal measurement. The advantage of this method is that it can be applied to several positions on one piece of bone, and it can be used even if the bones are not complete. In this study, classification and cluster analyses are applied to this technique, including Kth Nearest Neighbor classification, Classification Tree, Ward Linkage cluster, K-means cluster, and Two Step cluster. The data contain 507 individuals and 9 skeletal measurements (diameter measurements), and the performance of the five methods is investigated by considering the apparent error rate (APER). The results from this study indicate that the Two Step Cluster and Kth Nearest Neighbor methods seem suitable for specifying gender from human skeletal remains because they yield small apparent error rates of 0.20% and 4.14%, respectively. On the other hand, the Classification Tree, Ward Linkage Cluster, and K-means Cluster methods are not appropriate, since they yield large apparent error rates of 10.65%, 10.65%, and 16.37%, respectively. However, there are other ways to evaluate classification performance, such as estimating the error rate using the holdout procedure or misclassification costs, and different methods can lead to different conclusions.
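
A minimal sketch of how the apparent error rate (APER) can be obtained for two of the methods compared above, k-nearest neighbours and K-means, using scikit-learn. The 507 x 9 matrix here is simulated; it is not the study's skeletal data, and the resulting error rates will not match the reported ones.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n, p = 507, 9
sex = rng.integers(0, 2, n)                             # 0 = female, 1 = male
X = rng.normal(0.0, 1.0, (n, p)) + 0.8 * sex[:, None]   # males shifted upward (toy data)

# APER: misclassification rate when the method is fit and evaluated on the same data.
knn = KNeighborsClassifier(n_neighbors=5).fit(X, sex)
aper_knn = np.mean(knn.predict(X) != sex)

# K-means is unsupervised, so map its two clusters to the sexes by taking the
# better of the two possible label assignments before computing the APER.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
aper_km = min(np.mean(labels != sex), np.mean((1 - labels) != sex))

print(f"APER (kNN): {aper_knn:.2%}   APER (K-means): {aper_km:.2%}")
```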

Keywords: skeletal measurements, classification, cluster, apparent error rate

Procedia PDF Downloads 254
9464 Concept-Based Assessment in Curriculum

Authors: Nandu C. Nair, Kamal Bijlani

Abstract:

This paper proposes a concept-based assessment to track the performance of students. The idea behind this approach is to map the exam questions to the concepts learned in the course, so that at the end of the course each student knows how well he or she has learned each concept. This system provides self-assessment for students as well as for the instructor. By analyzing the scores of all students, the instructor can decide whether some concepts need to be taught again. The system's efficiency is demonstrated using three courses from an M.Tech program in e-learning technologies, and the results show that concept-wise assessment improved the final exam scores of the majority of students across the courses.
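
A small sketch of the question-to-concept mapping idea described above; the concept names, question IDs and marks are invented for illustration.

```python
# Map each exam question to the concept it examines, then aggregate per-concept scores.
from collections import defaultdict

question_concepts = {"Q1": "SCORM", "Q2": "SCORM", "Q3": "LMS", "Q4": "Authoring tools"}
max_marks = {"Q1": 5, "Q2": 10, "Q3": 10, "Q4": 5}

def concept_report(student_marks):
    """Return per-concept percentage scores for one student."""
    got, out_of = defaultdict(float), defaultdict(float)
    for q, mark in student_marks.items():
        c = question_concepts[q]
        got[c] += mark
        out_of[c] += max_marks[q]
    return {c: round(100 * got[c] / out_of[c], 1) for c in out_of}

print(concept_report({"Q1": 4, "Q2": 6, "Q3": 9, "Q4": 2}))
# {'SCORM': 66.7, 'LMS': 90.0, 'Authoring tools': 40.0}
```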

Keywords: assessment, concept, examination, question, score

Procedia PDF Downloads 474
9463 In-Situ Quasistatic Compression and Microstructural Characterization of Aluminium Foams of Different Cell Topology

Authors: M. A. Islam, P. J. Hazell, J. P. Escobedo, M. Saadatfar

Abstract:

Quasistatic compression and microstructural characterization of closed-cell aluminium foams of different pore sizes and cell distributions have been carried out. Metallic foams have good potential for lightweight structures for impact and blast mitigation, and therefore it is important to find the optimized foam structure (i.e., cell size, shape, relative density, and distribution) to maximize energy absorption. In this paper, we present results for two different aluminium metal foams of density 0.5 g/cc and 0.7 g/cc, respectively, that have been tested in quasi-static compression. The influence of cell geometry and cell topology on quasistatic compression behavior has been investigated using computed tomography (micro-CT) analysis. The compression behavior and microstructural characterization will be presented.

Keywords: metal foams, micro-CT, cell topology, quasistatic compression

Procedia PDF Downloads 460
9462 Polysaccharides as Pour Point Depressants

Authors: Ali M. EL-Soll

Abstract:

The physical properties of Sarir waxy crude oil were investigated. The pour point was determined using the ASTM D-79 procedure, and the paraffin content and carbon number distribution of the paraffins were determined using gas-liquid chromatography (GLC). Polymeric additives were prepared, and their structures were confirmed using an IR spectrophotometer. The molecular weight and molecular weight distribution of these additives were determined by gel permeation chromatography (GPC). The performance of the synthesized additives as pour-point depressants was evaluated for the mentioned crude oil.

Keywords: sarir, waxy, crude, pour point, depressants

Procedia PDF Downloads 457
9461 Linguistic Competencies of Students with Hearing Impairment

Authors: Munawar Malik, Muntaha Ahmad, Khalil Ullah Khan

Abstract:

Linguistic abilities in students with hearing impairment remain a concern for educationists. The emerging technological support and provisions of the recent era claim to have addressed the situation and to have contributed significantly to learners' linguistic repertoire. Within a descriptive, quantitative paradigm, the purpose of this research was to assess the linguistic competencies in English of students with hearing impairment. The goals were further broken down to identify the level of reading ability in the subject population. The population involved students with HI studying at higher secondary level in Lahore. A simple random sampling technique was used to choose a sample of fifty students. A purposive curriculum-based assessment was designed in line with the accelerated learning program of the Punjab Government to assess linguistic competence among the sample. Further, an Informal Reading Inventory (IRI) corresponding to reading levels was developed by the researchers and duly validated and piloted before final use. Descriptive and inferential statistics were utilized to reach the findings. Spearman's correlation was used to find the relationship between degree of hearing loss, grade level, gender, and type of amplification device. An independent-samples t-test was used to compare means among groups. Major findings of the study revealed that students with hearing impairment exhibit significant deviation from the mean scores when compared in terms of grade, severity, and amplification device. The study revealed that the students with HI have not yet attained an independent level of reading for their grades, as the majority fall at the frustration level of word recognition and passage comprehension. The poorer performance can be attributed to lower linguistic competence, as shown by the frustration levels of reading, writing, and comprehension. The correlation analysis did reflect improved performance grade-wise; however, scores corresponded only to the frustration level, and the independent level was never achieved. Reported achievements at the instructional level of the subject population may further linguistic skills if practiced purposively.

Keywords: linguistic competence, hearing impairment, reading levels, educationist

Procedia PDF Downloads 73
9460 Upflow Anaerobic Sludge Blanket Reactor Followed by Dissolved Air Flotation Treating Municipal Sewage

Authors: Priscila Ribeiro dos Santos, Luiz Antonio Daniel

Abstract:

Inadequate access to clean water and sanitation has become one of the most widespread problems affecting people throughout the developing world, leading to an unceasing need for low-cost and sustainable wastewater treatment systems. The UASB technology has been widely employed as a suitable and economical option for the treatment of sewage in developing countries, since it involves low initial investment, low energy requirements, low operation and maintenance costs, high loading capacity, short hydraulic retention times, long solids retention times and low sludge production. The dissolved air flotation process, in turn, is a good option for the post-treatment of anaerobic effluents, being capable of producing high-quality effluents in terms of total suspended solids, chemical oxygen demand, phosphorus, and even pathogens. This work presents an evaluation and monitoring, over a period of 6 months, of one compact full-scale system with this configuration, UASB reactors followed by dissolved air flotation units (DAF), operating in Brazil. It was verified to be a successful treatment system, and the topic is relevant since the dissolved air flotation process treating UASB reactor effluents is not widely covered in the literature. The study covered the removal and behavior of several variables, such as turbidity, total suspended solids (TSS), chemical oxygen demand (COD), Escherichia coli, total coliforms and Clostridium perfringens. The physicochemical variables were analyzed according to the protocols established by the Standard Methods for the Examination of Water and Wastewater. For microbiological variables, such as Escherichia coli and total coliforms, the "pour plate" technique was used with Chromocult Coliform Agar (Merck Cat. No. 1.10426) serving as the culture medium, while the microorganism Clostridium perfringens was analyzed through the membrane filtration technique, with m-CP agar (Oxoid Ltd., England) serving as the culture medium. Approximately 74% of total COD was removed in the UASB reactor, and the complementary removal achieved during the flotation process resulted in 88% COD removal from the raw sewage; thus, the initial COD concentration of 729 mg.L-1 decreased to 87 mg.L-1. In terms of particulate COD, the overall removal efficiency for the whole system was about 94%, decreasing from 375 mg.L-1 in raw sewage to 29 mg.L-1 in the final effluent. The UASB reactor removed on average 77% of the TSS from raw sewage, while the dissolved air flotation process did not work as expected, removing only 30% of the TSS from the anaerobic effluent. The final effluent presented an average TSS concentration of 38 mg.L-1. The turbidity was significantly reduced, leading to an overall removal efficiency of 80% and a final turbidity of 28 NTU. The treated effluent still presented a high concentration of fecal pollution indicators (E. coli, total coliforms, and Clostridium perfringens), showing that the system did not perform well in removing pathogens. Clostridium perfringens was the organism that underwent the highest removal by the treatment system. The results can be considered satisfactory for the physicochemical variables, taking into account the simplicity of the system; nevertheless, a post-treatment step is necessary to improve the microbiological quality of the final effluent.
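
The removal efficiencies quoted above follow directly from the influent and effluent concentrations; a short helper illustrates the calculation with the total COD figures from this abstract.

```python
def removal_efficiency(c_in, c_out):
    """Percentage removal between influent and effluent concentrations (same units)."""
    return 100.0 * (c_in - c_out) / c_in

# Total COD across UASB + DAF: 729 mg/L in raw sewage down to 87 mg/L in the final effluent.
print(f"overall COD removal: {removal_efficiency(729, 87):.0f}%")   # ~88%, as reported
```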

Keywords: dissolved air flotation, municipal sewage, UASB reactor, treatment

Procedia PDF Downloads 334
9459 A Method to Ease the Military Certification Process by Taking Advantage of Civil Standards in the Scope of Human Factors

Authors: Burcu Uçan

Abstract:

The certification approach differs between civil and military projects in aviation. The sets of criteria and standards created by airworthiness authorities for the determination of the certification basis are distinct. While civil standards are clearer and easier to understand, because they not only include detailed specifications but are also supported by guidance materials such as Advisory Circulars, military criteria do not provide this level of guidance. Therefore, specifications that are more negotiable and sometimes more difficult to reconcile arise for the certification basis of a military aircraft. This study investigates a method for developing a military specification set by taking advantage of civil standards, with regard to the European Military Airworthiness Criteria (EMACC) that establishes the airworthiness criteria for aircraft systems. Airworthiness Certification Criteria (MIL-HDBK-516C) is a handbook published for guidance that contains qualitative evaluations for military aircraft, while the Certification Specifications (CS-29) are published for civil aircraft by the European Union Aviation Safety Agency (EASA). The method intends to compare and contrast the specifications that MIL-HDBK-516C and CS-29 contain within the scope of Human Factors. Human Factors supports human performance and aims to improve system performance by encompassing knowledge from a range of scientific disciplines. Human Factors focuses on how people perform their tasks and on reducing the risk of an accident occurring due to human physical and cognitive limitations. Hence, regardless of whether the project is civil or military, the specifications must be guided at a certain level by taking human limits into account. This study presents an advisory method for this purpose. The method develops a solution for the military certification process by identifying the CS requirement corresponding to each criterion in MIL-HDBK-516C by means of EMACC. Thus, it eases understanding the expectations of the criteria and establishing derived requirements. With this method, it may not always be necessary to derive new requirements; instead, it is possible to add remarks to make the expectations of the criteria and the required verification methods more comprehensible for all stakeholders. This study contributes to creating a certification basis for military aircraft, which is difficult and takes plenty of time for stakeholders to agree on due to gray areas in the certification process for military aircraft.

Keywords: human factors, certification, aerospace, requirement

Procedia PDF Downloads 81
9458 Lithium-Ion Battery State of Charge Estimation Using One State Hysteresis Model with Nonlinear Estimation Strategies

Authors: Mohammed Farag, Mina Attari, S. Andrew Gadsden, Saeid R. Habibi

Abstract:

Battery state of charge (SOC) is an important parameter, as it measures the total amount of electrical energy stored at the current time. The SOC percentage acts like the fuel gauge of a conventional vehicle. Estimating the SOC is, therefore, essential for monitoring the amount of useful life remaining in the battery system. This paper looks at the implementation of three nonlinear estimation strategies for Li-Ion battery SOC estimation. One of the most common behavioral battery models is the one state hysteresis (OSH) model. The extended Kalman filter (EKF), the smooth variable structure filter (SVSF), and the time-varying smoothing boundary layer SVSF are applied to this model, and the results are compared.
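
For illustration, the sketch below runs one predict/update cycle of an EKF on a simplified one-state-hysteresis model (SOC plus a hysteresis state, linearised OCV). All cell parameters and the OCV curve are assumed placeholder values, not those used in the paper, and the SVSF variants are not shown.

```python
import numpy as np

Q, R0, M, gamma, dt = 5.0 * 3600, 0.01, 0.02, 5.0, 1.0   # capacity (As), ohm, V, -, s
ocv = lambda soc: 3.2 + 0.9 * soc        # crude linear OCV(SOC) for the sketch
docv = 0.9                               # dOCV/dSOC of that linear approximation

x = np.array([0.5, 0.0])                 # state: [SOC, hysteresis]
P = np.diag([0.05, 0.01])
Qn = np.diag([1e-7, 1e-6])               # process noise
Rn = np.array([[1e-3]])                  # measurement (voltage) noise

def ekf_step(x, P, i_k, v_meas):
    """One predict/update cycle; i_k > 0 means discharge current (A)."""
    # predict: coulomb counting + hysteresis decay toward -sign(i)
    F = np.exp(-abs(i_k) * gamma * dt / Q)
    x_pred = np.array([x[0] - i_k * dt / Q,
                       F * x[1] - (1 - F) * np.sign(i_k)])
    A = np.array([[1.0, 0.0], [0.0, F]])
    P_pred = A @ P @ A.T + Qn
    # update with the terminal-voltage measurement
    H = np.array([[docv, M]])
    v_pred = ocv(x_pred[0]) + M * x_pred[1] - R0 * i_k
    S = H @ P_pred @ H.T + Rn
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + (K * (v_meas - v_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = ekf_step(x, P, i_k=2.0, v_meas=3.62)
print(f"SOC estimate: {x[0]:.3f}")
```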

Keywords: state of charge estimation, battery modeling, one-state hysteresis, filtering and estimation

Procedia PDF Downloads 448
9457 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra

Authors: Bitewulign Mekonnen

Abstract:

Context: This scientific paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine-learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine-learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep-learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models, including support vector machine regression, partial least squares regression, extra tree regression, random forest regression, extreme gradient boosting, and principal component analysis-neural network, are employed to predict glucose concentration. The NIR spectra data is randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²)> 0.985. The deep learning model achieves high macro-averaging scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy. Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine-learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding references for glucose concentration are measured in increments of 20 mg/dl. The data is randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. 
Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.
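
As a concrete, hedged example of one of the regression comparisons above, the sketch below trains support vector machine regression on synthetic "spectra" and reports the correlation coefficient; the study's real NIR data, preprocessing and hyper-parameters are not reproduced here.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_samples, n_wavelengths = 200, 256
glucose = rng.uniform(20, 400, n_samples)               # mg/dl (the study used 20 mg/dl steps)
spectra = (np.outer(glucose, rng.normal(0, 0.002, n_wavelengths))   # glucose-dependent signal
           + rng.normal(0, 0.01, (n_samples, n_wavelengths)))       # measurement noise

X_tr, X_te, y_tr, y_te = train_test_split(spectra, glucose, test_size=0.2, random_state=0)
model = SVR(kernel="rbf", C=100.0, epsilon=1.0).fit(X_tr, y_tr)
y_hat = model.predict(X_te)

r, _ = pearsonr(y_te, y_hat)
print(f"R = {r:.3f}, R^2 = {r ** 2:.3f}")    # the paper reports R > 0.99 for SVMR on its data
```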

Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network

Procedia PDF Downloads 97
9456 The Competitiveness of Small and Medium Sized Enterprises: Digital Transformation of Business Models

Authors: Chante Van Tonder, Bart Bossink, Chris Schachtebeck, Cecile Nieuwenhuizen

Abstract:

Small and Medium-Sized Enterprises (SMEs) play a key role in national economies around the world, being contributors to economic and social well-being. Due to this, the success, growth and competitiveness of SMEs are critical. However, there are many factors that undermine this, such as resource constraints, poor information and communication technology (ICT) infrastructure, skills shortages and poor management. The Fourth Industrial Revolution offers new tools and opportunities, such as digital transformation and business model innovation (BMI), to the SME sector to enhance its competitiveness. Adopting and leveraging digital technologies such as cloud, mobile technologies, big data and analytics can significantly improve business efficiencies, value propositions and customer experiences. Digital transformation can contribute to the growth and competitiveness of SMEs. However, SMEs are lagging behind in their participation in digital transformation. Extant research lacks conceptual and empirical work on how digital transformation drives BMI and the impact it has on the growth and competitiveness of SMEs. The purpose of the study is, therefore, to close this gap by developing and empirically validating a conceptual model to determine whether SMEs are achieving BMI through digital transformation and how this is impacting their growth, competitiveness and overall business performance. An empirical study is being conducted on 300 SMEs, consisting of 150 South African and 150 Dutch SMEs, to achieve this purpose. Structural equation modeling is used, since it is a multivariate statistical analysis technique for analysing structural relationships and a suitable research method to test the hypotheses in the model. Empirical research is needed to gather more insight into how and whether SMEs are digitally transformed and how BMI can be driven through digital transformation. The findings of this study can be used by SME business owners, managers and employees at all levels. The findings will indicate whether digital transformation can indeed impact the growth, competitiveness and overall performance of an SME, reiterating the importance and potential benefits of adopting digital technologies. In addition, the findings will also exhibit how BMI can be achieved in light of digital transformation. This study contributes to the body of knowledge on a highly relevant and important topic in management studies by analysing the impact of digital transformation on BMI in a large number of SMEs that are distinctly different in economic and cultural factors.

Keywords: business models, business model innovation, digital transformation, SMEs

Procedia PDF Downloads 245
9455 A Novel Comparison Scheme for Thermal Conductivity Enhancement of Heat Transfer

Authors: Islam Tarek, Moataz Soliman

Abstract:

With the amazing development of the nanosciences and the discovery of the unique properties of nanometric materials, scientists and researchers have sought to take advantage of this progress in various fields. One of the most important of these areas is heat transfer, where the goal is to save the energy used for heat transfer; nanometric materials have therefore been used to improve the properties of heat transfer fluids and increase the efficiency of the liquid. In this paper, we compare two types of heat transfer fluid: an industrial type (the base fluid is a mix of ethylene glycol and deionized water) and one based on natural oils (the base fluid is a mix of jatropha oil and expired olive oil). We explain the method of preparing each of them, starting from the method of preparing the CNTs, the collection and sorting of jatropha seeds, and the most appropriate method for extracting oil from them, and we characterize both fluids and discuss when to use each.

Keywords: nanoscience, heat transfer, thermal conductivity, jatropha oil

Procedia PDF Downloads 224
9454 Algorithmic Fault Location in Complex Gas Networks

Authors: Soban Najam, S. M. Jahanzeb, Ahmed Sohail, Faraz Idris Khan

Abstract:

With the recent increase in reliance on gas as the primary source of energy across the world, there has been a lot of research conducted on gas distribution networks. As the complexity and size of these networks grow, so does the leakage of gas in the distribution network. One of the most crucial factors in the production and distribution of gas is UFG, or Unaccounted-for Gas. The presence of UFG signifies that there is a difference between the amount of gas distributed and the amount of gas billed. Our approach is to use information that we acquire from several specified points in the network. This information is used to calculate the loss occurring in the network using the developed algorithm. The algorithm can also identify leakages at any point of the pipeline, so faults can be detected and rectified with minimal time, effort and resources.
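
A toy sketch of the balance underlying UFG-based fault location: compare the gas metered into each pipeline segment with the gas metered out (or billed) at the far end and flag segments whose unaccounted-for share exceeds a threshold. The segment names, flows and threshold are invented; the paper's actual algorithm and network data are not shown.

```python
segments = {
    # segment: (gas_in, gas_out) over the same billing period, same units
    "A-B": (100.0, 99.2),
    "B-C": (60.0, 55.1),
    "C-D": (39.0, 38.8),
}

threshold_pct = 2.0  # flag anything above 2% unaccounted-for gas
for seg, (q_in, q_out) in segments.items():
    ufg_pct = 100 * (q_in - q_out) / q_in
    status = "LEAK SUSPECTED" if ufg_pct > threshold_pct else "ok"
    print(f"{seg}: UFG = {ufg_pct:.1f}%  ({status})")
```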

Keywords: FLA, fault location analysis, GDN, gas distribution network, GIS, geographic information system, NMS, network management system, OMS, outage management system, SSGC, Sui Southern Gas Company, UFG, unaccounted-for gas

Procedia PDF Downloads 632
9453 Neural Synchronization - The Brain’s Transfer of Sensory Data

Authors: David Edgar

Abstract:

To understand how the brain’s subconscious and conscious processes function, we must conquer the physics of Unity, which leads to duality’s algorithm. Where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence, we use terms like ‘time is relative,’ but we really do understand the meaning. In the brain, there are different processes and, therefore, different observers. These different processes experience time at different rates. A sensory system such as the eyes cycles its measurement at around 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds. Three different observers experience time differently. To bridge observers, the thalamus, which is the fastest of the processes, maintains a synchronous state and entangles the different components of the brain’s physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain’s linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components. Only unpredictable motion is transferred through the synchronous state because predictable motion already exists in the shared framework. The brain’s synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, every 33 milliseconds the eyes dump their sensory data into the thalamus. The thalamus then performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick. The thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms). This creates a data payload of synchronous motion that preserves the original sensory observation. Basically, a frozen moment in time (Flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Now, synchronous data traveling through a separate faster synchronous process creates a theoretical time tunnel where observation time is tunneled through the synchronous process and is reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus, a linear subconscious process generating sensory perception and thought production is being executed. It is all just occurring in the time available because other observation times are slower than thalamic measurement time. Life in the physical universe requires a linear measurement process; it just hides by operating at a faster time relativity. What’s interesting is that time dilation is not the problem; it’s the solution. Einstein said there was no universal time.

Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)

Procedia PDF Downloads 130
9452 Reliability Modeling of Repairable Subsystems in Semiconductor Fabrication: A Virtual Age and General Repair Framework

Authors: Keshav Dubey, Swajeeth Panchangam, Arun Rajendran, Swarnim Gupta

Abstract:

In the semiconductor capital equipment industry, effective modeling of repairable system reliability is crucial for optimizing maintenance strategies and ensuring operational efficiency. However, repairable system reliability modeling using a renewal process is not as popular in the semiconductor equipment industry as it is in the locomotive and automotive industries. Utilization of this approach will help optimize maintenance practices. This paper presents a structured framework that leverages both parametric and non-parametric approaches to model the reliability of repairable subsystems based on operational data, maintenance schedules, and system-specific conditions. Data are organized at the equipment ID level, facilitating trend testing to uncover failure patterns and system degradation over time. For non-parametric modeling, the Mean Cumulative Function (MCF) approach is applied, offering a flexible method to estimate the cumulative number of failures over time without assuming an underlying statistical distribution. This allows for empirical insights into subsystem failure behavior based on historical data. On the parametric side, virtual age modeling, along with Homogeneous and Non-Homogeneous Poisson Process (HPP and NHPP) models, is employed to quantify the effect of repairs and the aging process on subsystem reliability. These models allow for a more structured analysis by characterizing repair effectiveness and system wear-out trends over time. A comparison of various Generalized Renewal Process (GRP) approaches highlights their utility in modeling different repair effectiveness scenarios. These approaches provide a robust framework for assessing the impact of maintenance actions on system performance and reliability. By integrating both parametric and non-parametric methods, this framework offers a comprehensive toolset for reliability engineers to better understand equipment behavior, assess the effectiveness of maintenance activities, and make data-driven decisions that enhance system availability and operational performance in semiconductor fabrication facilities.
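
As a concrete illustration of the non-parametric side, the sketch below computes the standard Mean Cumulative Function (MCF) estimate: at each failure time the increment is one divided by the number of units still under observation. The failure and censoring times are invented, not fab data.

```python
# Per-unit failure times (hours) and observation end (censoring) time.
units = {
    "tool_1": {"failures": [120, 340, 700], "end": 1000},
    "tool_2": {"failures": [250, 810],      "end": 900},
    "tool_3": {"failures": [90, 400, 650],  "end": 1100},
}

events = sorted(t for u in units.values() for t in u["failures"])
mcf, cum = [], 0.0
for t in events:
    at_risk = sum(1 for u in units.values() if u["end"] >= t)  # units still observed at t
    cum += 1.0 / at_risk
    mcf.append((t, cum))

for t, m in mcf:
    print(f"t = {t:5.0f} h   MCF = {m:.2f}")
```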

Keywords: reliability, maintainability, homogeneous Poisson process, repairable system

Procedia PDF Downloads 34
9451 Study of Nanocrystalline Scintillator for Alpha Particles Detection

Authors: Azadeh Farzaneh, Mohammad Reza Abdi, A. Quaranta, Matteo Dalla Palma, Seyedshahram Mortazavi

Abstract:

We report on the synthesis of cesium iodide nanoparticles using the sol-gel technique. The structural properties of the CsI nanoparticles were characterized by X-ray diffraction and scanning electron microscopy (SEM). Optical properties were also followed by optical absorption and UV-vis fluorescence. Intense photoluminescence is observed, with some spectral tuning possible with ripening time, giving a range of emission photon wavelengths from approximately 366 to 350 nm. The size effect on CsI luminescence leads to an increase in scintillation light yield, a redshift of the emission bands of the on-center and off-center self-trapped excitons (STEs), and an increase in the contribution of the off-center STEs to the net intrinsic emission yield. The energy transfer from the matrix to the CsI nanoparticles is a key characteristic for scintillation detectors. Therefore, the scintillation spectra of the sample under alpha particles were monitored.

Keywords: nanoparticles, luminescence, sol gel, scintillator

Procedia PDF Downloads 605
9450 Strongly Disordered Conductors and Insulators in Holography

Authors: Matthew Stephenson

Abstract:

We study the electrical conductivity of strongly disordered, strongly coupled quantum field theories, holographically dual to non-perturbatively disordered uncharged black holes. The computation reduces to solving a diffusive hydrostatic equation for an emergent horizon fluid. We demonstrate that a large class of theories in two spatial dimensions have a universal conductivity independent of disorder strength, and rigorously rule out disorder-driven conductor-insulator transitions in many theories. We present a (fine-tuned) axion-dilaton bulk theory which realizes the conductor-insulator transition, interpreted as a classical percolation transition in the horizon fluid. We address aspects of strongly disordered holography that can and cannot be addressed via mean-field modeling, such as massive gravity.

Keywords: theoretical physics, black holes, holography, high energy

Procedia PDF Downloads 183
9449 Implementation of ADETRAN Language Using Message Passing Interface

Authors: Akiyoshi Wakatani

Abstract:

This paper describes the Message Passing Interface (MPI) implementation of the ADETRAN language and its evaluation on SX-ACE supercomputers. The ADETRAN language includes the pdo statement, which specifies data distribution and parallel computations, and the pass statement, which specifies the redistribution of arrays. Two methods for the implementation of the pass statement are discussed, and a performance evaluation using the Splitting-Up CG method is presented. The effectiveness of the parallelization is evaluated, and the advantage of one-dimensional distribution is empirically confirmed by the results of the experiments.
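
Not the paper's SX-ACE implementation, but a hedged mpi4py sketch of the kind of array redistribution a pass statement expresses: a row-block distribution is converted to a column-block distribution with a single all-to-all exchange. The script name in the comment is hypothetical.

```python
# Run with e.g.:  mpiexec -n 4 python redistribute.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size, rank = comm.Get_size(), comm.Get_rank()

N = 8 * size                         # global N x N array, N divisible by size
rows = N // size
local_rows = np.arange(rank * rows * N, (rank + 1) * rows * N,
                       dtype=np.float64).reshape(rows, N)

# Slice the local row block into per-destination column chunks, exchange them,
# then stitch the received chunks into a local column block.
send = np.ascontiguousarray(
    local_rows.reshape(rows, size, N // size).transpose(1, 0, 2))
recv = np.empty_like(send)
comm.Alltoall(send, recv)
local_cols = recv.reshape(rows * size, N // size)   # all N rows, my column slice

if rank == 0:
    print("column block shape on rank 0:", local_cols.shape)
```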

Keywords: iterative methods, array redistribution, translator, distributed memory

Procedia PDF Downloads 273
9448 Systematic and Simple Guidance for Feed Forward Design in Model Predictive Control

Authors: Shukri Dughman, Anthony Rossiter

Abstract:

This paper builds on earlier work which demonstrated that Model Predictive Control (MPC) may give a poor choice of default feed forward compensator. By first demonstrating the impact of future information about target changes on performance, this paper proposes a pragmatic method for identifying the amount of future information on the target that can be utilised effectively in both finite and infinite horizon algorithms. Numerical illustrations in MATLAB give evidence of the efficacy of the proposal.

Keywords: model predictive control, tracking control, advance knowledge, feed forward

Procedia PDF Downloads 551
9447 Zinc Oxide Varistor Performance: A 3D Network Model

Authors: Benjamin Kaufmann, Michael Hofstätter, Nadine Raidl, Peter Supancic

Abstract:

ZnO varistors are the leading overvoltage protection elements in today’s electronic industry. Their highly non-linear current-voltage characteristics, very fast response times, good reliability and attractive cost of production are unique in this field. Still, there are unsolved challenges and open questions. In particular, the urge to create even smaller, versatile and reliable parts that fit industry’s demands brings manufacturers to the limits of their abilities. Although the varistor effect of sintered ZnO has been known since the 1960s, and a lot of work has been done in this field to explain the sudden exponential increase of conductivity, the strict dependence on sintering parameters, as well as the influence of the complex microstructure, is not sufficiently understood. For further enhancement and down-scaling of varistors, a better understanding of the microscopic processes is needed. This work attempts a microscopic approach to investigate ZnO varistor performance. In order to cope with the polycrystalline varistor ceramic and to account for all possible current paths through the material, a realistic model of the microstructure was set up in the form of three-dimensional networks in which every grain has a constant electric potential and voltage drops occur only at the grain boundaries. The electro-thermal workload, depending on different grain size distributions, was investigated, as well as the influence of the metal-semiconductor contact between the electrodes and the ZnO grains. A number of experimental methods are used, firstly, to feed the simulations with realistic parameters and, secondly, to verify the obtained results. These methods are: a micro 4-point probes method system (M4PPS) to investigate the current-voltage characteristics between single ZnO grains and between ZnO grains and the metal electrode inside the varistor, micro lock-in infrared thermography (MLIRT) to detect current paths, electron backscatter diffraction and piezoresponse force microscopy to determine grain orientations, atom probe analysis to determine atomic substituents, and Kelvin probe force microscopy for investigating grain surface potentials. The simulations showed that, within a critical voltage range, the current flow is localized along paths which represent only a tiny part of the available volume. This effect could be observed via MLIRT. Furthermore, the simulations show that the electric power density, which is inversely proportional to the number of active current paths, since this number determines the electrically active volume, depends on the grain size distribution. M4PPS measurements showed that the electrode-grain contacts behave like Schottky diodes and are crucial for asymmetric current path development. Furthermore, evaluation of actual data suggests that current flow is influenced by grain orientations. The present results deepen the knowledge of the microscopic factors influencing ZnO varistor performance and yield some recommendations on fabrication for obtaining more reliable ZnO varistors.

Keywords: metal-semiconductor contact, Schottky diode, varistor, zinc oxide

Procedia PDF Downloads 285
9446 Managing Inter-Organizational Innovation Project: Systematic Review of Literature

Authors: Lamin B Ceesay, Cecilia Rossignoli

Abstract:

Inter-organizational collaboration is a growing phenomenon in both research and practice. Partnership between organizations enables firms to leverage external resources, experiences, and technology that lie with other firms. This collaborative practice is a source of improved business model performance, technological advancement, and increased competitive advantage for firms. However, the competitive intents, and even diverse institutional logics, of firms make inter-firm innovation-based partnership even more complex and its governance more challenging. The purpose of this paper is to present a systematic review of research linking the inter-organizational relationships of firms with their innovation practice and to specify the different project management issues and gaps addressed in previous research. To do this, we employed a systematic review of the literature on inter-organizational innovation using two complementary scholarly databases: ScienceDirect and Web of Science (WoS). Article scoping relied on a combination of keywords based on similar terms used in the literature: (1) inter-organizational relationship, (2) business network, (3) inter-firm project, and (4) innovation network. These searches were conducted in the title, abstract, and keywords of conceptual and empirical research papers written in English. Our search covers 2010 to 2019. We applied several exclusion criteria: papers published outside the years under review, papers in a language other than English, papers listed in neither WoS nor ScienceDirect, and papers not closely related to inter-organizational innovation-based partnership were removed. After all relevant search criteria were applied, a final list of 84 papers constitutes the data for this review. Our review revealed an increasing evolution of inter-organizational relationship research during the period under review. The descriptive analysis of papers according to journal outlets finds that the International Journal of Project Management (IJPM), the Journal of Industrial Marketing, the Journal of Business Research (JBR), etc., are the leading journal outlets for research on inter-organizational innovation projects. The review also finds that qualitative methods and quantitative approaches, respectively, are the leading research methods adopted by scholars in the field, whereas literature reviews and conceptual papers constitute the least. During the content analysis, we read the content of each paper and found that the selected papers try to address one of three phenomena in inter-organizational innovation research: (1) project antecedents, (2) project management, and (3) project performance outcomes. We found that these categories are not mutually exclusive but rather interdependent. This categorization also helped us to organize the fragmented literature in the field. While a significant percentage of the literature discussed project management issues, we found less extant literature on project antecedents and performance. As a result, we organized the future research agenda addressed in several papers by linking it with the under-researched themes in the field, thus providing great potential to advance future research, especially on those under-researched themes. Finally, our paper reveals that research on inter-organizational innovation projects is generally fragmented, which hinders a better understanding of the field. Thus, this paper contributes to the understanding of the field by organizing and discussing the extant literature to advance the theory and application of inter-organizational relationships.

Keywords: inter-organizational relationship, inter-firm collaboration, innovation projects, project management, systematic review

Procedia PDF Downloads 118
9445 Chromia-Carbon Nanocomposite Materials for Energy Storage Devices

Authors: Muhammad A. Nadeem, Shaheed Ullah

Abstract:

The article reports the synthesis of Cr2O3/C nanocomposites obtained by the direct carbonization of a PFA/MIL-101(Cr) bulk composite. The nanocomposites were characterized by various instrumental techniques, such as powder X-ray diffraction (PXRD), X-ray photoelectron spectroscopy (XPS), scanning electron microscopy (SEM), transmission electron microscopy (TEM) and selected area electron diffraction (SAED), and the surface characteristics were investigated via N2 adsorption/desorption analysis. TEM and SAED analysis shows that turbostratic graphitic carbon was obtained with high crystallinity. The nanocomposites were tested for electrochemical supercapacitor applications, and the faradaic and non-faradaic processes were checked through cyclic voltammetry (CV). The maximum specific capacitance calculated for the Cr2O3/C 900 sample from CV measurements is 301 F g-1 at 2 mV s-1, due to its maximum charge storage capacity as confirmed by frequency response analysis.
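
The capacitance figure above comes from the standard CV relation C = ∫|I| dV / (2·m·ν·ΔV). The sketch below evaluates that relation on a synthetic rectangular voltammogram; the electrode mass, potential window and current are assumed placeholder values chosen only to land near the reported magnitude, not the paper's data.

```python
import numpy as np
from scipy.integrate import trapezoid

m = 2.0e-3                          # active electrode mass (g), assumed
nu = 2.0e-3                         # scan rate (V/s), i.e. 2 mV/s as in the abstract
V = np.linspace(0.0, 0.8, 400)      # assumed potential window (V)
I = 1.2e-3 * np.ones_like(V)        # |current| response (A), placeholder voltammogram

dV = V[-1] - V[0]
area_full_cycle = 2 * trapezoid(I, V)            # anodic + cathodic sweeps
C_sp = area_full_cycle / (2 * m * nu * dV)       # C = integral(|I| dV) / (2 m nu dV)
print(f"specific capacitance ~ {C_sp:.0f} F/g")  # ~300 F/g with these placeholder numbers
```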

Keywords: nanocomposites, transmission electron microscopy, non-faradaic process

Procedia PDF Downloads 437
9444 Methods Used to Achieve Airtightness of 0.07 Ach@50Pa for an Industrial Building

Authors: G. Wimmers

Abstract:

The University of Northern British Columbia needed a new laboratory building for the Master of Engineering in Integrated Wood Design Program and its new Civil Engineering Program. Since the University is committed to reducing its environmental footprint and because the Master of Engineering Program is actively involved in research on energy-efficient buildings, the decision was made to request the energy efficiency of the Passive House Standard in the Request for Proposals. The building is located in Prince George in Northern British Columbia, a city located at the northern edge of climate zone 6 with an average low between -8 and -10.5 °C in the winter months. The footprint of the building is 30m x 30m with a height of about 10m. The building consists of a large open space for the shop and laboratory, with a small portion of the floorplan being two floors, allowing for a mezzanine level with a few offices as well as mechanical and storage rooms. The total net floor area is 1042m² and the building's gross volume is 9686m³. One key requirement of the Passive House Standard is the airtight envelope, with an airtightness of < 0.6 ach@50Pa. In the past, we have seen that this requirement can be challenging to reach for industrial buildings. When testing for airtightness, it is important to test in both directions, pressurization and depressurization, since the airflow through all leakages of the building will, in reality, happen simultaneously in both directions. A specific detail or situation, such as overlapping but not sealed membranes, might be airtight in one direction, due to the valve effect, but may open up when tested in the opposite direction. In this specific project, the advantage was the overall very compact envelope and the good volume to envelope area ratio. The building had to be very airtight, and the details for the window and door installations as well as all transitions from walls to roof and floor, the connections of the prefabricated wall panels and all penetrations had to be carefully developed to allow for maximum airtightness. The biggest challenges were the specific components of this industrial building: the large bay door for semi-trucks and the dust extraction system for the wood-processing machinery. The testing was carried out in accordance with EN 13829 (method A) as specified in the International Passive House Standard, and the volume calculation also followed the Passive House guideline, resulting in a net volume of 7383m³, excluding all wall, floor and suspended ceiling volumes. This paper will explore the details and strategies used to achieve an airtightness of 0.07 ach@50Pa, to the best of our knowledge the lowest value achieved in North America so far following the test protocol of the International Passive House Standard, and discuss the crucial steps throughout the project phases and the most challenging details.
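
For context, the headline airtightness number translates directly into a leakage airflow: the air change rate at 50 Pa multiplied by the net volume reported above. A quick check with the abstract's figures:

```python
net_volume = 7383.0                       # m^3, Passive House net volume from the abstract
q50_measured = 0.07 * net_volume          # leakage flow at 50 Pa for 0.07 ach@50Pa
q50_allowed = 0.60 * net_volume           # flow corresponding to the 0.6 ach@50Pa limit
print(f"measured: ~{q50_measured:.0f} m^3/h, Passive House limit: ~{q50_allowed:.0f} m^3/h")
```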

Keywords: air changes, airtightness, envelope design, industrial building, passive house

Procedia PDF Downloads 149
9443 A Study on the Kinetics of Nitrous Oxide Catalytic Decomposition over CuO/HZSM-5

Authors: Y. J. Song, Q. S. Xu, X. C. Wang, H. Wang, C. Q. Li

Abstract:

A catalyst of copper oxide loaded on HZSM-5 was developed for the direct decomposition of nitrous oxide (N₂O). The kinetics of nitrous oxide decomposition were studied for the CuO/HZSM-5 catalyst prepared by the incipient wetness impregnation method. External and internal diffusion of the catalytic reaction were considered in the investigation. Experimental results indicated that external diffusion was essentially eliminated when the gas hourly space velocity (GHSV) of the reaction gas mixture was higher than 9000 h⁻¹, and that the influence of internal diffusion was negligible when the particle size of the CuO/HZSM-5 catalyst was smaller than 40-60 mesh. The experimental results showed that the catalytic decomposition of N₂O follows first-order kinetics, with an activation energy and pre-exponential factor of 115.15 kJ/mol and 1.6×10⁹, respectively.
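
Using the reported Arrhenius parameters with a first-order rate law gives a quick feel for the catalyst's activity. The plug-flow conversion expression, the temperatures and the space time below are illustrative assumptions, not the paper's data treatment, and the pre-exponential factor is assumed to carry units of s⁻¹.

```python
import numpy as np

R = 8.314            # J/(mol K)
Ea = 115.15e3        # activation energy, J/mol (from the abstract)
A = 1.6e9            # pre-exponential factor (units assumed s^-1 for first order)

def k(T):
    """First-order rate constant at temperature T (K), via Arrhenius."""
    return A * np.exp(-Ea / (R * T))

def conversion(T, tau):
    """N2O conversion for first-order kinetics in an ideal plug-flow reactor."""
    return 1.0 - np.exp(-k(T) * tau)

for T in (573, 623, 673, 723):
    print(f"T = {T} K: k = {k(T):.3e} s^-1, X = {conversion(T, tau=0.4):.1%}")
```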

Keywords: catalytic decomposition, CuO/HZSM-5, kinetic, nitrous oxide

Procedia PDF Downloads 194
9442 Detecting Impact of Allowance Trading Behaviors on Distribution of NOx Emission Reductions under the Clean Air Interstate Rule

Authors: Yuanxiaoyue Yang

Abstract:

Emissions trading, or 'cap-and-trade', has long been promoted by economists as a more cost-effective pollution control approach than traditional performance standard approaches. While there is a large body of empirical evidence for the overall effectiveness of emissions trading, relatively little attention has been paid to other unintended consequences brought by emissions trading. One important consequence is that cap-and-trade could introduce the risk of creating high-level emission concentrations in areas where emitting facilities purchase a large number of emission allowances, which may cause an unequal distribution of environmental benefits. This study contributes to the current environmental policy literature by linking trading activity with environmental injustice concerns and, for the first time, empirically analyzing the causal relationship between trading activity and emissions reduction under a cap-and-trade program. To investigate the potential environmental injustice concern in cap-and-trade, this paper uses a difference-in-differences (DID) with instrumental variable method to identify the causal effect of allowance trading behaviors on emission reduction levels under the Clean Air Interstate Rule (CAIR), a cap-and-trade program targeting the power sector in the eastern US. The major data source is facility-year level emissions and allowance transaction data collected from US EPA air market databases. While polluting facilities from CAIR are the treatment group under our DID identification, we use non-CAIR facilities from the Acid Rain Program - another NOx control program without a trading scheme - as the control group. To isolate the causal effects of trading behaviors on emissions reduction, we also use eligibility for CAIR participation as the instrumental variable. The DID results indicate that the CAIR program was able to reduce NOx emissions from affected facilities by about 10% more than facilities that did not participate in the CAIR program. Therefore, CAIR achieves excellent overall performance in emissions reduction. The IV regression results also indicate that, compared with non-CAIR facilities, purchasing emission permits still decreases a CAIR-participating facility's emissions level significantly. This result implies that even buyers under the cap-and-trade program have achieved a great amount of emissions reduction. Therefore, we find little evidence of environmental injustice from the CAIR program.
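
A hedged sketch of the difference-in-differences layer of the identification strategy, on a simulated facility-year panel with statsmodels; the instrumental-variable step using CAIR eligibility is not reproduced here, and all numbers are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
rows = []
for f in range(300):                                   # facilities
    treated = int(f < 150)                             # CAIR facility or not
    for y in range(2003, 2012):                        # facility-year panel
        post = int(y >= 2009)                          # CAIR compliance period
        log_nox = (1.6 - 0.10 * treated * post         # ~10% extra reduction post-CAIR
                   - 0.02 * (y - 2003) + rng.normal(0, 0.2))
        rows.append({"facility": f, "year": y, "treated": treated,
                     "post": post, "log_nox": log_nox})

df = pd.DataFrame(rows)
did = smf.ols("log_nox ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["facility"]})
print(did.params["treated:post"])                      # DID estimate, close to -0.10 here
```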

Keywords: air pollution, cap-and-trade, emissions trading, environmental justice

Procedia PDF Downloads 157