Search results for: performance prism model

19599 Analyzing the Impact of Board Diversity on Firm Performance: Case Study of the Nigerian Banking Sector

Authors: Data Collete Bob-Manuel

Abstract:

In light of the 2007-2008 global financial crisis, various factors, including board diversity, succession planning and board evaluation, have been identified as essential ingredients in ensuring board effectiveness. The composition and structure of the board are of utmost importance in assessing a board’s ability and success in achieving its objectives. Following corporate frauds and accounting scandals such as Enron, WorldCom, Parmalat, Oceanic Bank Nigeria and AfriBank Nigeria, there has been a notable amount of research on the effectiveness of the board of directors in the corporate governance of firms. The need for an effective board cannot be overemphasized, as it results in a more stable and thriving company. There has been an overarching need in the business world for a more diverse workforce and board of directors. Big corporations like Texaco, Ford Motors and DuPont have cited diversity at every level of the workforce, including the board of directors, as a vital element for a company to succeed. Developed countries are also pushing companies to have more diverse boards; Norway, for instance, has imposed a 60:40 board ratio on all companies. In West Africa, and particularly Nigeria, the topic of diversity has received little attention, as most studies have focused on the gender aspect of diversity, which they found to have a negative impact on firm performance. This paper examines four variables of diversity, namely age, ethnicity, gender and skills, to weigh the positive or negative impact each variable has on firm performance, based on evidence from the Nigerian financial sector. Information for this study will be gathered from financial statements and annual reports, enabling the researcher to reflect on past years and see what is being done differently today. The findings of this study will also help the researcher develop a working definition of ethnicity for the West African context, where the issue of “tribe” is a sensitive topic.

Keywords: Board of Directors, Board Diversity, Firm Performance, Nigeria

Procedia PDF Downloads 365
19598 Determining a Sustainability Business Model Using Materiality Matrices in an Electric Bus Factory

Authors: Ozcan Yavas, Berrak Erol Nalbur, Sermin Gunarslan

Abstract:

A materiality matrix is a tool that organizations use to prioritize their activities and adapt to the sustainability requirements that have grown in recent years. For a materiality analysis to carry a business model to the sustainability business model stage, it must be conducted with all partners across the raw material, supply, production, product, and end-of-life stages. Within the scope of this study, the materiality matrix was used to transform the business model into a sustainability business model and to create a sustainability roadmap in a factory producing electric buses. Such a matrix sets out the roadmap needed for all stakeholders to participate in the process and to act together under the cradle-to-cradle approach of sustainability roadmaps, especially in sectors that produce sustainable products, such as the electric vehicle sector. A Global Reporting Initiative analysis was used in the study, which involved 1150 stakeholders; 43 questions were asked under the main headings of 'Legal Compliance Level,' 'Environmental Strategies,' 'Risk Management Activities,' 'Impact of Sustainability Activities on Products and Services,' 'Corporate Culture,' 'Responsible and Profitable Business Model Practices' and 'Achievements in Leading the Sector,' grouped as Economic, Governance, Environment, Social and Other. Based on the results, five first-priority issues and four second-priority issues were earmarked for inclusion in the organization's sustainability strategies in the short and medium term. When the short-term activities are evaluated in terms of sustainability and environmental risk management, it is seen that they are still limited to legal compliance (60%) and to individual initiatives in line with the strategies (20%). At the same time, the stakeholders expect the company to integrate sustainability activities into its business model within five years (35%) and to carry out projects that make it the first company that comes to mind for leading the sector (20%). Another result obtained within the scope of the study is the identification of barriers to implementation. The most critical obstacles identified by stakeholders, alongside climate change and environmental impacts, are financial shortfalls and the lack of infrastructure for disseminating sustainable products. Such studies are critical for the electric vehicle sector's transition to sustainable business models in pursuit of the EU Green Deal and CBAM targets.

Keywords: sustainability business model, materiality matrix, electric bus, carbon neutrality, sustainability management

Procedia PDF Downloads 34
19597 Modelling the Yield Stress of Magnetorheological Fluids

Authors: Hesam Khajehsaeid, Naeimeh Alagheband

Abstract:

Magnetorheological fluids (MRF) are a category of smart materials. They exhibit a reversible change from a Newtonian-like fluid to a semi-solid state upon application of an external magnetic field. In contrast to ordinary fluids, MRFs can tolerate shear stresses up to a threshold value called the yield stress, which strongly depends on the strength of the magnetic field, the magnetic particle volume fraction, and temperature. Even beyond the yield point, a magnetic field can increase MR fluid viscosity by several orders of magnitude. As yield stress is an important parameter in the design of MR devices, in this work the effects of magnetic field intensity and magnetic particle concentration on the yield stress of MRFs are investigated. Four MRF samples with different particle concentrations are developed and tested through flow-ramp analysis to obtain the flow curves over a range of magnetic field intensities and shear rates. The viscosity of the fluids is determined from the flow curves. The results are then used to determine the yield stresses by means of the steady stress sweep method. The yield stresses are subsequently described by means of a modified form of the dipole model as well as by empirical models. The exponential distribution function is used to describe the orientation of particle chains in the dipole model under the action of the external magnetic field. Moreover, the modified dipole model results in a reasonable distribution of chains compared to previous similar models.
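
For illustration, one common way to extract a yield stress from a measured flow curve is a Bingham-plastic fit; the sketch below uses made-up data at a single field strength and is only a stand-in for the steady stress sweep and modified dipole model actually used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def bingham(gamma_dot, tau_y, eta_p):
    # Bingham-plastic law: shear stress = yield stress + plastic viscosity * shear rate
    return tau_y + eta_p * gamma_dot

# Illustrative flow-curve points (shear rate in 1/s, shear stress in Pa), not measured values
gamma_dot = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 300.0])
tau = np.array([520.0, 540.0, 560.0, 690.0, 830.0, 1480.0])

(tau_y, eta_p), _ = curve_fit(bingham, gamma_dot, tau, p0=(500.0, 1.0))
print(f"extrapolated yield stress ~ {tau_y:.0f} Pa, plastic viscosity ~ {eta_p:.2f} Pa.s")
```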

Keywords: magnetorheological fluids, yield stress, particles concentration, dipole model

Procedia PDF Downloads 163
19596 Improving Heat Pipe Thermal Performance in HVAC Systems Using CFD Modeling

Authors: H. Shokouhmand, A. Ghanami

Abstract:

A heat pipe is a simple heat transfer device that combines conduction and phase change phenomena to control heat transfer without any need for an external power source. At the hot surface of the heat pipe, the liquid phase absorbs heat and changes to the vapor phase. The vapor flows to the condenser region and, as it loses heat, changes back to the liquid phase. Due to gravitational force, the liquid then flows back to the evaporator section. In HVAC systems, the working fluid is chosen based on the operating temperature. The heat pipe has a significant capability to reduce humidity in HVAC systems. Any HVAC system that uses a heater, humidifier, or dryer is a suitable candidate for the utilization of heat pipes. Generally, heat pipes have three main sections: condenser, adiabatic region, and evaporator. Investigating and optimizing heat pipe operation in order to increase efficiency is therefore crucial. In the present article, a parametric study is performed to improve heat pipe performance; the heat transfer capacity of the heat pipe is investigated with respect to geometrical and confining parameters. For a better observation of heat pipe operation in HVAC systems, a CFD simulation using the Eulerian-Eulerian multiphase approach is also performed. The results show that the heat transfer capacity of the heat pipe is higher with water as the working fluid at an operating temperature of 340 K. It is also shown that the vertical orientation of the heat pipe enhances its heat transfer capacity.

Keywords: heat pipe, HVAC system, grooved heat pipe, heat pipe limits

Procedia PDF Downloads 422
19595 A Constrained Model Predictive Control Scheme for Simultaneous Control of Temperature and Hygrometry in Greenhouses

Authors: Ayoub Moufid, Najib Bennis, Soumia El Hani

Abstract:

The objective of greenhouse climate control is to improve crop development and to minimize production costs. A greenhouse is a system open to the external environment, and the challenge is to regulate the internal climate despite strong meteorological disturbances. The internal state of the greenhouse considered in this work is defined by two relevant and coupled variables, namely inside temperature and hygrometry. These two variables are chosen to describe the internal state of greenhouses because of their importance in plant development and their sensitivity to external climatic conditions, the source of weather disturbances. A multivariable model is proposed and validated by considering the greenhouse as a black-box system; the least squares method is applied to parameter identification based on collected experimental measurements. To regulate the internal climate, we propose a Model Predictive Control (MPC) scheme. The controller takes into account the measured meteorological disturbances and the physical and operational constraints on the control and state variables. A successful feasibility study of the proposed controller is presented, and simulation results show good performance despite the strong interaction between internal and external variables and the strong external meteorological disturbances. The inside temperature and hygrometry track the desired trajectories closely. A comparison with an On/Off controller applied to the same greenhouse confirms the efficiency of the MPC approach for inside climate control.
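
As a rough illustration of the identification step described above, the sketch below fits a discrete-time black-box model x[k+1] = A x[k] + B u[k] + E d[k] by least squares, where x holds inside temperature and hygrometry, u the actuator commands, and d the measured weather disturbances; the variable names and model structure are assumptions, not the paper's exact formulation.

```python
import numpy as np

def identify_greenhouse_model(X, U, D):
    """Least-squares fit of x[k+1] = A x[k] + B u[k] + E d[k].

    X : (N, 2) inside temperature and hygrometry samples
    U : (N, m) control inputs, D : (N, p) measured weather disturbances.
    Returns the A, B, E matrices of the discrete-time multivariable model.
    """
    Z = np.hstack([X[:-1], U[:-1], D[:-1]])            # regressors at step k
    Theta, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)  # targets at step k+1
    Theta = Theta.T
    n, m = X.shape[1], U.shape[1]
    A, B, E = Theta[:, :n], Theta[:, n:n + m], Theta[:, n + m:]
    return A, B, E
```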

Keywords: climate control, constraints, identification, greenhouse, model predictive control, optimization

Procedia PDF Downloads 194
19594 Construction and Performance of Nanocomposite-Based Electrochemical Biosensor

Authors: Jianfang Wang, Xianzhe Chen, Zhuoliang Liu, Cheng-An Tao, Yujiao Li

Abstract:

Organophosphorus (OP) pesticides used as insecticides are widely applied in agricultural pest control and in household and storage pest control. The detection of pesticides calls for simpler and more efficient methods, and one of the best options is the electrochemical biosensor. In this paper, an electrochemical enzyme biosensor based on acetylcholine esterase (AChE) was constructed, and its sensing properties and sensing mechanisms were studied. Reduced graphene oxide-polydopamine complexes (RGO-PDA), gold nanoparticles (AuNPs) and silver nanoparticles (AgNPs) were first prepared and composited with AChE and chitosan (CS), then fixed on the glassy carbon electrode (GCE) surface by a one-pot method to construct the biosensor GCE/RGO-PDA-AuNPs-AgNPs-AChE-CS. The results show that graphene oxide (GO) can be reduced by dopamine (DA) and is well dispersed in the RGO-PDA complexes. The composites have a synergistic catalytic effect and improve the surface resistance of the GCE. The biosensor can selectively detect acetylcholine (ACh) and OP pesticides with a good linear range and high sensitivity. The performance of the biosensor is affected by the ratio and the manner of adding AChE, as well as by the addition of AuNPs and AChE. The biosensor achieves a detection limit of 2.4 ng/L for methyl parathion and a wide linear detection range of 0.02 ng/L to 80 ng/L, and it exhibits excellent stability, good anti-interference ability, and excellent preservation performance, indicating that the sensor has practical value.

Keywords: acetylcholine esterase, electrochemical biosensor, nanoparticles, organophosphates, reduced graphene oxide

Procedia PDF Downloads 97
19593 Reduced General Dispersion Model in Cylindrical Coordinates and Isotope Transient Kinetic Analysis in Laminar Flow

Authors: Masood Otarod, Ronald M. Supkowski

Abstract:

This abstract discusses a method that reduces the general dispersion model in cylindrical coordinates to a second-order linear ordinary differential equation with constant coefficients, so that it can be used to conduct kinetic studies in packed-bed tubular catalytic reactors over a broad range of Reynolds numbers. The model was tested by 13CO isotope transient tracing of CO adsorption in the Boudouard reaction in a differential reactor, at an average Reynolds number of 0.2, over a Pd-Al2O3 catalyst. Detailed experimental results have provided evidence for the validity of the theoretical framing of the model, and the estimated parameters are consistent with the literature. The solution of the general dispersion model requires knowledge of the radial distribution of the axial velocity, which is not always known. Hence, up until now, the implementation of the dispersion model has been largely restricted to the plug-flow regime. However, ideal plug flow is impossible to achieve, and flow regimes approximating plug flow leave much room for debate as to the validity of the results. The reduction of the general dispersion model transpires as a result of the application of a factorization theorem. The factorization theorem is derived from the observation that a cross section of a catalytic bed consists of a solid phase across which the reaction takes place and a void or porous phase across which no significant measure of reaction occurs. The disparity in flow and the heterogeneity of the catalytic bed cause the concentration of reacting compounds to fluctuate radially. These variabilities signify the existence of radial positions at which the radial gradient of concentration is zero. Succinctly, the factorization theorem states that a concentration function of axial and radial coordinates in a catalytic bed is factorable as the product of the mean radial cup-mixing function and a contingent dimensionless function. The concentrations of adsorbed compounds are also factorable, since they are piecewise continuous functions and exhibit the same variability, but in the reverse order of the mobile-phase concentrations. Factorability is a property of packed beds that transforms the general dispersion model into an equation in terms of the measurable mean radial cup-mixing concentration of the mobile-phase compounds and the mean cross-sectional concentration of the adsorbed species. The reduced model does not require knowledge of the radial distribution of the axial velocity. Instead, it is characterized by new transport parameters, denoted Ωc, Ωa, and Ωr, which are respectively termed the convection coefficient cofactor, the axial dispersion coefficient cofactor, and the radial dispersion coefficient cofactor. These cofactors adjust the dispersion equation to compensate for the unavailability of the radial distribution of the axial velocity. Together with the rest of the kinetic parameters, they can be determined from experimental data via an optimization procedure. Our data showed that the estimated parameters Ωc, Ωa, and Ωr are monotonically correlated with the Reynolds number, as expected from the theoretical construct of the model. Computer-generated simulations of the methanation reaction on nickel provide additional support for the utility of the newly conceptualized dispersion model.
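
For reference, a commonly quoted form of the general dispersion model in cylindrical coordinates (axial convection plus axial and radial dispersion, with a reaction/adsorption sink) is written below; the paper's full formulation, including the adsorbed-phase balance and the factorization itself, is not reproduced here.

```latex
\frac{\partial C}{\partial t} + u(r)\,\frac{\partial C}{\partial z}
  = D_a \frac{\partial^{2} C}{\partial z^{2}}
  + \frac{D_r}{r}\,\frac{\partial}{\partial r}\!\left(r\,\frac{\partial C}{\partial r}\right)
  - R_{\mathrm{ads}}(C)
```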

Keywords: factorization, general dispersion model, isotope transient kinetic, partial differential equations

Procedia PDF Downloads 254
19592 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture

Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán

Abstract:

Time-sensitive services are the base of the cloud services industry. Keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling; however, reactive auto-scaling has few in-depth studies. This presentation shows a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks some parameters related to the transition between models. Our model uses queuing theory parameters to relate the transition between models. It associates MAPE-K-related times, the sampling frequency, the cooldown period, the number of requests that an instance can handle per unit of time, the number of incoming requests at a time instant, and a function that describes the acceleration in the service's ability to handle more requests. This model is later used as a solution to horizontally auto-scale time-sensitive services composed of microservices, reevaluating the model’s parameters periodically to allocate resources. The solution requires limiting the acceleration of the growth in the number of incoming requests to keep a constrained response time; business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The exposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request relation ratio. A common request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds. Both microservices contain a load balancer that assigns requests to the least loaded instance and preemptively discards requests if they will not finish in time, to prevent resource saturation. When load decreases, instances with lower load are kept in a backlog where no more requests are assigned. If the load grows and an instance in the backlog is required, it returns to the running state, but if it finishes the computation of all its requests and is no longer required, it is permanently deallocated. A few load patterns are required to represent the worst-case scenario for reactive systems; the following scenarios test response times, resource consumption and business costs. The first scenario is a burst-load scenario: all methodologies will discard requests if the rapidness of the burst is high enough, so this scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required. The third scenario contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add a different number of instances can handle the load with less business cost. The exposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics.
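
As a minimal illustration of the Plan step in such a MAPE-K loop, the sketch below sizes the number of instances from the sampled arrival rate, the per-instance service rate, and an assumed growth headroom covering the sampling and cooldown period; the formula and parameter names are simplifications, not the queueing-based model developed in the paper.

```python
import math

def required_instances(arrival_rate, service_rate_per_instance,
                       target_utilisation=0.7, headroom=1.2):
    """Reactive sizing step (Analyse/Plan phases of a MAPE-K loop).

    arrival_rate              - incoming requests per second observed this sample
    service_rate_per_instance - requests per second one instance can finish in time
    target_utilisation        - saturation kept below 1.0 to protect response time
    headroom                  - assumed load growth factor until the next resize
    """
    demand = arrival_rate * headroom
    capacity_per_instance = service_rate_per_instance * target_utilisation
    return max(1, math.ceil(demand / capacity_per_instance))
```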

Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing

Procedia PDF Downloads 79
19591 Effects Induced by Dispersion-Promoting Cylinder on Fiber-Concentration Distributions in Pulp Suspension Flows

Authors: M. Sumida, T. Fujimoto

Abstract:

Fiber-concentration distributions in pulp liquid flows behind dispersion promoters were experimentally investigated to explore the feasibility of improving the operational performance of hydraulic headboxes in papermaking machines. The research was performed as a basic test conducted on a screen-type model comprising a circular cylinder inserted in a channel. Tests were performed using pulp liquid with fiber concentrations ranging from 0.3 to 1.0 wt% at flow velocities of 0.016-0.74 m/s. Fiber-concentration distributions were measured using the transmitted light attenuation method. The test results were analyzed, and the influence of the flow velocity on the wake characteristics behind the cylinder was investigated with reference to the findings of our preceding studies on pulp liquid flows in straight channels. Changes in the fiber-concentration distribution along the flow direction were observed to be substantially large in the section from the cylinder to four times its diameter downstream of its centerline. The findings of this study provide useful information for the development of hydraulic headboxes.

Keywords: dispersion promoter, fiber-concentration distribution, hydraulic headbox, pulp liquid flow

Procedia PDF Downloads 326
19590 Electricity Production from Vermicompost Liquid Using Microbial Fuel Cell

Authors: Pratthana Ammaraphitak, Piyachon Ketsuwan, Rattapoom Prommana

Abstract:

Electricity production from vermicompost liquid was investigated in microbial fuel cells (MFCs). The aim of this study was to determine the performance of vermicompost liquid as a biocatalyst for electricity production in MFCs. Chemical and physical parameters of the vermicompost liquid, such as total nitrogen, ammonia-nitrogen, nitrate, nitrite, total phosphorus, potassium, organic matter, C:N ratio, pH, and electrical conductivity, were studied in the MFCs. The MFCs were operated in open-circuit mode for 7 days. The maximum open circuit voltage (OCV) was 0.45 V. A maximum power density of 5.29 ± 0.75 W/m², corresponding to a current density of 0.0242 ± 0.0017 A/m², was achieved with the 1000 Ω load on day 2. Vermicompost liquid thus shows the potential to generate electricity from organic waste.

Keywords: vermicompost liquid, microbial fuel cell, nutrient, electricity production

Procedia PDF Downloads 166
19589 Applying Multiplicative Weight Update to Skin Cancer Classifiers

Authors: Animish Jain

Abstract:

This study deals with using Multiplicative Weight Update within artificial intelligence and machine learning to create models that can diagnose skin cancer from microscopic images of cancer samples. The multiplicative weight update method is used to combine the predictions of multiple models in an attempt to obtain more accurate results. Logistic Regression, Convolutional Neural Network (CNN), and Support Vector Machine Classifier (SVMC) models are employed within the Multiplicative Weight Update system. These models are trained on pictures of skin cancer from the ISIC Archive to look for patterns that label unseen scans as either benign or malignant. The models are combined in a multiplicative weight update algorithm, which takes into account the precision and accuracy of each model over successive guesses to assign weights to their predictions. These guesses and weights are then analyzed together to try to obtain the correct predictions. The research hypothesis stated that there would be a significant difference in accuracy between the three models and the Multiplicative Weight Update system. The SVMC model had an accuracy of 77.88%, the CNN model 85.30%, and the Logistic Regression model 79.09%. Using Multiplicative Weight Update, the algorithm achieved an accuracy of 72.27%. The final conclusion was that there was indeed a significant difference in accuracy between the three models and the Multiplicative Weight Update system, and that a CNN model would be a better option for this problem than a Multiplicative Weight Update system. This may be because Multiplicative Weight Update is not effective in a binary setting where there are only two possible classifications. In a categorical setting with multiple classes and groupings, a Multiplicative Weight Update system might become more proficient, as it takes into account the strengths of multiple different models to classify images into multiple categories rather than only two, as shown in this study. This experimentation and computer science project can help to create better algorithms and models for the future of artificial intelligence in the medical imaging field.
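
A minimal sketch of the weighted-majority form of Multiplicative Weight Update over a panel of binary classifiers is given below; the penalty factor and the way the expert predictions are obtained are assumptions, not the exact aggregation used in the study.

```python
import numpy as np

def weighted_majority(expert_preds, labels, eta=0.3):
    """Multiplicative Weight Update over a panel of binary classifiers.

    expert_preds: (n_experts, n_samples) array of 0/1 predictions.
    labels:       (n_samples,) array of true 0/1 labels.
    Returns the combined predictions and the final expert weights.
    """
    n_experts, n_samples = expert_preds.shape
    weights = np.ones(n_experts)               # start with equal trust in every model
    combined = np.empty(n_samples, dtype=int)
    for t in range(n_samples):
        votes = expert_preds[:, t]
        # weighted vote: predict 1 if the weighted mass for class 1 dominates
        combined[t] = int(weights @ votes >= weights @ (1 - votes))
        # penalise the experts that were wrong on this sample
        wrong = votes != labels[t]
        weights[wrong] *= (1.0 - eta)
    return combined, weights

# Hypothetical usage: preds = np.vstack([svm_pred, cnn_pred, logreg_pred])
# combined, w = weighted_majority(preds, y_true)
```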

Keywords: artificial intelligence, machine learning, multiplicative weight update, skin cancer

Procedia PDF Downloads 60
19586 Study of a Lean Premixed Combustor: A Thermo-Acoustic Analysis

Authors: Minoo Ghasemzadeh, Rouzbeh Riazi, Shidvash Vakilipour, Alireza Ramezani

Abstract:

In this study, the thermo-acoustic oscillations of a lean premixed combustor have been investigated, and a one-dimensional code was developed for this purpose. The linearized equations of motion are solved for perturbations with time dependence e^(iωt). Two flame models are considered in this paper, and the effects of the mean flow and the boundary conditions are also investigated. After manipulating the flame heat release equation together with the equations of flow perturbation within the main components of the combustor model (i.e., plenum, premixed duct, and combustion chamber), and by imposing proper boundary conditions between the components of the model, a system of eight homogeneous equations is obtained. This simplification of the main components of the combustor model is convenient, since low-frequency acoustic waves are not affected by bends. Moreover, some elements in the combustor are smaller than the wavelength of the propagated acoustic perturbations. A convection time is also assumed to characterize the time required for the acoustic velocity fluctuations to travel from the point of injection to the location of the flame front in the combustion chamber. The influence of an extended flame model on the acoustic frequencies of the combustor was also investigated by assuming that the flame speed, as a function of the equivalence ratio perturbation, affects the rate of flame heat release. The abovementioned system of equations has an associated eigenvalue equation with complex roots. The sign of the imaginary part of these roots determines whether the disturbances grow or decay, and the real part gives the frequency of the modes. The results show reasonable agreement between the dominant frequencies predicted by the present model and those calculated in previous related studies.

Keywords: combustion instability, dominant frequencies, flame speed, premixed combustor

Procedia PDF Downloads 369
19587 Phenomena-Based Approach for Automated Generation of Process Options and Process Models

Authors: Parminder Kaur Heer, Alexei Lapkin

Abstract:

Due to the global challenges of increased competition and the demand for more sustainable products and processes, there is rising pressure on industry to develop innovative processes. Through Process Intensification (PI), existing and new processes may attain higher efficiency. However, very few PI options are generally considered, because processes are typically analysed at the unit operation level, thus limiting the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. The different levels at which PI can be achieved are the unit operation, functional, and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all the intensification options can be described by their enhancement. The objective of the current work is therefore the generation of numerous process alternatives based on phenomena and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is decomposed into functions, viz. reaction, separation, etc., and these functions are further broken down into the phenomena required to perform them. For example, separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena, which can overcome the difficulties or drawbacks of the current process or can enhance its effectiveness, are added to the list. For instance, a catalyst separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense, and hence screening is carried out to discard the combinations that are meaningless; for example, phase change phenomena need the co-presence of energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e. it might perform reaction, or reaction with separation. The combinations are then allotted to the functions needed for the process. This creates a series of options for carrying out each function, and combining these options for the different functions in the process leads to the generation of a superstructure of process options. These process options, which are formed by a list of phenomena for each function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to obtain the process model. The most promising process options are then chosen subject to a performance criterion, for example product purity, or via a multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the higher product yield. The current methodology can identify, produce and evaluate process intensification options from which the optimal process can be determined. It can be applied to any chemical or biochemical process because of its generic nature.
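
The combination-and-screening step can be pictured with a toy sketch like the one below; the phenomena list and the single screening rule (taken from the example in the abstract) are placeholders for the much richer knowledge base used in the actual methodology.

```python
from itertools import combinations

# Illustrative phenomena list; the real library is process-specific and far larger.
phenomena = ["reaction", "vapour-liquid equilibrium", "liquid-liquid equilibrium",
             "phase change", "energy transfer", "mixing"]

def feasible(combo):
    # Screening rule quoted in the abstract: phase change needs co-present energy transfer.
    if "phase change" in combo and "energy transfer" not in combo:
        return False
    return True

options = [c for n in range(2, len(phenomena) + 1)
             for c in combinations(phenomena, n) if feasible(c)]
print(len(options), "screened phenomena combinations")
```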

Keywords: phenomena, process intensification, process models, process options

Procedia PDF Downloads 220
19585 Continuous-Time Analysis and Performance Assessment for Digital Control of High-Frequency Switching Synchronous DC-DC Converter

Authors: Rihab Hamdi, Amel Hadri Hamida, Ouafae Bennis, Sakina Zerouali

Abstract:

This paper presents a performance analysis and robustness assessment of a digitally controlled DC-DC three-cell buck converter connected in parallel, operating in continuous conduction mode (CCM), and facing feeding parameter variations and load disturbances. The control strategy relies on continuous-time analysis with an averaged modeling technique for the high-frequency switching converter. The methodology is to carry out the complete design procedure, with regard to the existence of an instantaneous current operating point for designing the digital closed loop, in the same continuous-time domain. Moreover, the adopted approach includes a digital voltage control (DVC) technique that takes into account digital control delays and sampling effects, and aims at improving efficiency and dynamic response while preventing generally undesired phenomena. The results obtained under load changes, input changes, and reference changes clearly demonstrate the excellent dynamic response of the proposed technique, which also provides stability in all operating conditions, with fast and smooth tracking of the specified output voltage. Simulation studies in the MATLAB/Simulink environment are performed to verify the concept.
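
As a loose illustration of a digital voltage control loop that acknowledges the one-cycle computation delay mentioned above, the sketch below runs one sampling step of a discrete PI regulator; the gains, sampling period and saturation limits are arbitrary placeholders and do not reflect the converter designed in the paper.

```python
def dvc_step(v_ref, v_meas, state, kp=0.05, ki=40.0, ts=5e-6):
    """One sampling step of a discrete PI voltage loop with a one-cycle delay.

    state = (integrator value, duty cycle computed at the previous step).
    Returns the duty cycle to apply now and the updated state.
    """
    integ, duty_prev = state
    err = v_ref - v_meas
    integ += ki * ts * err                           # forward-Euler integrator
    duty_new = min(max(kp * err + integ, 0.0), 1.0)  # saturate the duty cycle
    # The duty computed now is only applied at the next cycle (computation delay).
    return duty_prev, (integ, duty_new)
```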

Keywords: continuous conduction mode, digital control, parallel multi-cells converter, performance analysis, power electronics

Procedia PDF Downloads 137
19585 An Analysis of Economical Drivers and Technical Challenges for Large-Scale Biohydrogen Deployment

Authors: Rouzbeh Jafari, Joe Nava

Abstract:

This study includes learnings from engineering practice normally performed on large-scale biohydrogen processes. If scale-up is done properly, biohydrogen can be a reliable pathway for biowaste valorization. Most studies on biohydrogen process development have used model feedstocks to investigate process key performance indicators (KPIs). This study does not intend to compare different technologies with model feedstocks; rather, it reports economic drivers and technical challenges that help in developing a road map for expanding biohydrogen economy deployment in Canada. BBA is a consulting firm responsible for the design of hydrogen production projects. Through executing these projects, work has been performed to identify, register and mitigate the technical drawbacks of large-scale hydrogen production. Those learnings have been applied, in this study, to the biohydrogen process. Using data collected through a comprehensive literature review, a base case was established as a reference and several case studies were performed. Critical parameters of the process were identified, and through common engineering practice (process design, simulation, cost estimation, and life cycle assessment) the impact of these parameters on the commercialization risk matrix and on Class 5 cost estimates was reported. The process considered in this study is dark fermentation of food waste and woody biomass. To propose a reliable road map for developing a sustainable biohydrogen production process, the impact of the critical parameters on the end-to-end process was studied. These parameters were 1) feedstock composition, 2) feedstock pre-treatment, 3) unit operation selection, and 4) the multi-product concept. A few emerging technologies were also assessed, such as photo-fermentation, integrated dark fermentation, and the use of ultrasound and microwaves to break down the feedstock's complex matrix and increase the overall hydrogen yield. To properly report the impact of each parameter, the KPIs were identified as 1) hydrogen yield, 2) energy consumption, 3) secondary waste generated, 4) CO2 footprint, 5) product profile, 6) $/kg-H2, and 7) environmental impact. The feedstock is the main parameter defining the economic viability of biohydrogen production. Through parametric studies, it was found that biohydrogen production favors feedstocks with higher carbohydrate content. The feedstock composition was varied by increasing one critical element (such as carbohydrate) and monitoring the evolution of the KPIs. Different cases were studied with diverse feedstocks, such as energy crops, wastewater sludge, and lignocellulosic waste. The base case process was used to establish reference KPI values, and modifications such as pretreatment and feedstock mix-and-match were implemented to investigate the changes in the KPIs. The complexity of the feedstock is the main bottleneck to the successful commercial deployment of the biohydrogen process as a reliable pathway for waste valorization. Hydrogen yield, reaction kinetics, and the performance of key unit operations are strongly impacted as the feedstock composition fluctuates during the lifetime of the process or from one case to another. In this case, the multi-product concept becomes more reliable: the process is not designed to produce only one target product, such as biohydrogen, but will have two or more products (biohydrogen and biomethane or biochemicals). This new approach is being investigated by the BBA team, and the results will be shared in another scientific contribution.

Keywords: biohydrogen, process scale-up, economic evaluation, commercialization uncertainties, hydrogen economy

Procedia PDF Downloads 85
19584 Refurbishment Methods to Enhance Energy Efficiency of Brick Veneer Residential Buildings in Victoria

Authors: Hamid Reza Tabatabaiefar, Bita Mansoury, Mohammad Javad Khadivi Zand

Abstract:

The current energy and climate change impacts of the residential building sector in Australia are significant, and the Australian Government has therefore introduced more stringent regulations to improve building energy efficiency. In 2006, the Australian residential building sector consumed about 11% (around 440 petajoules) of the total primary energy, resulting in total greenhouse gas emissions of 9.65 million tonnes CO2-eq. Gas and electricity consumption of residential dwellings contributed 30% and 52%, respectively, of the total primary energy utilised by this sector. Around 40 percent of the total energy consumption of Australian buildings goes to heating and cooling, due to the low thermal performance of the buildings. The thermal performance of a building determines the amount of energy used for heating and cooling, which profoundly influences energy efficiency. Employing sustainable design principles and effective use of construction materials can play a crucial role in improving the thermal performance of new and existing buildings. Even though awareness has been raised, the design phase of refurbishment projects is often problematic. One of the issues concerning the refurbishment of residential buildings lies in the consumer market, where most work consists of moderate refurbishment jobs, often without the assistance of an architect and sometimes without a building permit. The result is an individual and often fragmented approach that leads to a lack of efficiency. Most importantly, the decisions taken in the early stages of the design determine the final result; however, the assessment of the environmental performance only happens at the end of the design process, as a reflection of the design outcome. Finally, studies have identified a lack of knowledge, experience and best-practice examples as barriers in refurbishment projects. In the context of sustainable development and the need to reduce energy demand, refurbishing the ageing residential building stock constitutes a necessary action. Not only does it provide huge potential for energy savings, but it is also economically and socially relevant. Although the advantages have been identified, the existing guidelines come in the form of general suggestions that fail to address the diversity of each project. As a result, there is a strong need to develop guidelines for optimised retrofitting of existing residential buildings in order to improve their energy performance. The current study investigates the effectiveness of different energy retrofitting techniques and examines the impact of employing those methods on the energy consumption of residential brick veneer buildings in Victoria, Australia. To propose different remedial solutions for improving the energy performance of residential brick veneer buildings, annual energy usage analyses were carried out in the simulation stage to determine the heating and cooling energy consumption of the buildings for the different proposed retrofitting techniques. The results of employing the different retrofitting methods were then examined and compared in order to identify the most efficient and cost-effective remedial solution for improving the energy performance of these buildings with respect to the climate conditions in Victoria and the construction materials of the studied benchmark building.

Keywords: brick veneer residential buildings, building energy efficiency, climate change impacts, cost effective remedial solution, energy performance, sustainable design principles

Procedia PDF Downloads 278
19583 Towards Innovation Performance among University Staff

Authors: Cheng Sim Quah, Sandra Phek Lin Sim

Abstract:

This study examined how individuals in their respective teams contributed to innovation performance, besides defining the term innovation from their own points of view. It also identified the factors that motivated university staff to contribute to innovation products. In addition, it examined whether there is a significant relationship between professional training level and length of service among university staff with respect to innovation, and to what extent the two variables contributed to innovative products. The significance of this study is that it reveals the strengths and weaknesses of university staff when contributing to innovation performance. Stratified random sampling was employed to determine the sample representing the population of lecturers in the study, involving 123 lecturers in one of the local universities in Malaysia. The data were analysed by categorizing the open-ended responses into themes and by using descriptive and inferential statistics for the quantitative data. This study revealed that two types of definition for the term “innovation” exist among the university staff, namely the creation of a new product or a new approach to doing things, and a value-added, creative way to upgrade or improve an existing process or service to make it more efficient. The study found that the most prominent factor that propels staff towards innovation is improving the product in order to benefit users, followed by self-satisfaction and recognition. This implies that staff in the organization view the creation of innovative products as a process of growth to fulfil the needs of others and also to realize their personal potential. The study also found a significant relationship between professional training level and innovation only for staff with a length of service of 4-6 years; the other length-of-service groups showed no significant relationship between professional training level and innovation. Moreover, the directional measures showed that the relationship between a length of service of 4-6 years and professional training level among the university staff is quite weak. This implies that good organizational management lies on the shoulders of the key leaders who light the path to be followed by the staff.

Keywords: innovation, length of service, performance, professional training level, motivation

Procedia PDF Downloads 304
19582 Analytical Terahertz Characterization of In0.53Ga0.47As Transistors and Homogenous Diodes

Authors: Abdelmadjid Mammeri, Fatima Zohra Mahi, Luca Varani, H. Marinchoi

Abstract:

We propose an analytical model for the admittance and noise calculations of the InGaAs transistor and diode. The development of the small-signal admittance takes into account the longitudinal and transverse electric fields through a pseudo-two-dimensional approximation of the Poisson equation. The frequency dependence of the small-signal admittance response is determined by the matrix relation between the total currents and the potentials at the gate and drain terminals. The noise is evaluated by using the real part of the transistor/diode admittance under a small-signal perturbation. The analytical results show that the admittance spectrum exhibits a series of resonant peaks corresponding to the excitation of plasma waves. The appearance of the resonances is discussed and analyzed as a function of the channel length and the temperature. The model can be used, on the one hand, to control the appearance of the plasma resonances and, on the other hand, to give significant information about the noise frequency dependence in the InGaAs transistor and diode.

Keywords: InGaAs transistors, InGaAs diode, admittance, resonant peaks, plasma waves, analytical model

Procedia PDF Downloads 294
19581 Nonlinear Finite Element Modeling of Reinforced Concrete Flat Plate-Inclined Column Connection

Authors: Rabab Allouzi, Amer Alkloub

Abstract:

As complex-shaped buildings become a popular trend among architects, this paper investigates the performance of the reinforced concrete flat plate-inclined column connection. Studies on inclined column and flat plate connections are scarce in comparison to those on conventional structures. The effect of the column angle of inclination on the punching shear strength is found to be significant and is studied herein. This paper presents a non-linear finite element based modeling approach to estimate the behavior of the RC flat plate-inclined column connection. Results from simulations of the RC flat plate-straight column connection show good agreement with the experimental response of specimens tested by other researchers. The model is further used to study the punching response of inclined columns over various ranges of inclination angles. The inclination angle can be included in the punching shear strength provisions of ACI 318-14 to account for the effect of column inclination.

Keywords: punching shear, non-linear finite element, inclined columns, reinforced concrete connection

Procedia PDF Downloads 230
19580 Self-Denigration in Doctoral Defense Sessions: Scale Development and Validation

Authors: Alireza Jalilifar, Nadia Mayahi

Abstract:

The dissertation defense as a complicated conflict-prone context entails the adoption of elegant interactional strategies, one of which is self-denigration. This study aimed to develop and validate a self-denigration model that fits the context of doctoral defense sessions in applied linguistics. Two focus group discussions provided the basis for developing this conceptual model, which assumed 10 functions for self-denigration, namely good manners, modesty, affability, altruism, assertiveness, diffidence, coercive self-deprecation, evasion, diplomacy, and flamboyance. These functions were used to design a 40-item questionnaire on the attitudes of applied linguists concerning self-denigration in defense sessions. The confirmatory factor analysis of the questionnaire indicated the predictive ability of the measurement model. The findings of this study suggest that self-denigration in doctoral defense sessions is the social representation of the participants’ values, ideas and practices adopted as a negotiation strategy and a conflict management policy for the purpose of establishing harmony and maintaining resilience. This study has implications for doctoral students and academics and illuminates further research on self-denigration in other contexts.

Keywords: academic discourse, politeness, self-denigration, grounded theory, dissertation defense

Procedia PDF Downloads 123
19579 Dual-Layer Microporous Layer of Gas Diffusion Layer for Proton Exchange Membrane Fuel Cells under Various RH Conditions

Authors: Grigoria Athanasaki, Veerarajan Vimala, A. M. Kannan, Louis Cindrella

Abstract:

Energy usage has increased over the years, leading to severe environmental impacts. Since the majority of energy is currently produced from fossil fuels, there is a global need for clean energy solutions. Proton Exchange Membrane Fuel Cells (PEMFCs) offer a very promising solution for transportation applications because of their solid configuration and low-temperature operation, which allows them to start quickly. One of the main components of PEMFCs is the Gas Diffusion Layer (GDL), which manages water and gas transport and has a direct influence on fuel cell performance. In this work, a novel dual-layer GDL with gradient porosity was prepared, using polyethylene glycol (PEG) as a pore former, to improve gas diffusion and water management in the system. The microporous layer (MPL) of the fabricated GDL consists of PUREBLACK carbon powder, sodium dodecyl sulfate as a surfactant, and 34 wt.% PTFE; the gradient porosity was created by applying one layer using 30 wt.% PEG on the carbon substrate, followed by a second layer without any pore former. The total carbon loading of the microporous layer is ~ 3 mg.cm-2. For the assembly of the catalyst layer, a Nafion membrane (Ion Power, Nafion Membrane NR211) and a Pt/C electrocatalyst (46.1 wt.%) were used. The catalyst ink was deposited on the membrane via a microspraying technique. The Pt loading is ~ 0.4 mg.cm-2, and the active area is 5 cm2. The sample was characterized ex situ via wetting angle measurement, Scanning Electron Microscopy (SEM), and Pore Size Distribution (PSD) analysis to evaluate its characteristics. Furthermore, for the performance evaluation, in situ characterization via fuel cell testing with H2/O2 and H2/air as reactants, under 50, 60, 80, and 100% relative humidity (RH), was carried out. The results were compared to a single-layer GDL fabricated with the same carbon powder and loading as the dual-layer GDL, and to a commercially available GDL with MPL (AvCarb2120). The findings reveal the highly hydrophobic character of the microporous layer for both PUREBLACK-based samples, while the commercial GDL demonstrates hydrophilic behavior. The dual-layer GDL shows high and stable fuel cell performance under all RH conditions, whereas the single-layer GDL shows a drop in performance at high RH in both oxygen and air, caused by catalyst flooding. The commercial GDL shows very low and unstable performance, possibly because of its hydrophilic character and thinner microporous layer. In conclusion, the dual-layer GDL with PEG appears to improve gas diffusion and water management in the fuel cell system. Due to its porosity increasing from the catalyst layer to the carbon substrate, it allows easier access of the reactant gases from the flow channels to the catalyst layer and more efficient water removal from the catalyst layer, leading to higher performance and stability.

Keywords: gas diffusion layer, microporous layer, proton exchange membrane fuel cells, relative humidity

Procedia PDF Downloads 112
19578 Cloud Computing in Data Mining: A Technical Survey

Authors: Ghaemi Reza, Abdollahi Hamid, Dashti Elham

Abstract:

Cloud computing poses a diversity of challenges for data mining operations, arising from the dynamic structure of data distribution as opposed to the typical database scenarios of conventional architectures. Due to the immense number of users seeking data on a daily basis, there are serious security concerns for cloud providers as well as for data providers who place their data in the cloud computing environment. Big data analytics uses compute-intensive data mining algorithms (Hidden Markov models, MapReduce parallel programming, the Mahout project, the Hadoop distributed file system, K-Means and K-Medoids, Apriori) that require efficient high-performance processors to produce timely results, and data mining algorithms are used to solve for or optimize the model parameters. The challenges the operation has to face are establishing successful transactions with the existing virtual machine environment and keeping the databases under control. Several factors have led to the shift from normal or centralized mining to distributed data mining. The approach is offered as SaaS, which uses multi-agent systems for implementing the different tasks of the system. There are still some open problems in data mining based on cloud computing, including the design and selection of data mining algorithms.

Keywords: cloud computing, data mining, computing models, cloud services

Procedia PDF Downloads 460
19577 The Use of Empirical Models to Estimate Soil Erosion in Arid Ecosystems and the Importance of Native Vegetation

Authors: Meshal M. Abdullah, Rusty A. Feagin, Layla Musawi

Abstract:

When humans mismanage arid landscapes, soil erosion can become a primary mechanism that leads to desertification. This study focuses on applying soil erosion models to a disturbed landscape in Umm Nigga, Kuwait, and identifying its predicted change under restoration plans. The northern portion of Umm Nigga, containing both coastal and desert ecosystems, falls within the boundaries of the Demilitarized Zone (DMZ) adjacent to Iraq and has been fenced off to restrict public access since 1994. The central objective of this project was to utilize GIS and remote sensing to compare the MPSIAC (Modified Pacific Southwest Inter-Agency Committee), EMP (Erosion Potential Method), and USLE (Universal Soil Loss Equation) soil erosion models and determine their applicability for arid regions such as Kuwait. Spatial analysis was used to develop the necessary datasets for factors such as soil characteristics, vegetation cover, runoff, climate, and topography. Results showed that the MPSIAC and EMP models produced a similar spatial distribution of erosion, though the MPSIAC had more variability. For the MPSIAC model, approximately 45% of the land surface ranged from moderate to high soil loss, while 35% ranged from moderate to high for the EMP model. The USLE model had contrasting results and a different spatial distribution of soil loss, with 25% of the area ranging from moderate to high erosion and 75% ranging from low to very low. We concluded that MPSIAC and EMP were the most suitable models for arid regions in general, with the MPSIAC model being the best. We then applied the MPSIAC model to compare the amount of soil loss between coastal and desert areas, and between fenced and unfenced sites. In the desert area, soil loss differed between fenced and unfenced sites. In the desert fenced sites, 88% of the surface was covered with vegetation and soil loss was very low, while at the desert unfenced sites vegetation cover was only 3% and soil loss correspondingly higher. In the coastal areas, the amount of soil loss was nearly the same between fenced and unfenced sites. These results imply that vegetation cover plays an important role in reducing soil erosion, and that fencing is much more important in desert ecosystems to protect against overgrazing. When applying the MPSIAC model predictively, we found that vegetation cover could be increased from 3% to 37% in unfenced areas, and soil erosion would then decrease by 39%. We conclude that the MPSIAC model is the best choice for predicting soil erosion in arid regions such as Kuwait.
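
For reference, the Universal Soil Loss Equation combines its factors multiplicatively, A = R K LS C P; the sketch below applies it per raster cell, with the factor rasters assumed to have been derived in GIS as described above.

```python
import numpy as np

def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation, A = R * K * LS * C * P.

    R: rainfall erosivity, K: soil erodibility, LS: slope length-steepness,
    C: cover management, P: support practice. Passing NumPy rasters of equal
    shape returns the predicted soil loss A for every grid cell.
    """
    return R * K * LS * C * P
```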

Keywords: soil erosion, GIS, Modified Pacific Southwest Inter-Agency Committee model (MPSIAC), Erosion Potential Method (EMP), Universal Soil Loss Equation (USLE)

Procedia PDF Downloads 282
19576 The Threshold Values of Soil Water Index for Landslides on Country Road No. 89

Authors: Ji-Yuan Lin, Yu-Ming Liou, Yi-Ting Chen, Chen-Syuan Lin

Abstract:

The soil water index obtained from a tank model is now commonly used in soil and sand (sediment) disaster alarm systems in Japan. Compared with the rainfall triggering indices used in Taiwan, the tank model makes it easier to predict the slope water content for large-scale landslides. Therefore, this study aims to estimate threshold values for large-scale landslides using the soil water index. Sixteen typhoon and heavy rainfall events were selected to establish the relationship between landslide events and the soil water index. Finally, threshold values for landslides on country road No. 89 are proposed. The results show that 95% of landslide cases occurred when the soil water index exceeded 125 mm, and 30% of the more serious slope failures occurred when the soil water index was greater than 250 mm. Besides, this study speculates that when the soil water index exceeds 250 mm and the difference between the second and third tanks is less than -25 mm, a large-scale landslide is more probable.
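
A minimal sketch of a serial three-tank model for computing a soil water index from hourly rainfall is given below; the outflow and infiltration coefficients and outlet heights are placeholders, not the calibrated values behind the thresholds reported for country road No. 89.

```python
def soil_water_index(rain_mm, out_coef=(0.10, 0.05, 0.01),
                     inf_coef=(0.12, 0.05, 0.0),
                     out_height=(15.0, 60.0, 15.0)):
    """Serial three-tank model: returns the soil water index time series (mm).

    The index is the total storage of the three tanks; the per-tank storages
    s[0..2] could also be logged to evaluate the second-minus-third tank
    difference mentioned in the abstract. All coefficients are illustrative.
    """
    s = [0.0, 0.0, 0.0]                       # storage in each tank (mm)
    swi = []
    for r in rain_mm:                         # hourly rainfall (mm)
        s[0] += r
        for i in range(3):
            runoff = out_coef[i] * max(s[i] - out_height[i], 0.0)
            infil = inf_coef[i] * s[i]        # percolation to the tank below
            s[i] -= runoff + infil
            if i < 2:
                s[i + 1] += infil
        swi.append(sum(s))
    return swi
```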

Keywords: soil water index, tank model, landslide, threshold values

Procedia PDF Downloads 371
19575 Ultra Reliable Communication: Availability Analysis in 5G Cellular Networks

Authors: Yosra Benchaabene, Noureddine Boujnah, Faouzi Zarai

Abstract:

To meet the growing demand from users, the fifth generation (5G) will continue to provide services at higher data rates, with higher carrier frequencies and wider bandwidths. As part of the 5G communication paradigm, Ultra Reliable Communication (URC) is envisaged as an important technology pillar for providing anywhere and anytime services to end users. URC is considered an important technology, which is why it has become an active research topic. In this work, we analyze the availability of a service in the space domain. We characterize spatially available areas, consisting of all locations that meet a performance requirement with confidence, and we define cell availability, system availability, individual user availability, and user-oriented system availability. A Poisson point process (PPP) and Voronoi tessellation are adopted to model the spatial characteristics of a cell deployment in heterogeneous networks. Numerical results are presented, highlighting the effect of different system parameters on the achievable link availability.
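
A Monte-Carlo sketch of this spatial setup is given below: base stations drawn from a Poisson point process on a unit square, users attached to their nearest station (i.e. their Voronoi cell), and a simple distance threshold standing in for the performance requirement; the densities and the threshold are illustrative assumptions, not the paper's actual availability criterion.

```python
import numpy as np

def user_oriented_availability(bs_density, ue_density, max_link_dist=0.05, seed=0):
    """Fraction of users whose serving (nearest) base station meets a distance requirement.

    bs_density / ue_density are the expected point counts on the unit square;
    max_link_dist is a proxy for the SINR/performance requirement.
    """
    rng = np.random.default_rng(seed)
    n_bs, n_ue = rng.poisson(bs_density), rng.poisson(ue_density)
    if n_bs == 0 or n_ue == 0:
        return 0.0
    bs = rng.uniform(0.0, 1.0, size=(n_bs, 2))
    ue = rng.uniform(0.0, 1.0, size=(n_ue, 2))
    d = np.linalg.norm(ue[:, None, :] - bs[None, :, :], axis=2)  # user-to-BS distances
    nearest = d.min(axis=1)                                      # serving-cell distance
    return float(np.mean(nearest <= max_link_dist))
```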

Keywords: URC, dependability and availability, space domain analysis, Poisson point process, Voronoi Tessellation

Procedia PDF Downloads 106
19574 The Impact of Hybrid Working Models on Employee Engagement

Authors: Sibylle Tellenbach, Julie Haddock-Millar, Francis Bidault

Abstract:

The aim of this research is to understand the extent to which hybrid working models have influenced employee engagement in the Swiss financial sector. The context for this research is the transition out of the pandemic and the changes that occurred between 2020 and 2023. Since the pandemic, many financial services companies have had to rethink their working model for office-based employees, as this group of employees has been able to experience a new way of working and, thus, greater freedom and flexibility. For a large number of companies, it was a huge change to shift from the traditional office-based model to a new hybrid working model. A heightened focus on employee engagement has become a necessity in order to understand and respond to the challenges presented by this shift in working model. This new way of working, partly office-based and partly virtual, has led to ambiguities about its impact on the engagement of hybrid teams. Therefore, the research question is: to what extent have hybrid working models influenced employee engagement? The methodological approach is a narrative inquiry with four similar functional teams within four Swiss financial companies. Semi-structured interviews will be conducted with middle managers and their individual team members. The findings will demonstrate whether, and to what extent, this shift in working model has influenced individual team members’ engagement. The contribution of this research is two-fold. First, the research makes a theoretical contribution, presenting evidence of the impact of hybrid working on individual team members’ engagement in a specific sector and context, enhancing current knowledge of the challenges of working model transitions. Second, this research will make a practice-based contribution, recommending ways to enhance the engagement of hybrid teams in a specific context. These recommendations may be applied to wider sectors and teams.

Keywords: employee engagement, hybrid teams, hybrid working models, Swiss financial sector, team engagement

Procedia PDF Downloads 77
19573 Psychometric Properties of the Secondary School Stressor Questionnaire among Adolescents at Five Secondary Schools

Authors: Muhamad Saiful Bahri Yusoff

Abstract:

This study aimed to evaluate the construct, convergent, and discriminant validity of the Secondary School Stressor Questionnaire (3SQ), as well as its internal consistency, among adolescents in Malaysian secondary schools. A cross-sectional study was conducted on 700 secondary school students in five secondary schools. Stratified random sampling was used to select schools and participants. Confirmatory factor analysis was performed with AMOS to examine construct, convergent, and discriminant validity. The reliability analysis was performed with SPSS to determine internal consistency. The results showed that the original six-factor model with 44 items failed to achieve acceptable values of the goodness-of-fit indices, suggesting poor model fit. The new five-factor model of the 3SQ with 22 items demonstrated an acceptable level of goodness-of-fit indices, signifying model fit. The overall Cronbach’s alpha value for the new version of the 3SQ was 0.93, while the five constructs ranged from 0.68 to 0.94. The composite reliability values of each construct ranged between 0.68 and 0.93, indicating a satisfactory to high level of convergent validity. Our study did not support the construct validity of the original version of the 3SQ. We found that the new version of the 3SQ showed more convincing evidence of validity and reliability to measure the stressors of adolescents. Continued research is needed to verify and maximize the psychometric credentials of the 3SQ across countries.
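
For readers unfamiliar with the internal consistency index reported above, the sketch below shows a minimal Cronbach’s alpha computation on a simulated item-response matrix; the data are synthetic, not 3SQ responses.

```python
# Minimal sketch of a Cronbach's alpha computation on simulated (not 3SQ) data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(700, 1))                          # shared "stress" factor
responses = latent + rng.normal(scale=0.7, size=(700, 22))  # 22 items, as in the new 3SQ
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```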

Keywords: stressors, adolescents, secondary school students, 3SQ, psychometric properties

Procedia PDF Downloads 377
19572 Extended Constraint Mask Based One-Bit Transform for Low-Complexity Fast Motion Estimation

Authors: Oğuzhan Urhan

Abstract:

In this paper, an improved motion estimation (ME) approach based on the weighted constrained one-bit transform is proposed for block-based ME employed in video encoders. Binary ME approaches utilize a low bit-depth representation of the original image frames together with a Boolean exclusive-OR based, hardware-efficient matching criterion to decrease the computational burden of the ME stage. The weighted constrained one-bit transform (WC-1BT) based approach improves the performance of conventional C-1BT based ME by employing a 2-bit depth constraint mask instead of a 1-bit depth mask. In this work, the range of the constraint mask is further extended to increase the ME performance of the WC-1BT approach. Experiments reveal that the proposed method provides better ME accuracy compared to similar existing ME methods in the literature.
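
As a minimal illustration of the binary matching idea underlying these methods, the sketch below binarizes two frames with a one-bit transform and uses an XOR-based cost to recover a known shift; a simple local-mean filter stands in for the multi-band-pass kernel used in the 1BT literature, and the constraint-mask weighting of C-1BT/WC-1BT is omitted.

```python
# Minimal sketch of one-bit-transform motion estimation with an XOR matching cost.
# The local-mean filter is a stand-in for the band-pass kernel used in the literature.
import numpy as np
from scipy.ndimage import uniform_filter

def one_bit_transform(frame: np.ndarray, size: int = 17) -> np.ndarray:
    """Binarize a frame: 1 where a pixel exceeds its locally filtered value."""
    return (frame >= uniform_filter(frame.astype(float), size=size)).astype(np.uint8)

def nnmp(block_a: np.ndarray, block_b: np.ndarray) -> int:
    """Number of non-matching points: exclusive-OR based matching criterion."""
    return int(np.count_nonzero(np.bitwise_xor(block_a, block_b)))

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(0, 3), axis=(0, 1))        # current frame shifted 3 px right
ref_b, cur_b = one_bit_transform(ref), one_bit_transform(cur)

# Exhaustive search over a small window; the best candidate recovers the shift.
costs = {(dy, dx): nnmp(cur_b[16:32, 16:32], ref_b[16 + dy:32 + dy, 16 + dx:32 + dx])
         for dy in range(-4, 5) for dx in range(-4, 5)}
print("Estimated motion vector:", min(costs, key=costs.get))
# -> (0, -3): the content moved right, so the match lies 3 px to the left in the reference
```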

Keywords: fast motion estimation, low-complexity motion estimation, video coding

Procedia PDF Downloads 305
19571 The Curvature of Bending Analysis and Motion of Soft Robotic Fingers by Full 3D Printing with MC-Cells Technique for Hand Rehabilitation

Authors: Chaiyawat Musikapan, Ratchatin Chancharoen, Saknan Bongsebandhu-Phubhakdi

Abstract:

In recent years, soft robotic fingers have been used to support patients who have survived neurological diseases that result in muscular disorders and neural damage, such as stroke and Parkinson’s disease, as well as inflammatory conditions such as De Quervain’s syndrome and trigger finger. Hand function is essential for manipulating objects in activities of daily living (ADL). In this work, we propose a soft actuator model manufactured by full 3D printing, without a molding process and using a single material. Furthermore, we designed the model using a multi cavitation cells (MC-Cells) technique. We then demonstrated the bending curvature, fluidic pressure, and force generated by the model for assisting finger flexion and hand grasping. The soft actuators were also characterized mathematically by solving for the bending curvature from the chord length and arc length. In addition, we used an adaptive push-button switch machine to measure the force in our experiment. We evaluated biomechanical efficiency by the range of motion (ROM) achieved at the metacarpophalangeal (MCP), proximal interphalangeal (PIP), and distal interphalangeal (DIP) joints. Finally, the model exhibited the fluidic pressure, force, and ROM required to assist finger flexion and hand grasping.
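
As an illustration of recovering the bending curvature from the chord and arc lengths mentioned above, the sketch below solves the constant-curvature relation chord = (2/kappa) * sin(kappa * arc_length / 2), an assumption commonly made for soft actuators; the sample lengths are illustrative, not measurements from the MC-Cells finger.

```python
# Minimal sketch: bending curvature from measured arc length and chord length,
# under a constant-curvature assumption. Sample lengths are illustrative only.
import numpy as np
from scipy.optimize import brentq

def curvature_from_chord(arc_len: float, chord_len: float) -> float:
    """Solve chord = (2/kappa) * sin(kappa * arc_len / 2) for curvature kappa (1/mm)."""
    if chord_len >= arc_len:                      # straight (or invalid) configuration
        return 0.0
    f = lambda k: (2.0 / k) * np.sin(k * arc_len / 2.0) - chord_len
    return brentq(f, 1e-9, 2.0 * np.pi / arc_len - 1e-9)  # kappa in (0, 2*pi/arc_len)

kappa = curvature_from_chord(arc_len=80.0, chord_len=60.0)  # lengths in mm
theta = np.degrees(kappa * 80.0)                            # total bending angle
print(f"curvature = {kappa:.4f} 1/mm, bending angle = {theta:.1f} deg")
```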

Keywords: biomechanics efficiency, curvature bending, hand functional assistance, multi cavitation cells (MC-Cells), range of motion (ROM)

Procedia PDF Downloads 242
19570 An Aesthetic Spatial Turn: AI and Aesthetics in the Physical, Psychological, and Symbolic Spaces of Brand Advertising

Authors: Yu Chen

Abstract:

In line with existing philosophical approaches, this research proposes a conceptual model with an innovative spatial vision and aesthetic principles for the application of Artificial Intelligence (AI) in brand advertising. The model first identifies the major constituencies in contemporary advertising on three spatial levels: physical, psychological, and symbolic. The model further incorporates the relationships among AI, aesthetics, branding, and advertising, and their interactions with the major actors in all spaces. It illustrates that AI may follow the aesthetic principles of beauty, elegance, and simplicity to reinforce brand identity and consistency in advertising, to collaborate with stakeholders, and to satisfy different advertising objectives on each level. It proposes that, with aesthetic guidelines, AI may help consumers immerse themselves in the physical, psychological, and symbolic advertising spaces and may help transform tangible advertising messages into meaningful brand symbols. Conceptually, the research illustrates that even though consumers’ engagement with a brand mostly begins with physical advertising and later moves to the psychological and symbolic levels, AI-assisted advertising should start from an understanding of the brand’s symbolic-psychological dimensions and consumers’ aesthetic preferences before the physical design, in order to resonate better. The limits of AI and future AI functions in advertising are discussed.

Keywords: AI, spatial, aesthetic, brand advertising

Procedia PDF Downloads 60