Search results for: outstanding engineers
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 867

147 Exploring Tweeters’ Concerns and Opinions about FIFA Arab Cup 2021: An Investigation Study

Authors: Md. Rafiul Biswas, Uzair Shah, Mohammad Alkayal, Zubair Shah, Othman Althawadi, Kamila Swart

Abstract:

Background: Social media platforms play a significant role in the mediated consumption of sport, especially for sport mega-events. The characteristics of Twitter data (e.g., user mentions, retweets, likes, #hashtags) bring users together in one arena and spread information widely and quickly. Analysis of Twitter data can reflect public attitudes, behavior, and sentiment toward a specific event on a larger scale than traditional surveys. Qatar will be the first Arab country to host the mega sports event FIFA World Cup 2022 (Q22), and it hosted the FIFA Arab Cup 2021 (FAC21) as a preparation for the mega-event. Objectives: This study investigates public sentiments and experiences about FAC21 and provides insights to enhance the public experience at the upcoming Q22. Method: FAC21-related tweets were downloaded using the Twitter Academic Research API between 01 October 2021 and 18 February 2022. Tweets were divided into three periods: before FAC21 (T1: 01 Oct 2021 to 29 Nov 2021), during FAC21 (T2: 30 Nov 2021 to 18 Dec 2021), and after FAC21 (T3: 19 Dec 2021 to 18 Feb 2022). The collected tweets were preprocessed in several steps to prepare them for analysis: (1) duplicates and retweets were removed, (2) emojis, punctuation, and stop words were removed, and (3) tweets were normalized using word lemmatization. Then, rule-based classification was applied to remove irrelevant tweets. Next, the twitter-XLM-roBERTa-base model from Hugging Face was applied to identify the sentiment in the tweets. Further, state-of-the-art BERTopic modeling will be applied to identify trending topics over the different periods. Results: We downloaded 8,669,875 tweets posted by 2,728,220 unique users in different languages. Of those, 819,813 unique English tweets were selected for this study. After splitting into the three periods, 541,630, 138,876, and 139,307 tweets were from T1, T2, and T3, respectively. Most of the sentiments were neutral, around 60% across the different periods.
However, the rate of negative sentiment (23%) was high compared to positive sentiment (18%). The analysis indicates negative concerns about FAC21; therefore, we will apply BERTopic to identify public concerns. This study will permit the investigation of people’s expectations before FAC21 (e.g., stadium, transportation, accommodation, visa, tickets, travel, and other facilities) and ascertain whether these were met. Moreover, it will highlight public expectations and concerns. The findings of this study can assist the event organizers in enhancing implementation plans for Q22. Furthermore, this study can support policymakers in aligning strategies and plans to achieve outstanding outcomes.
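The three preprocessing steps described above can be sketched as follows. This is a minimal illustration only: the stop-word list is a tiny stand-in, emoji removal is approximated by dropping non-ASCII characters, and the lemmatization step is left as a placeholder (the study's sentiment model, twitter-XLM-roBERTa-base, is not reproduced here).

```python
import string

STOP_WORDS = {"the", "a", "an", "is", "to", "in", "of", "and", "for"}  # illustrative subset

def preprocess(tweets):
    """Apply the three preprocessing steps described in the abstract."""
    seen, cleaned = set(), []
    for t in tweets:
        # (1) drop retweets and duplicates
        if t.startswith("RT ") or t in seen:
            continue
        seen.add(t)
        # (2) remove emojis (crudely, via non-ASCII), punctuation, and stop words
        t = t.encode("ascii", "ignore").decode()
        t = t.translate(str.maketrans("", "", string.punctuation))
        words = [w.lower() for w in t.split() if w.lower() not in STOP_WORDS]
        # (3) normalization placeholder (the study used word lemmatization)
        cleaned.append(" ".join(words))
    return cleaned
```

After this stage, the study applied rule-based filtering and then the transformer-based sentiment classifier.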

Keywords: FIFA Arab Cup, FIFA, Twitter, machine learning

Procedia PDF Downloads 81
146 From Makers to Maker Communities: A Survey on Turkish Makerspaces

Authors: Dogan Can Hatunoglu, Cengiz Hakan Gurkanlı, Hatice Merve Demirci

Abstract:

Today, the maker movement is regarded as a socio-cultural movement that represents designing and building objects for innovation. In these creativity-based activities, individuals from different backgrounds, such as inventors, programmers, craftspeople, DIY’ers, tinkerers, engineers, designers, and hackers, form a community and work collaboratively on mutual, open-source innovations. With the accessibility of recently emerged technologies and digital fabrication tools, the Maker Movement is continuously expanding its scope and has evolved into a new experience; for many, it is now considered a new kind of industrial revolution. In this new experience, makers create new things within their community by using new digital tools and technologies in places called makerspaces. In these makerspaces, activities of learning, experience sharing, and mentoring have evolved into maker events. Makers who share common interests in making benefit from makerspaces as meeting and working spots. In the literature, there are many sources on the Maker Movement, maker communities, and their activities, especially in the field of business administration. However, there is a gap in the literature regarding maker communities in Turkey. This research aims to be an information source on the dynamics and process design of “making” activities in Turkish maker communities, and it also aims to provide insights to sustain and enhance local maker communities in the future. Within this aim, semi-structured interviews were conducted with founders and facilitators from selected Turkish maker communities.
(1) The perception of the Maker Movement, makers, the activity of making, and the current situation of maker communities, (2) the motivations of individuals who participate in maker communities, and (3) the key drivers (collaboration and decision-making in design processes) of maker activities from the perspectives of the main actors (founders, facilitators) were all examined in depth with questions on personal experiences and perspectives. After a qualitative data analysis concerning the maker communities in Turkey, this research draws two main conclusions, regarding (1) the foundation of the Turkish maker mindset and (2) the emergence of self-sustaining communities.

Keywords: Maker Movement, maker community, makerspaces, open-source design, sustainability

Procedia PDF Downloads 130
145 Assessing the Material Determinants of Cavity Polariton Relaxation using Angle-Resolved Photoluminescence Excitation Spectroscopy

Authors: Elizabeth O. Odewale, Sachithra T. Wanasinghe, Aaron S. Rury

Abstract:

Cavity polaritons form when molecular excitons strongly couple to photons in carefully constructed optical cavities. These polaritons, which are hybrid light-matter states possessing a unique combination of photonic and excitonic properties, present the opportunity to manipulate the properties of various semiconductor materials. The systematic manipulation of materials through polariton formation could potentially improve the functionalities of many optoelectronic devices such as lasers, light-emitting diodes, photon-based quantum computers, and solar cells. However, the prospects of leveraging polariton formation for novel devices and device operation depend on more complete connections between the properties of molecular chromophores and the hybrid light-matter states they form, which remains an outstanding scientific goal. Specifically, for most optoelectronic applications, it is paramount to understand how polariton formation affects the spectra of light absorbed by molecules coupled strongly to cavity photons. An essential feature of a polariton state is its dispersive energy, which arises from the enhanced spatial delocalization of the polaritons relative to bare molecules. To leverage the spatial delocalization of cavity polaritons, angle-resolved photoluminescence excitation spectroscopy was employed to characterize light emission from the polaritonic states. Using lasers of appropriate energies, the polariton branches were resonantly excited to understand how molecular light absorption changes under different strong light-matter coupling conditions. Since an excited state has a finite lifetime, the polariton populated by the absorbed photon decays non-radiatively into lower-lying molecular states, from which radiative relaxation to the ground state occurs. The resulting fluorescence is collected across several angles of excitation incidence.
By modeling the behavior of the light emission observed from the lower-lying molecular state and combining this result with the output of angle-resolved transmission measurements, inferences are drawn about how the behavior of molecules changes when they form polaritons. These results show how the intrinsic properties of molecules, such as the excitonic lifetime, affect the rate at which the polaritonic states relax. While it is true that the lifetime of the photon mediates the rate of relaxation in a cavity, the results from this study provide evidence that the lifetime of the molecular exciton also limits the rate of polariton relaxation.
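The dispersive polariton energies probed by such angle-resolved measurements are commonly described by the two-coupled-oscillator model. As a sketch in standard notation (not taken from the abstract), with exciton energy E_exc, normal-incidence cavity energy E_cav(0), Rabi splitting ħΩ_R, and effective intracavity refractive index n_eff:

```latex
E_{cav}(\theta) = \frac{E_{cav}(0)}{\sqrt{1 - \sin^{2}\theta / n_{eff}^{2}}}, \qquad
E_{\pm}(\theta) = \frac{E_{cav}(\theta) + E_{exc}}{2}
\pm \frac{1}{2}\sqrt{\left[E_{cav}(\theta) - E_{exc}\right]^{2} + \left(\hbar\Omega_{R}\right)^{2}}
```

The lower (−) and upper (+) branches of this dispersion are the states resonantly excited at each angle of incidence in the experiment.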

Keywords: fluorescence, molecules in cavities, optical cavity, photoluminescence excitation, spectroscopy, strong coupling

Procedia PDF Downloads 55
144 Streamlining .NET Data Access: Leveraging JSON for Data Operations in .NET

Authors: Tyler T. Procko, Steve Collins

Abstract:

New features in .NET (6 and above) permit streamlined access to information residing in JSON-capable relational databases, such as SQL Server (2016 and above). Traditional methods of data access comparatively involve unnecessary steps that compromise system performance. This work posits that the established Object-Relational Mapping (ORM)-based methods of data access in applications and APIs result in common issues, e.g., object-relational impedance mismatch. Recent developments in C# and .NET Core, combined with a framework of modern SQL Server coding conventions, have allowed better technical solutions to the problem. As an amelioration, this work details the language features and coding conventions that enable this streamlined approach, resulting in an open-source .NET library implementation called Codeless Data Access (CODA). Canonical approaches rely on ad-hoc mapping code to perform type conversions between the client and the back-end database; with CODA, no mapping code is needed, as JSON is freely mapped to SQL and vice versa. CODA streamlines API data access by improving on three aspects of immediate concern to web developers, database engineers, and cybersecurity professionals: simplicity, speed, and security. Simplicity is engendered by cutting out the “middleman” steps, effectively making API data access a white box, whereas traditional methods are a black box. Speed is improved because fewer translational steps are taken, and security is improved as attack surfaces are minimized. An empirical evaluation of the speed of the CODA approach in comparison to ORM approaches is provided and demonstrates that the CODA approach is significantly faster. CODA presents substantial benefits for API developer workflows by simplifying data access, resulting in better speed and security and allowing developers to focus on productive development rather than being mired in data access code.
Future considerations include a generalization of the CODA method and extension outside of the .NET ecosystem to other programming languages.
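The "no mapping layer" idea can be illustrated outside the .NET ecosystem. The following is a hypothetical Python sketch (the actual CODA library targets .NET and SQL Server; here SQLite stands in): query results are serialized straight to JSON from the cursor metadata, with no per-type mapping code between rows and client objects.

```python
import json
import sqlite3

# In-memory database standing in for a JSON-capable relational back end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, role TEXT)")
conn.execute("INSERT INTO users (name, role) VALUES ('Ada', 'engineer')")

def query_as_json(conn, sql, params=()):
    """Return query results as JSON without ad-hoc mapping code:
    column names come from the cursor, values pass through untyped."""
    cur = conn.execute(sql, params)
    cols = [c[0] for c in cur.description]
    return json.dumps([dict(zip(cols, row)) for row in cur.fetchall()])

print(query_as_json(conn, "SELECT id, name, role FROM users"))
```

The reverse direction (JSON in, rows out) can be handled symmetrically, which is the essence of the streamlined approach described in the abstract.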

Keywords: API data access, database, JSON, .NET core, SQL server

Procedia PDF Downloads 52
143 Freight Forwarders’ Liability: A Need for Revival of Unidroit Draft Convention after Six Decades

Authors: Mojtaba Eshraghi Arani

Abstract:

Freight forwarders, often called the architects of transportation, play a vital role in supply chain management. The package of various services they provide has made the legal nature of freight forwarders very controversial, so that they might be qualified once as a principal or carrier and, on other occasions, as an agent of the shipper, as the case may be. They could even be involved in the transportation process as the agent of the shipping line, which makes the situation much more complicated. Courts in all countries have long had trouble distinguishing the “forwarder as agent” from the “forwarder as principal” (as is evident in the prominent case of Vastfame Camera Ltd v Birkart Globistics Ltd and Others, Hong Kong, 2005). In the case of a claim against the forwarder, it is not fully known which particular parameter the judge would use among the multiple, and sometimes contradictory, tests for determining the scope of the forwarder’s liability. In particular, every country has its own legal parameters for qualifying freight forwarders, completely different from those of other countries, as is the case in France in comparison with Germany and England. The unpredictability of the courts’ decisions in this regard has provided freight forwarders with the opportunity to impose any limitation or exception of liability while pretending to play the role of a principal, consequently making the cargo interests incur ever-increasing damage. The transportation industry needs to remove such uncertainty by unifying the national laws governing freight forwarders’ liability. Long ago, in 1967, the International Institute for the Unification of Private Law (UNIDROIT) prepared a draft convention called the “Draft Convention on Contract of Agency for Forwarding Agents Relating to International Carriage of Goods” (hereinafter the “UNIDROIT draft convention”).
The UNIDROIT draft convention provided a clear and certain framework for the liability of the freight forwarder in each capacity, as agent or carrier, but it was never transformed into a convention and was eventually consigned to oblivion. Today, nearly six decades later, the necessity of such a convention is readily apparent. However, one might reason that the same grounds for resistance, in particular by the forwarders’ association, FIATA, still exist, and thus that it is not logical to revive a forgotten draft convention after such a long period of time. It is argued in this article that the main reason for resisting the UNIDROIT draft convention in the past was the pending effort to develop the 1980 United Nations Convention on International Multimodal Transport of Goods. However, the latter convention failed to enter into force in due time, and there has been no new accession since 1996; as a result, the UNIDROIT draft convention should be revived and immediately submitted to the relevant diplomatic conference. A qualitative method based on the interpretation of collected data has been used in this manuscript. The source of the data is the analysis of international conventions and cases.

Keywords: freight forwarder, revival, agent, principal, UNIDROIT, draft convention

Procedia PDF Downloads 59
142 Automated Manual Handling Risk Assessments: Practitioner Experienced Determinants of Automated Risk Analysis and Reporting Being a Benefit or Distraction

Authors: S. Cowley, M. Lawrance, D. Bick, R. McCord

Abstract:

Technology that automates manual handling (musculoskeletal disorder, or MSD) risk assessments is increasingly available to ergonomists, engineers, and generalist health and safety practitioners alike. The risk assessment process is generally based on the use of wearable motion sensors that capture information about worker movements for real-time or post-hoc analysis. Traditionally, MSD risk assessment is undertaken with the assistance of a checklist, such as that from the SafeWork Australia code of practice, with the expert assessor observing the task and ideally engaging with the worker in a discussion about the detail. Automation enables the non-expert to complete assessments and does not always require the assessor to be present. This clearly has cost and time benefits for the practitioner, but is it an improvement on assessment by a human? Human risk assessments draw on the knowledge and expertise of the assessor but, like all risk assessments, are highly subjective. The complexity of the checklists and models used in the process can be off-putting and will sometimes lead to the assessment becoming the focus and the end rather than a means to an end; the focus on risk control is lost. Automated risk assessment handles the complexity of the assessment for the assessor and delivers a simple risk score that enables decision-making regarding risk control. Being machine-based, it is objective and will deliver the same result each time it assesses an identical task. However, the WHS professional needs to know that this emergent technology asks the right questions and delivers the right answers, and whether it improves the risk assessment process and results or simply distances the professional from the task and the worker. They need clarity as to whether automation of manual task risk analysis and reporting leads to risk control or merely to a focus on the worker.
Critically, they need evidence as to whether automation in this area of hazard management leads to better risk control or just a bigger collection of assessments. An examination of the practitioner-experienced determinants of automated manual task risk analysis and reporting being a benefit or a distraction will provide an understanding of emergent risk assessment technology, its use, and the things to consider when making decisions about adopting and applying these technologies.

Keywords: automated, manual handling, risk assessment, machine-based

Procedia PDF Downloads 103
141 Energy Efficient Retrofitting and Optimization of Dual Mixed Refrigerant Natural Gas Liquefaction Process

Authors: Muhammad Abdul Qyyum, Kinza Qadeer, Moonyong Lee

Abstract:

Globally, liquefied natural gas (LNG) has drawn interest as a green energy source in comparison with other fossil fuels, mainly because of its ease of transport and low carbon dioxide emissions. Demand for LNG is expected to grow steadily over the next few decades. In addition, because the demand for clean energy is increasing, LNG production facilities are expanding into new natural gas reserves across the globe. However, LNG production is an energy- and cost-intensive process because of the huge power requirements for compression and refrigeration. Therefore, one of the major challenges in the LNG industry is to improve the energy efficiency of existing LNG processes through economic and ecological strategies. Advancements in expansion devices such as the two-phase cryogenic expander (TPE) and the cryogenic hydraulic turbine (HT) were exploited for energy and cost benefits in natural gas liquefaction. Retrofitting the conventional Joule–Thomson (JT) valve with a TPE or HT has the potential to improve the energy efficiency of LNG processes. This research investigated the feasibility of retrofitting a dual mixed refrigerant (DMR) process by replacing the isenthalpic expansion with isentropic expansion for more energy-efficient LNG production. To take full advantage of the proposed retrofitting, the proposed DMR schemes were optimized using a Coggins optimization approach, which was implemented in the Microsoft Visual Studio (MVS) environment and linked to a rigorous HYSYS® model. The results showed that the energy required by the proposed isentropic-expansion-based DMR process could be reduced by up to 26.5% in comparison with the conventional isenthalpic DMR process using JT valves. Utilizing the recovered energy to boost the natural gas feed pressure could further improve the energy efficiency of the LNG process, by up to 34% compared to the base case.
This work will help process engineers overcome the challenges relating to the energy efficiency and safety concerns of LNG processes. Furthermore, the proposed retrofitting scheme can also be implemented to improve the energy efficiency of other isenthalpic-expansion-based, energy-intensive cryogenic processes.
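The Coggins approach referenced above belongs to the family of successive quadratic interpolation searches: a parabola is fitted through three points and the search jumps to its vertex. A minimal one-dimensional sketch follows (the study coupled the optimizer to a rigorous HYSYS® process model; here an arbitrary objective function stands in, and the bracketing/update logic is our simplified assumption).

```python
def coggins_search(f, x1, x3, tol=1e-6, max_iter=100):
    """Successive quadratic interpolation (Coggins-style) 1-D minimizer.
    Fits a parabola through three points and moves to its vertex."""
    x2 = 0.5 * (x1 + x3)
    for _ in range(max_iter):
        f1, f2, f3 = f(x1), f(x2), f(x3)
        denom = (x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3
        if abs(denom) < 1e-15:          # points are collinear; stop
            break
        # vertex of the interpolating parabola
        x_star = 0.5 * ((x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2
                        + (x1**2 - x2**2) * f3) / denom
        if abs(x_star - x2) < tol:
            return x_star
        # keep the best three of the four points as the new triple
        pts = sorted([x1, x2, x3, x_star], key=f)[:3]
        x1, x2, x3 = sorted(pts)
    return x2
```

For a quadratic objective the vertex is found exactly, which is why such searches converge rapidly near a smooth optimum like a refrigerant-composition energy minimum.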

Keywords: cryogenic liquid turbine, Coggins optimization, dual mixed refrigerant, energy efficient LNG process, two-phase expander

Procedia PDF Downloads 134
140 Modelling of Recovery and Application of Low-Grade Thermal Resources in the Mining and Mineral Processing Industry

Authors: S. McLean, J. A. Scott

Abstract:

This research focuses on improving sustainable operation through the recovery and reuse of waste heat in process water streams, an area in the mining industry that is often overlooked. There are significant advantages to this approach, including economic and environmental benefits. The smelting process in the mining industry presents an opportunity to recover waste heat and apply it to alternative uses, thereby enhancing the overall process. This applied research has been conducted at the Sudbury Integrated Nickel Operations smelter site, in particular on the water cooling towers. The aim was to determine and optimize methods for the appropriate recovery and subsequent upgrading of thermally low-grade heat lost from the water cooling towers, in a manner that makes it useful for repurposing in applications such as an acid plant. This would be valuable to mining companies as an opportunity to reduce process costs, as well as to decrease environmental impact and primary fuel usage. The waste heat from the cooling towers needs to be upgraded before it can be beneficially applied, as lower temperatures reduce the number of potential applications. Temperature and flow rate data were collected from the water cooling towers at an acid plant over two years. The research includes process control strategies and the development of a model capable of determining whether the proposed heat recovery technique is economically viable, as well as assessing the environmental impact of the reduction in net energy consumption by the process. Therefore, comprehensive cost and impact analyses are carried out to determine the best area of application for the recovered waste heat. This method will allow engineers to easily identify the value of the thermal resources available to them and determine whether a full feasibility study should be carried out.
The rapid scoping model developed will be applicable to any site that generates large amounts of waste heat. Results show that heat pumps are an economically viable solution for this application, allowing for reduced cost and CO₂ emissions.
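The core of such a rapid scoping calculation can be sketched in a few lines. This is an illustrative model of our own, not the authors' (all parameter names and figures are hypothetical): a heat pump upgrades recovered low-grade heat, displacing boiler fuel at the cost of electricity, and the simple payback period decides viability.

```python
def heat_pump_payback(q_recovered_kw, cop, elec_price, fuel_price,
                      boiler_eff, capex, hours_per_year):
    """Simple-payback scoping estimate for upgrading waste heat with a
    heat pump. Prices are per kWh; returns payback period in years."""
    elec_in = q_recovered_kw / cop            # electricity drawn by the heat pump
    fuel_saved = q_recovered_kw / boiler_eff  # primary fuel displaced at the boiler
    annual_saving = hours_per_year * (fuel_saved * fuel_price - elec_in * elec_price)
    return capex / annual_saving if annual_saving > 0 else float("inf")

# hypothetical scoping inputs: 1 MW recovered, COP 4, 8000 h/year
payback_years = heat_pump_payback(1000.0, 4.0, 0.10, 0.04, 0.8, 1_000_000.0, 8000.0)
```

A site engineer can vary the heat-pump COP and energy prices to see when the payback period drops below the investor threshold, which is the screening question the rapid scoping model answers.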

Keywords: environment, heat recovery, mining engineering, sustainability

Procedia PDF Downloads 96
139 Soil Liquefaction Hazard Evaluation for Infrastructure in the New Bejaia Quai, Algeria

Authors: Mohamed Khiatine, Amal Medjnoun, Ramdane Bahar

Abstract:

Northern Algeria is a highly seismic zone, as evidenced by its historical seismicity. During the past two decades, it has experienced several moderate to strong earthquakes. Therefore, geotechnical engineering problems that involve dynamic loading of soils and soil-structure interaction require, in the presence of saturated loose sand formations, liquefaction studies. The city of Bejaia, located north-east of Algiers, Algeria, is part of an alluvial plain covering an area of approximately 750 hectares. According to the Algerian seismic code, it is classified as a moderate-seismicity zone. In the past, this area did not experience urban development because of the various hazards identified by hydraulic and geotechnical studies conducted in the region. The low bearing capacity of the soil, its high compressibility, and the risks of liquefaction and flooding are among these hazards and are a constraint on urbanization. In this area, several structures founded on shallow foundations have suffered damage. Hence, the soils need treatment to reduce the risk. Many field and laboratory investigations, including core drilling, pressuremeter tests, standard penetration tests (SPT), cone penetrometer tests (CPT), and geophysical downhole tests, were performed at different locations in the area. The major part of the area consists of silty fine sand, sometimes heterogeneous, that has not yet reached a sufficient degree of consolidation. The groundwater depth varies between 1.5 and 4 m. These investigations show that the liquefaction phenomenon is one of the critical problems for geotechnical engineers and one of the obstacles encountered in the design phase of projects. This paper presents an analysis to evaluate the liquefaction potential using empirical methods based on the Standard Penetration Test (SPT), the Cone Penetration Test (CPT), and shear wave velocity, together with numerical analysis.
These liquefaction assessment procedures indicate that liquefaction can occur to considerable depths in the silty sand of the harbor zone of Bejaia.
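The empirical SPT/CPT procedures mentioned above typically compare a cyclic resistance ratio (CRR) against the cyclic stress ratio (CSR) of the simplified Seed–Idriss method. A minimal sketch follows; the depth-reduction factor rd uses the Liao–Whitman expression, one of several published forms, and the abstract does not state which variant the authors adopted.

```python
def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, depth_m):
    """Seed-Idriss simplified CSR: 0.65 * (a_max/g) * (sigma_v/sigma_v') * rd,
    with the Liao-Whitman depth reduction factor rd."""
    rd = 1.0 - 0.00765 * depth_m if depth_m <= 9.15 else 1.174 - 0.0267 * depth_m
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

def factor_of_safety(crr, csr):
    """FS = CRR / CSR; FS < 1 indicates liquefaction is predicted."""
    return crr / csr
```

With CRR estimated from SPT blow counts, CPT tip resistance, or shear wave velocity at each depth, the factor of safety profiles identify the depths at which liquefaction can be triggered.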

Keywords: earthquake, modeling, liquefaction potential, laboratory investigations

Procedia PDF Downloads 343
138 Documentary Project as an Active Learning Strategy in a Developmental Psychology Course

Authors: Ozge Gurcanli

Abstract:

Recent studies on active learning focus on how student experience varies based on the content (e.g., STEM versus the humanities) and the medium (e.g., in-class exercises versus off-campus activities) of experiential learning. However, little is known about whether variation in classroom time and space within the same active learning context affects student experience. This study manipulated the use of classroom time for the active learning component of a developmental psychology course offered at a four-year university in the south-west region of the United States. The course uses a blended model: traditional and active learning. In the traditional component of the course, students do weekly readings, listen to lectures, and take midterms. In the active learning component, students make a documentary on a developmental topic as a final project. Students used classroom time and space for the documentary in two ways: regular classroom time slots dedicated to the making of the documentary outside, without the supervision of the professor (Classroom-time Outside), and lectures that offered basic instructions on how to make a documentary (Documentary Lectures). The study used the public teaching evaluations administered by the Office of the Registrar. A total of two hundred and seven student evaluations were available across six semesters. Because the Office of the Registrar presented the data separately, without personal identifiers, a one-way ANOVA with four groups (Traditional; Experiential-Heavy: 19% Classroom-time Outside, 12% Documentary Lectures; Experiential-Moderate: 5-7% Classroom-time Outside, 16-19% Documentary Lectures; Experiential-Light: 4-7% Classroom-time Outside, 7% Documentary Lectures) was conducted on five key measures (Organization, Quality, Assignment Contribution, Intellectual Curiosity, Teaching Effectiveness). Each measure used a five-point reverse-coded scale (1-Outstanding, 5-Poor).
For all experiential conditions, the documentary counted towards 30% of the final grade. Organization (‘The instructor’s preparation for class was’), Quality (‘Overall, I would rate the quality of this course as’), and Assignment Contribution (‘The contribution of the graded work to the learning experience was’) did not yield any significant differences across the four course types (F(3, 202) = 1.72, p > .05; F(3, 200) = .32, p > .05; F(3, 203) = .43, p > .05, respectively). Intellectual Curiosity (‘The instructor’s ability to stimulate intellectual curiosity was’) yielded a marginal effect (F(3, 201) = 2.61, p = .053). Tukey’s HSD (p < .05) indicated that the Experiential-Heavy condition (M = 1.94, SD = .82) was significantly different from the other three conditions (M = 1.57, 1.51, 1.58; SD = .68, .66, .77, respectively), showing that heavily active class-time did not elicit intellectual curiosity as much as the others. Finally, Teaching Effectiveness (‘Overall, I feel that the instructor’s effectiveness as a teacher was’) was significant (F(3, 198) = 3.32, p < .05). Tukey’s HSD (p < .05) showed that students found the courses with moderate (M = 1.49, SD = .62) to light (M = 1.52, SD = .70) active class-time more effective than those with heavily active class-time (M = 1.93, SD = .69). Overall, the findings of this study suggest that, within the same active learning context, the time and space dedicated to active learning result in different outcomes for intellectual curiosity and teaching effectiveness.
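The one-way ANOVA used above reduces to comparing between-group and within-group variance. A self-contained sketch of the F statistic on toy ratings (the study's actual evaluation data are not reproduced here):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over k groups of observations:
    (between-group mean square) / (within-group mean square)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

The resulting F is compared against the F distribution with (k − 1, n − k) degrees of freedom, which is where reported values such as F(3, 198) = 3.32 come from.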

Keywords: active learning, learning outcomes, student experience, learning context

Procedia PDF Downloads 169
137 Study of the Uncertainty Behaviour for the Specific Total Enthalpy of the Hypersonic Plasma Wind Tunnel Scirocco at Italian Aerospace Research Center

Authors: Adolfo Martucci, Iulian Mihai

Abstract:

By means of the expansion through a conical nozzle and the low pressure inside the Test Chamber, a large, stable hypersonic flow is produced for a duration of up to 30 minutes. Downstream of the Test Chamber, the diffuser has the function of reducing the flow velocity to subsonic values, and as a consequence, the temperature increases again. In order to cool down the flow, a heat exchanger is present at the end of the diffuser. The Vacuum System generates the vacuum conditions necessary for correct hypersonic flow generation, and the DeNOx system, which follows the Vacuum System, reduces the nitrogen oxide concentrations created inside the plasma flow to below the limits imposed by Italian law. This very large, powerful, and complex facility allows researchers and engineers to reproduce entire re-entry trajectories of space vehicles into the atmosphere. One of the most important parameters for a hypersonic flowfield representative of re-entry conditions is the specific total enthalpy. This is the whole energy content of the fluid, and it represents how severe the conditions could be around a spacecraft re-entering from a space mission or, in our case, inside a hypersonic wind tunnel. It is possible to reach very high values of enthalpy (up to 45 MJ/kg) which, together with the large allowable size of the models, represent huge possibilities for on-ground experiments in the atmospheric re-entry field. The maximum nozzle exit section diameter is 1950 mm, where Mach numbers very much higher than 1 can be reached. The specific total enthalpy is evaluated by means of a number of measurements, each of which contributes to its value and its uncertainty. The scope of the present paper is the evaluation of the sensitivity of the uncertainty of the specific total enthalpy to all the parameters and measurements involved. The sensors that, if improved, would give the greatest advantage have thus been identified.
Several simulations, carried out in Python with the METAS library and by means of the Monte Carlo method, are presented, together with the results obtained and a discussion of them.
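The Monte Carlo propagation idea can be sketched without the METAS library. The model below is a deliberate simplification of ours, not the facility's evaluation: specific total enthalpy is taken as h0 = cp·T + V²/2 for a calorically perfect gas, with Gaussian-distributed measurement inputs whose standard uncertainties are illustrative.

```python
import random
import statistics

def mc_total_enthalpy_uncertainty(n=50_000, seed=42):
    """Monte Carlo propagation sketch for specific total enthalpy.
    Simplified model: h0 = cp*T + V^2/2 with Gaussian input uncertainties."""
    rng = random.Random(seed)
    cp = 1004.5                       # J/(kg K), air, treated as exact here
    samples = []
    for _ in range(n):
        T = rng.gauss(6000.0, 60.0)   # K, 1% standard uncertainty (illustrative)
        V = rng.gauss(5000.0, 100.0)  # m/s, 2% standard uncertainty (illustrative)
        samples.append(cp * T + 0.5 * V * V)
    return statistics.fmean(samples), statistics.stdev(samples)
```

The spread of the output samples gives the combined standard uncertainty, and repeating the run with one input's uncertainty reduced reveals which sensor, if improved, yields the greatest advantage.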

Keywords: hypersonic, uncertainty, enthalpy, simulations

Procedia PDF Downloads 77
136 Optimization of Waste Plastic to Fuel Oil Plants' Deployment Using Mixed Integer Programming

Authors: David Muyise

Abstract:

Mixed Integer Programming (MIP) is an approach that involves the optimization of a range of decision variables in order to minimize or maximize a particular objective function. The main objective of this study was to apply the MIP approach to optimize the deployment of waste-plastic-to-fuel-oil processing plants in Uganda. The processing plants are meant to reduce plastic pollution by pyrolyzing waste plastic into a cleaner fuel that can be used to power diesel/paraffin engines, so as (1) to reduce the negative environmental impacts associated with plastic pollution and (2) to narrow the energy gap by utilizing the fuel oil. A programming model was established and tested in two case study applications: small-scale applications in rural towns and large-scale deployment across major cities in the country. In order to design the supply chain, optimal decisions on the types of waste plastic to be processed, the size, location, and number of plants, and the downstream fuel applications were made concurrently based on the payback period, investor requirements for capital cost, and the production cost of fuel and electricity. The model comprises qualitative data gathered from waste plastic pickers at landfills and from potential investors, and quantitative data obtained from primary research. The study found that a distributed system is suitable for small rural towns, whereas a decentralized system is only suitable for big cities. The small towns of Kalagi, Mukono, Ishaka, and Jinja were found to be the ideal locations for the deployment of distributed processing systems, whereas the cities of Kampala, Mbarara, and Gulu were found to be the ideal locations to initially utilize the decentralized pyrolysis technology system. We conclude that the model findings will be most useful to investors, engineers, plant developers, and municipalities interested in waste-plastic-to-fuel processing in Uganda and elsewhere in developing economies.
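The plant-deployment decision above has the structure of a set-covering MIP: binary variables select candidate sites so that every town is served at minimum cost. A toy sketch follows with entirely hypothetical costs and coverage sets; exhaustive enumeration stands in for a MIP solver, which is only viable at this miniature scale.

```python
from itertools import combinations

towns = ["Kalagi", "Mukono", "Ishaka", "Jinja"]
sites = {  # candidate site: (annualized cost, towns it can serve) -- all figures hypothetical
    "A": (120, {"Kalagi", "Mukono"}),
    "B": (100, {"Ishaka"}),
    "C": (150, {"Mukono", "Jinja"}),
    "D": (90,  {"Jinja", "Kalagi"}),
}

def best_deployment():
    """Enumerate site subsets; keep the cheapest one covering every town."""
    best = (float("inf"), None)
    for r in range(1, len(sites) + 1):
        for combo in combinations(sites, r):
            covered = set().union(*(sites[s][1] for s in combo))
            cost = sum(sites[s][0] for s in combo)
            if covered >= set(towns) and cost < best[0]:
                best = (cost, combo)
    return best
```

At realistic problem sizes the same selection logic is expressed as binary decision variables with coverage constraints and handed to a MIP solver rather than enumerated.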

Keywords: mixed integer programming, fuel oil plants, optimisation of waste plastics, plastic pollution, pyrolyzing

Procedia PDF Downloads 113
135 Utilizing Fly Ash Cenosphere and Aerogel for Lightweight Thermal Insulating Cement-Based Composites

Authors: Asad Hanif, Pavithra Parthasarathy, Zongjin Li

Abstract:

Thermal insulating composites help to reduce the total power consumption in a building by creating a barrier between the external and internal environments. Such composites can be used in roofing tiles or wall panels for exterior surfaces. This study aims to develop lightweight cement-based composites for thermal insulating applications. Waste materials like silica fume (an industrial by-product) and fly ash cenosphere (FAC) (hollow micro-spherical shells obtained as a waste residue from coal-fired power plants) were used as a partial replacement of cement and as a lightweight filler, respectively. Moreover, aerogel, a nano-porous material made of silica, was used in different dosages for improved thermal insulating behavior, while polyvinyl alcohol (PVA) fibers were added for enhanced toughness. The raw materials, including binders and fillers, were characterized by X-Ray Diffraction (XRD), X-Ray Fluorescence spectroscopy (XRF), and Brunauer–Emmett–Teller (BET) analysis to evaluate physical and chemical properties such as specific surface area, chemical composition (oxide form), and pore size distribution (if any). Ultra-lightweight cementitious composites were developed by varying the amounts of FAC and aerogel, with 28-day unit weights ranging from 1551.28 kg/m3 to 1027.85 kg/m3. The resulting composites showed excellent mechanical and thermal insulating properties: compressive strength from 53.62 MPa to 8.66 MPa, flexural strength from 9.77 MPa to 3.98 MPa, and thermal conductivity coefficient from 0.3025 W/m-K to 0.2009 W/m-K (QTM-500). The composites were also tested for the peak temperature difference between outer and inner surfaces when heated by a 275 W infrared lamp in a specially designed experimental set-up. A temperature difference of up to 16.78 °C was achieved, indicating the outstanding ability of the developed composites to act as a thermal barrier for building envelopes.
Microstructural studies were carried out by Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray Spectroscopy (EDS) to characterize the inner structure of the composite specimens. The hydration products were quantified using surface area mapping and the line scan technique in EDS. The microstructural analyses indicated excellent bonding of FAC and aerogel in the cementitious system. Selective reactivity of FAC was also ascertained from the SEM imagery, where partially consumed FAC shells were observed. All in all, the lightweight fillers, FAC and aerogel, helped to produce the lightweight composites thanks to their physical characteristics, while exceptional mechanical properties were achieved owing to the partial reactivity of FAC.

Keywords: aerogel, cement-based, composite, fly ash cenosphere, lightweight, sustainable development, thermal conductivity

Procedia PDF Downloads 209
134 Standard Essential Patents for Artificial Intelligence Hardware and the Implications for Intellectual Property Rights

Authors: Wendy de Gomez

Abstract:

Standardization is a critical element in the ability of a society to reduce uncertainty, subjectivity, misrepresentation, and interpretation while simultaneously contributing to innovation. Technological standardization codifies specific operationalization through legal instruments that provide rules of development, expectation, and use. In the current emerging technology landscape, Artificial Intelligence (AI) hardware, as a general-use technology, has seen incredible growth, as evidenced by AI technology patents between 2012 and 2018 in the United States Patent and Trademark Office (USPTO) AI dataset. However, as outlined in the 2023 United States Government National Standards Strategy for Critical and Emerging Technology, the codification of emerging technologies such as AI through standardization has not kept pace with their actual technological proliferation. This gap has the potential to cause significantly divergent outcomes for AI in both the short and long term. This original empirical research provides an overview of the standardization efforts around AI in different geographies and gives a background to standardization law. It quantifies the longitudinal trend of Artificial Intelligence hardware patents through the USPTO AI dataset. It seeks evidence of existing Standard Essential Patents (SEPs) among these AI hardware patents through a text analysis of the statement of patent history and the field of the invention in Patent Vector, and it examines their determination as Standard Essential Patents and their inclusion in existing AI technology standards across the four main AI standards bodies: the European Telecommunications Standards Institute (ETSI); the International Telecommunication Union Telecommunication Standardization Sector (ITU-T); the Institute of Electrical and Electronics Engineers (IEEE); and the International Organization for Standardization (ISO).
Once the analysis is complete, the paper discusses both the theoretical and operational implications of FRAND (fair, reasonable, and non-discriminatory) licensing agreements for the owners of these Standard Essential Patents in the United States court and administrative systems. It concludes with an evaluation of how Standard Setting Organizations (SSOs) can work with SEP owners more effectively through various forms of intellectual property mechanisms, such as patent pools.

Keywords: patents, artificial intelligence, standards, FRAND agreements

Procedia PDF Downloads 64
133 Branding in FMCG Sector in India: A Comparison of Indian and Multinational Companies

Authors: Pragati Sirohi, Vivek Singh Rana

Abstract:

A brand is a name, term, sign, symbol, design, or a combination of these, intended to identify the goods or services of one seller or group of sellers and to differentiate them from those of competitors. Because perception influences purchase decisions, building that perception is critical. The FMCG industry is a low-margin business; volumes hold the key to success. Therefore, the industry places a strong emphasis on marketing. Creating strong brands is important for FMCG companies, and they devote considerable money and effort to developing brands. Brand loyalty is fickle; companies know this, which is why they work relentlessly on brand building. The purpose of the study is a comparison between Indian and multinational companies in the FMCG sector in India. It has been hypothesized that after liberalization Indian companies have taken up the challenge of globalization, and some of them are giving stiff competition to MNCs; that MNCs enjoy a stronger brand image than Indian companies; and that advertisement expenditures of MNCs are proportionately higher than those of their Indian counterparts. The operational area of the study is the country as a whole. Continuous time series data are available from 1996-2014 for the eight selected companies, chosen on the basis of their large market share, brand equity, and prominence in the market. The research methodology focuses on finding trend growth rates of market capitalization, net worth, and brand values through regression analysis, using secondary data from the Prowess database developed by CMIE (Centre for Monitoring Indian Economy). The brand values of the selected FMCG companies are estimated as the excess of market capitalization over the net worth of a company, and brand value indices are calculated.
The correlation between brand values and advertising expenditure is also measured to assess the effect of advertising on branding. The major results indicate that although MNCs enjoy a stronger brand image, some Indian companies compete well: ITC is the outstanding leader in terms of market capitalization and brand value, and Dabur and Tata Global Beverages Ltd compete equally well on these measures. Advertisement expenditures are the highest for HUL, followed by ITC, Colgate, and Dabur, which shows that Indian companies are not behind in the race. Although advertisement expenditure plays a role in the brand-building process, many other factors affect it. Also, brand values have been decreasing over the years for FMCG companies in India, which shows that competition is intense, with aggressive price wars and brand clutter. The implication for Indian companies is that they have to put consistent, proactive, and relentless effort into their brand-building process. Brands need focus and consistency. Brand longevity without innovation leads to brand respect but does not create brand value.

Keywords: brand value, FMCG, market capitalization, net worth

Procedia PDF Downloads 341
132 Determinants of Long Acting Reversible Contraception Utilization among Women (15-49) in Uganda: Analysis of 2016 PMA2020 Uganda Survey

Authors: Nulu Nanono

Abstract:

Background: The Ugandan national health policy and the national population policy both recognize the need to increase access to quality, affordable, acceptable, and sustainable contraceptive services for all people, but the provision and utilization of quality services remain low. Two contraceptive methods are categorized as long-acting temporary methods: intrauterine contraceptive devices (IUCDs) and implants. Copper-containing IUCDs, generally available in Ministry of Health (MoH) family planning programs, are effective for at least 12 years, while implants, depending on the type, last for three to seven years. Uganda’s current policy and political environment is favorable towards achieving national access to quality and safe contraceptives for all people, as evidenced by increasing government commitments and innovative family planning programs. Despite the increase in modern contraception use from 14% to 26%, long-acting reversible contraceptive (LARC) utilization has remained relatively low, with less than 5% using IUDs and implants, which partly explains Uganda’s persistently high fertility rates. Main question/hypothesis: The purpose of the study was to examine the relationship between the demographic and socio-economic characteristics of women, health facility factors, and long-acting reversible contraception utilization. Methodology: LARC utilization was measured from two survey questions: are you or your partner currently doing something or using any method to delay or avoid getting pregnant? And which method or methods are you using? Data for the study were sourced from the 2016 Uganda Performance Monitoring and Accountability 2020 Survey, comprising 3816 female respondents aged 15 to 49 years. The analysis was done using chi-squared tests at the bivariate level and probit regression at the multivariate level.
The model was further tested for validity and normality of the residuals using the Shapiro-Wilk test and tests for kurtosis and skewness. Results: The results showed age, parity, marital status, region, knowledge of LARCs, and availability of LARCs to be significantly associated with long-acting contraceptive utilization, with p-values of less than 0.05. At the multivariate level, higher parity (0.000), tertiary education (0.013), and no knowledge about LARCs (0.006) increased the probability of using LARCs, whereas being aged 45-49 or living in the eastern region reduced the probability of using LARCs. Knowledge contribution: The findings of this study join the debate of prior research in this field and add to the body of knowledge related to long-acting reversible contraception. An outstanding and counterintuitive finding from the study is the non-utilization of LARCs by women who are aware of and have knowledge about them; this may be an opportunity for further research to investigate its causes.
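The bivariate step described above relies on Pearson's chi-squared test of independence. As a minimal sketch, assuming a hypothetical 2x2 cross-tabulation of LARC knowledge against LARC use (the counts below are invented, not survey data), the statistic can be computed directly:

```python
# Hypothetical 2x2 contingency table (counts invented for illustration):
# rows = knowledge of LARCs (yes / no), cols = LARC use (yes / no)
table = [[40, 160],
         [10, 190]]

row_totals = [sum(r) for r in table]
col_totals = [sum(c) for c in zip(*table)]
n = sum(row_totals)

# Pearson chi-squared statistic: sum over cells of (O - E)^2 / E,
# where E = row_total * col_total / n is the expected count under independence.
chi2 = 0.0
for i, row in enumerate(table):
    for j, obs in enumerate(row):
        exp = row_totals[i] * col_totals[j] / n
        chi2 += (obs - exp) ** 2 / exp

print(round(chi2, 2))
```

The statistic is then compared against the chi-squared distribution with (rows-1)(cols-1) = 1 degree of freedom to obtain the p-value reported in such analyses.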

Keywords: contraception, long acting, utilization, women (15-49)

Procedia PDF Downloads 179
131 How Does Paradoxical Leadership Enhance Organizational Success?

Authors: Wageeh A. Nafei

Abstract:

This paper explores the role of Paradoxical Leadership (PL) in enhancing Organizational Success (OS) at private hospitals in Egypt, based on data collected from hospital employees (doctors, nursing staff, and administrative staff). The researcher adopted a sampling method to collect data for the study. Appropriate statistical methods, such as the Alpha Correlation Coefficient (ACC), Confirmatory Factor Analysis (CFA), and Multiple Regression Analysis (MRA), were used to analyze the data and test the hypotheses. The research reached a number of results, the most important of which are: (1) There is a statistical relationship between the independent variable, PL, and the dependent variable, OS. The paradoxical leader encourages employees to express their opinions and builds a work environment characterized by flexibility and independence. The paradoxical leader also supports specialized work teams, which leads to the creation of new ideas on the one hand and contributes to the achievement of outstanding performance on the other. (2) The mentality of the paradoxical leader is flexible and capable of absorbing suggestions from all employees. The paradoxical leader is also interested in enhancing cooperation among employees and provides opportunities to transfer experience and increase knowledge-sharing. This sharing of knowledge creates the diversity needed for the organization to obtain rich external information and to deal with a rapidly changing environment. (3) The PL approach helps in facing the paradoxical demands of employees.
A paradoxical leader plays an important role in reducing the feeling of instability in the work environment and the lack of job security, reducing employees' negative feelings, restoring balance in the work environment, improving employees' well-being, and increasing their degree of job satisfaction in the organization. The study offers a number of recommendations, the most important of which are: (1) The leaders of organizations should listen to the views and needs of employees and move away from strictly formal methods of control. The leader should give employees sufficient freedom to participate in decision-making and maintain enough space for them, and the treatment between leaders and employees should be based on friendliness. (2) Organizational leaders should promote knowledge-sharing among employees through training courses. The leader should make sure that every piece of information provided by an employee is valuable and useful and can be used to solve problems that colleagues may face at work. (3) Organizational leaders should also promote knowledge-sharing among employees through brainstorming sessions, ensuring that employees obtain knowledge from their colleagues and share ideas and information among themselves. This, in addition, motivates employees to complete their work in new and creative ways, so that they do not feel bored repeating the same routine procedures in the organization.

Keywords: paradoxical leadership, organizational success, human resource management

Procedia PDF Downloads 44
130 Hardness Map of Human Tarsals, Metatarsals and Phalanges of Toes

Authors: Irfan Anjum Manarvi, Zahid Ali Kaimkhani

Abstract:

Predicting the location of fractures in human bones has been a keen area of research for the past few decades. A variety of tests for hardness, deformation, and strain field measurement have been conducted in the past but are considered insufficient due to various limitations. Researchers have therefore proposed further studies to address inaccuracies in measurement methods, testing machines, and experimental errors. The advancement and availability of hardware, measuring instrumentation, and testing machines can now provide remedies to these limitations. The human foot is a critical part of the body, exposed to various forces throughout its life. A number of products are developed for its protection and care, which many times do not provide sufficient protection and may themselves become a source of stress due to non-consideration of the delicacy of the bones in the feet. Continuous strain or overloading of the feet may occur, resulting in discomfort and even fracture. The mechanical properties of the tarsals, metatarsals, and phalanges are, therefore, a primary consideration for all such design applications. Hardness is one of the mechanical properties considered very important for establishing the mechanical resistance of a material against applied loads. Past researchers have investigated the mechanical properties of these bones. However, their results were based on a limited number of experiments and on average hardness values, owing to limitations of samples or testing instruments, and they therefore proposed further studies in this area. The present research was carried out to develop a hardness map of the human foot by measuring microhardness at various locations on these bones. Results are compiled in the form of distance from a reference point on a bone and the hardness value for each surface.
The number of test results far exceeds that of previous studies, and the measurements are spread over each bone to give a complete hardness map of these bones. These results could also be used to establish other properties, such as stress and strain distribution in the bones. Industrial engineers could also use them for the design and development of accessories for human foot health care and comfort, and for further research in the same areas.

Keywords: tarsals, metatarsals, phalanges, hardness testing, biomechanics of human foot

Procedia PDF Downloads 410
129 Rectangular Tunnels as an Alternative to Conventional Twin Circular Bored Tunnels in Weak Ground Conditions

Authors: Alex Atanaw Alebachew

Abstract:

This paper presents the outcomes of a numerical study, conducted using the PLAXIS software, of the surface settlements and moments generated in tunnel linings. The investigation covers both circular and rectangular twin tunnels and explores the feasibility of rectangular tunnel boring machines (TBMs) as an alternative to conventional circular TBMs. The study suggests that rectangular tunnels, although considered unconventional in modern tunneling practice, may be a viable option for shallow-depth tunneling in weak ground, and it recommends that engineers in the tunneling industry consider rectangular TBMs on the basis of this analysis. The research emphasizes the importance of evaluating various tunneling methods to optimize performance and address the specific challenges of different ground conditions. The findings provide valuable insights into the behavior of rectangular tunnels compared to circular tunnels, covering factors such as burial depth, relative positioning, tunnel size, and critical distance that influence surface settlements and bending moments. The results indicate that rectangular tunnels exhibit slightly lower settlement than circular tunnels at shallow depths, especially in a narrow range directly above the twin tunnels; this difference could be attributed to maintaining a consistent tunnel-lining thickness across all depths. In deeper scenarios, circular tunnels experience less settlement than rectangular tunnels. Additionally, parallel rectangular tunnels settle more gradually than piggyback configurations, while piggyback tunnels show increased moments in the tunnel built second at the same level. Both settlement and moment coefficients increase with the diameter of the twin tunnels, irrespective of their shape.
The critical distance for both circular and rectangular tunnels is around 2.5 times the tunnel diameter, and spacings closer than this result in a notable increase in moments. Rectangular tunnels spaced closer than 5 times the diameter lead to higher settlement, and circular tunnels spaced closer than 2.5 to 3 times the diameter experience increased settlement as well.

Keywords: alternative, rectangular, tunnel, twin bored circular, weak ground

Procedia PDF Downloads 41
128 Defining Unconventional Hydrocarbon Parameter Using Shale Play Concept

Authors: Rudi Ryacudu, Edi Artono, Gema Wahyudi Purnama

Abstract:

Oil and gas consumption in Indonesia is currently on the rise due to the nation's economic improvement. Unfortunately, Indonesia’s domestic oil production cannot meet its own consumption, and Indonesia has lost its status as an oil and gas exporter. Even worse, its conventional oil and gas reserves are declining. Unwilling to give up, the government of Indonesia has taken measures to invite investors to invest in domestic oil and gas exploration to find new potential reserves and ultimately increase production, but these have not yet borne fruit. Indonesia is now taking steps to explore new unconventional oil and gas plays, including shale gas, shale oil, and tight sands, to increase domestic production. These new plays require definite parameters to differentiate each concept. The purpose of this paper is to provide ways of defining unconventional hydrocarbon reservoir parameters for shale gas, shale oil, and tight sands. The parameters serve as an initial baseline for users performing analysis of unconventional hydrocarbon plays. Some of the ongoing concerns, or questions to be answered, regarding unconventional hydrocarbon plays include: 1. What is the TOC number? 2. Has the source rock been well "cooked" to generate hydrocarbons? 3. What are the permeability and porosity values? 4. Does it need stimulation? 5. Does it have pores? 6. Does it have sufficient thickness? In contrast with conventional oil and gas plays, the shale play concept assumes that hydrocarbon is retained and trapped in areas with very low permeability. In most places in Indonesia, hydrocarbon migrates from source rock to reservoir; from this, we can derive that the kitchen and source rock are located right below the reservoir. This is the starting point for the user or engineer to construct the basin definition in relation to the tectonic play and depositional environment.
The shale play concept requires the definition of characteristics, description, and reservoir identification to discover reservoirs that are technically and economically feasible to develop. These are the steps users and engineers have to perform for a shale play: a. Calculate TOC and perform mineralogy analysis using water saturation and porosity values. b. Reconstruct the basin that accumulated the hydrocarbon. c. Calculate the Brittleness Index from petrophysics and distribute it based on seismic multi-attributes. d. Perform integrated natural fracture analysis. e. Select the best location to place a well.
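Step (c) above refers to a Brittleness Index. One common mineralogy-based formulation (the Jarvie-type index, which may differ from the petrophysical definition the authors have in mind) expresses brittleness as the fraction of brittle quartz relative to the total of quartz, carbonate, and clay. The sketch below uses invented mineral fractions purely for illustration:

```python
def brittleness_index(quartz, carbonate, clay):
    """Mineralogy-based brittleness index (Jarvie-type formulation):
    fraction of brittle quartz relative to total quartz + carbonate + clay.
    Inputs are mineral weight fractions summing to at most 1."""
    return quartz / (quartz + carbonate + clay)

# Hypothetical mineralogy for a shale interval (illustrative values only)
bi = brittleness_index(quartz=0.45, carbonate=0.15, clay=0.40)
print(round(bi, 2))
```

In practice the index computed at wells from logs or core mineralogy would then be distributed across the volume using seismic multi-attributes, as step (c) describes.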

Keywords: unconventional hydrocarbon, shale gas, shale oil, tight sand reservoir parameters, shale play

Procedia PDF Downloads 387
127 Creating a Rehabilitation Product as an Example of Design Management

Authors: K. Caban-Piaskowska

Abstract:

The aim of the article is to show how the role of the designer has changed from the point of view of human resources management, thanks to the increased importance of design management, and to present how a rehabilitation product, through a technological approach to designing, becomes a universal product. Designing for the disabled is a largely undiscovered area of the industrial design market, most often because it is associated with devices which support rehabilitation; in consequence, such realizations have a limited group of recipients and are not very attractive to designers. The article also considers the relation between using modern design in building rehabilitation devices and increasing the efficiency of treatment and physiotherapy. Using modern technology can have marketing significance: rehabilitation products designed and produced in a modern way create the impression that experts and professionals are involved in the lives of the user, the patient. In order to illustrate the problem presented above, i.e., creating a rehabilitation product as an example of design management, the case study method was used in the research. The analysis of the case was created on the basis of an interview conducted by the author with a designer who took part in meetings with people undergoing rehabilitation and their physiotherapists, and who created universal products in Poland in the years 2012 to 2017. Usually, engineers and constructors create products which remind us of old torture devices, however indestructible in construction. Such an image of products for the disabled clearly indicates that this is a wonderful niche for designers and emphasizes the need to make those products more attractive and innovative. Products for the disabled cannot be limited to rehabilitation equipment only, e.g., wheelchairs or standing frames.
Introducing the idea of universal design can significantly broaden the circle of recipients of designed everyday-use items to include disabled people. Fulfilling these criteria will decide the advantage on a competitive market. This is possible due to the use of the design management concept in the functioning of an organization: using modern technology and materials in the production of equipment, and changing the role of the designer, broadens the circle of recipients by designing for wide use, which makes it possible for people with various needs to use the product. What is more, introducing rehabilitation functions into everyday-use items can also become an innovative accent in design. In the reality of the market, each group of users can and should be treated as a design problem and a task to realize.

Keywords: design management, innovation, rehabilitation product, universal product

Procedia PDF Downloads 176
126 Impact of Civil Engineering and Economic Growth in the Sustainability of the Environment: Case of Albania

Authors: Rigers Dodaj

Abstract:

Nowadays, the environment is a critical concern for civil engineers, human activity, construction projects, economic growth, and national development as a whole. As Albania's economy develops, people's living standards are increasing, and the requirements for the living environment are increasing as well. Under these circumstances, environmental protection and sustainability are the critical issues. Rising industrialization, urbanization, and energy demand affect the environment through the emission of carbon dioxide (CO2), a significant parameter known to directly impact air pollution. Consequently, many governments and international organizations have adopted policies and regulations to address environmental degradation in the pursuit of economic development; in Albania, for instance, CO2 emissions calculated in metric tons per capita have increased by 23% in the last 20 years. This paper analyzes the importance of civil engineering and economic growth for the sustainability of the environment, focusing on CO2 emissions. The analyzed data are a time series from 2001 to 2020 (with annual frequency), based on official publications of the World Bank. A statistical approach with a vector error correction model and a time series forecasting model is used to estimate the parameters and the long-run equilibrium. The research in this paper adds a new perspective to the evaluation of a sustainable environment in the context of carbon emission reduction. It also provides reference and technical support for the government toward green and sustainable environmental policies. In the context of low-carbon development, effectively improving carbon emission efficiency is an inevitable requirement for achieving sustainable economic growth and environmental protection.
The study also reveals that civil engineering development projects greatly impact the environment in the long run, especially in the areas of flooding, noise pollution, water pollution, erosion, ecological disorder, and natural hazards. The potential for reducing industrial carbon emissions in recent years indicates that reduction is becoming more difficult; it requires a different economic growth policy and more civil engineering development, improving the level of industrialization and promoting technological innovation in industrial low-carbonization.
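The 23% rise in per-capita CO2 emissions over 20 years quoted in the abstract implies a compound annual growth rate g satisfying (1 + g)^20 = 1.23. A quick check of the arithmetic:

```python
# The abstract notes Albania's per-capita CO2 emissions rose 23% over 20 years.
# The implied compound annual growth rate g satisfies (1 + g) ** 20 = 1.23.
total_growth = 0.23
years = 20
annual_rate = (1 + total_growth) ** (1 / years) - 1

print(f"{annual_rate:.4%}")  # roughly 1% per year
```

So the headline figure corresponds to an average growth of only about one percent per year, which is the scale of change the time-series model has to resolve.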

Keywords: CO₂ emission, civil engineering, economic growth, environmental sustainability

Procedia PDF Downloads 64
125 Web Map Service for Fragmentary Rockfall Inventory

Authors: M. Amparo Nunez-Andres, Nieves Lantada

Abstract:

One of the most harmful geological risks is rockfall. Rockfalls cause both economic losses, through damage to buildings and infrastructure, and personal injuries. Therefore, in order to estimate the risk to the exposed elements, it is necessary to understand the mechanism of this kind of event, from the characteristics of the rock walls to the propagation of the fragments generated by the initially detached rock mass. In the framework of the RockModels research project, several inventories of rockfalls were carried out across northeastern Spain and the island of Mallorca. These inventories contain general information about the events and, importantly, detailed information about fragmentation. Specifically, the IBSD (In-situ Block Size Distribution) is obtained by photogrammetry from a drone or TLS (Terrestrial Laser Scanner), and the RBSD (Rock Block Size Distribution) is obtained from the volumes of the fragments in the deposit, measured by hand. In order to share all this information with other scientists, engineers, members of civil protection, and stakeholders, a platform accessible from the internet and compliant with interoperability standards is necessary. Open-source software was used throughout the process: PostGIS 2.1, GeoServer, and the OpenLayers library. In the first step, a spatial database was implemented to manage all the information, following the INSPIRE data specifications for natural risks and adding specific, detailed data about the fragment size distribution. The next step was to develop a WMS with GeoServer, preceded by the creation of several views in PostGIS to show the information at different visualization scales and with different degrees of detail. In the first view, sites are identified by a point, and basic information about the rockfall event is provided.
At the next zoom level, at medium scale, the convex hull of the rockfall appears with its real shape, and the source of the event and the fragments are represented by symbols; queries at this level offer greater detail about the movement. Finally, the third level shows all elements (deposit, source, and blocks) at their real size, where possible, and in their real locations. The last task was the publication of all the information on a web mapping site (www.rockdb.upc.edu), with data classified by level, using JavaScript libraries such as OpenLayers.
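A WMS such as the one published by GeoServer here is queried through standard GetMap requests. As a minimal sketch, the snippet below builds such a request URL with only the standard WMS 1.3.0 parameters; the endpoint path, layer name, and CRS are assumptions for illustration, not details taken from the project:

```python
from urllib.parse import urlencode

# Hypothetical GeoServer WMS endpoint (the real service path is not given
# in the abstract); layer name and CRS are likewise assumed.
base_url = "https://www.rockdb.upc.edu/geoserver/wms"
params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "rockfalls:events",            # assumed workspace:layer name
    "CRS": "EPSG:25831",                     # UTM zone common in NE Spain (assumed)
    "BBOX": "400000,4600000,420000,4620000", # minx, miny, maxx, maxy in CRS units
    "WIDTH": "800",
    "HEIGHT": "800",
    "FORMAT": "image/png",
}
getmap_url = base_url + "?" + urlencode(params)
print(getmap_url)
```

An OpenLayers client issues requests of exactly this shape under the hood; serving the different zoom levels then amounts to pointing GetMap at the PostGIS views of increasing detail.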

Keywords: geological risk, web mapping, WMS, rockfalls

Procedia PDF Downloads 146
124 Assessing Project Performance through Work Sampling and Earned Value Analysis

Authors: Shobha Ramalingam

Abstract:

The majority of infrastructure projects are affected by time overruns, resulting in project delays and subsequently cost overruns. Time overruns may vary from a few months to as much as five or more years, placing project viability at risk. One of the probable reasons noted in the literature for this outcome is poor productivity. Researchers contend that productivity in construction has only marginally increased over the years. While studies in the literature have extensively focused on the time and cost parameters of projects, few studies integrate time and cost with productivity to assess project performance. To this end, a study was conducted to understand project delay factors concerning cost, time, and productivity. A case-study approach was adopted to collect rich data from a nuclear power plant project site over two months through observation, interviews, and document review. The data were analyzed using three different approaches for a comprehensive understanding. First, a root-cause analysis was performed on the data using Ishikawa’s fishbone diagram technique to identify the various factors contributing to time delays. Based on it, a questionnaire was designed and circulated to the concerned executives, including project engineers and contractors, to determine the frequency of occurrence of each delay factor, which was then compiled and presented to the management for possible mitigation. Second, a productivity analysis was performed on selected activities, including rebar bending and concreting, through a time-motion study. Third, data on the cost of construction for three years allowed the cost performance to be analyzed using the earned value management technique. Together, the three techniques allowed systematic and comprehensive identification of the key factors that deter project performance and cause productivity loss in the construction of the nuclear power plant project.
The findings showed that improper planning and coordination between multiple trades, concurrent operations, improper workforce and material management, fatigue due to overtime were some of the key factors that led to delays and poor productivity. The findings are expected to act as a stepping stone for further research and have implications for practitioners.
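The earned value management technique mentioned above compares planned value, earned value and actual cost to flag schedule and cost deviations. A minimal sketch of the core calculation follows; the figures are illustrative, not the project's actual data.

```python
def evm_metrics(pv, ev, ac):
    """Return the core earned value management indicators for one period.

    pv: planned value (budgeted cost of work scheduled)
    ev: earned value (budgeted cost of work performed)
    ac: actual cost (actual cost of work performed)
    """
    return {
        "schedule_variance": ev - pv,  # < 0 means behind schedule
        "cost_variance": ev - ac,      # < 0 means over budget
        "spi": ev / pv,                # schedule performance index
        "cpi": ev / ac,                # cost performance index
    }

# Illustrative reporting period: SPI and CPI both below 1 would flag
# the combined delay and cost overrun pattern described in the abstract.
m = evm_metrics(pv=120.0, ev=100.0, ac=125.0)
print(m["spi"], m["cpi"])
```

An SPI or CPI persistently below 1 across periods is the quantitative signal that the qualitative delay factors identified by the fish-bone analysis are materializing.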

Keywords: earned value analysis, time performance, project costs, project delays, construction productivity

Procedia PDF Downloads 85
123 Sustainable Production of Pharmaceutical Compounds Using Plant Cell Culture

Authors: David A. Ullisch, Yantree D. Sankar-Thomas, Stefan Wilke, Thomas Selge, Matthias Pump, Thomas Leibold, Kai Schütte, Gilbert Gorr

Abstract:

Plants have been considered a source of natural substances for ages. Secondary metabolites from plants are used especially in medical applications but are of growing interest as cosmetic ingredients and in the field of nutraceuticals. However, the supply of compounds from natural harvest can be limited by numerous factors, e.g., endangered species, low product content, climate impacts and cost-intensive extraction. In the pharmaceutical industry especially, the ability to provide sufficient amounts of product at high quality is an additional requirement that in some cases is difficult to fulfill by plant harvest. Whereas in many cases the complexity of secondary metabolites precludes chemical synthesis on a reasonable commercial basis, plant cells contain the biosynthetic pathway – a natural chemical factory – for a given compound. A promising approach for the sustainable production of natural products is plant cell fermentation (PCF®). A thoroughly executed development process comprises the identification of a high-producing cell line, optimization of growth and production conditions, the development of a robust and reliable production process, and its scale-up. To ensure persistent, long-lasting production, the development of cryopreservation protocols and the generation of working cell banks are further important requirements. So far, the most prominent example of a PCF® process is the production of the anticancer compound paclitaxel. To demonstrate the power of plant suspension cultures, we present three case studies here: 1) For more than 17 years, Phyton has produced paclitaxel at industrial scale, i.e., at up to 75,000 L. At 60 g/kg dw, this fully controlled process, operated according to GMP, delivers outstandingly high yields. 2) Thapsigargin is another anticancer compound, currently isolated from seeds of Thapsia garganica.
Thapsigargin is a powerful cytotoxin – a SERCA inhibitor – and the precursor of the derivative ADT, the key ingredient of the investigational prodrug Mipsagargin (G-202), which is in several clinical trials. Phyton has successfully generated plant cell lines capable of expressing this compound. Here we present data on the screening for high-producing cell lines. 3) The third case study covers ingenol-3-mebutate. This compound is found at very low concentrations in the milky sap of intact plants of the Euphorbiaceae family. Ingenol-3-mebutate is used in Picato®, which is approved against actinic keratosis. The generation of cell lines expressing significant amounts of ingenol-3-mebutate is another example underlining the strength of plant cell culture. The authors gratefully acknowledge Inspyr Therapeutics for funding.

Keywords: ingenol-3-mebutate, plant cell culture, sustainability, thapsigargin

Procedia PDF Downloads 231
122 Guests’ Satisfaction and Intention to Revisit Smart Hotels: Qualitative Interviews Approach

Authors: Raymond Chi Fai Si Tou, Jacey Ja Young Choe, Amy Siu Ian So

Abstract:

Smart hotels can be defined as hotels that use an intelligent system – achieved through digitalization and networking – to manage hotel operations and service information. In addition, smart hotels feature high-end designs that integrate information and communication technology with hotel management, fulfilling guests' needs and improving the quality, efficiency and satisfaction of hotel management. The purpose of this study is to identify factors that may influence guests' satisfaction and intention to revisit smart hotels, based on the Lodging Quality Index measurement of service quality and the extended UTAUT theory. The Unified Theory of Acceptance and Use of Technology (UTAUT) is adopted as a framework to explain technology acceptance and use. Since smart hotels are technology-based infrastructure hotels, UTAUT theory can serve as the theoretical background for examining guests' acceptance and use after staying in smart hotels. The UTAUT identifies four key drivers of the adoption of information systems: performance expectancy, effort expectancy, social influence, and facilitating conditions. The extended UTAUT considers seven constructs: the four constructs of the original UTAUT model together with three additional constructs, namely hedonic motivation, price value and habit. Thus, the seven constructs of the extended UTAUT theory can be adopted to understand guests' intention to revisit smart hotels. The service quality model will also be adopted and integrated into the framework to understand guests' intention to revisit. Few studies have examined the effect of service quality on guests' satisfaction and intention to revisit smart hotels. In this study, the Lodging Quality Index (LQI) will be adopted to measure service quality in smart hotels.
The UTAUT theory and the service quality model are integrated because technological applications and services require more than one model to capture the complexity of customers' acceptance of new technology. Moreover, an integrated model can provide insights into the relationships among constructs that could not be obtained from a single model. For this research, ten in-depth interviews are planned. To confirm the applicability of the proposed framework and gain an overview of the guest experience of smart hotels in the hospitality industry, in-depth interviews with hotel guests and industry practitioners will be conducted. In terms of theoretical contribution, the integration of the UTAUT theory and the service quality model is expected to provide new insights into the factors that influence guests' satisfaction and intention to revisit smart hotels. Once this study has identified the influential factors, smart hotel practitioners will be able to understand which factors significantly influence guests' satisfaction and intention to revisit. In addition, smart hotel practitioners can provide an outstanding guest experience by improving their service quality along the dimensions identified by the service quality measurement. Thus, the study will be beneficial to the sustainability of the smart hotel business.

Keywords: intention to revisit, guest satisfaction, qualitative interviews, smart hotels

Procedia PDF Downloads 189
121 Sustainable Solutions for Urban Problems: Industrial Container Housing for Endangered Communities in Maranhao, Brazil

Authors: Helida Thays Gomes Soares, Conceicao De Maria Pinheiro Correia, Fabiano Maciel Soares, Kleymer Silva

Abstract:

There is great discussion around population increase in urban areas of the global south and, consequently, the growth of inappropriate housing and the different ways humans have found to solve housing problems around the world. São Luís, the capital of the state of Maranhão, is a good example. The 1.6-million-inhabitant metropolis is a colonial tropical city that shelters 22% of the population of Maranhão, a Brazilian state that still carries the scars of slavery from past centuries. In 2016, the Brazilian Institute of Geography and Statistics found that 20% of Maranhão's inhabitants were living in houses with external walls made of non-durable materials, such as recycled wood, cardboard or soil. Against this background, this study aims to propose interventions not only in the physical structure of irregular housing, but also to serve as a guide for changing how eco-friendly, communitarian housing is perceived in extremely poor zones within the metropolitan regions of big cities in the global south. The adaptation and reuse of industrial containers from the Harbor of Itaqui for housing is also an aim of the project: the great volume of discarded industrial containers may be an opportunity to reduce the housing deficit in the city. Through field research in the São Luís neighborhoods mostly occupied by inappropriate housing, the study intends to identify ethnographic and physical values that help to shape new uses of industrial containers and recycled building materials, bringing the community into the process of shaping new housing for local housing programs and changing the mindset of a concrete/brick model of building.
The study used a general feasibility analysis by local engineers regarding the structural strength of the locally available containers for construction purposes, and also researched in loco the current impressions that inhabitants of risk areas have of their housing and of traditional housing, as well as the role they play as city shapers, evaluating their perceptions of what it means to live there and how their houses represent their personality.

Keywords: container housing, civil construction, housing deficit, participatory design, sustainability

Procedia PDF Downloads 176
120 Case-Based Reasoning Application to Predict Geological Features at Site C Dam Construction Project

Authors: Shahnam Behnam Malekzadeh, Ian Kerr, Tyson Kaempffer, Teague Harper, Andrew Watson

Abstract:

The Site C hydroelectric dam is currently being constructed in north-eastern British Columbia on sub-horizontal sedimentary strata that dip approximately 15 meters from one bank of the Peace River to the other. More than 615 pressure sensors (vibrating wire piezometers) have been installed on bedding planes (BPs) since construction began, with over 80 more planned before project completion. These pressure measurements are essential for monitoring the stability of the rock foundation during and after construction and for dam safety purposes. BPs are identified by their clay gouge infilling, which varies in thickness from less than 1 mm to 20 mm and can be challenging to identify, as the core drilling process often disturbs or washes away the gouge material. Without depth predictions from nearby boreholes, stratigraphic markers, and downhole geophysical data, it is difficult to confidently identify BP targets for the sensors. In this paper, a case-based reasoning (CBR) method was used to develop an empirical model called the Bedding Plane Elevation Prediction (BPEP) to help geologists and geotechnical engineers predict geological features and bedding planes at new locations quickly and accurately. To develop the CBR model, a database was built from 64 pressure sensors already installed on the key bedding planes BP25, BP28, and BP31 on the Right Bank, including bedding plane elevations and coordinates. Thirteen (20%) of the most recent cases were set aside to validate and evaluate the accuracy of the developed model, with similarity defined as the distance between previous cases and new cases when predicting the depth of significant BPs. The average difference between actual and predicted elevations for the above BPs was ±55 cm; 69% of predicted elevations were within ±79 cm of actual BP elevations, and 100% of predicted elevations for new cases were within ±99 cm.
Eventually, the actual results will be fed back into the database to improve BPEP, so that it performs as a learning machine and predicts BP elevations more accurately for future sensor installations.
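The retrieval step of the CBR approach described above can be sketched as a nearest-neighbor lookup: similarity is the horizontal distance between a new borehole and past cases, and the elevation is predicted from the most similar cases. This is a simplified illustration under assumed coordinates and elevations; the actual BPEP model and its case features are not detailed in the abstract.

```python
import math

def predict_elevation(cases, x, y, k=3):
    """Predict a bedding-plane elevation at (x, y) from past cases.

    cases: list of (x, y, elevation) tuples for sensors already installed
    on the same bedding plane. The k nearest cases are combined by
    inverse-distance weighting.
    """
    ranked = sorted(cases, key=lambda c: math.hypot(c[0] - x, c[1] - y))
    nearest = ranked[:k]
    # Small epsilon avoids division by zero when a case coincides with the query.
    weights = [1.0 / (math.hypot(cx - x, cy - y) + 1e-9) for cx, cy, _ in nearest]
    return sum(w * z for w, (_, _, z) in zip(weights, nearest)) / sum(weights)

# Hypothetical cases on one bedding plane (easting, northing, elevation in m).
bp_cases = [(0.0, 0.0, 402.1), (50.0, 10.0, 401.6), (120.0, -30.0, 400.9)]
print(round(predict_elevation(bp_cases, 60.0, 0.0), 2))
```

Because the prediction is a weighted average, it always falls within the range of the retrieved case elevations; adding each newly confirmed BP elevation back into the case base is what gives the approach its "learning machine" character.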

Keywords: case-based reasoning, geological feature, geology, piezometer, pressure sensor, core logging, dam construction

Procedia PDF Downloads 63
119 Controlled Synthesis of Pt₃Sn-SnOx/C Electrocatalysts for Polymer Electrolyte Membrane Fuel Cells

Authors: Dorottya Guban, Irina Borbath, Istvan Bakos, Peter Nemeth, Andras Tompos

Abstract:

One of the greatest challenges in the implementation of polymer electrolyte membrane fuel cells (PEMFCs) is to find active and durable electrocatalysts. Cell performance is always limited by the oxygen reduction reaction (ORR) on the cathode, since it is at least six orders of magnitude slower than the hydrogen oxidation on the anode; therefore, a high Pt loading is required. Catalyst corrosion is also more significant on the cathode, especially in mobile applications, where rapid load changes have to be tolerated. Pt-Sn bulk alloys and SnO2-decorated Pt3Sn nanostructures are among the most studied bimetallic systems for fuel cell applications. Exclusive formation of supported Sn-Pt alloy phases with different Pt/Sn ratios can be achieved by using controlled surface reactions (CSRs) between hydrogen adsorbed on Pt sites and tetraethyl tin. In this contribution, our results for commercial and home-made 20 wt.% Pt/C catalysts modified by tin anchoring via CSRs are presented. The parent Pt/C catalysts were synthesized by a modified NaBH4-assisted ethylene-glycol reduction method using ethanol as a solvent, which resulted either in dispersed and highly stable Pt nanoparticles or in evenly distributed raspberry-like agglomerates, depending on the chosen synthesis parameters. The 20 wt.% Pt/C catalysts prepared in this way showed improved electrocatalytic performance in the ORR and better stability than the commercial 20 wt.% Pt/C catalysts. Then, in order to obtain Sn-Pt/C catalysts with a Pt/Sn = 3 ratio, the Pt/C catalysts were modified with tetraethyl tin (SnEt4) using three and five consecutive tin anchoring periods. According to in situ XPS studies, in the case of catalysts with highly dispersed Pt nanoparticles, pre-treatment in hydrogen even at 170 °C resulted in complete reduction of the ionic tin to Sn0. No evidence of an SnO2 phase was found by XRD or EDS analysis.
These results demonstrate that the method of CSRs is a powerful tool for creating Pt-Sn bimetallic nanoparticles exclusively, without tin deposition onto the carbon support. In contrast, the XPS results revealed that the tin-modified catalysts with raspberry-like Pt agglomerates always contained a fraction of non-reducible tin oxide. At the same time, they showed higher activity and better long-term stability in the ORR than Pt/C, which was attributed to the presence of SnO2 in close proximity to, or in contact with, the Pt-Sn alloy phase. It has been demonstrated that the content and dispersion of the fcc Pt3Sn phase within the electrocatalysts can be controlled by tuning the reaction conditions of the CSRs. The bimetallic catalysts displayed outstanding performance in the ORR. The preparation of a highly dispersed 20Pt/C catalyst makes it possible to decrease the Pt content without a significant decline in the electrocatalytic performance of the catalysts.

Keywords: anode catalyst, cathode catalyst, controlled surface reactions, oxygen reduction reaction, PtSn/C electrocatalyst

Procedia PDF Downloads 216
118 Enhancing Industrial Wastewater Treatment: Efficacy and Optimization of Ultrasound-Assisted Laccase Immobilized on Magnetic Fe₃O₄ Nanoparticles

Authors: K. Verma, V. S. Moholkar

Abstract:

In developed countries, water pollution caused by industrial discharge has emerged as a significant environmental concern over the past decades. However, despite ongoing efforts, a fully effective and sustainable remediation strategy has yet to be identified. This paper describes how enzymatic and sonochemical treatments have demonstrated great promise in degrading bio-refractory pollutants. In particular, a compelling area of interest lies in the combined technique of sono-enzymatic treatment, which exhibits a synergistic enhancement surpassing that of the individual techniques. This study employed the covalent attachment method to immobilize laccase from Trametes versicolor onto amino-functionalized magnetic Fe₃O₄ nanoparticles. To comprehensively characterize the synthesized free nanoparticles and the laccase-immobilized nanoparticles, various techniques were employed, including X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), scanning electron microscopy (SEM), vibrating sample magnetometry (VSM), and Brunauer-Emmett-Teller (BET) surface area analysis. The size of the immobilized Fe₃O₄@Laccase was found to be 60 nm, and the maximum loading of laccase was 24 mg/g of nanoparticles. An investigation was conducted to study the effect of various process parameters, such as immobilized Fe₃O₄@Laccase dose, temperature, and pH, on the percentage chemical oxygen demand (COD) removal as the response. The statistical design pinpointed the optimum conditions (immobilized Fe₃O₄@Laccase dose = 1.46 g/L, pH = 4.5, and temperature = 66 °C), resulting in a remarkable 65.58% COD removal within 60 minutes. An even greater improvement (90.31% COD removal) was achieved with the ultrasound-assisted enzymatic reaction at a 10% duty cycle.
The investigation of various kinetic models for free and immobilized laccase, such as the Haldane, Yano and Koga, and Michaelis-Menten models, showed that ultrasound application impacted the kinetic parameters Vmax and Km. Specifically, Vmax values for free and immobilized laccase were found to be 0.021 mg/L min and 0.045 mg/L min, respectively, while Km values were 147.2 mg/L for free laccase and 136.46 mg/L for immobilized laccase. The lower Km and higher Vmax for immobilized laccase indicate its enhanced affinity towards the substrate, likely due to ultrasound-induced alterations in the enzyme's conformation and increased exposure of active sites, leading to more efficient degradation. Furthermore, toxicity and liquid chromatography-mass spectrometry (LC-MS) analyses revealed that after the treatment process, the wastewater exhibited 70% less toxicity than before treatment, with over 25 compounds degraded by more than 75%. Finally, the immobilized laccase showed excellent recyclability, retaining 70% of its activity over six consecutive cycles. A straightforward manufacturing strategy and outstanding performance make the recyclable magnetic immobilized laccase (Fe₃O₄@Laccase) a promising option for various environmental applications, particularly in water pollution control and treatment.
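The kinetic argument above can be checked directly from the Michaelis-Menten equation, v = Vmax·S/(Km + S), using the reported parameter values; the substrate concentration chosen below is illustrative.

```python
def mm_rate(vmax, km, s):
    """Michaelis-Menten rate v = Vmax * S / (Km + S)."""
    return vmax * s / (km + s)

# Reported parameters (Vmax in mg/L min, Km in mg/L).
FREE = {"vmax": 0.021, "km": 147.2}     # free laccase
IMMOB = {"vmax": 0.045, "km": 136.46}   # immobilized Fe3O4@laccase

s = 100.0  # illustrative substrate concentration, mg/L
v_free = mm_rate(FREE["vmax"], FREE["km"], s)
v_immob = mm_rate(IMMOB["vmax"], IMMOB["km"], s)
print(v_free, v_immob)
```

Because the immobilized enzyme has both a higher Vmax and a lower Km, v_immob exceeds v_free at every substrate concentration, which is the quantitative basis for the enhanced-affinity claim.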

Keywords: kinetic, laccase enzyme, sonoenzymatic, ultrasound irradiation

Procedia PDF Downloads 45