Search results for: traditional products
866 AI-Based Information System for Hygiene and Safety Management of Shared Kitchens
Authors: Jongtae Rhee, Sangkwon Han, Seungbin Ji, Junhyeong Park, Byeonghun Kim, Taekyung Kim, Byeonghyeon Jeon, Jiwoo Yang
Abstract:
The shared kitchen is a concept that transfers the value of the sharing economy to the kitchen. It is a type of kitchen equipped with cooking facilities that allows multiple companies or chefs to share time and space and use it jointly. Shared kitchens provide economic benefits and convenience, such as reduced investment costs and rent, but they also increase safety management risks, such as cross-contamination of food ingredients. Therefore, to manage the safety of food ingredients and finished products in a shared kitchen, where several entities jointly use the kitchen and handle various types of food ingredients, it is critical to manage the following: the freshness of food ingredients, user hygiene and safety, and cross-contamination of cooking equipment and facilities. In this study, we propose a machine learning-based system for hygiene safety and cross-contamination management, which are highly difficult to manage. User clothing management and user access management, which are most relevant to the hygiene and safety of shared kitchens, are addressed through machine learning-based methods, and cutting board usage management, which is most relevant to cross-contamination management, is implemented as part of an integrated safety management system based on artificial intelligence. First, to prevent cross-contamination of food ingredients, we use images collected through a real-time camera to determine whether the food ingredients match a given cutting board, based on a real-time object detection model, YOLOv7. To manage the hygiene of user clothing, we use a camera-based facial recognition model to recognize the user and a real-time object detection model to determine whether a sanitary hat and mask are worn. In addition, to manage access for users qualified to enter the shared kitchen, we utilize a machine learning-based signature recognition module.
By comparing the pairwise distance between the contract signature and the signature given at the time of entrance to the shared kitchen, access permission is determined through a pre-trained signature verification model. These machine learning-based safety management tasks are integrated into a single information system, and each result is managed in an integrated database. Through this, users are warned of safety hazards through a tablet PC installed in the shared kitchen, and managers can track the causes of sanitary and safety incidents. As a result of the system integration analysis, real-time safety management services can be provided continuously by artificial intelligence, and machine learning-based methodologies are used for the integrated safety management of shared kitchens that allow dynamic contracts among various users. By solving this problem, we were able to secure the feasibility and safety of the shared kitchen business.
Keywords: artificial intelligence, food safety, information system, safety management, shared kitchen
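The pairwise-distance signature check described above can be sketched as a comparison of feature embeddings; the embedding vectors, dimensionality, and threshold below are hypothetical stand-ins for the output of the pre-trained verification model, not values from the study.

```python
import math

def euclidean_distance(a, b):
    """Pairwise distance between two signature embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify_signature(contract_emb, entrance_emb, threshold=0.5):
    """Grant access when the entrance signature is close enough
    to the signature stored at contract time (threshold is illustrative)."""
    return euclidean_distance(contract_emb, entrance_emb) <= threshold

# Hypothetical embeddings produced by a pre-trained encoder
contract = [0.12, 0.80, 0.33]
genuine = [0.15, 0.78, 0.30]
forgery = [0.90, 0.10, 0.75]

print(verify_signature(contract, genuine))   # True
print(verify_signature(contract, forgery))   # False
```

In practice, the threshold would be calibrated on pairs of genuine and forged signatures rather than fixed by hand.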
Procedia PDF Downloads 69
865 Assessing Acute Toxicity and Endocrine Disruption Potential of Selected Packages Internal Layers Extracts
Authors: N. Szczepanska, B. Kudlak, G. Yotova, S. Tsakovski, J. Namiesnik
Abstract:
In the scientific literature related to the widely understood issue of packaging materials designed to be in contact with food (food contact materials), there is much information on the raw materials used for their production, as well as their physicochemical properties, types, and parameters. However, not much attention is given to the migration of toxic substances from packaging and its actual influence on the health of the final consumer, even though health protection and food safety are priority tasks. The goal of this study was to estimate the impact of foodstuff packaging type, food production, and storage conditions on the degree of leaching of potentially toxic compounds and endocrine disruptors into foodstuffs, using the Microtox acute toxicity test and the XenoScreen YES YAS assay. The selected foodstuff packaging materials were metal cans used for fish storage and tetrapak. Five simulants corresponding to specific kinds of food were chosen in order to assess global migration: distilled water for aqueous foods with a pH above 4.5; 3% acetic acid in distilled water for acidic aqueous foods with a pH below 4.5; 5% ethanol for any food that may contain alcohol; and dimethyl sulfoxide (DMSO) and artificial saliva, used in view of their possible application as simulation media. For each packaging material, a factorial design over the independent variables temperature and contact time was performed for each simulant. Xenobiotics migration from epoxy resins was studied at three different temperatures (25°C, 65°C, and 121°C) and extraction times of 12 h, 48 h, and 2 weeks. This experimental design leads to 9 experiments for each food simulant, as the conditions of each experiment are obtained by combining the temperature and contact time levels. Each experiment was run in triplicate for acute toxicity and in duplicate for the determination of endocrine disruption potential.
Multi-factor analysis of variance (MANOVA) was used to evaluate the effects of the three main factors, i.e., solvent, temperature (temperature regime for the cup), and contact time, and their interactions on the respective dependent variable (acute toxicity or endocrine disruption potential). Of all the simulants studied, the most toxic were the can and tetrapak lining acetic acid extracts, which is an indication of significant migration of toxic compounds. This migration increased with increasing contact time and temperature and supports the hypothesis that food products with low pH values cause significant damage to the internal resin lining. Can lining extracts in all simulation media, excluding distilled water and artificial saliva, proved to contain androgen agonists even at 25°C and an extraction time of 12 h. For tetrapak extracts, significant endocrine potential was detected for acetic acid, DMSO, and saliva.
Keywords: food packaging, extraction, migration, toxicity, biotest
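The factorial design described above (five simulants, three temperatures, three contact times, i.e., nine experiments per simulant) can be enumerated programmatically; the sketch below simply spells out the combinations given in the abstract.

```python
from itertools import product

simulants = ["distilled water", "3% acetic acid", "5% ethanol",
             "DMSO", "artificial saliva"]
temperatures_c = [25, 65, 121]
contact_times_h = [12, 48, 336]  # 2 weeks = 336 h

# Full factorial: every combination of simulant, temperature and contact time
design = list(product(simulants, temperatures_c, contact_times_h))

per_simulant = len(temperatures_c) * len(contact_times_h)
print(len(design))      # 45 runs in total
print(per_simulant)     # 9 experiments per simulant

# Replicates: triplicate for acute toxicity, duplicate for endocrine potential
toxicity_runs = len(design) * 3
endocrine_runs = len(design) * 2
```

Enumerating the design this way makes the replicate counts (135 toxicity runs, 90 endocrine runs) explicit before any lab work is scheduled.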
Procedia PDF Downloads 181
864 Predicting Long-Term Performance of Concrete under Sulfate Attack
Authors: Elakneswaran Yogarajah, Toyoharu Nawa, Eiji Owaki
Abstract:
Cement-based materials have been used in various reinforced concrete structural components as well as in nuclear waste repositories. Sulfate attack has been an environmental issue for cement-based materials exposed to sulfate-bearing groundwater or soils, and it plays an important role in the durability of concrete structures. The reaction between penetrating sulfate ions and cement hydrates can result in swelling, spalling, and cracking of the cement matrix in concrete. These processes induce a reduction of mechanical properties and a decrease in the service life of an affected structure. It has been identified that the precipitation of secondary sulfate-bearing phases such as ettringite, gypsum, and thaumasite can cause the damage. Furthermore, crystallization of soluble salts such as sodium sulfate induces degradation through crystal formation and phase changes. Crystallization of mirabilite (Na₂SO₄·10H₂O) and thenardite (Na₂SO₄), or their phase changes (mirabilite to thenardite or vice versa) due to temperature or sodium sulfate concentration, does not involve any chemical interaction with cement hydrates. Over the past couple of decades, intensive work has been carried out on sulfate attack in cement-based materials. However, several uncertainties still exist regarding the mechanism of the damage of concrete in sulfate environments. In this study, modelling work has been conducted to investigate the chemical degradation of cementitious materials in various sulfate environments. Both internal and external sulfate attack are considered in the simulation. For the internal sulfate attack, the hydrate assemblage and pore solution chemistry of Portland cement (PC) and slag co-hydrating with sodium sulfate solution are calculated to determine the degradation of the PC and slag-blended cementitious materials. Pitzer interaction coefficients were used to calculate the activity coefficients of the solution chemistry at high ionic strength.
The deterioration mechanism of co-hydrating cementitious materials with 25% Na₂SO₄ by weight is the formation of mirabilite crystals and ettringite. Their formation strongly depends on sodium sulfate concentration and temperature. For the external sulfate attack, the deterioration of various types of cementitious materials under external sulfate ingress is simulated through a reactive transport model. The reactive transport model is verified against experimental data in terms of the phase assemblage of various cementitious materials, with spatial distributions for different sulfate solutions. Finally, the reactive transport model is used to predict the long-term performance of cementitious materials exposed to 10% Na₂SO₄ for 1000 years. The dissolution of cement hydrates and the secondary formation of sulfate-bearing products, mainly ettringite, are the dominant degradation mechanisms, but not sodium sulfate crystallization.
Keywords: thermodynamic calculations, reactive transport, radioactive waste disposal, PHREEQC
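A reactive transport model of the kind used above couples transport of ions with chemical reaction. As a minimal illustration only (not the geochemical model of the study), the sketch below solves 1-D diffusion of sulfate into a slab with a first-order sink standing in for reaction with cement hydrates; every parameter value is a hypothetical placeholder.

```python
# Explicit finite-difference solution of dC/dt = D*d2C/dx2 - k*C
def simulate(nx=50, nt=2000, dx=1e-3, dt=1000.0, D=1e-11, k=1e-7, c_boundary=1.0):
    c = [0.0] * nx
    c[0] = c_boundary            # exposed face held at the external concentration
    r = D * dt / dx ** 2         # must stay below 0.5 for stability (here 0.01)
    for _ in range(nt):
        new = c[:]
        for i in range(1, nx - 1):
            # Jacobi update: diffusion between neighbours minus reaction sink
            new[i] = c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1]) - k * dt * c[i]
        c = new
        c[0] = c_boundary
    return c

profile = simulate()
# Sulfate concentration decays monotonically with depth from the exposed face
```

A production model would replace the lumped sink `k*C` with full speciation and mineral equilibria (as PHREEQC does), but the transport skeleton is the same.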
Procedia PDF Downloads 163
863 Mikrophonie I (1964) by Karlheinz Stockhausen - Between Idea and Auditory Image
Authors: Justyna Humięcka-Jakubowska
Abstract:
1. Background in music analysis. Traditionally, when we think about a composer's sketches, the chances are that we are thinking in terms of the working out of detail, rather than the evolution of an overall concept. Since music is a 'time art', it follows that questions of form cannot be entirely detached from considerations of time. One could say that composers tend to regard time either as a place gradually and partly intuitively filled, or they look for a specific strategy to occupy it. In my opinion, one thing that sheds light on Stockhausen's compositional thinking is his frequent use of 'form schemas', that is, often a single-page representation of the entire structure of a piece. 2. Background in music technology. Sonic Visualiser is a program used to study a musical recording. It is an open source application for viewing, analysing, and annotating music audio files. It contains a number of visualisation tools, which are designed with useful default parameters for musical analysis. Additionally, Sonic Visualiser supports the Vamp plugin format, which provides analyses such as structural segmentation. 3. Aims. The aim of my paper is to show how Sonic Visualiser may be used to obtain a better understanding of a specific musical work, and how the compositional strategy impacts musical structures and musical surfaces. I want to show that 'traditional' music-analytic methods do not allow one to indicate the interrelationships between the musical surface (which is perceived) and the underlying musical/acoustical structure. 4. Main Contribution. Stockhausen dealt with the most diverse musical problems by the most varied methods. One characteristic that he never ceased to place at the center of his thought and works was the quest for a new balance founded upon an acute connection between speculation and intuition.
In the case of Mikrophonie I (1964) for tam-tam and 6 players, Stockhausen makes a distinction between the 'connection scheme', which indicates the ground rules underlying all versions, and the form scheme, which is associated with a particular version. The preface to the published score includes both the connection scheme and a single instance of a 'form scheme', which is what one can hear on the CD recording. In the current study, insight into the compositional strategy chosen by Stockhausen is compared with the auditory image, that is, with the perceived musical surface. Stockhausen's musical work is analyzed both in terms of melodic/voice and timbre evolution. 5. Implications. The current study shows how musical structures determine the musical surface. My general assumption is that while listening to music we can extract basic kinds of musical information from musical surfaces. It is shown that interactive strategies of musical structure analysis can offer a very fruitful way of looking directly into certain structural features of music.
Keywords: automated analysis, composer's strategy, Mikrophonie I, musical surface, Stockhausen
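Structural segmentation of the kind provided by Vamp plugins is often based on a novelty curve computed from a self-similarity matrix of audio features; the toy feature frames below are placeholders for real descriptors (e.g., chroma or MFCCs), and the kernel is a plain checkerboard rather than any particular plugin's implementation.

```python
def novelty_curve(features, w=3):
    """Checkerboard-kernel novelty along the diagonal of the
    self-similarity matrix; peaks mark likely segment boundaries."""
    def sim(i, j):  # dot-product similarity between two feature frames
        return sum(a * b for a, b in zip(features[i], features[j]))
    n = len(features)
    scores = [0.0] * n
    for t in range(w, n - w):
        s = 0.0
        for i in range(-w, w):
            for j in range(-w, w):
                # +1 when both offsets fall on the same side of t, -1 across
                sign = 1.0 if (i < 0) == (j < 0) else -1.0
                s += sign * sim(t + i, t + j)
        scores[t] = s
    return scores

# Toy feature sequence: two homogeneous 'sections' of 10 frames each
features = [[1.0, 0.0]] * 10 + [[0.0, 1.0]] * 10
scores = novelty_curve(features)
boundary = max(range(len(scores)), key=scores.__getitem__)
print(boundary)  # 10, the frame where the section changes
```

Inside a homogeneous section the checkerboard sums cancel to zero, so only genuine changes of musical material produce peaks.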
Procedia PDF Downloads 297
862 Cai Guo-Qiang: A Chinese Artist at the Cutting-Edge of Global Art
Authors: Marta Blavia
Abstract:
Magiciens de la terre, organized in 1989 by the Centre Pompidou, became 'the first worldwide exhibition of contemporary art' by presenting artists from Western and non-Western countries, including three Chinese artists. For the first time, the West turned its eyes to other countries not as exotic sources of inspiration, but as places where contemporary art was also being created. One year later, Chine: demain pour hier was inaugurated as the first Chinese avant-garde group exhibition in the West. Among the artists included was Cai Guo-Qiang who, like many other Chinese artists, had left his home country in the eighties in pursuit of greater creative freedom. By exploring non-Western artistic perspectives, both landmark exhibitions questioned the predominance of the Eurocentric vision in the construction of art history. But more than anything else, these exhibitions laid the groundwork for the rise of the phenomenon of so-called 'global contemporary art'. At the same time, 1989 was also a turning point in Chinese art history. Because of the Tiananmen student protests, the Chinese government undertook a series of measures to cut down any kind of avant-garde artistic activity after a decade of relative openness. During the eighties, and especially after the Tiananmen crackdown, some important artists began to leave China and move overseas, such as Xu Bing and Ai Weiwei (USA); Chen Zhen and Huang Yong Ping (France); or Cai Guo-Qiang (Japan). After emigrating, overseas Chinese artists began to develop projects in accordance with their new environments and audiences, as well as to appear in numerous international exhibitions. With their creations, which moved freely between a variety of Eastern and Western art sources, these artists were crucial agents in the emergence of global contemporary art.
As with other overseas Chinese artists, Cai Guo-Qiang's career took off during the 1990s and early 2000s, right at the moment when the Western art world started to look beyond itself. Little by little, he developed a very personal artistic language that redefines Chinese ideas, symbols, and traditional materials in a new world order marked by globalization. Cai Guo-Qiang participated in many of the exhibitions that contributed to shaping global contemporary art: Encountering the Others (1992); the 45th Venice Biennale (1993); Inside Out: New Chinese Art (1997); and the 48th Venice Biennale (1999), where he recreated the monumental Chinese social realist work Rent Collection Courtyard, which earned him the Golden Lion Award. By examining the different stages of Cai Guo-Qiang's artistic path as well as the transnational dimensions of his creations, this paper aims to offer a comprehensive survey of the construction of the discourse of global contemporary art.
Keywords: Cai Guo-Qiang, Chinese artists overseas, emergence of global art, transnational art
Procedia PDF Downloads 284
861 Decolonizing Print Culture and Bibliography Through Digital Visualizations of Artists’ Books at the University of Miami
Authors: Alejandra G. Barbón, José Vila, Dania Vazquez
Abstract:
This study seeks to contribute to the advancement of library and archival sciences in the areas of records management, knowledge organization, and information architecture, particularly focusing on the enhancement of bibliographic description through the incorporation of visual interactive designs aimed at enriching the library users' experience. In an era of heightened awareness of the legacy of hiddenness across special and rare collections in libraries and archives, along with the need for inclusivity in academia, the University of Miami Libraries has embarked on an innovative project that intersects the realms of print culture, decolonization, and digital technology. This proposal presents an initiative to revitalize the study of Artists' Books collections by employing digital visual representations to decolonize the bibliographic records of some of the most unique materials and to foster a more holistic understanding of cultural heritage. Artists' Books, a dynamic and interdisciplinary art form, challenge conventional bibliographic classification systems, making them ripe for the exploration of alternative approaches. This project involves the creation of a digital platform that combines multimedia elements for digital representations, interactive information retrieval systems, innovative information architecture, current bibliographic cataloging and metadata initiatives, and collaborative curation to transform how we engage with and understand these collections. By embracing the potential of technology, we aim to transcend traditional constraints and address the historical biases that have influenced bibliographic practices. In essence, this study showcases a groundbreaking endeavor at the University of Miami Libraries that seeks not only to enhance bibliographic practices but also to confront the legacy of hiddenness across special and rare collections in libraries and archives while strengthening conventional bibliographic description.
By embracing digital visualizations, we aim to provide new pathways for understanding Artists' Books collections in a manner that is more inclusive, dynamic, and forward-looking. This project exemplifies the University's dedication to fostering critical engagement, embracing technological innovation, and promoting diverse and equitable classifications and representations of cultural heritage.
Keywords: decolonizing bibliographic cataloging frameworks, digital visualizations information architecture platforms, collaborative curation and inclusivity for records management, engagement and accessibility increasing interaction design and user experience
Procedia PDF Downloads 74
860 Polarimetric Study of System Gelatin / Carboxymethylcellulose in the Food Field
Authors: Sihem Bazid, Meriem El Kolli, Aicha Medjahed
Abstract:
Proteins and polysaccharides are the two types of biopolymers most frequently used in the food industry to control the mechanical properties, structural stability, and organoleptic properties of products. The textural and structural properties of blends of these two types of polymers depend on their interactions and their ability to form organized structures. From an industrial point of view, a better understanding of protein/polysaccharide mixtures is an important issue, since they are already heavily involved in processed food. It is in this context that we have chosen to work on a model system composed of a mixture of a fibrous protein (gelatin) and an anionic polysaccharide (sodium carboxymethylcellulose). Gelatin, one of the most popular biopolymers, is widely used in food, pharmaceutical, cosmetic, and photographic applications because of its unique functional and technological properties. Sodium carboxymethylcellulose (NaCMC) is an anionic linear polysaccharide derived from cellulose. It is an important industrial polymer with a wide range of applications. The functional properties of this anionic polysaccharide can be modified by the presence of proteins with which it might interact. Another factor that may govern the interactions in protein-polysaccharide mixtures is the triple helix of gelatin. Its complex synthesis results in an extracellular assembly organized on several levels: collagen can be in a soluble state or associate into fibrils, which can in turn associate into fibers, each level corresponding to an organization recognized by the cellular and metabolic system. The formation of a gelatin gel involves the triple-helical refolding of denatured collagen chains; this gel has been the subject of numerous studies, and it is now known that its properties depend only on the fraction of triple helices forming the network. Chemical modification of this system is well controlled.
Observing the dynamics of the triple helix may be relevant to understanding the interactions involved in protein-polysaccharide mixtures. Gelatin is central to many industrial processes; understanding and analyzing the molecular dynamics induced by the triple helix in the transitions of gelatin can have great economic importance in many fields, especially in food. The goal is to understand the possible mechanisms involved, depending on the nature of the mixtures obtained. From a fundamental point of view, it is clear that the protective effect of NaCMC on gelatin and the conformational changes of the α-helix are strongly influenced by the nature of the medium. Our goal is to minimize, as much as possible, changes in the α-helix structure in order to keep gelatin more stable and to protect it against the denaturation that occurs during conversion processes in the food industry. In order to study the nature of the interactions and assess the properties of the mixtures, polarimetry was used to monitor the optical parameters and to assess the helicity rate of gelatin.
Keywords: gelatin, sodium carboxymethylcellulose, gelatin-NaCMC interaction, helicity rate, polarimetry
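A helicity rate of the kind assessed above by polarimetry is commonly estimated by linear interpolation of the specific optical rotation between the fully coiled and fully helical limits (a two-state assumption). The reference rotation values below are hypothetical placeholders, not constants from the study.

```python
def helix_fraction(alpha_obs, alpha_coil, alpha_helix):
    """Two-state estimate of the helix fraction from specific rotation.
    Assumes rotation varies linearly between the coil and helix limits."""
    return (alpha_obs - alpha_coil) / (alpha_helix - alpha_coil)

# Hypothetical specific rotations (degrees), for illustration only
alpha_coil, alpha_helix = -137.0, -360.0

# A measured rotation halfway between the limits gives a fraction of 0.5
print(round(helix_fraction(-248.5, alpha_coil, alpha_helix), 3))  # 0.5
```

Tracking this fraction over time or temperature would give the helix-coil transition curve that the polarimetric monitoring is after.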
Procedia PDF Downloads 312
859 Enhancing Athlete Training Using Real-Time Pose Estimation with Neural Networks
Authors: Jeh Patel, Chandrahas Paidi, Ahmed Hambaba
Abstract:
Traditional methods for analyzing athlete movement often lack the detail and immediacy required for optimal training. This project aims to address this limitation by developing a real-time human pose estimation system specifically designed to enhance athlete training across various sports. The system leverages convolutional neural networks (CNNs) to provide a comprehensive and immediate analysis of an athlete's movement patterns during training sessions. The core architecture utilizes dilated convolutions to capture crucial long-range dependencies within video frames, combined with a robust encoder-decoder architecture to further refine pose estimation accuracy. This capability is essential for precise joint localization across the diverse range of athletic poses encountered in different sports. Furthermore, by quantifying movement efficiency, power output, and range of motion, the system provides data-driven insights that can be used to optimize training programs. Pose estimation data analysis can also be used to develop personalized training plans that target specific weaknesses identified in an athlete's movement patterns. To overcome the limitations posed by outdoor environments, the project employs strategies such as multi-camera configurations and depth-sensing techniques. These approaches can enhance pose estimation accuracy in challenging lighting and occlusion scenarios. A dataset was collected from the labs of Martin Luther King at San Jose State University. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios. Results indicate a high level of precision in recognizing different poses, substantiating the potential of this technology in practical applications.
Challenges such as enhancing the system's ability to operate in varied environmental conditions and further expanding the training dataset were identified and discussed. Future work will refine the model's adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced pose detection model and lays the groundwork for future innovations in assistive enhancement technologies.
Keywords: computer vision, deep learning, human pose estimation, U-Net, CNN
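The dilated convolutions mentioned above enlarge the receptive field without adding parameters. The 1-D pure-Python sketch below (not the project's CNN) shows the mechanics: taps spaced `dilation` samples apart, and a receptive field that grows as dilations are stacked; the kernel values and dilation schedule are illustrative.

```python
def dilated_conv1d(x, kernel, dilation):
    """'Valid' 1-D convolution with gaps of `dilation` between taps."""
    k = len(kernel)
    span = (k - 1) * dilation + 1   # effective extent of the kernel
    return [sum(kernel[j] * x[i + j * dilation] for j in range(k))
            for i in range(len(x) - span + 1)]

def receptive_field(kernel_size, dilations):
    """Receptive field of stacked dilated conv layers (stride 1)."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

x = [float(v) for v in range(16)]
y = dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=4)  # taps at i, i+4, i+8
print(len(y))                         # 8 valid output positions
print(receptive_field(3, [1, 2, 4]))  # 15 samples seen by the top layer
```

Three stacked 3-tap layers with dilations 1, 2, 4 thus see 15 input samples, versus 7 for the same stack without dilation, which is what makes long-range joint dependencies reachable at low cost.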
Procedia PDF Downloads 55
858 Pre- and Post-Brexit Experiences of the Bulgarian Working Class Migrants: Qualitative and Quantitative Approaches
Authors: Mariyan Tomov
Abstract:
Bulgarian working class immigrants are increasingly concerned about the UK's recent immigration policies in the context of Brexit. The new ID system would exclude many people currently working in Britain and would break the usual immigrant travel patterns. Post-Brexit Britain aims to turn away seasonal immigrants. Measures for keeping long-term and life-long immigrants have been implemented, and migrants who aim to remain in Britain and establish a household there would be more privileged than temporary or seasonal workers. The results of such regulating mechanisms come at the expense of migrants' longings for a 'normal' existence, especially for those coming from Central and Eastern Europe. Based on in-depth interviews with Bulgarian working class immigrants, the study found that their major concerns following the decision of the UK to leave the EU relate to the freedom to travel, reside, and work in the UK. Furthermore, many of the interviewed women are concerned that they could lose some of the EU's fundamental rights, such as maternity rights and the protection of pregnant women from unlawful dismissal. The rise in commodity prices and university fees and the limited access to public services, healthcare, and social benefits in the UK are also discussed in the paper. The most serious problem, according to the interviews, is that the attitude towards Bulgarians and other immigrants in the UK is deteriorating. Both traditional and social media in the UK often portray migrants negatively by claiming that they take British jobs while simultaneously abusing the welfare system. As a result, Bulgarian migrants often face social exclusion, which might have a negative influence on their health and welfare. In this sense, some of the interviewees stress that the most important changes after Brexit must take place in British society itself.
The aim of the proposed study is to provide a better understanding of Bulgarian migrants' economic, health, and sociocultural experiences in the context of Brexit. Methodologically, the proposed paper leans on: 1. Analysis of ethnographic materials dedicated to the pre- and post-migratory experiences of Bulgarian working class migrants, using SPSS. 2. Semi-structured interviews conducted with more than 50 Bulgarian working class migrants [N > 50] in the UK, aged between 18 and 65. Communication with the interviewees took place via Viber/Skype or face-to-face. 3. Analysis guided by theoretical frameworks. The paper has been developed within the framework of the research projects of the National Scientific Fund of Bulgaria: DCOST 01/25-20.02.2017 supporting COST Action CA16111 'International Ethnic and Immigrant Minorities Survey Data Network'.
Keywords: Bulgarian migrants in the UK, economic experiences, sociocultural experiences, Brexit
Procedia PDF Downloads 127
857 The Role of the Board of Directors and Chief Executive Officers in Leading and Embedding Corporate Social Responsibility within Corporate Governance Regulations
Authors: Khalid Alshaikh
Abstract:
In recent years, leadership, Corporate Governance (CG), and Corporate Social Responsibility (CSR) have been under scrutiny in Libyan society. Scholars and institutions have commenced investigating the possible measures they can take to alleviate the economic, social, and environmental problems the war has produced. Thus far, these constructs require an in-depth reinvestigation, reconceptualization, and analysis to clearly reconstruct their rules and regulations. With the demise of Qaddafi's regime, the levels of, and efforts towards, applying CG regulations have varied between public and private commercial banks. CSR is a new organizational culture that is still finding its route within these financial institutions. Detaching itself from any notion of dictatorship and autocratic traits, leadership counts on transformational and transactional styles. Therefore, this paper investigates the extent to which Boards of Directors and Chief Executive Officers (CEOs) redefine these concepts and how they entrench CSR within the framework of CG. The research methodology used both public and private banks as a case study and employed qualitative research, interviewing ten members of Boards of Directors (BoDs) and eleven chief executive managers to explore how leadership, CG, and CSR are defined and how leadership integrates CSR into CG structures. The findings suggest that the CG framework in Libya still requires great effort to be developed, and full implementation of the CG code appears daunting. Also, CSR is still influenced by the power of religion. Nevertheless, the Islamic perspective is more consistent with the social contract concept of CSR. The Libyan commercial banks do not solely focus on the economic goal of maximizing profits, but also concentrate on its moral dimension. The issue is that CSR activities are not enough to achieve good charity publicly, and strategies are needed to address major social issues.
Moreover, leadership is more transformational and transactional and endeavors to make economic, social, and environmental changes, but these changes are curtailed by tradition and the traditional values dominating Libyan social life, where religious and tribal practices establish the relationship between leaders and their subordinates. Finally, the findings reveal that transformational and transactional leadership styles encourage the incorporation of CSR into CG regulations. The boardroom and executive management have a particular role in flagging up how embedded corporate social responsibility is in organizational culture across the commercial banks, yet the BoDs and CEOs still need to do much more to embed corporate social responsibility through their core functions. They need to boost their standing to be more influential and to make sure that the right discussions about CSR happen with the right stakeholders involved.
Keywords: board of directors, chief executive officers, corporate governance, corporate social responsibility
Procedia PDF Downloads 171
856 Introducing Information and Communication Technologies in Prison: A Proposal in Favor of Social Reintegration
Authors: Carmen Rocio Fernandez Diaz
Abstract:
This paper focuses on the relevance of information and communication technologies (hereinafter referred to as 'ICTs') as an essential part of the day-to-day life of all societies nowadays, as they offer the scenario where an immense number of behaviors that previously took place in the physical world are now performed. In this context, areas of reality that remain outside the so-called 'information society' are hardly imaginable. Nevertheless, it is possible to identify one sphere that continues to lag behind this reality: the penitentiary system, as regards inmates' rights, since security aspects in prison have already been improved by new technologies. Introducing ICTs in prisons still meets great resistance. The study of comparative penitentiary systems worldwide shows that most of them use ICTs only for the educational aspects of life in prison and that communications with the outside world are generally based on traditional means. These are only two examples of the huge range of activities where ICTs can yield positive results within the prison. Those positive results have to do with the social reintegration of persons serving a prison sentence. Deprivation of liberty entails contact with the prison subculture and its harmful effects, causing, in the case of long-term sentences, the so-called phenomenon of 'prisonization'. This negative effect of imprisonment could be reduced if ICTs were used inside prisons in the different areas where they can have an impact, which are treated in this research: (1) access to information and culture, (2) basic and advanced training, (3) employment, (4) communication with the outside world, (5) treatment, and (6) leisure and entertainment. The content of all of these areas could be improved if ICTs were introduced in prison, as shown by the experience of some prisons in Belgium, the United Kingdom, and the United States.
However, resistance to introducing ICTs in prisons stems from the fact that they could also carry risks concerning security and the commission of new offences. Considering these risks, the scope of this paper is to offer a realistic proposal for introducing ICTs in prison while avoiding those risks. This would be done to take advantage of the possibilities that ICTs offer to all inmates in order to start building a life outside that is far from delinquency, but mainly to those inmates who are close to release. Reforming prisons in this sense is considered by the author an opportunity to offer inmates a progressive resettlement into life in freedom, with a higher likelihood of obeying the law and escaping recidivism. The value that new technologies would add to the education, employment, communications, or treatment of a person deprived of liberty constitutes a way of humanising prisons in the 21st century.
Keywords: deprivation of freedom, information and communication technologies, imprisonment, social reintegration
Procedia PDF Downloads 165
855 Phenomena-Based Approach for Automated Generation of Process Options and Process Models
Authors: Parminder Kaur Heer, Alexei Lapkin
Abstract:
Due to the global challenges of increased competition and demand for more sustainable products/processes, there is rising pressure on industry to develop innovative processes. Through Process Intensification (PI), existing and new processes may attain higher efficiency. However, very few PI options are generally considered, because processes are typically analysed at the unit operation level, thus limiting the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. PI can be pursued at three levels: the unit operation, functional, and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all intensification options can be described by their enhancement. The objective of the current work is, therefore, the generation of numerous process alternatives based on phenomena and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is disintegrated into functions, viz. reaction, separation, etc., and these functions are further broken down into the phenomena required to perform them. E.g., separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena, which can overcome the difficulties/drawbacks of the current process or enhance its effectiveness, are added to the list. For instance, a catalyst separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense, and hence screening is carried out to discard the combinations that are meaningless.
For example, phase change phenomena need the co-presence of energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e. it might perform reaction alone or reaction with separation. The combinations are then allotted to the functions needed for the process. This creates a series of options for carrying out each function. Combining these options for the different functions in the process leads to the generation of a superstructure of process options. These process options, each defined by a list of phenomena per function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to obtain the process model. The most promising process options are then chosen subject to a performance criterion, for example purity of product, or via a multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the higher product yield. The current methodology can identify, produce, and evaluate process intensification options from which the optimal process can be determined. It can be applied to any chemical/biochemical process because of its generic nature.
Keywords: phenomena, process intensification, process models, process options
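The enumeration, screening, and binary-encoding steps described in this abstract can be sketched in a few lines. This is a minimal illustration only: the phenomena labels and the single feasibility rule (phase change requires energy transfer) are stand-ins for the authors' full rule base, not the actual rules used.

```python
from itertools import combinations

# Illustrative phenomena labels (not the paper's actual list).
PHENOMENA = ["mixing", "reaction", "phase_change", "energy_transfer", "vl_equilibrium"]

def feasible(combo):
    """Screening rule quoted in the text: phase-change phenomena require
    the co-presence of an energy-transfer phenomenon."""
    if "phase_change" in combo and "energy_transfer" not in combo:
        return False
    return True

def generate_options(phenomena, min_size=1):
    """Enumerate all phenomena combinations, keep only feasible ones, and
    return each option as a binary activation vector (1 = phenomenon active),
    the encoding passed to the model generation algorithm."""
    options = []
    for r in range(min_size, len(phenomena) + 1):
        for combo in combinations(phenomena, r):
            if feasible(combo):
                options.append([1 if p in combo else 0 for p in phenomena])
    return options

options = generate_options(PHENOMENA)
```

Each surviving binary vector stands for one process option; the model generator would read off the 1-entries and couple the corresponding phenomena models.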
Procedia PDF Downloads 232
854 The Effect of Improvement Programs in the Mean Time to Repair and in the Mean Time between Failures on Overall Lead Time: A Simulation Using the System Dynamics-Factory Physics Model
Authors: Marcel Heimar Ribeiro Utiyama, Fernanda Caveiro Correia, Dario Henrique Alliprandini
Abstract:
The correct allocation of improvement programs has attracted growing interest in recent years. Due to their limited resources, companies must ensure that their financial resources are directed to the correct workstations in order to be effective and survive strong competition. However, to the best of our knowledge, the literature on the allocation of improvement programs does not analyse this problem in depth when the flow shop process has two capacity constrained resources. This is a research gap that is studied in depth in this work. The purpose of this work is to identify the best strategy to allocate improvement programs in a flow shop with two capacity constrained resources. Data were collected from a flow shop process with seven workstations in an industrial control and automation company, which processes 13,690 units per month on average. The data were used to conduct a simulation with the System Dynamics-Factory Physics model. The main variables considered, due to their importance for lead time reduction, were the mean time between failures and the mean time to repair. Lead time reduction was the output measure of the simulations. Ten different strategies were created: (i) focused time to repair improvement, (ii) focused time between failures improvement, (iii) distributed time to repair improvement, (iv) distributed time between failures improvement, (v) focused time to repair and time between failures improvement, (vi) distributed time to repair and time between failures improvement, (vii) hybrid time to repair improvement, (viii) hybrid time between failures improvement, (ix) time to repair improvement directed towards the two capacity constrained resources, (x) time between failures improvement directed towards the two capacity constrained resources. The ten strategies tested are variations of the three main strategies for improvement programs, named focused, distributed, and hybrid.
Several comparisons of the effect of the ten strategies on lead time reduction were performed. The results indicated that, for the flow shop analysed, the focused strategies delivered the best results. When it is not possible to make a large investment in the capacity constrained resources, companies should use hybrid approaches. An important contribution to academia is the hybrid approach, which proposes a new way to direct improvement efforts. In addition, the study of a flow shop with two strongly capacity constrained resources (more than 95% utilization) is an important contribution to the literature, as are the problem of allocation with two CCRs and the possibility of floating capacity constrained resources. The results provided the best improvement strategies considering the different allocation strategies and the different positions of the capacity constrained resources. Finally, both the hybrid time to repair improvement and the hybrid time between failures improvement strategies delivered better results than the respective distributed strategies. The main limitations of this study concern the particular flow shop analysed. Future work can investigate different flow shop configurations, such as a varying number of workstations, a different number of products, or different positions of the two capacity constrained resources.
Keywords: allocation of improvement programs, capacity constrained resource, hybrid strategy, lead time, mean time to repair, mean time between failures
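Why MTTR and MTBF dominate lead time can be illustrated with the standard Factory Physics availability relations (a sketch of the underlying arithmetic, not the System Dynamics model used in the study; the numbers are illustrative, not the company's data):

```python
def availability(mtbf, mttr):
    """Fraction of time a workstation is up: A = MTBF / (MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)

def effective_process_time(t0, mtbf, mttr):
    """Natural process time t0 inflated by downtime: te = t0 / A.
    A longer te at a capacity constrained resource raises utilization
    and hence queueing delay, which is why MTTR/MTBF improvements
    at the CCR shorten overall lead time."""
    return t0 / availability(mtbf, mttr)

# Illustrative: halving MTTR at a workstation (times in hours).
te_before = effective_process_time(1.0, mtbf=60.0, mttr=4.0)  # MTTR = 4 h
te_after = effective_process_time(1.0, mtbf=60.0, mttr=2.0)   # MTTR = 2 h
```

The same te can be reached either by shortening repairs (MTTR down) or by making failures rarer (MTBF up), which is exactly the trade-off the ten strategies explore.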
Procedia PDF Downloads 124
853 Cassava Plant Architecture: Insights from Genome-Wide Association Studies
Authors: Abiodun Olayinka, Daniel Dzidzienyo, Pangirayi Tongoona, Samuel Offei, Edwige Gaby Nkouaya Mbanjo, Chiedozie Egesi, Ismail Yusuf Rabbi
Abstract:
Cassava (Manihot esculenta Crantz) is a major source of starch for various industrial applications. However, the traditional cultivation and harvesting methods of cassava are labour-intensive and inefficient, limiting the supply of fresh cassava roots for industrial starch production. To achieve improved productivity and quality of fresh cassava roots through mechanized cultivation, cassava cultivars with compact plant architecture and moderate plant height are needed. Plant architecture-related traits, such as plant height, harvest index, stem diameter, branching angle, and lodging tolerance, are critical for crop productivity and suitability for mechanized cultivation. However, the genetics of cassava plant architecture remain poorly understood. This study aimed to identify the genetic bases of the relationships between plant architecture traits and productivity-related traits, particularly starch content. A panel of 453 clones developed at the International Institute of Tropical Agriculture, Nigeria, was genotyped and phenotyped for 18 plant architecture and productivity-related traits at four locations in Nigeria. A genome-wide association study (GWAS) was conducted using the phenotypic data from this panel and 61,238 high-quality Diversity Arrays Technology sequencing (DArTseq) derived Single Nucleotide Polymorphism (SNP) markers evenly distributed across the cassava genome. Five significant associations between ten SNPs and three plant architecture component traits were identified through GWAS. Of these, five SNPs on chromosomes 6 and 16 were significantly associated with shoot weight, harvest index, and total yield. We also discovered an important candidate gene co-located with the peak SNPs linked to these traits in M. esculenta.
A review of the cassava reference genome v7.1 revealed that the SNP on chromosome 6 is in proximity to Manes.06G101600.1, a gene that regulates endodermal differentiation and root development in plants. The findings of this study provide insights into the genetic basis of plant architecture and yield in cassava. Cassava breeders could leverage this knowledge to optimize plant architecture and yield through marker-assisted selection and targeted manipulation of the candidate gene.
Keywords: Manihot esculenta Crantz, plant architecture, DArTseq, SNP markers, genome-wide association study
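The single-marker scan underlying a GWAS can be sketched on synthetic data as below: each SNP's allele dosage is tested for association with the trait, here via the squared correlation. This is a toy illustration on simulated numbers, not the DArTseq panel; real analyses like this study's also correct for kinship and population structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 clones, 50 SNPs coded 0/1/2 (minor-allele dosage).
n, m = 200, 50
genotypes = rng.integers(0, 3, size=(n, m)).astype(float)
# Simulated trait: SNP 7 truly affects the phenotype; the rest is noise.
phenotype = 2.0 * genotypes[:, 7] + rng.normal(0.0, 1.0, n)

def marker_scan(G, y):
    """For each SNP, the squared correlation (r^2) between allele dosage
    and the trait - the basic single-locus association signal."""
    Gc = G - G.mean(axis=0)
    yc = y - y.mean()
    num = (Gc * yc[:, None]).sum(axis=0)
    den = np.sqrt((Gc**2).sum(axis=0) * (yc**2).sum())
    r = num / den
    return r**2

scores = marker_scan(genotypes, phenotype)
top_snp = int(np.argmax(scores))  # recovers the causal marker
```

A Manhattan plot is just these per-marker scores (usually -log10 p-values) plotted against genomic position; the "peak SNPs" of the abstract are the maxima of such a scan.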
Procedia PDF Downloads 69
852 Applicability and Reusability of Fly Ash and Base Treated Fly Ash for Adsorption of Catechol from Aqueous Solution: Equilibrium, Kinetics, Thermodynamics and Modeling
Authors: S. Agarwal, A. Rani
Abstract:
Catechol is a natural polyphenolic compound that widely exists in higher plants such as teas, vegetables, fruits, tobacco, and some traditional Chinese medicines. Fly ash-based zeolites are capable of adsorbing a wide range of pollutants, but the process of zeolite synthesis is time-consuming and requires technical setups by industry. The market costs of zeolites are quite high, restricting their use by small-scale industries for the removal of phenolic compounds. The present research proposes a simple method of alkaline treatment of FA to produce an effective adsorbent for catechol removal from wastewater. The effects of experimental parameters such as pH, temperature, initial concentration, and adsorbent dose on the removal of catechol were studied in a batch reactor. For this purpose, the adsorbent materials were mixed with aqueous solutions containing catechol at initial concentrations of 50-200 mg/L and then shaken continuously in a thermostatic orbital incubator shaker at 30 ± 0.1 °C for 24 h. The samples were withdrawn from the shaker at predetermined time intervals and separated by centrifugation (centrifuge machine MBL-20) at 2000 rpm for 4 min to yield a clear supernatant for analysis of the equilibrium concentrations of the solutes. The concentrations were measured with a double-beam UV/visible spectrophotometer (model Spectrscan UV 2600/02) at a wavelength of 275 nm for catechol. In the present study, the use of a low-cost adsorbent (BTFA) derived from coal fly ash (FA) has been investigated as a substitute for expensive methods for the sequestration of catechol. The FA and BTFA adsorbents were well characterized by XRF, FE-SEM with EDX, FTIR, and surface area and porosity measurements, which establish the chemical constituents, functional groups, and morphology of the adsorbents. The catechol adsorption capacities of the synthesized BTFA and the native material were determined. The adsorption increased slightly with an increase in pH value.
The monolayer adsorption capacities of FA and BTFA for catechol were 100 mg g⁻¹ and 333.33 mg g⁻¹, respectively, and maximum adsorption occurred within 60 minutes for both adsorbents used in this test. The equilibrium data are best fitted by the Freundlich isotherm, as determined by error analysis (RMSE, SSE, and χ²). Adsorption was found to be spontaneous and exothermic on the basis of the thermodynamic parameters (ΔG°, ΔS°, and ΔH°). The pseudo-second-order kinetic model better fitted the data for both FA and BTFA. BTFA showed a larger adsorption capacity, higher separation selectivity, and better recyclability than FA. These findings indicate that BTFA could be employed as an effective and inexpensive adsorbent for the removal of catechol from wastewater.
Keywords: catechol, fly ash, isotherms, kinetics, thermodynamic parameters
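Fitting the Freundlich isotherm mentioned above is commonly done via its log-linear form, log qe = log Kf + (1/n) log Ce. The sketch below uses synthetic data; the Kf and n values are illustrative, not the measured FA/BTFA parameters.

```python
import numpy as np

# Freundlich isotherm: qe = Kf * Ce**(1/n).
# Synthetic "measurements" generated from illustrative constants.
Kf_true, n_true = 12.0, 2.5
Ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0])   # equilibrium conc., mg/L
qe = Kf_true * Ce**(1.0 / n_true)               # adsorbed amount, mg/g

def fit_freundlich(Ce, qe):
    """Linearized least-squares fit of log10(qe) vs log10(Ce):
    slope = 1/n, intercept = log10(Kf)."""
    slope, intercept = np.polyfit(np.log10(Ce), np.log10(qe), 1)
    return 10**intercept, 1.0 / slope           # Kf, n

Kf_fit, n_fit = fit_freundlich(Ce, qe)
```

In practice the same data would also be fitted to the Langmuir form, and the model choice made on the error statistics (RMSE, SSE, χ²) as the abstract describes.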
Procedia PDF Downloads 125
851 Bringing the World to Net Zero Carbon Dioxide by Sequestering Biomass Carbon
Authors: Jeffrey A. Amelse
Abstract:
Many corporations aspire to become Net Zero Carbon Dioxide by 2035-2050. This paper examines what it will take to achieve those goals. Achieving Net Zero CO₂ requires an understanding of where energy is produced and consumed, the magnitude of CO₂ generation, and a proper understanding of the Carbon Cycle. The latter leads to the distinction between CO₂ sequestration and biomass carbon sequestration. Short reviews are provided of technologies previously proposed for reducing CO₂ emissions from fossil fuels or substituting them with renewable energy, to highlight their limitations and to show that none offers a complete solution. Of these, CO₂ sequestration is poised to have the largest impact. It will simply cost money, scale-up is a huge challenge, and it will not be a complete solution; CO₂ sequestration is still at the demonstration and semi-commercial scale. Transportation accounts for only about 30% of total U.S. energy demand, and renewables account for only a small fraction of that sector. Yet bioethanol production consumes 40% of the U.S. corn crop, and biodiesel consumes 30% of U.S. soybeans. It is unrealistic to believe that biofuels can completely displace fossil fuels in the transportation market. Bioethanol is traced through its Carbon Cycle and shown to be both energy inefficient and an inefficient use of biomass carbon. Both biofuels and CO₂ sequestration reduce future CO₂ emissions from the continued use of fossil fuels; they will not remove CO₂ already in the atmosphere. Planting more trees has been proposed as a way to reduce atmospheric CO₂, but trees are a temporary solution. When they complete their Carbon Cycle, they die and release their carbon as CO₂ to the atmosphere. Thus, planting more trees is just 'kicking the can down the road.' The only way to permanently remove CO₂ already in the atmosphere is to break the Carbon Cycle by growing biomass from atmospheric CO₂ and sequestering biomass carbon. Sequestering tree leaves is proposed as a solution.
Unlike wood, leaves have a short Carbon Cycle time constant: they renew and decompose every year. Allometric equations from the USDA indicate that, theoretically, sequestering only a fraction of the world's tree leaves could get the world to Net Zero CO₂ without disturbing the underlying forests. How can tree leaves be permanently sequestered? It may be as simple as rethinking how landfills are designed, to discourage rather than encourage decomposition. In traditional landfills, municipal waste undergoes rapid initial aerobic decomposition to CO₂, followed by slow anaerobic decomposition to methane and CO₂; the latter can take hundreds to thousands of years. The first step in anaerobic decomposition is the hydrolysis of cellulose to release sugars, which those who have worked on cellulosic ethanol know is challenging for a number of reasons. The key to permanent leaf sequestration may be keeping the landfills dry and exploiting known inhibitors of anaerobic bacteria.
Keywords: carbon dioxide, net zero, sequestration, biomass, leaves
Procedia PDF Downloads 128
850 Time-Domain Nuclear Magnetic Resonance as a Potential Analytical Tool to Assess Thermisation in Ewe's Milk
Authors: Alessandra Pardu, Elena Curti, Marco Caredda, Alessio Dedola, Margherita Addis, Massimo Pes, Antonio Pirisi, Tonina Roggio, Sergio Uzzau, Roberto Anedda
Abstract:
Some of the artisanal cheese products of European countries certified as PDO (Protected Designation of Origin) are made from raw milk. To detect potential fraud (e.g. pasteurisation or thermisation of milk intended for raw milk cheese production), the alkaline phosphatase (ALP) assay is currently applied only for pasteurisation, although it is known to have notable limitations for the validation of the ALP enzymatic state in non-bovine milk. Fraud has a considerable impact on customers and certifying institutions, sometimes damaging the product image and causing potential economic losses for cheesemaking producers. Robust, validated, and unambiguous analytical methods are therefore needed to allow food control and security organisms to recognise potential fraud. In an attempt to develop a new reliable method to overcome this issue, Time-Domain Nuclear Magnetic Resonance (TD-NMR) spectroscopy has been applied in the work described here. Daily fresh milk was analysed raw (680.00 µL in each 10-mm NMR glass tube) at least in triplicate. Thermally treated samples were also produced by placing each NMR tube of fresh raw milk in water pre-heated at temperatures from 68 °C up to 72 °C for up to 3 min, with continuous agitation, and quench-cooling to 25 °C in a water and ice bath. Raw and thermally treated samples were analysed in terms of 1H T2 transverse relaxation times with a CPMG sequence (recycle delay: 6 s, interpulse spacing: 0.05 ms, 8000 data points), and quasi-continuous distributions of T2 relaxation times were obtained by CONTIN analysis. In line with previous data collected by high-field NMR techniques, a decrease in the spin-spin relaxation constant T2 of the predominant 1H population was detected in heat-treated milk as compared to raw milk. The decrease in the T2 parameter is consistent with changes in chemical exchange and diffusive phenomena, likely associated with changes in milk protein (i.e.
whey proteins and casein) arrangement promoted by the heat treatment. Furthermore, the experimental data suggest that the molecular alterations are strictly dependent on the specific heat treatment conditions (temperature/time). Such molecular variations in milk, which are likely transferred to cheese during cheesemaking, highlight the possibility of extending the TD-NMR technique directly to cheese in order to develop a method for detecting fraud related to the thermal treatment of milk in PDO raw milk cheese. The results suggest that TD-NMR assays might pave a new way to the detailed characterisation of heat treatments of milk.
Keywords: cheese fraud, milk, pasteurisation, TD-NMR
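Extracting T2 from a CPMG echo train can be sketched with a mono-exponential log-linear fit. This is a deliberate simplification of the quasi-continuous CONTIN analysis used in the study (real milk decays are multi-exponential); the echo spacing matches the acquisition parameters quoted above, while the T2 value is illustrative.

```python
import numpy as np

# CPMG echo decay: M(t) = M0 * exp(-t / T2).
tau = 0.05e-3                      # interpulse spacing from the text, s
t = tau * np.arange(1, 8001)       # 8000 echo times, as acquired
T2_true, M0 = 0.12, 100.0          # illustrative T2 (s) and amplitude (a.u.)
signal = M0 * np.exp(-t / T2_true) # noiseless synthetic decay

def fit_t2(t, M):
    """Estimate T2 and M0 from a mono-exponential decay via a
    log-linear least-squares fit: ln M = ln M0 - t / T2."""
    slope, intercept = np.polyfit(t, np.log(M), 1)
    return -1.0 / slope, np.exp(intercept)

T2_fit, M0_fit = fit_t2(t, signal)
```

A shortening of the fitted T2 between raw and heat-treated samples is the signature the abstract describes; CONTIN generalises this fit to a full distribution of T2 values.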
Procedia PDF Downloads 242
849 Colored Image Classification Using Quantum Convolutional Neural Networks Approach
Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins
Abstract:
Recently, quantum machine learning has received significant attention. Numerous quantum machine learning (QML) models have been created and are being tested on various types of data, including text and images. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking inaccurate results. To discover the advantages of quantum versus classical approaches, this research has concentrated on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage. Black-and-white benchmark image datasets like MNIST and Fashion-MNIST have been used in recent research. MNIST and CIFAR-10 were compared for binary classification, but the comparison showed that MNIST performed more accurately than the colored CIFAR-10. This research will evaluate the performance of the QML algorithm on the colored benchmark dataset CIFAR-10 to advance QML's real-time applicability. However, deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not yet been benchmarked on colored images to determine how much better they are than classical models; only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were translated into greyscale, and 28 × 28-pixel images (50,000 training and 10,000 test images) were used.
The objective of this work is to determine how much the quantum approach can outperform a classical approach on a comprehensive dataset of color images. After pre-processing the 50,000 images on a classical computer, the QCNN model adopted a hybrid method and encoded the images into a quantum simulator for feature extraction using quantum gate rotations. The measurements were carried out on the classical computer after the rotations were applied. According to the results, the QCNN approach is ~12% more effective than traditional classical CNN approaches, and applying data augmentation may further increase the accuracy. This study has demonstrated that quantum machine and deep learning models can be superior to classical machine learning approaches in terms of processing speed and accuracy when used to perform classification on colored classes.
Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning
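The classical pre-processing stage described above (greyscale conversion followed by mapping pixel intensities to rotation angles for gate-rotation encoding) can be sketched in plain NumPy. This is a stand-in for the PennyLane circuit, not the authors' pipeline; the BT.601 luminance weights and the [0, π] angle range are assumptions, as the abstract does not specify them.

```python
import numpy as np

def to_grayscale(rgb):
    """Luminance conversion (ITU-R BT.601 weights) applied before encoding."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def angle_encode(pixels):
    """Map pixel intensities in [0, 255] to rotation angles in [0, pi],
    the usual single-qubit RY angle-encoding scheme."""
    return np.pi * pixels.astype(float) / 255.0

# Toy 'image': a 28 x 28 RGB array of random bytes standing in for a
# (resized) CIFAR-10 sample.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(28, 28, 3))
angles = angle_encode(to_grayscale(img))
```

In the hybrid scheme, each angle parameterises a rotation gate on the simulator; expectation values measured after the rotations become the classical features fed to the downstream classifier.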
Procedia PDF Downloads 129
848 Prototyping Exercise for the Construction of an Ancestral Violentometer in Buenaventura, Valle Del Cauca
Authors: Mariana Calderón, Paola Montenegro, Diana Moreno
Abstract:
Through this study, it was possible to identify the different levels and types of violence, both individual and collective, experienced by women, girls, and the sexually diverse population of Buenaventura, translated from the different tensions and threats against ancestrality and reflecting a social and political context of violence related to race and geopolitical location. These threats relate to: the stigma and oblivion imposed on practices and knowledge; the imposition of the hegemonic culture; the imposition of external customs as a way of erasing ancestrality; the singling out and persecution of those who practice it; the violence that the health system has exercised against ancestral knowledge and practices, especially in the case of midwives; the persecution of this knowledge and these practices by the Catholic religion; the difficulties in maintaining the practices during displacement from rural to urban areas; the use and control of ancestral knowledge and practices by armed actors; the rejection and stigma exercised by the public forces; and, finally, the murder of wise women at the hands of armed actors. This research made it possible to understand the importance of using tools such as the violentometer to support processes of resistance to violence against women, girls, and sexually diverse people; however, it is essential that these tools be adapted to people's specific contexts. In the analysis of violence, it was possible to identify that such violence affects women, girls, and sexually diverse people not only individually but also collectively, threatening the territory and the ancestral culture to which they belong. Ancestrality has been the object of violence, but at the same time, it has been the place from which resistance has been organized.
The identification of the violence suffered by women, girls, and sexually diverse people is also an opportunity to make visible the forms of resistance of women and communities in the face of this violence. This study examines how women, girls, and sexually diverse people in Buenaventura have been exposed to sexism and racism, which historically have translated into specific forms of violence, in addition to the other forms of violence already identified by traditional models of the violentometer. A qualitative approach was used in the study, which included the participation of more than 40 people and two women's organizations from Buenaventura. The participants came from both urban and rural areas of the municipality of Buenaventura and were over 15 years of age. The participation of such a diverse group allowed for the exchange of knowledge and experiences, particularly between younger and older people. The instrument used for the exercise was defined in advance with the leaders of the organizations and consisted of four moments referring to i) ancestrality, ii) threats to ancestrality, iii) identification of resistance, and iv) construction of the ancestral violentometer.
Keywords: violence against women, intersectionality, sexual and reproductive rights, black communities
Procedia PDF Downloads 80
847 Cotton Fabrics Functionalized with Green and Commercial Ag Nanoparticles
Authors: Laura Gonzalez, Santiago Benavides, Martha Elena Londono, Ana Elisa Casas, Adriana Restrepo-Osorio
Abstract:
Cotton products are sensitive to microorganisms due to their ability to retain moisture, which can cause colour changes, reduced mechanical properties, or foul odours; consequently, this poses risks to the health of users. Nowadays, research has been carried out to impart antibacterial properties to textiles using different strategies, including the use of silver nanoparticles (AgNPs). Antibacterial behaviour can be degraded by the laundering process, reducing its effectiveness. At the same time, the environmental impact generated by synthetic antibacterial agents has motivated the search for new and more ecological ways to produce AgNPs. The aims of this work are to determine the antibacterial activity of cotton fabric functionalized with green (G) and commercial (C) AgNPs after twenty washing cycles, and to evaluate morphological and colour changes. A plain-weave cotton fabric suitable for dyeing and two AgNP solutions were used: C, a commercial product, and G, produced using an ecological method. Both solutions, at 0.5 mM concentration, were impregnated on the cotton fabric without a stabilizer, at a liquor-to-fabric ratio of 1:20, under constant agitation for 30 min, and then dried at 70 °C for 10 min. The samples were then subjected to twenty washing cycles using phosphate-free detergent, simulated in an agitated flask at 150 rpm, after which they were centrifuged and tumble dried. The samples were characterized using the Kirby-Bauer test to determine antibacterial activity against the E. coli and S. aureus microorganisms; the results were registered by photographs establishing the inhibition halo before and after the washing cycles, and the tests were conducted in triplicate. Scanning electron microscopy (SEM) was used to observe the morphologies of the untreated cotton fabric and the treated samples. The colour changes of the cotton fabrics relative to the untreated samples were obtained by spectrophotometer analysis.
The images reveal the presence of an inhibition halo in the samples treated with the C and G AgNP solutions, even after twenty washing cycles, which indicates good antibacterial activity and washing durability, with a tendency towards better results against the S. aureus bacteria. The presence of AgNPs on the surface of the cotton fibres and the morphological changes were observed through SEM, before and after the washing cycles. The natural colour of the cotton fibre was significantly altered by both antibacterial solutions. According to the colorimetric results, the samples treated with C showed yellowing, while the samples modified with G showed a reddish yellowing. Cotton fabrics treated with AgNPs C and G from 0.5 mM solutions exhibited excellent antimicrobial activity against E. coli and S. aureus with good laundering durability. The surface of the cotton fibres was modified by the presence of AgNPs C and G and their agglomerates. There are significant changes in the natural colour of the cotton fabric due to the deposition of AgNPs C and G, which were maintained after the laundering process.
Keywords: antibacterial property, cotton fabric, fastness to wash, Kirby-Bauer test, silver nanoparticles
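Spectrophotometric colour changes of the kind reported above are commonly quantified as a CIE76 colour difference (ΔE*ab) between CIELAB readings of treated and untreated fabric. The coordinates below are illustrative assumptions, not measured values from this study.

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference between two CIELAB triplets (L*, a*, b*):
    the Euclidean distance in Lab space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical coordinates: untreated cotton vs an AgNP-treated sample
# shifted toward yellow (higher b*) and slightly darker (lower L*).
untreated = (92.0, 0.5, 4.0)
treated_c = (85.0, 2.0, 14.0)
dE = delta_e_ab(untreated, treated_c)
```

A ΔE above roughly 2-3 units is generally taken as visible to the eye, which is consistent with the "significant" colour changes the abstract describes.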
Procedia PDF Downloads 246
846 Teamwork on Innovation in Young Enterprises: A Qualitative Analysis
Authors: Polina Trusova
Abstract:
The majority of young enterprises are founded and run by teams and develop new, innovative products or services. While problems within the team are considered an important reason for the failure of young enterprises, effective teamwork on innovation may be a key success factor. It may require a special teamwork design or member creativity not needed during routine work. However, little is known about how young enterprises develop innovative solutions in teams, what makes their teamwork special, and what influences its effectiveness. Extending this knowledge is essential for understanding the success and failure factors of young enterprises. Previous research has focused on work on innovation or on professional teams in general. The rare studies combining these issues usually concentrate on homogeneous groups, such as IT expert teams in the innovation projects of big, well-established firms. The transferability of those studies' findings to the entrepreneurial context is doubtful, because teamwork should differ significantly between big, well-established firms and young enterprises for several reasons. First, teamwork is conducted by team members, e.g., employees. The personality of employees in young enterprises, in contrast to that of employees in established firms, has been shown to be more similar to the personality of entrepreneurs. As entrepreneurs were found to be more open to experience and less risk averse, this may have a positive impact on their teamwork: persons open to novelty are more likely to develop or accept a creative solution, which is especially important for teamwork on innovation. Secondly, young enterprises are often characterized by a flat hierarchy, so teamwork there should generally be more participative. It encourages each member (and not only the founder) to produce and discuss innovative ideas, increasing their variety and enabling the team to select the best idea from a larger idea pool.
Thirdly, teams in young enterprises are often multidisciplinary. This has some advantages but also increases the risk of internal conflicts, making teamwork less effective. Despite the key role of teamwork on innovation and the presented barriers to transferring existing evidence to the context of young enterprises, only a few researchers have addressed this issue. In order to close the existing research gap, and to explore and understand how innovations are developed in teams of young enterprises and which factors influencing teamwork may be especially relevant for such teams, a qualitative study has been developed. The study, consisting of 20 semi-structured interviews with (co-)founders of young innovative enterprises in the UK and USA, started in September 2017. The interview guide comprises, but is not limited to, teamwork dimensions discussed in the literature, such as members’ skill or authority differentiation. Data will be evaluated following the rules of qualitative content analysis. First results indicate some factors which may be especially relevant for teamwork in young innovative enterprises. They will enrich the scientific discussion and provide the evidence needed to test a possible causality between the identified factors and teamwork effectiveness in future research on young innovative enterprises. Results and their discussion will be presented at the conference.
Keywords: innovation, qualitative study, teamwork, young enterprises
Procedia PDF Downloads 198
845 Returns to Communities of the Social Entrepreneurship and Environmental Design (SEED) Integration Results in Architectural Training
Authors: P. Kavuma, J. Mukasa, M. Lusunku
Abstract:
Background and Problem: The widespread poverty in Africa, together with the negative impacts of climate change, presents two great global challenges that call for everyone’s involvement, including architects. This in particular requires architects to have additional skills in both Social Entrepreneurship and Environmental Design (SEED). Regrettably, while architectural training in most African universities, including those in Uganda, lacks comprehensive implementation of SEED in its curricula, regulatory bodies have not contributed towards the effective integration of SEED in professional practice. In response to these challenges, Nkumba University (NU), under Architect Kavuma Paul and supported by the Uganda Chambers of Architects, initiated the integration of SEED in the undergraduate architectural curricula to cultivate SEED know-how and examples of best practice. Main activities: Initiated in 2007, and going beyond the traditional architectural degree curriculum, the NU Architecture department offers SEED courses, including ones that provoke a passion for creating desirable positive change in communities. Learning outcomes are assessed theoretically and practically through field projects. The first set of SEED graduates finished in 2012. As part of the NU post-graduation and alumni survey, in October 2014 the pioneer SEED graduates were contacted through automated reminder emails followed by individual, repeated personal follow-ups via email and phone. Out of the 36 graduates who responded to the survey, 24 have formed four (4) private consortium agencies of 5-7 graduates, all of whom have pioneered Ugandan-grown architectural social projects that include: fish farming in shipping containers; solar-powered mobile homes in shipping containers; solar-powered retail kiosks in rural and fishing communities; and floating homes in flood-prone areas.
Primary outcomes: These include becoming self-reliant in business while creating the social change the architects desired in the communities. Examples of the SEED project returns to communities reported by the graduates include: employment creation via fabrication, retail business and marketing; improved diets; safety of life and property; and decent shelter in remote mining and oil exploration areas. Negative outcomes, though not yet evaluated, include the disposal of used-up materials. Conclusion: The integration of SEED in architectural training has established a baseline benchmark and a replicable model based on best-practice projects.
Keywords: architectural training, entrepreneurship, environment, integration
Procedia PDF Downloads 404
844 From By-Product to Brilliance: Transforming Adobe Brick Construction Using Meat Industry Waste-Derived Glycoproteins
Authors: Amal Balila, Maria Vahdati
Abstract:
Earth is a green building material with very low embodied energy and almost zero greenhouse gas emissions. However, it lacks strength and durability in its natural state. By responsibly sourcing stabilisers, it is possible to enhance its strength. This research draws inspiration from the robustness of termite mounds, in which termites incorporate glycoproteins from their saliva during construction. Through biomimicry, it explores the potential of these termite stabilisers in producing bio-inspired adobe bricks. The meat industry generates significant waste during slaughter, including blood, skin, bones, tendons, gastrointestinal contents, and internal organs. While abundant, many meat by-products raise concerns regarding human consumption on religious, cultural and ethical grounds, and they also contribute heavily to environmental pollution. Extracting and utilising proteins from this waste is vital for reducing pollution and increasing profitability. Exploring this untapped potential of meat industry waste, this research investigates how glycoproteins could transform adobe brick construction. Bovine serum albumin (BSA) from cows’ blood and mucin from porcine stomachs were the glycoproteins chosen as stabilisers for adobe brick production. Despite their wide usage across various fields, they have very limited utilisation in food processing; thus, both were identified as potential stabilisers for adobe brick production in this study. Two soil types were used to prepare adobe bricks for testing, comparing control unstabilised bricks with glycoprotein-stabilised ones. All bricks underwent testing for unconfined compressive strength and erosion resistance. The primary finding of this study is the efficacy of BSA, a glycoprotein derived from cows’ blood and a by-product of the beef industry, as an earth construction stabiliser.
Adding 0.5% by weight of BSA resulted in a 17% and 41% increase in the unconfined compressive strength for British and Sudanese adobe bricks, respectively. Further, adding 5% by weight of BSA led to a 202% and 97% increase in the unconfined compressive strength for British and Sudanese adobe bricks, respectively. Moreover, using 0.1%, 0.2%, and 0.5% by weight of BSA resulted in erosion rate reductions of 30%, 48%, and 70% for British adobe bricks, respectively, with a 97% reduction observed for Sudanese adobe bricks at 0.5% by weight of BSA. However, mucin from the porcine stomach did not significantly improve the unconfined compressive strength of adobe bricks. Nevertheless, employing 0.1% and 0.2% by weight of mucin resulted in erosion rate reductions of 28% and 55% for British adobe bricks, respectively. These findings underscore BSA’s efficiency as an earth construction stabiliser for wall construction and mucin’s efficacy for wall render, showcasing their potential for sustainable and durable building practices.
Keywords: biomimicry, earth construction, industrial waste management, sustainable building materials, termite mounds
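The dosages above are given as percentages by weight of dry soil. As a minimal sketch of how these figures translate into batch quantities and projected strengths (the 25 kg batch size and the 2.0 MPa baseline strength below are hypothetical; the abstract reports only the percentage changes):

```python
def stabiliser_mass(dry_soil_kg, dose_pct):
    """Mass of stabiliser (kg) for a dose given in % by weight of dry soil."""
    return dry_soil_kg * dose_pct / 100.0

def projected_strength(baseline_mpa, increase_pct):
    """Unconfined compressive strength after the reported % increase."""
    return baseline_mpa * (1.0 + increase_pct / 100.0)

# 0.5% BSA dose in a hypothetical 25 kg batch of dry soil
print(stabiliser_mass(25.0, 0.5))      # 0.125 kg of BSA
# hypothetical 2.0 MPa baseline with the reported +202% at 5% BSA
print(projected_strength(2.0, 202.0))  # ~6.04 MPa
```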
Procedia PDF Downloads 51
843 Archaic Ontologies Nowadays: Music of Rituals
Authors: Luminiţa Duţică, Gheorghe Duţică
Abstract:
Many of the interrogations and dilemmas of the contemporary world have found their answer in what is generically called the appeal to the matrix. This genuine spiritual exercise of re-connecting the present to its origins, to the primary source, revealed the ontological condition of timeless, ahistorical, immutable (epi)phenomena, of those pure essences concentrated in the archetypal-referential layer of human existence. Musical creation was no exception to this trend: the impasse generated by the deterministic excesses of total serialism or, conversely, by some questionable results of the extreme indeterminism proper to the avant-garde movements stimulated many composers to rediscover a universal grammar, as the emanation of a new ‘collective’ order (the reverse of utopian individualism). In this context, the music of oral tradition, and therefore the world of the ancient modes, represented a true revelation for the composers of the twentieth century, who suddenly found themselves before unsuspected (re)sources with a major impact on all levels of the edification of the musical work: morphology, syntax, timbrality, semantics etc. For contemporary Romanian creators, the music of rituals, existing in the local archaic culture, opened unsuspected perspectives for what came to be a synthetic, inclusive and restorative vision, in which the primary (archetypal) elements merge with the latest achievements of language of the European composers. Thus, anchored in a strong and genuine modal source, the compositions analysed in this paper evoke, in a manner as modern as possible, the atmosphere of ancestral rituals such as: the invocation of rain during drought (Paparudele, Scaloianul), the funeral ceremony (Bocetul), and traditions specific to the winter holidays and the new year (Colinda, Cântecul de stea, Sorcova, traditional folk dances) etc.
The reactivation of those rituals in the sound context of the twentieth century meant potentiating or resizing the archaic spirit of the primordial symbolic entities, in terms of levels of complexity generated by the techniques of harmonies of chordal layers, of complex aggregates (gravitational or non-gravitational, geometric), of mixture polyphonies and polyphonies with a global effect (group, mass), and by the techniques of heterophony, texture and cluster, leading to the implementation of processes of collective improvisation and instrumental theatre.
Keywords: archetype, improvisation, polyphony, ritual, instrumental theatre
Procedia PDF Downloads 304
842 Virtual Metering and Prediction of Heating, Ventilation, and Air Conditioning Systems Energy Consumption by Using Artificial Intelligence
Authors: Pooria Norouzi, Nicholas Tsang, Adam van der Goes, Joseph Yu, Douglas Zheng, Sirine Maleej
Abstract:
In this study, virtual meters are designed and used for energy balance measurements of an air handling unit (AHU). The method aims to replace traditional physical sensors in heating, ventilation, and air conditioning (HVAC) systems with simulated virtual meters. Because they are difficult to manage and monitor, many HVAC systems operate with a high level of inefficiency and energy wastage. Virtual meters are implemented and applied in an actual HVAC system, and the results confirm the practicality of mathematical sensors as an alternative for energy measurement. While most residential buildings and offices are commonly not equipped with advanced sensors, adding, exploiting, and monitoring sensors and measurement devices in existing systems can cost thousands of dollars. The first purpose of this study is to provide an energy consumption rate based on available sensors, without any physical energy meters, proving the performance of virtual meters in HVAC systems as reliable measurement devices. To demonstrate this concept, mathematical models are created for AHU-07, located in building NE01 of the British Columbia Institute of Technology (BCIT) Burnaby campus. The models are created and integrated with the system’s historical data and physical spot measurements, and the actual measurements are investigated to prove the models’ accuracy. Based on preliminary analysis, the resulting mathematical models successfully plot energy consumption patterns, and it is confidently concluded that the results of the virtual meter will be close to those that physical meters could achieve. In the second part of this study, the use of virtual meters is further assisted by artificial intelligence (AI) in the HVAC systems of buildings to improve energy management and efficiency.
Using a data mining approach, the virtual meters’ data is recorded as historical data, and HVAC system energy consumption prediction is also implemented in order to harness energy savings and manage the demand and supply chain effectively. Energy prediction can lead to energy-saving strategies and open a window to predictive control in order to reach lower energy consumption. To address these challenges, energy prediction can optimize the HVAC system and automate energy consumption management to capture savings. This study also investigates the possibility of AI solutions for autonomous HVAC efficiency that will allow a quick and efficient response to energy consumption and cost spikes in the energy market.
Keywords: virtual meters, HVAC, artificial intelligence, energy consumption prediction
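The abstract does not detail the energy balance behind the virtual meter. A minimal sketch, assuming a simple air-side sensible heat balance computed from the airflow and temperature readings an AHU typically already has (the function name and the example readings are illustrative, not taken from the paper):

```python
# Sketch of an air-side "virtual meter" for an AHU: estimate heating/cooling
# power from existing sensor readings instead of a physical energy meter.
RHO_AIR = 1.2    # kg/m^3, approximate density of air
CP_AIR = 1.006   # kJ/(kg*K), specific heat of air at constant pressure

def ahu_sensible_power_kw(airflow_m3_s, t_supply_c, t_return_c):
    """Sensible heat rate Q = rho * V_dot * cp * (T_supply - T_return), in kW.
    Positive means heating, negative means cooling."""
    mass_flow = RHO_AIR * airflow_m3_s            # kg/s
    return mass_flow * CP_AIR * (t_supply_c - t_return_c)

# Example reading: 4 m^3/s of air heated from 18 C to 24 C
q = ahu_sensible_power_kw(4.0, 24.0, 18.0)
print(round(q, 2))  # ~28.97 kW
```

In practice such a model would be calibrated against the spot measurements mentioned above before its output is trusted as historical data for prediction.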
Procedia PDF Downloads 105
841 Hybrid Precoder Design Based on Iterative Hard Thresholding Algorithm for Millimeter Wave Multiple-Input-Multiple-Output Systems
Authors: Ameni Mejri, Moufida Hajjaj, Salem Hasnaoui, Ridha Bouallegue
Abstract:
Recent technology advances have made millimeter wave (mmWave) communication possible. Due to the huge amount of spectrum available in mmWave frequency bands, this promising candidate is considered a key technology for the deployment of 5G cellular networks. In order to enhance system capacity and achieve spectral efficiency, very large antenna arrays are employed in mmWave systems to exploit array gain. However, it has been shown that conventional beamforming strategies are not suitable for mmWave hardware implementation; therefore, new features are required for mmWave cellular applications. Unlike traditional multiple-input-multiple-output (MIMO) systems, for which digital precoders alone are sufficient to accomplish precoding, MIMO at mmWave is different because of digital precoding limitations: fully digital precoding requires one radio frequency (RF) chain per antenna element, each with its own signal mixers and analog-to-digital converters. As RF chain cost and power consumption increase, we need to resort to another alternative. Although the hybrid precoding architecture, based on the combination of a baseband precoder and an RF precoder, has been regarded as the best solution, the optimal design of hybrid precoders remains open. According to the mapping strategies from RF chains to the different antenna elements, there are two main categories of hybrid precoding architecture. As a hybrid precoding sub-array architecture, the partially-connected structure reduces hardware complexity by using fewer phase shifters, whereas it sacrifices some beamforming gain. In this paper, we treat the hybrid precoder design in mmWave MIMO systems as a matrix factorization problem. Thus, we adopt the alternating minimization principle in order to solve the design problem. Further, we present our proposed algorithm for the partially-connected structure, which is based on the iterative hard thresholding method.
Through simulation results, we show that our hybrid precoding algorithm provides significant performance gains over existing algorithms. We also show that the proposed approach significantly reduces the computational complexity. Furthermore, valuable design insights are provided when we use the proposed algorithm to make simulation comparisons between the partially-connected and fully-connected hybrid precoding structures.
Keywords: alternating minimization, hybrid precoding, iterative hard thresholding, low-complexity, millimeter wave communication, partially-connected structure
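The abstract names iterative hard thresholding but not the precoder algorithm itself. As a generic sketch of the underlying idea (here in its textbook form, recovering a sparse vector x from noiseless measurements y = Ax; the problem sizes, step size and seed are chosen arbitrarily and are not from the paper):

```python
import numpy as np

def iht(A, y, s, n_iter=1000):
    """Iterative hard thresholding: projected gradient descent for
    min ||y - A x||^2 subject to x having at most s nonzero entries."""
    m, n = A.shape
    mu = 0.9 / np.linalg.norm(A, 2) ** 2   # step size below 1/L for stability
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x + mu * (A.T @ (y - A @ x))   # gradient step on the residual
        small = np.argsort(np.abs(x))[:-s]
        x[small] = 0.0                     # hard threshold: keep s largest
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 100)) / np.sqrt(80)
x_true = np.zeros(100)
x_true[[3, 17, 42, 60, 88]] = rng.standard_normal(5)
x_hat = iht(A, A @ x_true, s=5)
# relative error; near zero when recovery succeeds
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In the paper's setting the same hard-thresholding projection would act inside an alternating-minimization loop over the baseband and RF precoder factors rather than on a single vector.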
Procedia PDF Downloads 321
840 Self-Sensing Concrete Nanocomposites for Smart Structures
Authors: A. D'Alessandro, F. Ubertini, A. L. Materazzi
Abstract:
In the field of civil engineering, Structural Health Monitoring is a topic of growing interest. Effective monitoring instruments permit the control of the working conditions of structures and infrastructures through the identification of behavioral anomalies due to incipient damage, especially in areas of high environmental hazard such as earthquake zones. While traditional sensors can be applied only at a limited number of points, providing partial information for a structural diagnosis, novel transducers may allow diffuse sensing. Thanks to the new tools and materials provided by nanotechnology, new types of multifunctional sensors are emerging. In particular, cement-matrix composite materials capable of diagnosing their own state of strain and stress can be obtained by the addition of specific conductive nanofillers. Because of the nature of the material they are made of, these new cementitious nano-modified transducers can be inserted within concrete elements, transforming the structures themselves into sets of widespread sensors. This paper presents the results of research on a new self-sensing nanocomposite and on the implementation of smart sensors for Structural Health Monitoring. The developed nanocomposite has been obtained by inserting multi-walled carbon nanotubes within a cementitious matrix. The insertion of such conductive carbon nanofillers provides the base material with piezoresistive characteristics and a peculiar sensitivity to mechanical modifications. The self-sensing ability is achieved by correlating the variation of the external stress or strain with the variation of some electrical properties, such as the electrical resistance or conductivity. Through the measurement of such electrical characteristics, the performance and working conditions of an element or a structure can be monitored.
Among conductive carbon nanofillers, carbon nanotubes seem particularly promising for the realization of self-sensing cement-matrix materials. Some issues, related to the dispersion of the nanofillers and to the influence of the amount of nano-inclusions in the cement matrix, need to be carefully investigated, because the strain sensitivity of the resulting sensors is influenced by such factors. This work analyzes the dispersion of the carbon nanofillers, the physical properties of the fresh mix, the electrical properties of the hardened composites and the sensing properties of the realized sensors. The experimental campaign focuses specifically on their dynamic characterization and their applicability to the monitoring of full-scale elements. The results of the electromechanical tests with both slowly varying and dynamic loads show that the developed nanocomposite sensors can be effectively used for the health monitoring of structures.
Keywords: carbon nanotubes, self-sensing nanocomposites, smart cement-matrix sensors, structural health monitoring
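The strain-resistance correlation described above is commonly expressed through a gauge factor GF, defined by ΔR/R₀ = GF · ε. A minimal sketch of the readout side (the gauge factor and resistance values below are hypothetical placeholders, not results of this paper):

```python
def strain_from_resistance(r_ohm, r0_ohm, gauge_factor):
    """Estimate strain from a piezoresistive sensor reading using
    delta_R / R0 = GF * strain, i.e. strain = (delta_R / R0) / GF."""
    return (r_ohm - r0_ohm) / r0_ohm / gauge_factor

# Hypothetical values: unloaded resistance 1000 ohm, reading 1002 ohm,
# assumed gauge factor of 100 for a nano-modified cementitious sensor
eps = strain_from_resistance(1002.0, 1000.0, 100.0)
print(eps)  # 2e-05, i.e. 20 microstrain
```

In practice the gauge factor would be identified experimentally for each mix, since, as noted above, it depends on the nanofiller dispersion and content.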
Procedia PDF Downloads 227
839 Policy Initiatives That Increase Mass-Market Participation of Fuel Cell Electric Vehicles
Authors: Usman Asif, Klaus Schmidt
Abstract:
In recent years, the development of alternative fuel vehicles has helped to reduce carbon emissions worldwide. As the number of vehicles continues to increase, energy demand will also increase. Therefore, we must consider automotive technologies that are efficient and less harmful to the environment in the long run. Battery electric vehicles (BEVs) have gained popularity in recent years because of their lower maintenance, lower fuel costs, and lower carbon emissions. Nevertheless, BEVs show several disadvantages, such as slow charging times and lower range than traditional combustion-powered vehicles. These factors keep many people from switching to BEVs. The authors of this research believe that these limitations can be overcome by using fuel cell technology. A fuel cell converts the chemical energy of hydrogen into electrical energy to power the motor, thus replacing the heavy lithium batteries that are expensive and hard to recycle. Also, in contrast to battery-powered electric vehicle technology, fuel cell electric vehicles (FCEVs) offer longer ranges and shorter refueling times and are therefore more competitive with electric vehicles. However, FCEVs have not gained the same popularity as electric vehicles due to stringent legal frameworks, underdeveloped infrastructure, high fuel transport and storage costs, and the expense of fuel cell technology itself. This research focuses on the legal frameworks for hydrogen-powered vehicles and on how a change in these policies may improve hydrogen fueling infrastructure and lower hydrogen transport and storage costs. These policies may also facilitate reductions in fuel cell technology costs. In order to attain a better framework, a number of countries have developed conceptual roadmaps. These roadmaps set out a series of objectives to increase the access of FCEVs to their respective markets.
This research specifically focuses on policies in Japan, Europe, and the USA in their attempts to shape the automotive industry of the future. The researchers also suggest additional policies that may help accelerate the advancement of FCEVs to mass markets. The approach was to provide a solid literature review using resources from around the globe. After a subsequent analysis and synthesis of this review, the authors concluded that, in spite of existing legal challenges that have hindered the advancement of fuel cell technology in the automobile industry in the past, new initiatives that enhance and advance that same technology are underway.
Keywords: fuel cell electric vehicles, fuel cell technology, legal frameworks, policies and regulations
Procedia PDF Downloads 117
838 Dangerous Words: A Moral Economy of HIV/AIDS in Swaziland
Authors: Robin Root
Abstract:
A fundamental premise of medical anthropology is that clinical phenomena are simultaneously cultural, political, and economic: none more so than the linked acronyms HIV/AIDS. For the medical researcher, HIV/AIDS signals an epidemiological pandemic and a pathophysiology. For persons diagnosed with an HIV-related condition, the acronym often conjures dread, too often marking and marginalizing the afflicted irretrievably. Critical medical anthropology is uniquely equipped to theorize the linkages that bind individual and social wellbeing to global structural and culture-specific phenomena. This paper reports findings from an anthropological study of HIV/AIDS in Swaziland, site of the highest HIV prevalence in the world. The project, initiated in 2005, has documented experiences of HIV/AIDS, religiosity, and treatment and care as well as drought and famine. Drawing on interviews with Swazi religious and traditional leaders about their experiences of leadership amidst worsening economic conditions, environmental degradation, and an ongoing global health crisis, the paper provides uncommon insights for global health practitioners whose singular paradigm for designing and delivering interventions is biomedically-based. In contrast, this paper details the role of local leaders in mediating extreme social suffering and resilience in ways that medical science cannot model but which radically impact how sickness is experienced and health services are delivered and accessed. Two concepts help to organize the paper’s argument. First, a ‘moral economy of language’ is central to showing up the implicit ‘technologies of knowledge’ that inhere in scientific and religious discourses of HIV/AIDS; people draw upon these discourses strategically to navigate highly vulnerable conditions. Second, Paulo Freire’s ethnographic focus on a culture’s 'dangerous words' opens up for examination how ‘sex’ is dangerous for religion and ‘god’ is dangerous for science. 
The paper interrogates hegemonic and ‘lived’ discourses, both biomedical and religious, and contributes to an important literature on the moral economies of health, a framework of explication and, importantly, of action appropriate to a wide range of contemporary global health phenomena. The paper concludes by asserting that it is imperative that global health planners reflect upon and ‘check’ their hegemonic policy platforms by, first, collaborating with local authoritative agents of ‘what sickness means and how it is best treated’ and, second, taking account of the structural barriers to achieving good health.
Keywords: Africa, biomedicine, HIV/AIDS, qualitative research, religion
Procedia PDF Downloads 103
837 Impact of National Institutions on Corporate Social Performance
Authors: Debdatta Mukherjee, Abhiman Das, Amit Garg
Abstract:
In recent years, there has been growing interest in the corporate social responsibility of firms in both the academic literature and the business world. Since business forms a part of society, incorporating socio-environmental concerns into its value chain activities is vital for ensuring mutual sustainability and prosperity. But, until now, most works have been descriptive or normative rather than positivist in tone. Even the few with a positivist approach have mostly studied the link between corporate financial performance and corporate social performance. However, these studies have been severely criticized by many eminent authors on the grounds that they lack a theoretical basis for their findings. These authors have also argued that, apart from corporate financial performance, certain other crucial influences are likely to determine the corporate social performance of firms. In fact, several studies have indicated that firms operating under distinct national institutions show significant variations in the corporate social responsibility practices they undertake. This clearly suggests that the institutional context of the country in which firms operate is a key determinant of their corporate social performance. Therefore, this paper uses an institutional framework to understand why corporate social performance varies across countries. It examines the impact of country-level institutions on corporate social performance using a sample of 3240 global publicly held firms across 33 countries covering the period 2010-2015. The country-level institutions include public institutions, private institutions, markets and the capacity to innovate. Econometric analysis has mainly been used to assess this impact. A three-way panel data analysis using fixed effects has been used to test and validate the hypotheses.
Most of the empirical findings confirm our hypotheses, and the economic significance indicates the specific impact of each variable and its importance relative to the others. The results suggest that institutional determinants such as the ethical behavior of private institutions, the goods market, the labor market and the innovation capacity of a country are significantly related to the corporate social performance of firms. Based on our findings, a few implications for policy makers from across the world are also suggested. The institutions in a country should promote competition. The government should use policy levers for upgrading home demand, such as setting challenging yet flexible safety, quality and environmental standards; framing policies governing buyer information; providing recourse against low-quality goods and services; and promoting the early adoption of new, technologically advanced products. Moreover, institution building in a country should facilitate and improve the capacity of firms to innovate. Therefore, the proposed study argues that country-level institutions impact the corporate social performance of firms, empirically validates the same, suggests policy implications and attempts to contribute to an extended understanding of corporate social responsibility and corporate social performance in a multinational context.
Keywords: corporate social performance, corporate social responsibility, institutions, markets
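The abstract names the fixed-effects panel estimator but not its mechanics. A minimal sketch of the within (demeaning) transformation that such an estimator relies on, using an invented noiseless toy panel so the slope is recovered exactly (the variable names and data are purely illustrative, not from the study):

```python
import numpy as np

def within_transform(values, groups):
    """Demean a column within each group: the 'within' transformation
    that absorbs group-specific fixed effects before OLS."""
    out = values.astype(float).copy()
    for g in np.unique(groups):
        mask = groups == g
        out[mask] -= out[mask].mean()
    return out

# Toy panel: 3 firms x 4 years, y = 2*x + firm fixed effect, no noise
firms = np.repeat([0, 1, 2], 4)
x = np.arange(12, dtype=float)
fixed_effects = np.array([5.0, -3.0, 1.0])[firms]
y = 2.0 * x + fixed_effects

xd = within_transform(x, firms)
yd = within_transform(y, firms)
beta = (xd @ yd) / (xd @ xd)   # OLS on demeaned data recovers the slope
print(beta)  # 2.0
```

A three-way design as described above would demean over additional dimensions (e.g. firm, country and year) in the same spirit.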
Procedia PDF Downloads 166