Search results for: traditional religious culture
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8802

432 Curcumin and Its Analogues: Potent Natural Antibacterial Compounds against Staphylococcus aureus

Authors: Prince Kumar, Shamseer Kulangara Kandi, Diwan S. Rawat, Kasturi Mukhopadhyay

Abstract:

Staphylococcus aureus is the most pathogenic of all staphylococci, a major cause of nosocomial infections, and is known for acquiring resistance towards various commonly used antibiotics. Due to the widespread use of synthetic drugs, clinicians are now facing a serious threat in healthcare. The increasing resistance in staphylococci has created a need for alternatives to these synthetic drugs. One such alternative is natural plant-based medicine, for both disease prevention and the treatment of chronic diseases. Among such natural compounds, curcumin is one of the most studied molecules and has been an integral part of traditional medicine and Ayurveda since ancient times. It is a natural polyphenolic compound with diverse pharmacological effects, including anti-inflammatory, antioxidant, anticancer and antibacterial activities. In spite of its efficacy and potential, curcumin has not yet been approved as a therapeutic agent because of its low solubility, low bioavailability, and rapid metabolism in vivo. The central β-diketone moiety of curcumin is responsible for its rapid metabolism. To overcome this, in the present study, curcuminoids were designed by modifying the central β-diketone moiety of curcumin into a monocarbonyl moiety, and their antibacterial potency against S. aureus ATCC 29213 was determined. Further, the mode of action and hemolytic activity of the most potent curcuminoids were studied. Minimum inhibitory concentration (MIC) and in vitro killing kinetics were used to study the antibacterial activity of the designed curcuminoids. For the hemolytic assay, mouse red blood cells were incubated with curcuminoids and hemoglobin release was measured spectrophotometrically.
The mode of action of the curcuminoids was analysed by a membrane depolarization assay using the membrane-potential-sensitive dye 3,3’-dipropylthiacarbocyanine iodide (DiSC3(5)) through spectrofluorimetry, and by a membrane permeabilization assay using calcein-AM through flow cytometry. Antibacterial screening of the designed library (61 curcuminoids) revealed excellent in vitro potency of six compounds against S. aureus (MIC 8 to 32 µg/ml). Moreover, these six compounds were found to be non-hemolytic up to 225 µg/ml, a concentration much higher than their corresponding MIC values. The in vitro killing kinetics data showed five of these lead compounds to be bactericidal, causing a >3 log reduction in the viable cell count within 4 hrs at 5 × MIC, while the sixth compound was found to be bacteriostatic. The depolarization assay revealed that all six curcuminoids caused depolarization in their corresponding MIC range. Further, the membrane permeabilization assay showed that all six curcuminoids caused permeabilization at 5 × MIC within 2 hrs. The membrane depolarization and permeabilization caused by the curcuminoids were found to correlate with their corresponding killing efficacy. Both assays indicate that membrane perturbation might be a primary mode of action for these curcuminoids. Overall, the present study yields six water-soluble, non-hemolytic, membrane-active curcuminoids and provides an impetus for further research on the therapeutic use of these lead curcuminoids against S. aureus.
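As an illustrative aside, the bactericidal criterion used above (a >3 log reduction in viable count at 5 × MIC) is simple arithmetic on CFU counts; the sketch below uses hypothetical CFU values, not data from the study.

```python
import math

def log_reduction(cfu_initial, cfu_final):
    """Log10 reduction in viable cell count between two time points."""
    return math.log10(cfu_initial) - math.log10(cfu_final)

# Hypothetical killing-kinetics data at 5 x MIC (CFU/ml)
cfu_t0 = 1e6   # inoculum at time zero
cfu_4h = 5e2   # viable count after 4 h

reduction = log_reduction(cfu_t0, cfu_4h)
# A compound is conventionally called bactericidal if it causes
# a >3 log10 reduction in the viable count.
is_bactericidal = reduction > 3
print(f"log10 reduction: {reduction:.2f}, bactericidal: {is_bactericidal}")
```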

Keywords: antibacterial, curcumin, minimum inhibitory concentration, Staphylococcus aureus

Procedia PDF Downloads 173
431 Global Winners versus Local Losers: Globalization Identity and Tradition in Spanish Club Football

Authors: Jim O'brien

Abstract:

Contemporary global representation and consumption of La Liga across a plethora of media platforms has had significant implications for the historical, political and cultural developments that shaped Spanish club football. It has established and reinforced a hierarchy of a small number of teams belonging, or aspiring to belong, to a cluster of global elite clubs seeking to imitate the blueprint of the English Premier League in corporate branding and marketing, in order to secure a global fan base through success and exposure in La Liga itself and in the Champions League. The synthesis between globalization, global sport and the status of high-profile clubs has created radical change within the folkloric iconography of Spanish football. The main focus of this paper is to critically evaluate the consequences of globalization for the rich tapestry at the core of the game’s distinctive history in Spain. The seminal debate underpinning the study is whether the divergent aspects of globalization have acted as a malevolent force, eroding tradition, causing financial meltdown and reducing much of the fabric of club football to the status of bystanders, or have promoted a renaissance of these traditions, securing their legacies through new fans and audiences. The study draws on extensive sources on the history, politics and culture of Spanish football, in both English and Spanish. It also uses primary and archive material derived from interviews and fieldwork undertaken with scholars, media professionals and club representatives in Spain. The paper has four main themes. Firstly, it contextualizes the key historical, political and cultural forces which shaped the landscape of Spanish football from the late nineteenth century. The seminal notions of region, locality and cultural divergence are pivotal to this discourse.
The study then considers the relationship between football, ethnicity and identity as a barometer of continuity and change, suggesting that tradition is being reinvented and re-framed to reflect the shifting demographic and societal patterns within the Spanish state. Following on from this, consideration is given to the paradoxical function of ‘El Clásico’ and the dominant duopoly of the FC Barcelona-Real Madrid axis, both in eroding tradition in the global nexus of football’s commodification and in protecting historic political rivalries. To most global consumers of La Liga, the mega-spectacle and hyperbole of ‘El Clásico’ is the essence of Spanish football, with cultural misrepresentation and distortion catapulting the event to the global media audience. Finally, the paper examines La Liga as a sporting phenomenon in which elite clubs, cult managers and galácticos serve as commodities on the altar of mass consumption in football’s global entertainment matrix. These processes accentuate a homogenous mosaic of cultural conformity which obscures local, regional and national identities and paradoxically fuses the global with the local to maintain the distinctive hue of La Liga, as witnessed by the extraordinary successes of Atlético Madrid and Eibar in recent seasons.

Keywords: Spanish football, globalization, cultural identity, tradition, folklore

Procedia PDF Downloads 306
430 Preparation of Activated Carbon From Waste Feedstock: Activation Variables Optimization and Influence

Authors: Oluwagbemi Victor Aladeokin

Abstract:

In the last decade, global peanut cultivation has seen increased demand, attributed to the health benefits of peanuts, rising to ~41.4 MMT in 2019/2020. Peanut and other nutshells are considered waste in various parts of the world and are usually used only for their fuel value. However, this agricultural by-product can be converted into a higher-value product such as activated carbon. For many years, due to its highly porous structure, activated carbon has been widely and effectively used as an adsorbent in the purification and separation of gases and liquids. Activated carbons used for commercial purposes are primarily made from a range of precursors such as wood, coconut shell, coal, and bones. However, due to difficulty in regeneration and high cost, various agricultural residues such as rice husk, corn stalks, apricot stones, almond shells, and coffee beans have been explored to produce activated carbons. In the present study, the potential of peanut shells as precursors in the production of activated carbon, and the adsorption capacity of the product, are investigated. Usually, precursors used to produce activated carbon have a carbon content above 45%. A typical raw peanut shell has 42 wt.% carbon content. To increase the yield, this study employed a chemical activation method using zinc chloride, which is well known for its effectiveness in increasing the porosity of carbonaceous materials. In chemical activation, activation temperature and impregnation ratio are the parameters most commonly reported to be significant; however, this study also examines the influence of activation time on the development of activated carbon from peanut shells. Activated carbons are applied for different purposes; as the application becomes more specific, understanding the influence of the activation variables becomes paramount for better control of the quality of the final product.
A traditional approach to experimentally investigating the influence of the activation parameters involves varying one parameter at a time. However, a more efficient way to reduce the number of experimental runs is to apply design of experiments. One of the objectives of this study is to optimize the activation variables. Thus, this work employed the response surface methodology of design of experiments to study the interactions between the activation parameters and consequently optimize them (temperature, impregnation ratio, and activation time). The optimum activation conditions found were a temperature of 485 °C, an activation time of 15 min, and an impregnation ratio of 1.7. These conditions resulted in an activated carbon with a relatively high surface area of ca. 1700 m²/g, a 47% yield, relatively high density, low ash, and high fixed-carbon content. Impregnation ratio and temperature were found to most strongly influence the final characteristics of the activated carbon produced from peanut shells. The results of this study, using the response surface methodology technique, reveal the potential of the chemical activation of peanut shells to produce activated carbon, and the most significant parameters that influence the process; the product can find use in both liquid- and gas-phase adsorption applications.
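The response surface idea above can be sketched in miniature: fit a second-order model to yield measurements and solve for the stationary point. The single-factor data below are hypothetical; the actual study optimized temperature, time, and impregnation ratio jointly.

```python
import numpy as np

# Hypothetical yield (%) measured at several activation temperatures (deg C);
# the real study varied temperature, time and impregnation ratio together.
temps = np.array([350.0, 400.0, 450.0, 500.0, 550.0, 600.0])
yields = np.array([38.0, 43.5, 46.5, 46.8, 44.0, 39.5])

# Second-order (quadratic) response surface in one factor
c2, c1, c0 = np.polyfit(temps, yields, deg=2)

# Stationary point of y = c2*T^2 + c1*T + c0  ->  dy/dT = 0 at T = -c1/(2*c2)
t_opt = -c1 / (2 * c2)
print(f"estimated optimum temperature: {t_opt:.0f} deg C")
```

With more factors, the same least-squares fit extends to a full quadratic model with interaction terms, which is what response surface designs such as the central composite design are built to estimate.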

Keywords: chemical activation, fixed carbon, impregnation ratio, optimum, surface area

Procedia PDF Downloads 150
429 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration

Authors: Matthew Yeager, Christopher Willy, John Bischoff

Abstract:

The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements oftentimes fail to consider the full array of feasible systems or product designs for a variety of reasons, including, but not limited to: initial conceptualization that oftentimes incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally-, but not globally-, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk of suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (i.e. sensors, CPUs, modular / auxiliary access, etc.) as well as recognition, data fusion and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs.
Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas previous work has focused on aerospace systems and been conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g. hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, adding statistical performance features to this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
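As a minimal sketch of the tradespace idea, assuming an additive utility function (a simplification of the decision-theoretic utilities MATE actually employs), candidate designs can be ranked by weighted single-attribute utilities; the designs, attributes, and weights below are entirely hypothetical.

```python
# Each candidate sensor-system design is scored on normalized attributes in [0, 1]
# (higher is better; "cost" here is already a utility, so cheap = high).
designs = {
    "design_A": {"accuracy": 0.90, "cost": 0.30, "power": 0.60},
    "design_B": {"accuracy": 0.75, "cost": 0.80, "power": 0.70},
    "design_C": {"accuracy": 0.85, "cost": 0.55, "power": 0.90},
}
# Stakeholder-elicited attribute weights (sum to 1)
weights = {"accuracy": 0.5, "cost": 0.3, "power": 0.2}

def multi_attribute_utility(attrs, weights):
    """Additive (linear-weighted) multi-attribute utility."""
    return sum(weights[k] * attrs[k] for k in weights)

ranked = sorted(designs,
                key=lambda d: multi_attribute_utility(designs[d], weights),
                reverse=True)
print(ranked)
```

A non-deterministic extension would treat each attribute score as a distribution and rank designs by expected utility, for example via Monte Carlo sampling over the tradespace.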

Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design

Procedia PDF Downloads 190
428 Prospects of Low Immune Response Transplants Based on Acellular Organ Scaffolds

Authors: Inna Kornienko, Svetlana Guryeva, Anatoly Shekhter, Elena Petersen

Abstract:

Transplantation is an effective treatment option for patients suffering from various end-stage diseases. However, it is plagued by a constant shortage of donor organs and the subsequent need for lifelong immunosuppressive therapy. Currently, some researchers are looking toward the use of pig organs to replace human organs for transplantation, since the matrix derived from porcine organs is a convenient substitute for the human matrix. As an initial step toward creating a new ex vivo tissue-engineered model, optimized protocols have been created to obtain organ-specific acellular matrices, and their potential as tissue-engineered scaffolds for the culture of normal cells and tumor cell lines has been evaluated. These protocols include decellularization by perfusion in a bioreactor system and by immersion-agitation on an orbital shaker, with the use of various detergents (SDS, Triton X-100) and freezing. Complete decellularization, in terms of residual DNA amount, is an important predictor of the probability of immune rejection of materials of natural origin. However, signs of cellular material may still remain within the matrix even after harsh decellularization protocols. In this regard, matrices obtained from tissues of low-immunogenic pigs with the α-1,3-galactosyltransferase gene knocked out (GalT-KO) may be a promising alternative to native animal sources. The research included a study of the induced effect of frozen and fresh fragments of GalT-KO skin on the healing of full-thickness plane wounds in 80 rats. Commercially available wound dressings (Ksenoderm, Hyamatrix and Alloderm) as well as allogenic skin were used as positive controls, and untreated wounds were analyzed as a negative control. The results were evaluated on the 4th day after grafting, which corresponds to the start of normal wound epithelization. It has been shown that the non-specific immune response in models treated with GalT-KO pig skin was milder than in all the control groups.
Research has been performed to measure technical skin characteristics: stiffness and elasticity properties, corneometry, tewametry, and cutometry. These metrics enabled the evaluation of hydration level and corneous-layer husking level, as well as skin elasticity and micro- and macro-landscape. These preliminary data may contribute to the development of personalized transplantable organs from GalT-KO pigs with a significantly limited potential for immune rejection. By applying growth factors to a decellularized skin sample, it is possible to achieve various regenerative effects depending on the particular situation. In this research, BMP2 and heparin-binding EGF-like growth factor were used. Ideally, a bioengineered organ must be biocompatible, non-immunogenic and supportive of cell growth. Porcine organs are attractive for xenotransplantation if severe immunologic concerns can be bypassed. The results indicate that genetically modified pig tissues with the α-1,3-galactosyltransferase gene knocked out may be used for the production of a low-immunogenic matrix suitable for transplantation.

Keywords: decellularization, low-immunogenic, matrix, scaffolds, transplants

Procedia PDF Downloads 279
427 A New Model to Perform Preliminary Evaluations of Complex Systems for the Production of Energy for Buildings: Case Study

Authors: Roberto de Lieto Vollaro, Emanuele de Lieto Vollaro, Gianluca Coltrinari

Abstract:

The building sector is responsible, in many industrialized countries, for about 40% of total energy requirements, so it seems necessary to devote some effort to this area in order to achieve a significant reduction of energy consumption and greenhouse gas emissions. The paper presents a study aiming to provide a design methodology able to identify the best configuration of the building/plant system from a technical, economic and environmental point of view. Normally, the classical approach involves an analysis of the building's energy loads under steady-state conditions and a subsequent selection of measures aimed at improving energy performance, based on the previous experience of the architects and engineers in the design team. Instead, the proposed approach uses a sequence of two well-known, scientifically validated calculation methods (TRNSYS and RETScreen) that allow quite a detailed feasibility analysis. To assess the validity of the calculation model, an existing historical building in Central Italy, which will be the object of restoration and preservative redevelopment, was selected as a case study. The building consists of a basement and three floors, with a total floor area of about 3,000 square meters. The first step was the determination of the heating and cooling energy loads of the building under a dynamic regime by means of TRNSYS, which makes it possible to simulate the real energy needs of the building as a function of its use. Traditional methodologies, based as they are on steady-state conditions, cannot faithfully reproduce the effects of varying climatic conditions or of the inertial properties of the structure. With TRNSYS it is possible to obtain quite accurate and reliable results that allow effective building-HVAC system combinations to be identified.
The second step consisted of using the output data obtained with TRNSYS as input to the calculation model RETScreen, which enables the comparison of different system configurations from the energy, environmental and financial points of view, with an analysis of investment and of operation and maintenance costs, thus allowing the economic benefit of possible interventions to be determined. The classical methodology often leads to the choice of conventional plant systems, while RETScreen provides a financial-economic assessment of innovative, low-environmental-impact energy systems. Computational analysis can help in the design phase, particularly in the case of complex structures with centralized plant systems, by comparing the data returned by the calculation model RETScreen for different design options. For example, the analysis performed on the building taken as a case study found that the most suitable plant solution, taking into account technical, economic and environmental aspects, is one based on a CCHP system (Combined Cooling, Heating, and Power) using an internal combustion engine.
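The kind of financial screening RETScreen performs can be illustrated with a simple payback and net-present-value calculation; the capital cost and annual saving below are hypothetical figures, not data from the case study.

```python
def simple_payback(capital_cost, annual_saving):
    """Years needed for cumulative savings to repay the initial investment."""
    return capital_cost / annual_saving

def npv(rate, capital_cost, annual_saving, years):
    """Net present value of a constant annual saving, discounted at `rate`."""
    return -capital_cost + sum(
        annual_saving / (1 + rate) ** t for t in range(1, years + 1)
    )

# Hypothetical CCHP retrofit figures (illustration only)
capital = 250_000.0   # installed cost
saving = 40_000.0     # annual energy-cost saving

print(f"simple payback: {simple_payback(capital, saving):.1f} years")
print(f"NPV at 5% over 15 years: {npv(0.05, capital, saving, 15):,.0f}")
```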

Keywords: energy, system, building, cooling, electrical

Procedia PDF Downloads 576
426 Status of Vocational Education and Training in India: Policies and Practices

Authors: Vineeta Sirohi

Abstract:

The development of critical skills and competencies is imperative for young people to cope with the unpredictable challenges of the time and to prepare for work and life. Recognizing that education has a critical role in reaching sustainability goals, as emphasized by the 2030 Agenda for Sustainable Development, educating youth in global competence, meta-cognitive competencies, and skills from the initial stages of formal education is vital. Further, educating for global competence would help develop work readiness and boost employability. Vocational education and training in India, as envisaged in various policy documents, remains marginalized in practice compared to general education. The country is still far from the national policy goal of tracking 25% of secondary students at grades eleven and twelve into the vocational stream. In recent years, the importance of skill development has been recognized in the present context of globalization and change in the demographic structure of the Indian population. As a result, it has become a national policy priority and has been taken up with renewed focus by the government, which has set the target of skilling 500 million people by 2022. This paper provides an overview of the policies, practices, and current status of vocational education and training in India, supported by statistics from the National Sample Survey, the official statistics of India. The national policy documents and annual reports of the organizations actively involved in vocational education and training have also been examined to capture relevant data and information. The paper also highlights major initiatives taken by the government to promote skill development. The data indicate that in the age group 15-59 years, only 2.2 percent reported having received formal vocational training and 8.6 percent non-formal vocational training, whereas 88.3 percent did not receive any vocational training.
At present, the coverage of vocational education is abysmal, as less than 5 percent of students are covered by the vocational education programme. Besides launching various schemes to address the mismatch of skill supply and demand, the government, through its National Policy on Skill Development and Entrepreneurship 2015, proposes to bring about inclusivity by bridging the gender, social and sectoral divides, ensuring that the skilling needs of socially disadvantaged and marginalized groups are appropriately addressed. It is fundamental that the curriculum be aligned with the demands of the labor market, incorporating more entrepreneurial skills. Creating non-farm employment opportunities for educated youth will be a challenge for the country in the near future. Hence, there is a need to formulate specific skill development programs for this sector, as well as programs for upgrading skills to enhance employability. There is also a need to promote female participation in work and in non-traditional courses. Moreover, rigorous research and the development of a robust information base for skills are required to inform policy decisions on vocational education and training.

Keywords: policy, skill, training, vocational education

Procedia PDF Downloads 158
425 Impact of Customer Experience Quality on Loyalty of Mobile and Fixed Broadband Services: Case Study of Telecom Egypt Group

Authors: Nawal Alawad, Passent Ibrahim Tantawi, Mohamed Abdel Salam Ragheb

Abstract:

Providing customers with quality experiences has been confirmed to be a sustainable competitive advantage with a distinct financial impact for companies. The success of service providers now relies on their ability to provide customer-centric services. The importance of perceived service quality and customer experience is widely recognized. The focus of this research is the area of mobile and fixed broadband services. This study is of dual importance, both academic and practical. Academically, this research applies a new model investigating the impact of customer experience quality on loyalty, based on modifying the multiple-item scale for measuring customers’ service experience in a new area rather than depending on traditional models. The integrated scale embraces four dimensions: service experience, outcome focus, moments of truth and peace of mind. In addition, it gives a scientific explanation for this relationship; the research thus fills a gap, as previous work has not correlated or explained these relations using such an integrated model, and this is the first application of this modified, integrated model in the telecom field. Practically, this research gives insights to marketers and practitioners for improving customer loyalty by evolving the experience quality of broadband customers, which is interpreted through suggested outcomes: purchase, commitment, repeat purchase and word-of-mouth; this approach is one of the emerging topics in service marketing. Data were collected through 412 questionnaires and analyzed using structural equation modeling. Findings revealed that both outcome focus and moments of truth have a significant impact on loyalty, while both service experience and peace of mind have an insignificant impact on loyalty. In addition, it was found that 72% of the variation occurring in loyalty is explained by the model. The researcher also measured the net promoter score and gave an explanation of the results.
Furthermore, the study assessed customers’ priorities among broadband services. The researcher recommends that the findings of this research be considered in the future plans of Telecom Egypt Group, and be applied in the same industry, especially in developing countries with similar circumstances and service settings. This research is a positive contribution to service marketing, particularly in the telecom industry, for making marketing more reliable, as managers can relate investments in service experience directly to the performance measures closest to income, for instance repurchase behavior, positive word of mouth and commitment. Finally, the researcher recommends that future studies consider this model to explain significant marketing outcomes such as share of wallet and, ultimately, profitability.
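For reference, the net promoter score mentioned above is computed as the percentage of promoters (ratings 9-10 on a 0-10 scale) minus the percentage of detractors (ratings 0-6); the sample responses below are hypothetical.

```python
def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical survey responses: 4 promoters, 3 passives (7-8), 3 detractors
sample = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]
print(net_promoter_score(sample))  # -> 10.0
```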

Keywords: broadband services, customer experience quality, loyalty, net promoter score

Procedia PDF Downloads 271
424 Multilocal Youth and the Berlin Digital Industry: Productive Leisure as a Key Factor in European Migration

Authors: Stefano Pelaggi

Abstract:

The research focuses on youth labor and mobility in Berlin. Mobility has become a common denominator in our daily lives, but it is not primarily driven by monetary incentives. Labor, knowledge and leisure overlap on this point, as cities try to attract people who could participate in the production of innovations while the new migrants experience the lifestyle of the host city. The research presents an empirical study of Italian workers in the digital industry in Berlin, trying to underline the connection between pleasure and leisure and the choice of a life abroad. Berlin has become the epicenter of the European Internet start-up scene, but people suited to work in digital industries are not moving to Berlin to make a career; most of them are attracted to the city for different reasons. This makes a clear exception to traditional migration flows, which have always originated from a specific search for employment opportunities, or from strong ties, usually family, in a place that could guarantee success in finding a job. Even skilled migration has always originated from a specific need: finding the right path to a successful professional life. In a society where a lack of free time in our calendars seems to be something to be ashamed of, the actors of youth mobility incorporate some categories of experiential tourism into their own life paths. The professional aspirations and lifestyle choices of the protagonists of youth mobility are geared towards meeting the desires and aspirations that define leisure. While most creative workplaces, in particular digital industries, use the category of fun as a primary element of corporate policy, virtually extending working time across the whole day, more and more people around the world are basing their paths in life and career choices on indicators linked to self-realization, which may include factors such as a warm climate or a cultural environment.
All of these are indicators usually eradicated from the hegemonic approach to labor. The interpretative framework commonly used seems mostly focused on a dualism between Florida's theories and those who highlight the absence of conflict in his studies. While the flexibility of the new creative industries is minimizing leisure, incorporating elements of leisure itself into work activities, more people choose their own paths in life by placing great importance on basic needs, through a view of pleasure that is only partially driven by consumption. Multilocalism is the co-existence of different identities and cultures that do not conflict, because they reject the bond to territory. The local loses its strength of opposition to the global, with an attenuation of the whole concept of citizenship, territory and even integration. A similar perspective could be useful in seeking a new approach in studies of the gentrification process, as well as in studying new migration flows.

Keywords: brain drain, digital industry, leisure and gentrification, multilocalism

Procedia PDF Downloads 248
423 Understanding the Impact of Out-of-Sequence Thrust Dynamics on Earthquake Mitigation: Implications for Hazard Assessment and Disaster Planning

Authors: Rajkumar Ghosh

Abstract:

Earthquakes pose significant risks to human life and infrastructure, highlighting the importance of effective earthquake mitigation strategies. Traditional earthquake modelling and mitigation efforts have largely focused on the primary fault segments and their slip behaviour. However, earthquakes can exhibit complex rupture dynamics, including out-of-sequence thrust (OOST) events, which occur on secondary or subsidiary faults. This abstract examines the impact of OOST dynamics on earthquake mitigation strategies and their implications for hazard assessment and disaster planning. OOST events challenge conventional seismic hazard assessments by introducing additional fault segments and potential rupture scenarios that were previously unrecognized or underestimated. Consequently, these events may increase the overall seismic hazard in affected regions. The study reviews recent case studies and research findings that illustrate the occurrence and characteristics of OOST events. It explores the factors contributing to OOST dynamics, such as stress interactions between fault segments, fault geometry, and mechanical properties of fault materials. Moreover, it investigates the potential triggers and precursory signals associated with OOST events to enhance early warning systems and emergency response preparedness. The abstract also highlights the significance of incorporating OOST dynamics into seismic hazard assessment methodologies. It discusses the challenges associated with accurately modelling OOST events, including the need for improved understanding of fault interactions, stress transfer mechanisms, and rupture propagation patterns. Additionally, the abstract explores the potential for advanced geophysical techniques, such as high-resolution imaging and seismic monitoring networks, to detect and characterize OOST events. Furthermore, the abstract emphasizes the practical implications of OOST dynamics for earthquake mitigation strategies and urban planning. 
It addresses the need for revising building codes, land-use regulations, and infrastructure designs to account for the increased seismic hazard associated with OOST events. It also underscores the importance of public awareness campaigns to educate communities about the potential risks and safety measures specific to OOST-induced earthquakes. This study sheds light on the impact of out-of-sequence thrust dynamics on earthquake mitigation. By recognizing and understanding OOST events, researchers, engineers, and policymakers can improve hazard assessment methodologies, enhance early warning systems, and implement effective mitigation measures. By integrating knowledge of OOST dynamics into urban planning and infrastructure development, societies can strive for greater resilience in the face of earthquakes, ultimately minimizing the potential for loss of life and infrastructure damage.

Keywords: earthquake mitigation, out-of-sequence thrust, seismic, satellite imagery

Procedia PDF Downloads 91
422 Predicting Loss of Containment in Surface Pipeline using Computational Fluid Dynamics and Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations

Authors: Muhammmad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso

Abstract:

Loss of containment is the primary hazard that process safety management is concerned with in the oil and gas industry. Escalation to more serious consequences begins with loss of containment: oil and gas released through leakage or spillage from primary containment can result in a pool fire, jet fire, or even an explosion when it meets an ignition source in the operations. Therefore, the heart of process safety management is avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard for this case is an early detection system that alerts Operations to take action before a potential loss of containment. The value of a detection system increases when it is applied to a long surface pipeline, which is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Prior research shows that accurately detecting loss of containment in a surface pipeline is difficult. The trade-off between cost-effectiveness and high accuracy has been the main issue when selecting a traditional detection method. The current best-performing method, the Real-Time Transient Model (RTTM), requires analysis of closely positioned pressure, flow and temperature (PVT) points in the pipeline to be accurate. Having multiple adjacent PVT sensors along the pipeline is expensive, hence generally not a viable alternative from an economic standpoint. A conceptual approach combining mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results for predicting leakage in the pipeline. Mathematical modeling is used to generate simulation data, and this data is used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to provide results comparable with experimental data at very high levels of accuracy.
While a supervised machine learning model requires a large training dataset to develop accurate models, mathematical modeling has been shown to generate the required datasets, justifying the application of data analytics in the development of model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and discusses the opportunities for using data analytics tools and mathematical modeling to develop a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
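The pairing of simulated data with a supervised model described above can be sketched in a few lines. The snippet below is illustrative only: synthetic two-feature samples stand in for CFD-generated pressure/flow signatures at a monitoring point, and a simple nearest-centroid classifier stands in for the machine learning model; all names and numbers are hypothetical, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for CFD simulation output: each row holds a
# (pressure drop, flow deviation) pair; a leak shifts both features
# relative to normal operation.
normal = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
leak = rng.normal(loc=[2.0, 1.5], scale=0.5, size=(200, 2))
X = np.vstack([normal, leak])
y = np.array([0] * 200 + [1] * 200)  # 0 = intact, 1 = loss of containment

# Minimal supervised model: a nearest-centroid classifier trained on
# the simulated dataset.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(points):
    # Assign each point the label of the nearest class centroid.
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

train_accuracy = (predict(X) == y).mean()
```

In the approach reviewed, the synthetic rows would be replaced by transient simulation outputs at the pipeline's PVT points, and the classifier by a richer model (e.g. a random forest or neural network) trained to localize the leak as well as detect it.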

Keywords: pipeline, leakage, detection, AI

Procedia PDF Downloads 195
421 An Algebraic Geometric Imaging Approach for Automatic Dairy Cow Body Condition Scoring System

Authors: Thi Thi Zin, Pyke Tin, Ikuo Kobayashi, Yoichiro Horii

Abstract:

Today, dairy farm experts and farmers have long recognized the importance of the dairy cow Body Condition Score (BCS), since these scores can be used to optimize milk production and feeding management, serve as an indicator of health abnormalities, and even help manage healthy calving times and processes. Traditionally, BCS is assessed by animal experts or trained technicians through visual observation, focusing on the pin bones, pin, thurl and hook area, tail head shape, hook angles, and the short and long ribs. Because this traditional technique is manual and subjective, it can produce inconsistent scores and is not cost effective. This paper therefore proposes an algebraic geometric imaging approach for an automatic dairy cow BCS system. The proposed system consists of three functional modules. In the first module, significant landmarks or anatomical points are automatically extracted from the cow image region using image processing techniques. Specifically, there are 23 anatomical points in the regions of the ribs, hook bones, pin bone, thurl and tail head. These points are extracted using block-region-based vertical and horizontal histogram methods. According to animal experts, body condition scores depend mainly on the shape structure of these regions. The second module therefore investigates algebraic and geometric properties of the extracted anatomical points. Specifically, second-order polynomial regression is applied to a subset of anatomical points to produce regression coefficients, which are utilized as part of the feature vector in the scoring process. In addition, the angles at the thurl, pin, tail head and hook bone areas are computed to extend the feature vector. Finally, in the third module, the extracted feature vectors are trained using a Markov classification process to assign a BCS to individual cows.
The assigned BCS values are then revised using a multiple regression method to produce the final BCS for each dairy cow. To confirm the validity of the proposed method, a monitoring video camera was set up at the milking rotary parlor to take top-view images of cows. The proposed method extracts the key anatomical points and the corresponding feature vector for each individual cow. The multiple regression calculator and Markov chain classification process are then utilized to produce the estimated body condition score for each cow. Experimental results on 100 dairy cows from a self-collected dataset and a public benchmark dataset are very promising, with an accuracy of 98%.
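As a rough illustration of the second module, the sketch below fits a second-order polynomial to hypothetical landmark coordinates and computes one angle feature. The coordinates and the choice of points are invented for illustration; they are not taken from the paper's 23-point scheme.

```python
import numpy as np

# Hypothetical (x, y) coordinates of anatomical points along a cow's
# topline, standing in for a subset of the 23 extracted landmarks.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 1.4, 1.0, 0.9, 1.1, 1.5, 2.2])

# Second-order polynomial regression: the fitted coefficients
# (curvature, slope, offset) become part of the BCS feature vector.
coeffs = np.polyfit(x, y, deg=2)

# An angle feature at a central landmark, computed from the vectors
# to its two neighbours (the paper extends the feature vector with
# angles at the thurl, pin, tail head and hook bone areas).
v1 = np.array([x[2] - x[3], y[2] - y[3]])
v2 = np.array([x[4] - x[3], y[4] - y[3]])
cos_a = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
angle_deg = np.degrees(np.arccos(cos_a))

feature_vector = np.concatenate([coeffs, [angle_deg]])
```

In the full system, such feature vectors (coefficients plus angles, over all landmark subsets) would feed the Markov classification stage.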

Keywords: algebraic geometric imaging approach, body condition score, Markov classification, polynomial regression

Procedia PDF Downloads 163
420 Surveying Adolescent Males in India Regarding Mobile Phone Use and Sexual and Reproductive Health Education

Authors: Rohan M. Dalal, Elena Pirondini, Shanu Somvanshi

Abstract:

Introduction: The current state of reproductive health outcomes in lower-income countries is poor, with inadequate knowledge and supportive culture among adolescent boys. Moreover, boys have traditionally not been a priority target. To explore the opportunity to deliver accurate reproductive health information to adolescent boys in the developing world, this study investigates how they engage with and use technology, particularly cell phones. An electronic survey and video interview study was conducted to determine the feasibility of a mobile phone platform for an educational video game specifically designed for boys that will improve health knowledge, influence behavior, and change health outcomes, namely teen pregnancies. Methods: With the assistance of Plan India, a subsidiary of Plan International, informed consent was obtained from parents of adolescent males who participated in an electronic survey and video interviews via Microsoft Teams. The electronic survey comprised 27 questions covering mobile phone usage, gaming preferences, and sexual and reproductive health, with a sample size of 181 adolescents, ages 11-25, near New Delhi, India. The interview questions were written to explore topics in more depth after the completion of the electronic survey. Eight boys, aged 15, were interviewed for 40 minutes about gaming and mobile phone usage as well as sexual and reproductive health. Data/Results: 154 boys and 27 girls completed the survey. They rated their English fluency as relatively high. 97% of boys (149/154) had access to mobile phones. The majority of phones were smartphones (97%, 143/148). 48% (71/149) of boys borrowed cell phones. The most popular phone platform was Samsung (22%, 33/148). 36% (54/148) of adolescent males looked at their phones 1-10 times per day for 1-2 hours. 55% (81/149) of the boys had parental restrictions. 51% (76/148) had 32 GB of storage on their phone.
78% (117/150) of the boys had wifi access. 80% (120/150) of respondents reported ease in downloading apps. 97% (145/150) of male adolescents had social media, including WhatsApp, Facebook, and YouTube. 58% (87/150) played video games. Favorite video games included Free Fire, PubG, and other shooting games. In the video interviews, the boys revealed what made games fun and engaging, including customized avatars, progression to higher levels, realistic interactive platforms, shooting/guns, the ability to perform multiple actions, and a variety of worlds/settings/adventures. Ideas to improve engagement in sexual and reproductive health classes included open discussions in the community, enhanced access to information, and posting on social media. Conclusion: This electronic survey and video interview study provides an initial foray into understanding mobile phone usage and sexual and reproductive health education among adolescent males in New Delhi, India. The data gathered support the use of mobile phone platforms and will inform the creation of a serious video game to educate adolescent males about sexual and reproductive health, in an attempt to lower the rate of unwanted pregnancies in the world.

Keywords: adolescent males, India, mobile phone, sexual and reproductive health

Procedia PDF Downloads 135
419 The Role of Building Information Modeling as a Design Teaching Method in Architecture, Engineering and Construction Schools in Brazil

Authors: Aline V. Arroteia, Gustavo G. Do Amaral, Simone Z. Kikuti, Norberto C. S. Moura, Silvio B. Melhado

Abstract:

Despite the significant advances made by the construction industry in recent years, the persistent absence of integration between the design and construction phases is still an evident and costly problem in building construction. Globally, the construction industry has sought to adopt collaborative practices through new technologies to mitigate the impacts of this fragmented process and to optimize its production. In this new technological business environment, professionals are required to develop new methodologies based on the notion of collaboration and the integration of information throughout the building lifecycle. This scenario also reflects the industry's reality in developing nations, and the increasing need for overall efficiency has demanded new educational alternatives at the undergraduate and post-graduate levels. In countries like Brazil, it is commonly understood that Architecture, Engineering and Building Construction educational programs must review their traditional design pedagogical processes to promote a comprehensive notion of integration and simultaneity between the phases of a project. In this context, the coherent inclusion of computational design across all segments of the educational programs of construction-related professionals represents a significant research topic that can, in fact, affect industry practice. Thus, the main objective of the present study was to comparatively measure the effectiveness of the Building Information Modeling courses offered by the University of Sao Paulo, the most important academic institution in Brazil, at its Schools of Architecture and Civil Engineering, against courses offered by well-recognized BIM research institutions, such as the School of Design in the College of Architecture of the Georgia Institute of Technology, USA, in order to evaluate the dissemination of BIM knowledge among students at the postgraduate level.
The qualitative research methodology was based on an analysis of the programs and activities proposed by two BIM courses offered in each of the above-mentioned institutions, which were used as case studies. The data collection instruments were a student questionnaire, semi-structured interviews, participatory evaluation and pedagogical practices. The results revealed broad heterogeneity among the students regarding their professional experience, hours dedicated to training, and especially their general knowledge of BIM technology and its applications. The research observed that BIM is mostly understood as an operational tool and not as a methodological project development approach relevant to the whole building life cycle. The study concludes with an assessment of the importance of incorporating BIM, efficiently and in its totality, as a teaching method in undergraduate and graduate courses in Brazilian architecture, engineering and building construction schools.

Keywords: building information modeling (BIM), BIM education, BIM process, design teaching

Procedia PDF Downloads 158
418 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics

Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin

Abstract:

Within the past decade, using Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in overcoming the communication barrier for deaf people. Conventional research on this subject has been concerned with training a network to recognize the fingerspelling gestures of a given language and produce the corresponding alphanumerics. One problem with the current technology is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. In addition, current gesture detection programs are trained on only one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional application of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve the issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is posited as an operator mapping an input in the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, identifying the alphanumeric that q represents and the language it comes from.
These inputs and outputs, along with internal variables z ∈ Z representing the system's current state, imply a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened, before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. This methodology yields a self-correcting recognition process that can identify fingerspelling from a variety of sign languages and successfully determine the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
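The centering, Kaiser-rule regularization, whitening and hyperplane steps above can be sketched as follows. This is a simplified single-cluster reading of the corrector, with synthetic data standing in for the network's internal measurements; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic measurements x_i in R^10: 460 correct predictions (M) and
# 40 errors (Y) clustered away from the bulk; S is their union.
M = rng.normal(size=(460, 10))
Y = rng.normal(size=(40, 10)) + 3.0
S = np.vstack([M, Y])

# 1. Centre the data on the mean of S.
mu = S.mean(axis=0)
Sc, Yc = S - mu, Y - mu

# 2. Kaiser rule: keep principal components whose eigenvalue exceeds
#    the average eigenvalue.
eigvals, eigvecs = np.linalg.eigh(np.cov(Sc, rowvar=False))
keep = eigvals > eigvals.mean()

# 3. Whiten in the retained subspace.
W = eigvecs[:, keep] / np.sqrt(eigvals[keep])
Sw, Yw = Sc @ W, Yc @ W

# 4. Corrector hyperplane: project onto the error-cluster direction;
#    points beyond the threshold are flagged as likely errors.
w = Yw.mean(axis=0)
w /= np.linalg.norm(w)
threshold = 0.5 * (Yw @ w).mean()
flagged = Sw @ w > threshold
```

In the paper's scheme, the error set would first be split into pairwise positively correlated clusters, each contributing its own hyperplane; the sketch keeps a single cluster for brevity.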

Keywords: convolutional neural networks, deep learning, shallow correctors, sign language

Procedia PDF Downloads 102
417 Business Intelligent to a Decision Support Tool for Green Entrepreneurship: Meso and Macro Regions

Authors: Anishur Rahman, Maria Areias, Diogo Simões, Ana Figeuiredo, Filipa Figueiredo, João Nunes

Abstract:

The circular economy (CE) has gained increased awareness among academics, businesses, and decision-makers as it stimulates resource circularity in production and consumption systems. A large body of research has explored the principles of CE, but scant attention has focused on analysing how CE is evaluated, consented to, and enforced using economic metabolism data and a business intelligence framework. Economic metabolism involves the ongoing exchange of materials and energy within and across socio-economic systems and requires the assessment of vast amounts of data to provide quantitative analysis for effective resource management. To address this gap, the present work focuses on regional flows in a pilot region of Portugal. This study aims to promote eco-innovation and sustainability in the Intermunicipal Communities of Região de Coimbra, Viseu Dão Lafões and Beiras e Serra da Estrela, using these data to find precise synergies in terms of material flows and give companies a competitive advantage in the form of valuable waste destinations, access to new resources and new markets, cost reduction, and risk-sharing benefits. In our work, emphasis is placed on applying artificial intelligence (AI) and, more specifically, on implementing state-of-the-art deep learning algorithms, contributing to the construction of a business intelligence approach. With the emergence of new approaches generally grouped under the headings of AI and machine learning (ML), the methods for statistical analysis of complex and uncertain production systems are facing significant changes. Therefore, various definitions of AI and its differences from traditional statistics are presented, ML is introduced and situated within data science, and its relation to topics such as big data analytics and to production problems addressed with AI and ML is identified.
A lifecycle-based approach is then taken to analyse the use of different methods in each phase, to identify the most useful technologies and the unifying attributes of AI in manufacturing. Most macroeconomic metabolism models are directed mainly at the context of large metropolises, neglecting rural territories; within this project, therefore, a dynamic decision support model coupled with artificial intelligence tools and information platforms will be developed, focused on the reality of these transition zones between the rural and the urban. Thus, a real decision support tool is under development, which will surpass the scientific developments carried out to date and will allow limitations related to the availability and reliability of data to be overcome.

Keywords: circular economy, artificial intelligence, economic metabolisms, machine learning

Procedia PDF Downloads 77
416 Microfungi on Sandy Beaches: Potential Threats for People Enjoying Lakeside Recreation

Authors: Tomasz Balabanski, Anna Biedunkiewicz

Abstract:

Research on basic bacteriological and physicochemical parameters conducted by state institutions (Provincial Sanitary and Epidemiological Station and District Sanitary and Epidemiological Station) is limited to bathing waters under constant sanitary and epidemiological supervision. Unfortunately, no routine or monitoring tests are carried out for the presence of microfungi. This also applies to beach sand used for recreational purposes. The purpose of the present research was to determine the diversity of the mycobiota present on supervised and unsupervised sandy lakeside beaches of municipal bathing areas used for recreation. The research material consisted of microfungi isolated from April to October 2019 from the sandy beaches of supervised and unsupervised lakes located within the administrative boundaries of the city of Olsztyn (north-eastern Poland, Europe). Four lakes out of the fifteen available (Tyrsko, Kortowskie, Skanda, and Ukiel), whose bathing waters are subjected to routine bacteriological tests, were selected for testing. To compare the diversity of the mycobiota composition on the surface and below the sand mixing layer, samples were taken from two depths (10 cm and 50 cm) using a soil auger. Microfungi were obtained from sand samples by surface inoculation on an RBC medium from the first dilution (1:10). After incubation at 25°C for 96-144 h, the average number of CFU/dm³ was counted. Morphologically differing yeast colonies were passaged onto Sabouraud agar slants with gentamicin and incubated again. For detailed laboratory analyses, culture methods (macro- and micro-cultures) and identification methods recommended for diagnostic mycological laboratories were used. The conducted research yielded 140 yeast isolates.
The total average population ranged from 1.37 × 10⁻² CFU/dm³ before the bathing season (April 2019), through 1.64 × 10⁻³ CFU/dm³ during the season (May-September 2019), to 1.60 × 10⁻² CFU/dm³ after the end of the season (October 2019). More microfungi were obtained from the surface layer of sand (100 isolates) than from the deeper layer (40 isolates). The reported microfungi may circulate seasonally between individual elements of the lake ecosystem. From the sand and soil of catchment-area beaches, they can enter bathing waters, stopping periodically on the coastal phyllosphere. The sand of the beaches and the phyllosphere act as a kind of filter for the water reservoir. The presence of microfungi with varied pathogenic potential in these places is of major epidemiological importance. Therefore, full monitoring not only of recreational waters but also of sandy beaches should be treated as an element of constant control by the appropriate supervisory institutions that approve recreational areas for public use, so that the use of these places does not involve a risk of infection. Acknowledgment: 'Development Program of the University of Warmia and Mazury in Olsztyn', POWR.03.05.00-00-Z310/17, co-financed by the European Union under the European Social Fund from the Operational Program Knowledge Education Development. Tomasz Bałabański is a recipient of a scholarship from the Programme Interdisciplinary Doctoral Studies in Biology and Biotechnology (POWR.03.05.00-00-Z310/17), which is funded by the 'European Social Fund'.
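For readers unfamiliar with the dilution-plating arithmetic behind CFU figures such as those above, a minimal sketch follows; all plate counts and volumes are hypothetical, not the study's data.

```python
# Serial-dilution plate counting: colonies grown from a diluted sample
# are counted, then scaled back by the dilution factor and the plated
# volume to estimate CFU per unit volume of the original material.
counts = [12, 15, 14]        # hypothetical colonies on replicate plates
dilution_factor = 10         # first dilution, 1:10 (as in the study)
plated_volume_dm3 = 0.0001   # hypothetical 0.1 ml plated, in dm³

mean_count = sum(counts) / len(counts)
cfu_per_dm3 = mean_count * dilution_factor / plated_volume_dm3
```

The study's averages are the same quantity computed over its own replicate plates and sampling dates.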

Keywords: beach, microfungi, sand, yeasts

Procedia PDF Downloads 107
415 Anti-Hyperglycemic Effects and Chemical Analysis of Allium sativum Bulbs Growing in Sudan

Authors: Ikram Mohamed Eltayeb Elsiddig, Yacouba Amina Djamila, Amna El Hassan Hamad

Abstract:

Hyperglycemia and diabetes have long been treated with medicinal plants, which tend to produce fewer side effects than synthetic drugs. The search for more effective and safer anti-diabetic agents derived from plants has therefore become an area of active research. A. sativum, belonging to the Liliaceae family, is well known for its medicinal uses in African traditional medicine, where it is used to treat many human diseases, mainly diabetes, high cholesterol, and high blood pressure. The present study was carried out to investigate the anti-hyperglycemic effect of extracts of A. sativum bulbs growing in Sudan on glucose-loaded Wistar albino rats. A. sativum bulbs were collected fresh from a local vegetable market in Khartoum, Sudan, identified and authenticated by a taxonomist, then dried and extracted with solvents of increasing polarity (petroleum ether, chloroform, ethyl acetate and methanol) using a Soxhlet apparatus. The effect of the extracts on glucose uptake was evaluated using isolated rat hemidiaphragms after loading the fasting rats with glucose, and the anti-hyperglycemic effect was investigated in glucose-loaded Wistar albino rats. Their effects were compared to control rats administered the vehicle and to a standard group administered the standard drug Metformin. The most active extract was analyzed chemically using GC-MS against the NIST library. The results showed a significant anti-diabetic effect of the extracts of A. sativum bulbs growing in Sudan. In addition, the hypoglycemic activity of the A. sativum extracts was found to decrease as the polarity of the extraction solvent increased, suggesting that the substances responsible for the activity are of low polarity and that their concentration decreases as polarity increases.
The petroleum ether extract possessed anti-hyperglycemic activity more significant than that of the other extracts and the Metformin standard drug, with p-values of 0.000** at 400 mg/kg at 1, 2 and 4 hours, and p-values of 0.019*, 0.015* and 0.010* at 200 mg/kg at 1, 2 and 4 hours, respectively. GC-MS analysis of the petroleum ether extract, which had the highest anti-diabetic activity, showed the presence of methyl linolate (42.75%), hexadecanoic acid methyl ester (10.54%), methyl α-linolenate (8.36%), dotriacontane (6.83%), tetrapentacontane (6.33%), methyl 18-methylnonadecanoate (4.8%), phenol,2,2'-methylenebis[6-(1,1-dimethylethyl)-4-methyl] (3.25%), methyl 20-methyl-heneicosanoate (2.70%), pentatriacontane (2.13%) and many other minor compounds. Most of these compounds are well known for their anti-diabetic activity. The study concluded that A. sativum bulb extracts enhance the reuptake of glucose in the isolated rat hemidiaphragm and have an anti-hyperglycemic effect when evaluated in glucose-loaded albino rats, with the petroleum ether extract activity more significant than that of the Metformin standard drug.

Keywords: Allium, anti-hyperglycemic, bulbs, sativum

Procedia PDF Downloads 171
414 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker

Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán

Abstract:

The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, some encoding methods, such as one-hot encoding or k-mers, have been explored. This work proposes additional approaches for encoding DNA sequences in order to compare them with existing techniques and determine whether they can provide improvements or whether current methods offer superior results. Data from the 16S rRNA gene, a universal marker, were used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by a syntactic analysis to selectively extract relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes. Specifically, 55 sequences from each bacterial group met the length criteria, resulting in an initial sample of approximately 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods.
The performance of these models was assessed using multiple metrics, including the confusion matrix, ROC curve, and F1 Score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracies between encoding methods vary by up to approximately 15%, with the Fourier transform obtaining the best results for the evaluated machine learning algorithms. These findings, supported by the detailed analysis using the confusion matrix, ROC curve, and F1 Score, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies.
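Three of the encodings compared above can be sketched compactly. The helper names below are ours, and the Fourier variant shown is the usual magnitude spectrum of the four one-hot indicator signals (sometimes called the Voss representation), which may differ in detail from the study's implementation.

```python
from collections import Counter
from itertools import product

import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """One-hot encode a DNA sequence into a (len, 4) binary matrix."""
    idx = {b: i for i, b in enumerate(BASES)}
    mat = np.zeros((len(seq), 4), dtype=int)
    for row, base in enumerate(seq):
        mat[row, idx[base]] = 1
    return mat

def kmer_counts(seq, k=3):
    """Count k-mer occurrences as a fixed-length 4**k feature vector."""
    vocab = ["".join(p) for p in product(BASES, repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return np.array([counts[m] for m in vocab])

def fourier_encoding(seq):
    """Magnitude spectrum of the four base-indicator signals."""
    return np.abs(np.fft.fft(one_hot(seq), axis=0))
```

Each function maps a normalized 16S sequence to a numeric array that can be fed directly to a support vector machine, random forest, or neural network.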

Keywords: DNA encoding, machine learning, Fourier transform, wavelet transform

Procedia PDF Downloads 31
413 Customer Focus in Digital Economy: Case of Russian Companies

Authors: Maria Evnevich

Abstract:

In modern conditions, price competition is becoming less effective in most markets. On the one hand, there is a gradual decrease in margins in the main traditional sectors of the economy, so further price reduction becomes too 'expensive' for the company. On the other hand, the effect of price reduction is leveled out, and the reason for this phenomenon is likely informational. As a result, even if a company reduces prices, making its products more accessible to buyers, there is a high probability that this will not increase sales unless additional large-scale advertising and information campaigns are conducted. Indeed, a large-scale information and advertising campaign by itself has a much greater effect than price reduction. At the same time, the cost of mass informing grows every year, especially in the main information channels. The article presents a generalization, systematization and development of theoretical approaches and best practices in customer-focused business management and relationship marketing in the modern digital economy. The research methodology is based on the synthesis and content analysis of sociological and marketing research and on a study of the systems for handling consumer appeals and the loyalty programs of the 50 largest client-oriented companies in Russia. In addition, an analysis of internal documentation on customer purchases in one of the largest retail companies in Russia made it possible to identify whether buyers prefer to make complex purchases in the one retail store with the best price image for them. The cost of attracting a new client is now quite high and continues to grow, so it becomes more important to retain clients and increase their involvement through marketing tools. A huge role is played by modern digital technologies, used both in advertising (e-mailing, SEO, contextual advertising, banner advertising, SMM, etc.) and in service.
To implement the client-oriented omnichannel service described above, it is necessary to identify the client and work with the personal data provided when filling in the loyalty program application form. The analysis of the loyalty programs of 50 companies identified the following types of cards: discount cards, bonus cards, mixed cards, coalition loyalty cards, bank loyalty programs, aviation loyalty programs, hybrid loyalty cards, and situational loyalty cards. The use of loyalty cards makes it possible not only to stimulate ‘untargeted’ purchases, but also to provide individualized offers and more targeted information. The development of digital technologies and modern means of communication has significantly changed not only the sphere of marketing and promotion, but also the economic landscape as a whole. The factors of competitiveness are the digital capabilities of companies in the field of customer orientation: personalization of service, customization of advertising offers, optimization of marketing activity, and improvement of logistics.

Keywords: customer focus, digital economy, loyalty program, relationship marketing

Procedia PDF Downloads 165
412 Application of the Carboxylate Platform in the Consolidated Bioconversion of Agricultural Wastes to Biofuel Precursors

Authors: Sesethu G. Njokweni, Marelize Botes, Emile W. H. Van Zyl

Abstract:

An alternative strategy to the production of bioethanol is to examine the degradability of biomass in a natural system such as the rumen of mammals. This anaerobic microbial community has higher cellulolytic activities than microbial communities from other habitats and degrades cellulose to produce volatile fatty acids (VFAs), methane and CO₂. VFAs have the potential to serve as intermediate products for electrochemical conversion to hydrocarbon fuels. In vitro mimicking of this process would be more cost-effective than bioethanol production, as it does not require chemical pre-treatment of biomass, a sterile environment or added enzymes. The strategies of the carboxylate platform and co-cultures of bovine ruminal microbiota from cannulated cows were combined in order to investigate and optimize the bioconversion of agricultural biomass (apple and grape pomace, citrus pulp, sugarcane bagasse and triticale straw) to high-value VFAs as intermediates for biofuel production in a consolidated bioprocess. Optimisation of reactor conditions was investigated using five ruminal inoculum concentrations (5, 10, 15, 20 and 25%), with pH fixed at 6.8 and temperature at 39 °C. The ANKOM 200/220 fiber analyser was used to analyse in vitro neutral detergent fiber (NDF) disappearance of the feedstuffs. Fresh and cryo-frozen (5% DMSO and 50% glycerol for 3 months) rumen cultures were tested for retention of fermentation capacity and durability in 72 h fermentations in 125 ml serum vials, using a FURO medical solutions 6-valve gas manifold to induce anaerobic conditions. Fermentation of apple pomace, triticale straw and grape pomace showed no significant difference (P > 0.05) between the 15 and 20% inoculum concentrations in total VFA yield. 
However, high performance liquid chromatographic separation within the two inoculum concentrations showed a significant difference (P < 0.05) in acetic acid yield, with the 20% inoculum concentration being optimal at 4.67 g/l. NDF disappearance of 85% in 96 h and a total VFA yield of 11.5 g/l in 72 h (A/P ratio = 2.04) made apple pomace the optimal feedstuff for this process. The NDF disappearance and VFA yield of DMSO-stored (82% NDF disappearance and 10.6 g/l VFA) and glycerol-stored (90% NDF disappearance and 11.6 g/l VFA) rumen showed degradability of apple pomace similar to a fresh rumen control, with no treatment effect differences (P > 0.05). The absence of treatment effects indicates that the stored cultures retained their fermentation capacity, suggesting that their metabolic characteristics were preserved owing to the resilience and redundancy of the rumen culture. The extent of degradability and VFA yield achieved within a short span was similar to that of other carboxylate platforms with longer run times. This study shows that, by virtue of faster rates and a high extent of degradability, small-scale alternatives to bioethanol such as rumen microbiomes and other natural fermenting microbiomes can be employed to enhance the feasibility of large-scale biofuel implementation.

Keywords: agricultural wastes, carboxylate platform, rumen microbiome, volatile fatty acids

Procedia PDF Downloads 131
411 Strategies for Synchronizing Chocolate Conching Data Using Dynamic Time Warping

Authors: Fernanda A. P. Peres, Thiago N. Peres, Flavio S. Fogliatto, Michel J. Anzanello

Abstract:

Batch processes are widely used in the food industry and play an important role in the production of high added-value products, such as chocolate. Process performance is usually described by variables that are monitored as the batch progresses. Data arising from these processes are likely to display a strong correlation-autocorrelation structure and are usually monitored using control charts based on multiway principal components analysis (MPCA). Process control of a new batch is carried out by comparing the trajectories of its relevant process variables with those in a reference set of batches that yielded products within specifications; proper determination of the reference set is clearly key to correctly signaling non-conforming batches in such quality control schemes. In chocolate manufacturing, misclassification of non-conforming batches in the conching phase may lead to significant financial losses, so the accuracy of process control grows in relevance. In addition, the main assumption in MPCA-based monitoring strategies is that all batches have synchronized durations, both the new batch being monitored and those in the reference set. This assumption is often not satisfied in the chocolate manufacturing process, and as a consequence, traditional techniques such as MPCA-based charts are not suitable for process control and monitoring. To address that issue, the objective of this work is to compare the performance of three dynamic time warping (DTW) methods in the alignment and synchronization of chocolate conching process variables’ trajectories, aimed at properly determining the reference distribution for multivariate statistical process control. The power to classify batches into two categories (conforming and non-conforming) was evaluated using the k-nearest neighbor (KNN) algorithm. 
Real data from a milk chocolate conching process were collected, and the following variables were monitored over time: frequency of soybean lecithin dosage, rotation speed of the shovels, current of the main motor of the conche, and chocolate temperature. A set of 62 batches with durations between 495 and 1,170 minutes was considered; 53% of the batches were known to be conforming based on lab test results and experts’ evaluations. Results showed that all three DTW methods tested were able to align and synchronize the conching dataset. However, the synchronized datasets obtained from these methods performed differently when input into the KNN classification algorithm. The method of Kassidas, MacGregor and Taylor (KMT) was deemed the best DTW method for aligning and synchronizing the milk chocolate conching dataset, presenting 93.7% accuracy, 97.2% sensitivity and 90.3% specificity in batch classification, and was considered the best option to determine the reference set for the milk chocolate dataset. This method was recommended because it required the fewest iterations to achieve convergence and achieved the highest average accuracy in the testing portion using the KNN classification technique.
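The combination of DTW alignment and KNN classification used in the study can be sketched as follows. This is a minimal illustration with univariate trajectories and made-up batch data, not the authors' implementation or the KMT method itself:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two univariate trajectories."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j]: minimal cost of aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

def knn_classify(query, reference, k=3):
    """Label a new batch by majority vote among its k DTW-nearest reference batches."""
    dists = sorted((dtw_distance(query, traj), label) for traj, label in reference)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical reference set: temperature-like trajectories of past batches
reference_batches = [
    ([0.0, 1.0, 2.0, 3.0, 4.0], "conforming"),
    ([0.0, 1.2, 2.1, 3.0, 4.1], "conforming"),
    ([0.0, 0.9, 1.8, 3.2, 4.0], "conforming"),
    ([0.0, 2.0, 4.0, 6.0, 8.0], "non-conforming"),
    ([0.0, 2.2, 3.9, 6.1, 8.0], "non-conforming"),
    ([0.0, 1.8, 4.1, 5.9, 8.2], "non-conforming"),
]
new_batch = [0.0, 1.1, 2.0, 3.1, 4.0]
label = knn_classify(new_batch, reference_batches, k=3)
```

In practice, DTW is run on each monitored variable (lecithin dosage frequency, shovel speed, motor current, temperature) so that batches of unequal duration become comparable before the KNN vote.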

Keywords: batch process monitoring, chocolate conching, dynamic time warping, reference set distribution, variable duration

Procedia PDF Downloads 169
410 Investigation of Ground Disturbance Caused by Pile Driving: Case Study

Authors: Thayalan Nall, Harry Poulos

Abstract:

Piling is the most widely used foundation method for heavy structures in poor soil conditions. The geotechnical engineer can choose among a variety of piling methods, but in most cases, driving piles by impact hammer is the most cost-effective alternative. Under unfavourable conditions, pile driving can cause environmental problems, such as noise, ground movements and vibrations, with the risk of ground disturbance leading to potential damage to proposed structures. At one of the project sites in which the authors were involved, three offshore container terminals, namely CT1, CT2 and CT3, were constructed over thick compressible marine mud. The seabed was around 6 m deep, and the soft clay thickness within the project site varied between 9 m and 20 m. CT2 and CT3 were connected, rectangular in shape, and 2,600 m × 800 m in size. CT1 was 400 m × 800 m in size and located to the south of CT2, towards its eastern end. CT1 was constructed first and, due to time and environmental limitations, was supported on a “forest” of large-diameter driven piles. CT2 and CT3 are now under construction using a traditional dredging and reclamation approach, with ground improvement by surcharging with vertical drains. A few months after the installation of the CT1 piles, a 2,600 m long sand bund rising to 2 m above mean sea level was constructed along the southern perimeter of CT2 and CT3 to contain the dredged mud that was expected to be pumped. The sand bund was constructed by sand spraying and pumping using a dredging vessel. About 2,000 m of the sand bund in the west section was constructed without any major stability issues or noticeable distress. However, as the sand bund approached the section parallel to CT1, it underwent a series of deep-seated failures, causing the displaced soft clay materials to heave above the standing water level. The crest of the sand bund was about 100 m away from the last row of piles. 
There were no plausible geological reasons to conclude that the marine mud across the CT1 region alone was weaker than over the rest of the site. It was therefore suspected that pile driving by impact hammer may have caused ground movements and vibrations, leading to the generation of excess pore pressures and cyclic softening of the marine mud. This paper investigates the probable cause of failure by reviewing: (1) all ground investigation data within the region; (2) soil displacement caused by pile driving, using theories similar to spherical cavity expansion; (3) transfer of stresses and vibrations through the entire system, including vibrations transmitted from the hammer to the pile, and the dynamic properties of the soil; and (4) generation of excess pore pressure due to ground vibration and the resulting cyclic softening. The evidence suggests that the problems encountered at the site were primarily caused by the “side effects” of the pile driving operations.

Keywords: pile driving, ground vibration, excess pore pressure, cyclic softening

Procedia PDF Downloads 240
409 New Media and the Personal Vote in General Elections: A Comparison of Constituency Level Candidates in the United Kingdom and Japan

Authors: Sean Vincent

Abstract:

Within the academic community, there is a consensus that political parties in established liberal democracies are facing a myriad of organisational challenges as a result of falling membership, weakening links to grass-roots support and rising voter apathy. During the same period of party decline and growing public disengagement, political parties have become increasingly professionalised. The professionalisation of political parties owes much to changes in technology, with television becoming the dominant medium for political communication. In recent years, however, it has become clear that a new medium of communication is being utilised by political parties and candidates: New Media. New Media, a term hard to define but related to internet-based communication, offers a potential revolution in political communication. It can be utilised by anyone with access to the internet, and its most widely used communication platforms, such as Facebook and Twitter, are free to use. The advent of Web 2.0 has dramatically changed what can be done with the internet. Websites now allow candidates at the constituency level to fundraise, organise and set out personalised policies. Social media allows them to communicate with supporters and potential voters at practically no cost. As such, candidates’ dependence on the national party for resources and image is now open to debate. Arguing that greater candidate independence may be a natural next step in light of the contemporary challenges faced by parties, this paper examines how New Media is being used by candidates at the constituency level to increase their personal vote. The paper presents findings from research carried out during two elections: the Japanese Lower House election of 2014 and the UK general election of 2015. 
During these elections, a sample totalling 150 candidates from the three biggest parties in each country was selected, and their new media output, specifically candidate websites, Twitter and Facebook output, was subjected to content analysis. The analysis examines how candidates are using new media to become more independent from the national party, both functionally, through fundraising and volunteer mobilisation, and politically, through the promotion of personal/local policies. In order to validate the results of the content analysis, this paper also presents evidence from interviews carried out with 17 candidates who stood in the 2014 Japanese Lower House election or the 2015 UK general election. With a combination of statistical analysis and interviews, several conclusions can be drawn about the use of New Media at the constituency level. The findings show not just a clear difference in the way candidates from each country are using New Media, but also differences within countries based on the particular circumstances of each constituency. While it has not yet replaced traditional methods of fundraising and activist mobilisation, New Media is becoming increasingly important in campaign organisation, and the general consensus amongst candidates is that its importance will continue to grow as politics in both countries becomes more diffuse.

Keywords: political campaigns, elections, new media, political communication

Procedia PDF Downloads 230
408 Developing Early Intervention Tools: Predicting Academic Dishonesty in University Students Using Psychological Traits and Machine Learning

Authors: Pinzhe Zhao

Abstract:

This study focuses on predicting university students' cheating tendencies using psychological traits and machine learning techniques. Academic dishonesty is a significant issue that compromises the integrity and fairness of educational institutions. While much research has been dedicated to detecting cheating behaviors after they have occurred, there is limited work on predicting such tendencies before they manifest. The aim of this research is to develop a model that can identify students who are at higher risk of engaging in academic misconduct, allowing for earlier interventions to prevent such behavior. Psychological factors are known to influence students' likelihood of cheating. Research shows that traits such as test anxiety, moral reasoning, self-efficacy, and achievement motivation are strongly linked to academic dishonesty. High levels of anxiety may lead students to cheat as a way to cope with pressure. Those with lower self-efficacy are less confident in their academic abilities, which can push them toward dishonest behaviors to secure better outcomes. Students with weaker moral judgment may also justify cheating more easily, believing it to be less wrong under certain conditions. Achievement motivation also plays a role, as students driven primarily by external rewards, such as grades, are more likely to cheat than those motivated by intrinsic learning goals. In this study, data on students’ psychological traits are collected through validated assessments, including scales for anxiety, moral reasoning, self-efficacy, and motivation. Additional data on academic performance, attendance, and engagement in class are also gathered to create a more comprehensive profile. Using machine learning algorithms such as Random Forest, Support Vector Machines (SVM), and Long Short-Term Memory (LSTM) networks, the research builds models that can predict students’ cheating tendencies. 
These models are trained and evaluated using metrics such as accuracy, precision, recall, and F1 scores to ensure they provide reliable predictions. The findings demonstrate that combining psychological traits with machine learning provides a powerful method for identifying students at risk of cheating. This approach allows for early detection and intervention, enabling educational institutions to take proactive steps in promoting academic integrity. The predictive model can be used to inform targeted interventions, such as counseling for students with high test anxiety or workshops aimed at strengthening moral reasoning. By addressing the underlying factors that contribute to cheating behavior, educational institutions can reduce the occurrence of academic dishonesty and foster a culture of integrity. In conclusion, this research contributes to the growing body of literature on predictive analytics in education. It offers an approach that integrates psychological assessments with machine learning to predict cheating tendencies. This method has the potential to significantly improve how academic institutions address academic dishonesty, shifting the focus from punishment after the fact to prevention before it occurs. By identifying high-risk students and providing them with the necessary support, educators can help maintain the fairness and integrity of the academic environment.
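The evaluation metrics named above (accuracy, precision, recall, F1) all derive from a binary confusion matrix, as in this minimal sketch. The labels below are hypothetical illustrations, not data from the study:

```python
def confusion_counts(y_true, y_pred):
    """Count true/false positives/negatives for binary labels (1 = at-risk)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def evaluate(y_true, y_pred):
    """Return (accuracy, precision, recall, F1) for a binary classifier."""
    tp, fp, fn, tn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Hypothetical model predictions for eight held-out students
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
accuracy, precision, recall, f1 = evaluate(y_true, y_pred)
# for this toy example, all four metrics equal 0.75
```

Reporting precision and recall separately matters here because the costs of a false positive (unwarranted intervention for an honest student) and a false negative (missed at-risk student) are very different.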

Keywords: academic dishonesty, cheating prediction, intervention strategies, machine learning, psychological traits, academic integrity

Procedia PDF Downloads 27
407 The Impact of the Covid-19 Crisis on the Information Behavior in the B2B Buying Process

Authors: Stehr Melanie

Abstract:

The availability of apposite information is essential for the decision-making process of organizational buyers. Due to the constraints of the Covid-19 crisis, information channels that emphasize face-to-face contact (e.g. sales visits, trade shows) have been unavailable, and usage of digitally-driven information channels (e.g. videoconferencing, platforms) has skyrocketed. This paper explores the question of in which areas the pandemic-induced shift in the use of information channels could be sustainable and in which areas it is a temporary phenomenon. While information and buying behavior in B2C purchases have been regularly studied in the last decade, the last fundamental model of organizational buying behavior in B2B was introduced by Johnston and Lewin (1996), before the advent of the internet. Subsequently, research efforts in B2B marketing shifted from organizational buyers and their decision and information behavior to the business relationships between sellers and buyers. This study builds on the extensive literature on situational factors influencing organizational buying and information behavior and uses the economics of information theory as a theoretical framework. The research focuses on the German woodworking industry, which before the Covid-19 crisis was characterized by a rather low level of digitization of information channels. An industry with such traditional communication structures is considered a ripe research setting for studying a shift in information behavior induced by an exogenous shock. The study is exploratory in nature. The primary data source is 40 in-depth interviews based on the repertory-grid method. Thus, 120 typical buying situations in the woodworking industry, and the information and channels relevant to them, are identified. The results are combined into clusters, each of which shows similar information behavior in the procurement process. 
In the next step, the clusters are analyzed in terms of pre- and post-Covid-19 crisis behavior, identifying stable and dynamic aspects of information behavior. Initial results show that, for example, clusters representing search goods with low risk and complexity suggest a sustainable rise in the use of digitally-driven information channels. However, in clusters containing trust goods with high significance and novelty, an increased return to face-to-face information channels can be expected after the Covid-19 crisis. The results are interesting from both a scientific and a practical point of view. This study is one of the first to apply the economics of information theory to organizational buyers and their decision and information behavior in the digital information age. Especially the focus on the dynamic aspects of information behavior after an exogenous shock might contribute new impulses to theoretical debates related to the economics of information theory. For practitioners, especially suppliers’ marketing managers and intermediaries such as publishers or trade show organizers in the woodworking industry, the study shows wide-ranging starting points for a future-oriented segmentation of their marketing programs by highlighting the dynamic and stable preferences of the elaborated clusters in the choice of their information channels.

Keywords: B2B buying process, crisis, economics of information theory, information channel

Procedia PDF Downloads 186
406 Exploring the Energy Saving Benefits of Solar Power and Hot Water Systems: A Case Study of a Hospital in Central Taiwan

Authors: Ming-Chan Chung, Wen-Ming Huang, Yi-Chu Liu, Li-Hui Yang, Ming-Jyh Chen

Abstract:

Introduction: Hospital buildings require considerable energy for air conditioning, lighting, elevators, heating, and medical equipment. Energy consumption in hospitals is expected to increase significantly due to innovative equipment and continuous development plans. Consequently, the environment and climate will be adversely affected. Hospitals should therefore consider transforming from their traditional role of saving lives to being at the forefront of global efforts to reduce carbon dioxide emissions. As healthcare providers, it is our responsibility to provide a high-quality environment while using as little energy as possible. Purpose / Methods: To compare the energy-saving benefits of solar photovoltaic systems and solar hot water systems, and to determine the proportion of electricity consumption effectively reduced after the installation of a solar photovoltaic system. To comprehensively assess the potential benefits of utilizing solar energy for both photovoltaic (PV) and solar thermal applications in hospitals, a solar PV system covering a total area of 28.95 square meters was installed in 2021. Approval was obtained from the Taiwan Power Company to integrate the system into the hospital's electrical infrastructure for self-use. To measure the performance of the system, a dedicated meter was installed to track monthly power generation, which was then converted into area output using an electric energy conversion factor. This research aims to compare the energy efficiency of solar PV systems and solar thermal systems. Results: Using the conversion formula between electrical and thermal energy, we can compare the energy output of solar heating systems and solar photovoltaic systems. 
The comparative study draws upon data from February 2021 to February 2023, during which the solar heating system generated an average of 2.54 kWh of energy per panel per day, while the solar photovoltaic system produced 1.17 kWh per panel per day, a difference of approximately 2.17 times between the two systems. Conclusions: After conducting statistical analysis and comparisons, it was found that solar thermal heating systems offer higher energy output and greater benefits than solar photovoltaic systems. Furthermore, an examination of literature data and simulations of the energy and economic benefits of solar thermal water systems and solar-assisted heat pump systems revealed that solar thermal water systems have higher energy density values, shorter recovery periods, and lower power consumption than solar-assisted heat pump systems. Through the monitoring and empirical research in this study, it has been concluded that a heat pump-assisted solar thermal water system represents a relatively superior energy-saving and carbon-reducing solution for medical institutions. Not only can this system help reduce overall electricity consumption and the use of fossil fuels, but it can also provide more effective heating solutions.
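The reported 2.17-fold difference follows directly from the two per-panel figures quoted above, as this one-line check shows:

```python
# Average daily energy per panel from the Feb 2021 - Feb 2023 monitoring data
solar_thermal_kwh = 2.54  # solar heating system, kWh per panel per day
solar_pv_kwh = 1.17       # solar photovoltaic system, kWh per panel per day

ratio = solar_thermal_kwh / solar_pv_kwh
# ratio is approximately 2.17, the difference reported in the abstract
```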

Keywords: sustainable development, energy conservation, carbon reduction, renewable energy, heat pump system

Procedia PDF Downloads 86
405 Developing Three-Dimensional Digital Image Correlation Method to Detect the Crack Variation at the Joint of Weld Steel Plate

Authors: Ming-Hsiang Shih, Wen-Pei Sung, Shih-Heng Tung

Abstract:

The purposes of a hydraulic gate are to store and drain water. It bears long-term hydraulic pressure and earthquake forces and is very important for reservoirs and waterpower plants. High-tensile-strength steel plate is used as the constructional material of hydraulic gates. Cracks and rust, induced by material defects, poor construction and seismic excitation, and by underwater exposure, respectively, cause stress concentration and a high crack growth rate, affecting the safety and usability of the hydroelectric power plant; these mechanical phenomena of cracked gates are the subject of this study. Stress distribution analysis is a very important and essential surveying technique for analyzing bi-material and singular-point problems. The finite difference infinitely small element method has been demonstrated to be suitable for analyzing the buckling phenomena of weld seams and steel plates with cracks; in particular, this method can easily analyze the singularity of a kink crack. Nevertheless, the construction form and deformation shape of some gates are three-dimensional. Therefore, three-dimensional Digital Image Correlation (DIC) has been developed and applied to analyze the strain variation of cracked steel plate at the weld joint. The Digital Image Correlation (DIC) technique is a non-contact method for measuring the deformation of a test object. Owing to the rapid development of digital cameras, the cost of this technique has been reduced. Moreover, the DIC method offers the advantage of wide practical application in both indoor and field tests, without restriction on the size of the test object. Thus, the purpose of this research is to develop and apply this technique to monitor crack variations of welded steel hydraulic gates and their deformation under loading. 
Images can be captured during real-time monitoring to analyze the strain change at each loading stage. The proposed three-dimensional digital image correlation method developed in this study is applied to analyze the post-buckling phenomenon and buckling tendency of welded steel plate with cracks. The stress intensity of the three-dimensional analysis of different materials and enhanced materials in steel plate is then analyzed in this paper. The test results show that the proposed three-dimensional DIC method can precisely detect the crack variation of welded steel plate under different loading stages. In particular, the proposed DIC method can detect and identify the crack position and other flaws of the welded steel plate, phenomena that traditional test methods can hardly detect. Therefore, the proposed three-dimensional DIC method can be applied to observe the mechanical behaviour of composite materials subjected to loading and operation.
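At the core of subset-based DIC is locating each reference-image subset in the deformed image by maximizing a correlation score. The following is a minimal two-dimensional sketch using zero-normalized cross-correlation on synthetic speckle data with integer-pixel search only; a real DIC system (let alone the 3-D stereo variant in this paper) adds sub-pixel interpolation, subset shape functions, and camera calibration:

```python
import random

def ncc(a, b):
    """Zero-normalized cross-correlation of two equal-size pixel subsets."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    da = sum((x - ma) ** 2 for x in fa) ** 0.5
    db = sum((y - mb) ** 2 for y in fb) ** 0.5
    return num / (da * db)

def match_subset(ref, deformed, origin, size, search):
    """Find the integer-pixel displacement of one subset by NCC peak search."""
    r0, c0 = origin
    tmpl = [row[c0:c0 + size] for row in ref[r0:r0 + size]]
    best_score, best_disp = -2.0, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = r0 + dr, c0 + dc
            if r < 0 or c < 0 or r + size > len(deformed) or c + size > len(deformed[0]):
                continue
            cand = [row[c:c + size] for row in deformed[r:r + size]]
            score = ncc(tmpl, cand)
            if score > best_score:
                best_score, best_disp = score, (dr, dc)
    return best_disp

# Synthetic speckle image and a copy rigidly shifted by (1, 2) pixels
random.seed(0)
speckle = [[random.random() for _ in range(12)] for _ in range(12)]
shifted = [[speckle[(r - 1) % 12][(c - 2) % 12] for c in range(12)] for r in range(12)]
displacement = match_subset(speckle, shifted, origin=(4, 4), size=4, search=3)
```

Repeating this search over a grid of subsets yields the full-field displacement map from which strain near the weld crack is differentiated.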

Keywords: welded steel plate, crack variation, three-dimensional digital image correlation (DIC), cracked steel plate

Procedia PDF Downloads 520
404 Teachers Engagement to Teaching: Exploring Australian Teachers’ Attribute Constructs of Resilience, Adaptability, Commitment, Self/Collective Efficacy Beliefs

Authors: Lynn Sheridan, Dennis Alonzo, Hoa Nguyen, Andy Gao, Tracy Durksen

Abstract:

Disruptions to teaching (e.g., COVID-related) have increased work demands for teachers. There is an opportunity for research to explore evidence-informed steps to support teachers. Collective evidence indicates that teachers’ personal attributes (e.g., self-efficacy beliefs) in the workplace promote success in teaching and support teacher engagement. Teacher engagement plays a role in students’ learning and teachers’ effectiveness. Engaged teachers are better at overcoming work-related stress and burnout and are more likely to take on active roles. Teachers’ commitment is influenced by a host of personal (e.g., teacher well-being) and environmental factors (e.g., job stresses). The job demands-resources model provided a conceptual basis for examining how teachers’ well-being is influenced by job demands and job resources. Job demands potentially evoke strain and exceed the employee’s capability to adapt. Job resources entail what the job offers to individual teachers (e.g., organisational support), helping to reduce job demands. The application of the job demands-resources model involves gathering an evidence base of, and connection to, personal attributes (job resources). The study explored the association between the constructs (resilience, adaptability, commitment, self/collective efficacy) and a teacher’s engagement with the job. The paper sought to elaborate on the model and determine the associations between key constructs of well-being (resilience, adaptability), commitment, and motivation (self- and collective-efficacy beliefs) and teachers’ engagement in teaching. Data collection involved an online multi-dimensional instrument using validated items, distributed from 2020-2022. The instrument was designed to identify construct relationships. There were 170 participants. Data Analysis: The reliability coefficients, means, standard deviations, skewness, and kurtosis statistics for the six variables were computed. 
All scales had good reliability coefficients (.72-.96). A confirmatory factor analysis (CFA) and structural equation model (SEM) were performed to provide measurement support and to obtain latent correlations among factors. The final analysis was performed using structural equation modelling. Several fit indices were used to evaluate the model fit, including chi-square statistics and the root mean square error of approximation. The correlations among the constructs were all positive, with the highest found between teacher engagement and resilience (r = .80) and the lowest between teacher adaptability and collective teacher efficacy (r = .22). Given these associations, we proceeded with the CFA. The CFA yielded adequate fit: χ²(270, 1019) = 1836.79, p < .001, RMSEA = .04, CFI = .94, TLI = .93 and SRMR = .04. All values were within the threshold values, indicating a good model fit. Results indicate that increasing teacher self-efficacy beliefs will increase a teacher’s level of engagement, and that teacher adaptability and resilience are positively associated with self-efficacy beliefs, as are collective teacher efficacy beliefs. Implications for school leaders and school systems: 1. investing in increasing teachers’ sense of efficacy beliefs to manage work demands; 2. leadership approaches that enhance teachers' adaptability and resilience; and 3. a culture of collective efficacy support. Preparing teachers for now and for the future offers an important reminder to policymakers and school leaders of the importance of supporting teachers’ personal attributes when they are faced with the challenging demands of the job.
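The scale reliability coefficients reported above (.72-.96) are typically internal-consistency estimates. The abstract does not name the coefficient used, so the sketch below assumes Cronbach's alpha purely as an illustration, with a hypothetical three-item scale:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    items: one list of scores per item, all over the same respondents.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 3-item Likert scale answered by five respondents
scale = [
    [4, 5, 3, 2, 4],
    [4, 4, 3, 2, 5],
    [5, 5, 2, 3, 4],
]
alpha = cronbach_alpha(scale)
```

Values above roughly .70, like those reported for the six scales in this study, are conventionally read as acceptable internal consistency.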

Keywords: collective teacher efficacy, teacher self-efficacy, job demands, teacher engagement

Procedia PDF Downloads 133
403 Digital Technology Relevance in Archival and Digitising Practices in the Republic of South Africa

Authors: Tashinga Matindike

Abstract:

By means of definition, digital artworks encompass an array of artistic productions that are expressed in a technological form as an essential part of a creative process. Examples include illustrations, photos, videos, sculptures, and installations. Within the context of the visual arts, the process of repatriation involves the return of once-appropriated goods. Archiving denotes the preservation of a commodity for storage purposes in order to nurture its continuity. The aforementioned definitions form the foundation of the academic framework and premise of the argument outlined in this paper. This paper aims to define, discuss, and decipher the complexities involved in digitising artworks, whilst explaining the benefits of the process, particularly within the South African context, which is rich in tangible and intangible traditional cultural material, objects, and performances. With the internet having been introduced to the African continent in the early 1990s, this new form of technology brought a high degree of efficiency, which also resulted in the progressive transformation of computer-generated visual output. This, in turn, had a revolutionary influence on the manner in which technological software was developed and utilised in art-making. Digital technology and the digitisation of creative processes then opened up new avenues for collating and recording information. One of the first visual artists to make use of digital technology software in his creative productions was United States-based artist John Whitney, whose inventive work contributed greatly to the onset and development of digital animation. Comparable in technique and originality, South African contemporary visual artists who make digital artworks, both locally and internationally, include David Goldblatt, Katherine Bull, Fritha Langerman, David Masoga, Zinhle Sethebe, Alicia Mcfadzean, Ivan Van Der Walt, Siobhan Twomey, and Fhatuwani Mukheli. 
In conclusion, the main objective of this paper is to address the following questions: In which ways has the South African community of visual artists made use of and benefited from technology, in its digital form, as a means to further advance creativity? What positive changes in art production have resulted in South Africa since the onset and use of digital technological software? How has digitisation changed the manner in which we record, interpret, and archive both written and visual information? What is the role of South African art institutions in the development of digital technology and its use in the field of visual art? What role does digitisation play in the process of the repatriation of artworks and artefacts? The methodology of this paper takes a multifaceted form, encompassing analysis of data obtained through both qualitative and quantitative approaches.

Keywords: digital art, digitisation, technology, archiving, transformation and repatriation

Procedia PDF Downloads 55