Search results for: network user rules
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7709

869 Analysing the Moderating Effect of Customer Loyalty on Long Run Repurchase Intentions

Authors: John Akpesiri Olotewo

Abstract:

One of the controversies in the existing marketing literature is how to retain existing and new customers so that they have repurchase intentions in the long run; however, empirical answers to this question are scanty in existing studies. Thus, this study investigates the moderating effect of consumer loyalty on long-run repurchase intentions in the telecommunication industry, using the Lagos State environs. The study adopted a field survey research design, using a questionnaire to elicit responses from 250 respondents who were selected using random and stratified random sampling techniques from the telecommunication industry in Lagos State, Nigeria. The internal consistency of the research instrument was verified using Cronbach’s alpha; the result of 0.89 implies the acceptability of the internal consistency of the survey instrument. The research hypotheses were tested using the Pearson Product-Moment Correlation (PPMC), simple regression analysis, and inferential statistics with the aid of the Statistical Package for the Social Sciences (SPSS) version 20.0. The study confirmed that customer satisfaction has a significant relationship with customer loyalty in the telecommunication industry; service quality has a significant relationship with customer loyalty to a brand; loyalty programs have a significant relationship with customer loyalty to a network operator in Nigeria; and customer loyalty has a significant effect on the long-run repurchase intentions of the customer. The study concluded that one of the determinants of the long-term profitability of a business entity is the long-run repurchase intentions of its customers, which hinge on the level of brand loyalty of the customer. Thus, it was recommended that service providers in Nigeria should improve on factors like customer satisfaction, service quality, and loyalty programs in order to increase the loyalty of their customers to their brands, thereby increasing their repurchase intentions.
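For readers who want to reproduce the reliability and association checks described above, the sketch below computes Cronbach's alpha and a Pearson product-moment correlation in Python. It is a minimal illustration on simulated Likert-style responses, not the authors' SPSS workflow; the item counts, scores, and variable names are hypothetical.

```python
# Minimal sketch (not the authors' SPSS workflow) of the two statistics mentioned above:
# Cronbach's alpha for internal consistency and the Pearson product-moment correlation.
# The item responses below are hypothetical placeholders for Likert-scale questionnaire data.
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x questionnaire-items matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=250)                      # 250 respondents, as in the study
items = np.column_stack([latent + rng.normal(scale=0.5, size=250) for _ in range(8)])

loyalty = items.mean(axis=1)
repurchase = loyalty + rng.normal(scale=0.4, size=250)

print("Cronbach's alpha:", round(cronbach_alpha(items), 3))
r, p = stats.pearsonr(loyalty, repurchase)         # Pearson product-moment correlation
print("Pearson r:", round(r, 3), "p-value:", round(p, 4))
```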

Keywords: customer loyalty, long run repurchase intentions, brands, service quality and customer satisfaction

Procedia PDF Downloads 233
868 Reimagining the Management of Telco Supply Chain with Blockchain

Authors: Jeaha Yang, Ahmed Khan, Donna L. Rodela, Mohammed A. Qaudeer

Abstract:

Traditional supply chain silos still exist today due to the difficulty of establishing trust between various partners and technological barriers across industries. Companies lose opportunities and revenue and inadvertently make poor business decisions, resulting in further challenges. Blockchain technology can bring a new level of transparency by sharing information on a distributed ledger in a decentralized manner that creates a basis of trust for business. Blockchain is a loosely coupled, hub-style communication network in which trading partners interact indirectly for simpler integration, yet work together through the orchestration of their supply chain operations under a coherent, jointly developed process. Blockchain increases efficiencies, lowers costs, and improves interoperability to strengthen and automate the supply chain management process while all partners share the risk. The blockchain ledger is built to track the inventory lifecycle for supply chain transparency and keeps a journal of inventory movement for real-time reconciliation. A state design pattern is used to capture the lifecycle (behavior) of inventory management as a state machine for a common, transparent, and coherent process, which creates an opportunity for trading partners to become more responsive to process changes or improvements, reconcile discrepancies, and comply with internal governance and external regulations. It enables end-to-end, inter-company visibility at the unit level for more accurate demand planning with better insight into order fulfillment and replenishment.
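The state-machine idea can be illustrated with a short sketch. The states, transitions, and actor names below are hypothetical; in a production system each transition would be recorded as a transaction on the shared ledger (e.g., Hyperledger Fabric) rather than kept in memory.

```python
# Minimal sketch of the inventory-lifecycle state machine described above.
# State names and transitions are hypothetical illustrations, not the paper's schema.
from dataclasses import dataclass, field

TRANSITIONS = {
    "ORDERED":    {"SHIPPED"},
    "SHIPPED":    {"RECEIVED"},
    "RECEIVED":   {"RECONCILED", "DISPUTED"},
    "DISPUTED":   {"RECONCILED"},
    "RECONCILED": set(),
}

@dataclass
class InventoryItem:
    sku: str
    state: str = "ORDERED"
    history: list = field(default_factory=list)   # journal of movements for reconciliation

    def transition(self, new_state: str, actor: str) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"{self.state} -> {new_state} is not a valid lifecycle step")
        self.history.append((self.state, new_state, actor))
        self.state = new_state

item = InventoryItem(sku="SKU-001")
item.transition("SHIPPED", actor="supplier")
item.transition("RECEIVED", actor="distributor")
item.transition("RECONCILED", actor="distributor")
print(item.state, item.history)
```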

Keywords: supply chain management, inventory traceability, perpetual inventory system, inventory lifecycle, blockchain, inventory consignment, supply chain transparency, digital thread, demand planning, Hyperledger Fabric

Procedia PDF Downloads 90
867 The Impact of PM-Based Regulations on the Concentration and Sources of Fine Organic Carbon in the Los Angeles Basin from 2005 to 2015

Authors: Abdulmalik Altuwayjiri, Milad Pirhadi, Sina Taghvaee, Constantinos Sioutas

Abstract:

A significant portion of PM₂.₅ mass concentration is carbonaceous matter (CM), most of which exists in the form of organic carbon (OC). Ambient OC originates from a multitude of sources and plays an important role in global climate effects, visibility degradation, and human health. In this study, positive matrix factorization (PMF) was utilized to identify and quantify the long-term contribution of PM₂.₅ sources to total OC mass concentration in central Los Angeles (CELA) and Riverside (i.e., receptor site), using the chemical speciation network (CSN) database between 2005 and 2015, a period during which several state and local regulations on tailpipe emissions were implemented in the area. Our PMF resolved five different factors, including tailpipe emissions, non-tailpipe emissions, biomass burning, secondary organic aerosol (SOA), and local industrial activities for both sampling sites. The contribution of vehicular exhaust emissions to the OC mass concentrations significantly decreased from 3.5 µg/m³ in 2005 to 1.5 µg/m³ in 2015 (by about 58%) at CELA, and from 3.3 µg/m³ in 2005 to 1.2 µg/m³ in 2015 (by nearly 62%) at Riverside. Additionally, SOA contribution to the total OC mass, showing higher levels at the receptor site, increased from 23% in 2005 to 33% and 29% in 2010 and 2015, respectively, in Riverside, whereas the corresponding contribution at the CELA site was 16%, 21% and 19% during the same period. Biomass burning maintained an almost constant relative contribution over the whole period. Moreover, while the adopted regulations and policies were very effective at reducing the contribution of tailpipe emissions, they have led to an overall increase in the fractional contributions of non-tailpipe emissions to total OC in CELA (about 14%, 28%, and 28% in 2005, 2010 and 2015, respectively) and Riverside (22%, 27% and 26% in 2005, 2010 and 2015), underscoring the necessity to develop equally effective mitigation policies targeting non-tailpipe PM emissions.
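The factorization step can be sketched in a few lines. True PMF (e.g., EPA PMF 5.0) uses uncertainty-weighted least squares; scikit-learn's non-negative matrix factorization is shown here only as a simplified, non-negative stand-in, and the data matrix is a synthetic placeholder for CSN speciation data.

```python
# Minimal sketch of resolving non-negative source factors from a species-concentration matrix.
# NMF is used as a simplified stand-in for PMF; all data below are synthetic placeholders.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
n_samples, n_species, n_factors = 500, 12, 5       # 5 factors, as resolved in the study
X = rng.gamma(shape=2.0, scale=0.5, size=(n_samples, n_species))   # non-negative concentrations

model = NMF(n_components=n_factors, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)      # factor contributions per sample
F = model.components_           # factor profiles (species signatures)

# Mean contribution of each factor to the total mass across samples
factor_mass = (G * F.sum(axis=1)).mean(axis=0)
print("Mean factor contributions:", np.round(factor_mass, 3))
```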

Keywords: PM₂.₅, organic carbon, Los Angeles megacity, PMF, source apportionment, non-tailpipe emissions

Procedia PDF Downloads 198
866 Developing a Framework for Designing Digital Assessments for Middle-school Aged Deaf or Hard of Hearing Students in the United States

Authors: Alexis Polanco Jr, Tsai Lu Liu

Abstract:

Research on digital assessment for deaf and hard of hearing (DHH) students is negligible. Part of this stems from DHH assessment design existing at the intersection of the emergent disciplines of usability, accessibility, and child-computer interaction (CCI). While these disciplines have some prevailing guidelines (e.g., in user experience design (UXD), there are Jacob Nielsen's 10 Usability Heuristics (Nielsen-10); for accessibility, there are the Web Content Accessibility Guidelines (WCAG) and the Principles of Universal Design (PUD)), this research was unable to uncover a unified set of guidelines. Given that digital assessments have lasting implications for the funding and shaping of U.S. school districts, it is vital that cross-disciplinary guidelines emerge. As a result, this research seeks to provide a framework by which these disciplines can share knowledge. The framework entails a process of asking subject-matter experts (SMEs) and design and development professionals to self-describe their fields of expertise and how their work might serve DHH students, and to expose any incongruence between their ideal process and what is permissible at their workplace. This research used two rounds of mixed methods. The first round consisted of structured interviews with SMEs in usability, accessibility, CCI, and DHH education. These practitioners were not designers by trade but were revealed to use designerly work processes. In addition to asking these SMEs about their field of expertise, work process, etc., these SMEs were asked whether they believed Nielsen-10 and/or PUD were sufficient for designing products for middle-school DHH students. This first round of interviews revealed that Nielsen-10 and PUD were, at best, a starting point for creating middle-school DHH design guidelines or, at worst, insufficient. The second round of interviews followed a semi-structured interview methodology. The SMEs who were interviewed in the first round were asked open-ended follow-up questions about their semantic understanding of guidelines, going from the most general sense down to the level of design guidelines for DHH middle school students. Designers and developers who had not been interviewed previously were asked the same questions that the SMEs had been asked across both rounds of interviews. In terms of the research goals, it was confirmed that the design of digital assessments for DHH students is inherently cross-disciplinary. Unexpectedly, 1) guidelines did not emerge from the interviews conducted in this study, and 2) the principles of Nielsen-10 and PUD were deemed to be less relevant than expected. Given the prevalence of Nielsen-10 in UXD curricula across academia and certificate programs, this poses a risk to the efficacy of DHH assessments designed by UX designers. Furthermore, the following findings emerged: A) deep collaboration between the disciplines of usability, accessibility, and CCI is low to non-existent; B) there are no universally agreed-upon guidelines for designing digital assessments for DHH middle school students; C) these disciplines are structured academically and professionally in such a way that practitioners may not know to reach out to other disciplines. For example, accessibility teams at large organizations do not have designers and accessibility specialists on the same team.

Keywords: deaf, hard of hearing, design, guidelines, education, assessment

Procedia PDF Downloads 67
865 Present-Day Transformations and Trends in Rooftop Agriculture and Food Security

Authors: Kiara Lawrence, Nadine Ponnusamy, Clive Greenstone

Abstract:

One of the major challenges facing society today is food security. The risks to food security have increased significantly due to the evolving urban landscape, globalization, and a rising population. The cultivation of food is essential, particularly during times of crisis such as a recession, and has long been a necessity for urban populations. In contemporary society, many urban residents are confronted with new challenges, including high levels of unemployment, which compel individuals to adopt alternative survival strategies, such as growing their own food. Recently, rooftop agriculture has made significant contributions to urban and national food security and has been utilized as a tool to mitigate the frequent and damaging disasters that many cities encounter. Rooftop gardens have the potential to transform unused spaces into green, productive vegetable plots, while also providing urban residents with the opportunity to enjoy the benefits of gardening. Therefore, this study investigates the evolving themes around rooftop agriculture and food security globally. A bibliometric review was carried out on Scopus and Web of Science using the keywords “rooftop agriculture” OR “rooftop farming” OR “rooftop garden” AND “food security” between 2004 and 2024 to ensure that a broad scope around the chosen topic was covered. VOSviewer software was then utilized to analyze the extracted data and create network visualization maps based on keyword occurrences, co-authorship analysis, and country analysis. There were only 37 relevant documents within the study parameters. Preliminary results indicate that much research focused on urban agriculture, food supply, green roofs, sustainability, and climate change. By analysing these aspects of rooftop agriculture and food security, the trends can reveal gaps in the literature and inform future applications to assist in food security.

Keywords: food security, rooftop agriculture, rooftop farming, rooftop garden

Procedia PDF Downloads 16
864 Shared Vision System Support for Maintenance Tasks of Wind Turbines

Authors: Buket Celik Ünal, Onur Ünal

Abstract:

Communication is the most challenging part of maintenance operations. Communication between the expert and the fieldworker is crucial for effective maintenance, and it also affects the safety of the fieldworkers. To support a machine user in a remote collaborative physical task, both a mobile and a stationary device are needed. Such a system is called a shared vision system, and it supports two people in different places working together to solve a problem. This system reduces errors and provides reliable support for qualified and less qualified users. This research aimed to validate the effectiveness of using a shared vision system to facilitate communication between on-site workers and those issuing instructions regarding maintenance or inspection work over long distances. The system is designed around a head-worn display. As part of this study, a substitute system was implemented and used with the shared vision system for a maintenance operation. The benefits of using a shared vision system are analyzed, and the results are adapted to wind turbines to improve occupational safety and health for maintenance technicians. The motivation for the research effort in this study can be summarized in the following research questions: How can an expert support a technician over long distances during maintenance operations? What are the advantages of using a shared vision system? Experience from the experiment shows that using a shared vision system is an advantage for both electrical and mechanical system failures. The results support that the shared vision system can be used for wind turbine maintenance and repair tasks, because the wind turbine generator/gearbox and the substitute system have similar failures: electrical failures, such as voltage irregularities and wiring failures, and mechanical failures, such as alignment, vibration, and over-speed conditions, are common to both. Furthermore, the effectiveness of the shared vision system was analyzed by using smart glasses in connection with the maintenance task performed on the substitute system under four different conditions, namely using the shared vision system, audio communication only, a smartphone, and working alone. The Chi-Square Test for Independence, a suitable method for determining dependencies between two qualitative variables, was used to assess relationships between factors, and the Mann-Whitney U Test was used to compare two data sets. Based on this experiment, no relation was found between the results and gender. Participants' responses confirmed that the shared vision system is efficient and helpful for maintenance operations. The results show a statistically significant difference in the average time taken by subjects between working with the shared vision system and working under the other conditions. Additionally, this study confirmed that a shared vision system provides a reduction in the time to diagnose and resolve maintenance issues, a reduction in diagnosis errors, reduced travel costs for experts, and increased reliability in service.
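The two statistical tests named above are readily available in SciPy. The sketch below is a minimal illustration; the contingency table and task-time samples are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the Chi-Square Test for Independence and the Mann-Whitney U Test.
# All numbers are hypothetical placeholders, not the study's measurements.
import numpy as np
from scipy import stats

# Chi-square test for independence: e.g., diagnosis outcome vs. gender
contingency = np.array([[18, 4],    # group A: solved, not solved
                        [16, 5]])   # group B: solved, not solved
chi2, p_chi, dof, expected = stats.chi2_contingency(contingency)
print(f"Chi-square = {chi2:.2f}, p = {p_chi:.3f}")

# Mann-Whitney U test: task completion times (seconds) under two conditions
time_shared_vision = [210, 185, 240, 200, 195, 220, 205]
time_audio_only    = [310, 295, 330, 350, 280, 305, 340]
u, p_u = stats.mannwhitneyu(time_shared_vision, time_audio_only, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p_u:.4f}")
```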

Keywords: communication support, maintenance and inspection tasks, occupational health and safety, shared vision system

Procedia PDF Downloads 260
863 Deterministic and Stochastic Modeling of a Micro-Grid Management for Optimal Power Self-Consumption

Authors: D. Calogine, O. Chau, S. Dotti, O. Ramiarinjanahary, P. Rasoavonjy, F. Tovondahiniriko

Abstract:

Mafate is a natural cirque in the north-western part of Reunion Island, without an electrical grid or road network. A micro-grid concept is being experimented with in this area, composed of photovoltaic production combined with electrochemical batteries, in order to meet the local population's electricity demands through self-consumption. This work develops a discrete model as well as a stochastic model in order to reach an optimal equilibrium between production and consumption for a cluster of houses. The management of the energy flows leads to a large linearized programming system, where the time interval of interest is 24 hours. The experimental data are the solar production, the stored energy, and the parameters of the different electrical devices and batteries. The unknown variables to evaluate are the consumption of the various electrical services, the energy drawn from and stored in the batteries, and the inhabitants' planning wishes. The objective is to fit the solar production to the electrical consumption of the inhabitants, with an optimal use of the energy in the batteries, while satisfying the users' planning requirements as widely as possible. In the discrete model, the different parameters and solutions of the linear programming system are deterministic scalars, whereas in the stochastic approach, the data parameters and the linear programming solutions become random variables, whose distributions can be imposed or established by estimation from samples of real observations or from samples of optimal discrete equilibrium solutions.
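A toy version of the deterministic formulation can be written as a small linear program. The sketch below assumes the PuLP package and uses entirely hypothetical numbers (hourly solar profile, battery size, demand bounds); the real model in the study is far larger and includes the inhabitants' planning constraints.

```python
# Minimal sketch of a 24-hour self-consumption scheduling LP (hypothetical data, PuLP solver).
import pulp

T = range(24)
solar = [0, 0, 0, 0, 0, 0.2, 0.8, 1.5, 2.2, 2.8, 3.0, 3.1,     # kWh per hour (placeholder)
         3.0, 2.7, 2.2, 1.5, 0.8, 0.3, 0, 0, 0, 0, 0, 0]
batt_cap, batt_max_flow = 6.0, 1.5                              # kWh, kWh/h (assumed)

prob = pulp.LpProblem("self_consumption", pulp.LpMaximize)
use    = [pulp.LpVariable(f"use_{t}", lowBound=0.1, upBound=2.0) for t in T]   # household consumption
charge = [pulp.LpVariable(f"chg_{t}", lowBound=0, upBound=batt_max_flow) for t in T]
dischg = [pulp.LpVariable(f"dis_{t}", lowBound=0, upBound=batt_max_flow) for t in T]
soc    = [pulp.LpVariable(f"soc_{t}", lowBound=0, upBound=batt_cap) for t in T]

for t in T:
    # hourly energy balance: solar + battery discharge covers consumption + charging
    prob += solar[t] + dischg[t] == use[t] + charge[t]
    prev = batt_cap / 2 if t == 0 else soc[t - 1]
    prob += soc[t] == prev + charge[t] - dischg[t]

prob += pulp.lpSum(use)          # objective: serve as much demand as possible
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("Total energy served (kWh):", round(pulp.value(prob.objective), 2))
```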

Keywords: photovoltaic production, power consumption, battery storage resources, random variables, stochastic modeling, estimations of probability distributions, mixed integer linear programming, smart micro-grid, self-consumption of electricity

Procedia PDF Downloads 110
862 Functionalized Nanoporous Ceramic Membranes for Electrodialysis Treatment of Harsh Wastewater

Authors: Emily Rabe, Stephanie Candelaria, Rachel Malone, Olivia Lenz, Greg Newbloom

Abstract:

Electrodialysis (ED) is a well-developed technology for ion removal in a variety of applications. However, many industries generate harsh wastewater streams that are incompatible with traditional ion exchange membranes. Membrion® has developed novel ceramic-based ion exchange membranes (IEMs) offering several advantages over traditional polymer membranes: high performance in low pH, chemical resistance to oxidizers, and a rigid structure that minimizes swelling. These membranes are synthesized with our patented silane-based sol-gel techniques. The pore size, shape, and network structure are engineered through a molecular self-assembly process where thermodynamic driving forces are used to direct where and how pores form. Either cationic or anionic groups can be added within the membrane nanopore structure to create cation- and anion-exchange membranes. The ceramic IEMs are produced on a roll-to-roll manufacturing line with low-temperature processing. Membrane performance testing is conducted using in-house permselectivity, area-specific resistance, and ED stack testing setups. Ceramic-based IEMs show comparable performance to traditional IEMs and offer some unique advantages. Long exposure to highly acidic solutions has a negligible impact on ED performance. Additionally, we have observed stable performance in the presence of strong oxidizing agents such as hydrogen peroxide. This stability is expected, as the ceramic backbone of these materials is already in a fully oxidized state. This data suggests ceramic membranes, made using sol-gel chemistry, could be an ideal solution for acidic and/or oxidizing wastewater streams from processes such as semiconductor manufacturing and mining.

Keywords: ion exchange, membrane, silane chemistry, nanostructure, wastewater

Procedia PDF Downloads 86
861 CRM Cloud Computing: An Efficient and Cost Effective Tool to Improve Customer Interactions

Authors: Gaurangi Saxena, Ravindra Saxena

Abstract:

Lately, cloud computing has been used to enhance the ability to attain corporate goals more effectively and efficiently at lower cost. This new computing paradigm, cloud computing, has emerged as a powerful tool for the optimum utilization of resources and for gaining competitiveness through cost reduction and achieving business goals with greater flexibility. Realizing the importance of this new technique, most of the well-known companies in the computer industry, like Microsoft, IBM, Google, and Apple, are spending millions of dollars researching cloud computing and investigating the possibility of producing interface hardware for cloud computing systems. It is believed that by using the right middleware, a cloud computing system can execute all the programs a normal computer could run. Potentially, everything from the simplest generic word processing software to highly specialized and customized programs designed for a specific company could work successfully on a cloud computing system. A cloud is a pool of virtualized computer resources. Clouds are not limited to grid environments, but also support interactive user-facing applications such as web applications and three-tier architectures. Cloud computing is not a fundamentally new paradigm. It draws on existing technologies and approaches, such as utility computing, software-as-a-service, distributed computing, and centralized data centers. Some companies rent physical space to store servers and databases because they don't have it available on site. Cloud computing gives these companies the option of storing data on someone else's hardware, removing the need for physical space on the front end. Prominent service providers like Amazon, Google, Sun, IBM, Oracle, and Salesforce are extending computing infrastructures and platforms as a core for providing top-level services for computation, storage, databases, and applications. Application services could include email, office applications, finance, video, audio, and data processing. By using a cloud computing system, a company can improve its customer relationship management. A CRM cloud computing system may be highly useful in delivering to a sales team a blend of unique functionalities to improve agent/customer interactions. This paper first defines cloud computing as a tool for running business activities more effectively and efficiently at a lower cost, and then distinguishes cloud computing from grid computing. Based on an exhaustive literature review, the authors discuss the application of cloud computing in different disciplines of management, especially in the field of marketing, with special reference to the use of cloud computing in CRM. The study concludes that a CRM cloud computing platform helps a company track any data, such as orders, discounts, references, competitors, and many more. By using CRM cloud computing, companies can improve their customer interactions and, by serving customers more efficiently at a lower cost, gain a competitive advantage.

Keywords: cloud computing, competitive advantage, customer relationship management, grid computing

Procedia PDF Downloads 312
860 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods

Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard

Abstract:

Non-invasive samples are an alternative to collecting genetic samples directly. Non-invasive samples are collected without manipulation of the animal (e.g., scats, feathers, and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping. Those errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that can accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons among them. A comparison of methods with datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially for endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three different matching algorithms (Cervus, Colony, and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced a smaller number of unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not surprising given the similarity of their pairwise likelihood and clustering algorithms. The matching of ETLM showed almost no similarity with the genotypes that were matched by the other methods. The different clustering algorithm and error model of ETLM seem to lead to a more rigorous selection, although the processing time and interface friendliness of ETLM were the worst among the compared methods. The population estimators performed differently depending on the dataset. There was a consensus between the different estimators for only one dataset. BayesN showed both higher and lower estimates when compared with Capwire. BayesN does not consider the total number of recaptures as Capwire does, only the recapture events, which makes the estimator sensitive to data heterogeneity, here meaning different capture rates between individuals. In these examples, the tolerance for homogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An expanded analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be more appropriate for general use, considering a time/interface/robustness balance. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.

Keywords: algorithms, genetics, matching, population

Procedia PDF Downloads 143
859 Detecting Critical Thinking Skills in Written Text Analysis: The Use of Artificial Intelligence in Text Analysis vs. ChatGPT

Authors: Lucilla Crosta, Anthony Edwards

Abstract:

Companies and the marketplace nowadays struggle to find employees with adequate skills in relation to the anticipated growth of their businesses. At least half of workers will need to undertake some form of up-skilling in the next five years in order to remain aligned with the demands of the market. In order to meet these challenges, there is a clear need to explore the potential uses of AI (Artificial Intelligence)-based tools in assessing the transversal skills (critical thinking, communication, and soft skills of different types in general) of workers and adult students while empowering them to develop those same skills in a reliable, trustworthy way. Companies seek workers with key transversal skills that can make a difference between workers now and in the future. However, critical thinking seems to be one of the most important skills, bringing unexplored ideas and company growth in business contexts. What employers have been reporting for years now is that this skill is lacking in the majority of workers and adult students, and this is particularly visible through their writing. This paper investigates how critical thinking and communication skills are currently developed in Higher Education environments through the use of AI tools at postgraduate level. It analyses the use of a branch of AI, namely machine learning and big data, and of neural network analysis. It also examines the potential effect of the acquisition of these skills through AI tools and what kind of effects this has on employability. This paper draws information from researchers and studies at both national (Italy and UK) and international levels in Higher Education. The issues associated with the development and use of one specific AI tool, Edulai, are examined in detail. Finally, comparisons are also made between these tools and the more recent phenomenon of ChatGPT, and shortcomings and drawbacks are analysed.

Keywords: critical thinking, artificial intelligence, higher education, soft skills, ChatGPT

Procedia PDF Downloads 110
858 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator in order to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, in the end, the outcome of treatment for every single patient. Therefore, it is strongly advised by international recommendations to set up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, which are used on a daily, weekly, monthly, or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated phantom CIRS 062QA, and a QA phantom supplied with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software, which enables fast and simple evaluation of CT QA parameters using the phantom provided with the CT simulator. On the other hand, the recommendations contain additional tests, which were done with the CIRS phantom. Also, legislation on ionizing radiation protection requires CT testing at defined intervals. Taking into account the requirements of the law, the built-in tests of the CT simulator, and international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated in the study were the following: CT number accuracy, field uniformity, the complete CT-to-ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy limits are +/- 5 HU of the value at commissioning. Field uniformity: +/- 10 HU in selected ROIs. The complete CT-to-ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of not more than 5%. Spatial and contrast resolution tests must comply with the tests obtained at commissioning; otherwise, the machine requires service. The result of the image noise test must fall within 20% of the baseline value. Slice thickness must meet manufacturer specifications, and patient table stability with longitudinal transfer of the loaded table must not show more than 2 mm of vertical deviation. Conclusion: The implemented QA tests gave an overall basic understanding of CT simulator functionality and its clinical effectiveness in radiation treatment planning. The legal requirement for the clinic is to set up its own QA programme with minimum testing, but it remains the user's decision whether additional testing, as recommended by international organizations, will be implemented, so as to improve the overall quality of the radiation treatment planning procedure, as the CT image quality used for radiation treatment planning influences the delineation of a tumor and the calculation accuracy of the treatment planning system, and finally the delivery of radiation treatment to a patient.
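The tolerance checks listed above lend themselves to simple automation. The sketch below applies the limit values quoted in the abstract to hypothetical measured values; it is an illustration only, and a real QC programme would log and trend the results rather than just print pass/fail.

```python
# Minimal sketch of the CT simulator QC tolerance checks (limits from the abstract;
# measured values are hypothetical examples).
BASELINE = {"ct_number_water_HU": 0.5, "noise_HU": 5.0}

def check(name, value, ok, unit=""):
    print(f"{name:32s} {value:>8} {unit:<4} -> {'PASS' if ok else 'FAIL'}")

measured = {
    "ct_number_water_HU": 2.1,     # daily water CT number
    "uniformity_dev_HU": 6.0,      # max ROI deviation from centre
    "noise_HU": 5.6,               # image noise (std dev in HU)
    "table_vertical_dev_mm": 1.2,  # vertical deviation with loaded table shift
    "ct_to_ed_dev_percent": 3.1,   # worst-case deviation from commissioning curve
}

check("CT number accuracy", measured["ct_number_water_HU"],
      abs(measured["ct_number_water_HU"] - BASELINE["ct_number_water_HU"]) <= 5, "HU")
check("Field uniformity", measured["uniformity_dev_HU"],
      abs(measured["uniformity_dev_HU"]) <= 10, "HU")
check("Image noise", measured["noise_HU"],
      abs(measured["noise_HU"] - BASELINE["noise_HU"]) / BASELINE["noise_HU"] <= 0.20, "HU")
check("Table stability", measured["table_vertical_dev_mm"],
      measured["table_vertical_dev_mm"] <= 2, "mm")
check("CT-to-ED curve", measured["ct_to_ed_dev_percent"],
      measured["ct_to_ed_dev_percent"] <= 5, "%")
```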

Keywords: CT simulator, radiotherapy, quality control, QA programme

Procedia PDF Downloads 534
857 Application of a Confirmatory Composite Model for Assessing the Extent of Agricultural Digitalization: A Case of Proactive Land Acquisition Strategy (PLAS) Farmers in South Africa

Authors: Mazwane S., Makhura M. N., Ginege A.

Abstract:

Digitalization in South Africa has received considerable attention from policymakers. The support for the development of the digital economy by the South African government has been demonstrated through the enactment of various national policies and strategies. This study sought to develop an index for agricultural digitalization by applying confirmatory composite analysis (CCA). Another aim was to determine the factors that affect the development of digitalization in PLAS farms. Data on the indicators of the three dimensions of digitalization were collected from 300 Proactive Land Acquisition Strategy (PLAS) farms in South Africa using semi-structured questionnaires. Confirmatory composite analysis (CCA) was employed to reduce the items into three digitalization dimensions and ultimately to a digitalization index. Standardized digitalization index scores were extracted and fitted to a linear regression model to determine the factors affecting digitalization development. The results revealed that the model shows practical validity and can be used to measure digitalization development, as the measures of fit (geodesic distance, standardized root mean square residual, and squared Euclidean distance) were all below their respective 95% quantiles of bootstrap discrepancies (HI95 values). Therefore, digitalization is an emergent variable that can be measured using CCA. The average level of digitalization in PLAS farms was 0.2 and varied significantly across provinces. The factors that significantly influence digitalization development in PLAS land reform farms were age, gender, farm type, network type, and cellular data type. This should enable researchers and policymakers to understand the level of digitalization and patterns of development, as well as correctly attribute digitalization development to the contributing factors.
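The last step described above (turning standardized indicator scores into a composite index and regressing it on farm characteristics) can be sketched briefly. The indicators, equal weights, and covariates below are hypothetical placeholders; the study estimates the composite with CCA rather than the naive equal weighting used here.

```python
# Minimal sketch of a standardized composite index regressed on farm characteristics
# (equal weights as a placeholder for the CCA-estimated composite; synthetic data).
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 300                                              # 300 PLAS farms, as in the study
indicators = pd.DataFrame(rng.random((n, 6)),
                          columns=[f"ind_{i}" for i in range(6)])

z = (indicators - indicators.mean()) / indicators.std(ddof=0)   # standardize indicators
digital_index = z.mean(axis=1)                                   # equal-weight composite (placeholder)

covariates = pd.DataFrame({
    "age": rng.integers(20, 70, n),
    "gender": rng.integers(0, 2, n),
    "farm_type": rng.integers(0, 3, n),
})
reg = LinearRegression().fit(covariates, digital_index)
print(dict(zip(covariates.columns, np.round(reg.coef_, 3))))
```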

Keywords: agriculture, digitalization, confirmatory composite model, land reform, proactive land acquisition strategy, South Africa

Procedia PDF Downloads 63
856 An Evolutionary Perspective on the Role of Extrinsic Noise in Filtering Transcript Variability in Small RNA Regulation in Bacteria

Authors: Rinat Arbel-Goren, Joel Stavans

Abstract:

Cell-to-cell variations in transcript or protein abundance, called noise, may give rise to phenotypic variability between isogenic cells, enhancing the probability of survival under stress conditions. These variations may be introduced by post-transcriptional regulatory processes, such as the stoichiometric degradation of target transcripts by non-coding small RNAs in bacteria. We study the iron homeostasis network in Escherichia coli, in which the RyhB small RNA regulates the expression of various targets, as a model system. Using fluorescence reporter genes to detect protein levels and single-molecule fluorescence in situ hybridization to monitor transcript levels in individual cells allows us to compare noise at both the transcript and protein levels. The experimental results and computer simulations show that extrinsic noise buffers, through a feed-forward loop configuration, the increase in variability introduced at the transcript level by iron deprivation, illuminating the important role that extrinsic noise plays during stress. Surprisingly, extrinsic noise also decouples the fluctuations of two different targets, in spite of RyhB being a common upstream factor degrading both. Thus, phenotypic variability increases under stress conditions by the decoupling of target fluctuations in the same cell rather than by increasing the noise of each. We also present preliminary results on the adaptation of cells to prolonged iron deprivation in order to shed light on the evolutionary role of post-transcriptional downregulation by small RNAs.

Keywords: cell-to-cell variability, Escherichia coli, noise, single-molecule fluorescence in situ hybridization (smFISH), transcript

Procedia PDF Downloads 164
855 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores

Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan

Abstract:

Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract a specific feature vector of different dimensions from a facial image according to their pre-trained neural network. However, to improve the efficiency of parameter calculation, an algorithm generally reduces the image details by pooling. This operation overlooks details of great concern to forensic experts. In our experiment, we adopted a variety of face recognition algorithms based on deep learning and compared a large number of naturally collected face images with known frontal ID photos of the same person. Downscaling and manual handling were performed on the testing images. The results indicated that facial recognition algorithms based on deep learning detected structural and morphological information and rarely focused on specific markers such as stains and moles. Overall performance, the distribution of genuine and impostor scores, and likelihood ratios were tested to evaluate the accuracy of the biometric systems and the forensic experts. Experiments showed that the biometric systems were skilled in distinguishing category features, and forensic experts were better at discovering the individual features of human faces. In the proposed approach, a fusion was performed at the score level. At the specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of the objective method of facial comparison and provides a novel method for human-machine collaboration in this field.
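Score-level fusion of a system output and a human assessment is easy to illustrate when both are expressed as log-likelihood ratios (LLRs). The sketch below assumes conditional independence (so the fused LLR is the sum) and uses hypothetical scores and threshold; it is not the fusion rule reported in the paper.

```python
# Minimal sketch of score-level fusion of system and examiner log-likelihood ratios
# (naive-Bayes style sum under an independence assumption; all scores hypothetical).
import numpy as np

def fuse_llr(system_llr: np.ndarray, examiner_llr: np.ndarray) -> np.ndarray:
    """Add the log-likelihood ratios, assuming conditional independence."""
    return system_llr + examiner_llr

system_llr   = np.array([ 4.2, -1.1, 0.3,  5.0, -3.8])   # automated face recognition
examiner_llr = np.array([ 2.0, -0.5, 1.4,  1.8, -2.2])   # forensic expert assessment

fused = fuse_llr(system_llr, examiner_llr)
threshold = 0.0                                           # decide "same person" if LLR > 0
print("Fused LLRs:", fused)
print("Same-person decisions:", fused > threshold)
```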

Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics

Procedia PDF Downloads 130
854 Adapting Tools for Text Monitoring and for Scenario Analysis Related to the Field of Social Disasters

Authors: Svetlana Cojocaru, Mircea Petic, Inga Titchiev

Abstract:

Humanity is faced more and more often with different social disasters, which in turn can generate new accidents and catastrophes. To mitigate their consequences, it is important to obtain possible early signals about events which are occurring or may occur, and to prepare the corresponding scenarios that could be applied. Our research is focused on solving two problems in this domain: identifying signals that an accident has occurred or may occur, and mitigating some consequences of disasters. To solve the first problem, methods of selecting and processing texts from the Internet are developed. Information in Romanian is of special interest to us. In order to obtain the mentioned tools, we follow several steps, divided into a preparatory stage and a processing stage. During the first stage, we manually collected over 724 news articles and classified them into 10 categories of social disasters, constituting more than 150 thousand words. Using this information, a controlled vocabulary of more than 300 keywords was elaborated, which will help in the process of classification and identification of texts related to the field of social disasters. To solve the second problem, the formalism of Petri nets has been used. We deal with the problem of inhabitants' evacuation in useful time. Analysis methods such as the reachability or coverability tree and the invariant technique will be used to determine dynamic properties of the modeled systems. To perform a case study of the properties of the evacuation system extended with time, the analysis modules of PIPE, such as Generalized Stochastic Petri Net (GSPN) Analysis, Simulation, State Space Analysis, and Invariant Analysis, have been used. These modules helped us to obtain the average number of persons situated in the rooms and the other quantitative properties and characteristics related to its dynamics.

Keywords: lexicon of disasters, modelling, Petri nets, text annotation, social disasters

Procedia PDF Downloads 197
853 Investigation of Different Machine Learning Algorithms in Large-Scale Land Cover Mapping within the Google Earth Engine

Authors: Amin Naboureh, Ainong Li, Jinhu Bian, Guangbin Lei, Hamid Ebrahimy

Abstract:

Large-scale land cover mapping has become a new challenge in the land change and remote sensing fields because it involves a large volume of data. Moreover, selecting the right classification method, especially when there are different types of landscapes in the study area, is quite difficult. This paper is an attempt to compare the performance of different machine learning (ML) algorithms for generating a land cover map of the China-Central Asia–West Asia Corridor, which is considered one of the main parts of the Belt and Road Initiative (BRI) project. The cloud-based Google Earth Engine (GEE) platform was used for generating a land cover map of the study area from Landsat-8 images (2017) by applying three frequently used ML algorithms: random forest (RF), support vector machine (SVM), and artificial neural network (ANN). The selected ML algorithms (RF, SVM, and ANN) were trained and tested using reference data obtained from the MODIS yearly land cover product and very high-resolution satellite images. The findings of the study illustrated that, among the three frequently used ML algorithms, RF with 91% overall accuracy had the best result in producing a land cover map for the China-Central Asia–West Asia Corridor, whereas ANN showed the worst result with 85% overall accuracy. The great performance of the GEE in applying different ML algorithms and handling a huge volume of remotely sensed data in the present study showed that it could also help researchers to generate reliable long-term land cover change maps. The findings of this research are of great importance for decision-makers and the BRI's authorities in strategic land use planning.
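The classification step itself is a standard supervised learning problem. The sketch below trains a random forest on per-pixel spectral features outside GEE, using synthetic band values and labels as placeholders; the study ran RF, SVM, and ANN inside the Google Earth Engine on Landsat-8 composites.

```python
# Minimal sketch of random forest land cover classification on per-pixel spectral features
# (synthetic data as a placeholder for Landsat-8 bands and MODIS-derived reference labels).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_pixels, n_bands, n_classes = 5000, 7, 6          # e.g., Landsat-8 reflective bands
X = rng.random((n_pixels, n_bands))
y = rng.integers(0, n_classes, size=n_pixels)      # reference land cover labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)
print("Overall accuracy:", round(accuracy_score(y_te, rf.predict(X_te)), 3))
```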

Keywords: land cover, google earth engine, machine learning, remote sensing

Procedia PDF Downloads 113
852 Impact of Charging PHEV at Different Penetration Levels on Power System Network

Authors: M. R. Ahmad, I. Musirin, M. M. Othman, N. A. Rahmat

Abstract:

Plug-in Hybrid-Electric Vehicles (PHEVs) have gained immense popularity in recent years. A PHEV offers numerous advantages compared to a conventional internal-combustion engine (ICE) vehicle. Millions of PHEVs are estimated to be on the road in the USA by 2020. Uncoordinated PHEV charging is believed to cause severe impacts on the power grid, i.e., overloading of feeders, lines, and transformers, and voltage drop. Nevertheless, improper PHEV data models used in such studies may render their findings inappropriate. Although smart charging has become more attractive to researchers in recent years, its implementation is not yet attainable on the street due to its requirement for physical infrastructure readiness and technological advancement. As a first step, it is best to study the impact of charging PHEVs based on real vehicle travel data from the National Household Travel Survey (NHTS) and at the present charging rate. Due to the current lack of charging stations on the street, charging the PHEV at home is the best option and has been considered in this work. This paper proposes a technique that comprehensively presents the impact of charging PHEVs on power system networks, considering a huge number of PHEV samples with their traveling data patterns. A Vehicle Charging Load Profile (VCLP) is developed and implemented in the IEEE 30-bus test system, which represents a portion of the American Electric Power System (Midwestern US). A normalization technique is used to match real-time loads at all buses. Results from the study indicate that charging PHEVs using opportunity charging will have significant impacts on power system networks, especially where bigger battery capacities (kWh) are used as well as at higher penetration levels.
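The construction of a charging load profile from travel data can be sketched simply: each vehicle starts charging on home arrival at a fixed charger rating until its battery is full, and the aggregate hourly energy is then normalized. The arrival times, charger rating, and battery energies below are hypothetical placeholders, not the NHTS-derived data used in the study.

```python
# Minimal sketch of building and normalizing a Vehicle Charging Load Profile (VCLP)
# from hypothetical home-arrival times and recharge energies.
import numpy as np

rng = np.random.default_rng(7)
n_vehicles = 1000
charger_kw = 3.3                                             # assumed home charging rate
arrivals = rng.normal(loc=18, scale=2, size=n_vehicles) % 24 # home-arrival hour
energy_needed = rng.uniform(4, 12, size=n_vehicles)          # kWh to recharge

profile = np.zeros(24)
for arr, kwh in zip(arrivals, energy_needed):
    hours = kwh / charger_kw
    t = int(arr)
    while hours > 0:
        dt = min(1.0, hours)
        profile[t % 24] += charger_kw * dt                   # kWh drawn in this hour
        hours -= dt
        t += 1

normalized = profile / profile.max()                         # one simple normalization choice
print(np.round(normalized, 2))
```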

Keywords: plug-in hybrid electric vehicle, transportation electrification, impact of charging PHEV, electricity demand profile, load profile

Procedia PDF Downloads 287
851 Toxic Masculinity as Dictatorship: Gender and Power Struggles in Tomás Eloy Martínez's Novels

Authors: Mariya Dzhyoyeva

Abstract:

In the present paper, I examine manifestations of toxic masculinity in the novels by Tomás Eloy Martínez, a post-Boom author, journalist, literary critic, and one of the representatives of the Argentine writing diaspora. I focus on the analysis of Martínez's characters that display hypermasculine traits to define the relationship between toxic masculinity and power, including the power of authorship and violence, as they are represented in his novels. The analysis reveals a complex network in which gender, power, and violence are intertwined and influence and modify each other. As the author exposes toxic masculine behaviors that generate violence, he seeks to undermine them. Departing from M. Kimmel's idea of masculinity as homophobia, I examine how Martínez "outs" his characters by incorporating into the narrative some secret, privileged sources that provide alternative accounts of their otherwise hypermasculine lives. These background stories expose their "weaknesses," both physical and mental, and thereby feminize them in their own eyes. In a similar way, the toxic masculinity of the fictional male author who wields his power by abusing the written word as he abuses the female character in the story is exposed as a complex of insecurities accumulated by the character due to his childhood trauma. The artistic technique that Martínez uses to condemn the authoritarian male behavior is accessing his subjectivity and subverting it through a multiplicity of identities. Martínez takes over the character's "I" and turns it into a host of pronouns with a constantly shifting point of reference that distorts not only the notions of gender but also the very notion of identity. In doing so, he takes the character's affirmation of masculinity to the limit where the very idea of it becomes unsustainable. Viewed in the context of Martínez's own exilic story, the condemnation of toxic masculine power turns into the condemnation of dictatorship and authoritarianism.

Keywords: gender, masculinity, toxic masculinity, authoritarian, Argentine literature, Martínez

Procedia PDF Downloads 71
850 Comparison of Two Neural Networks to Model Margarine Age and Predict Shelf-Life Using MATLAB

Authors: Phakamani Xaba, Robert Huberts, Bilainu Oboirien

Abstract:

The present study was aimed at developing and comparing two neural-network-based predictive models to predict the shelf-life/product age of South African margarine, using free fatty acid (FFA), water droplet size (D3.3), water droplet distribution (e-sigma), moisture content, peroxide value (PV), anisidine value (AnV), and total oxidation (totox) value as input variables to the model. Brick margarine products with varying ages, ranging from fresh (week 0) to week 47, were sourced. The brick margarine products, which had been stored at 10 and 25 °C, were characterized. JMP and MATLAB models to predict shelf-life/margarine age were developed, and their performances were compared. The key performance indicators used to evaluate the model performances were the correlation coefficient (CC), root mean square error (RMSE), and mean absolute percentage error (MAPE) relative to the actual data. The MATLAB-developed model showed a better performance on all three performance indicators. The correlation coefficient of the MATLAB model was 99.86% versus 99.74% for the JMP model, the RMSE was 0.720 compared to 1.005, and the MAPE was 7.4% compared to 8.571%. The MATLAB model was selected as the most accurate, and then the number of hidden neurons/nodes was optimized to develop a single predictive model. The optimized MATLAB model with 10 neurons showed a better performance compared to the models with 1 and 5 hidden neurons. The developed models can be used by margarine manufacturers, food research institutions, researchers, etc., to predict shelf-life/margarine product age, optimize the addition of antioxidants, extend the shelf-life of products, and proactively troubleshoot problems related to changes which have an impact on the shelf-life of margarine, without conducting expensive trials.
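The three performance indicators used to compare the models are straightforward to compute. The sketch below shows CC, RMSE, and MAPE in Python on hypothetical predicted/actual ages; it is an illustration of the metrics, not the MATLAB or JMP models themselves.

```python
# Minimal sketch of the three model evaluation metrics: correlation coefficient (CC),
# root mean square error (RMSE), and mean absolute percentage error (MAPE).
# The actual/predicted ages are hypothetical placeholders.
import numpy as np

def evaluate(actual: np.ndarray, predicted: np.ndarray) -> dict:
    cc = np.corrcoef(actual, predicted)[0, 1] * 100          # expressed as a percentage
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    mape = np.mean(np.abs((actual - predicted) / actual)) * 100
    return {"CC (%)": round(cc, 2), "RMSE": round(rmse, 3), "MAPE (%)": round(mape, 2)}

actual_age_weeks = np.array([1, 5, 10, 18, 25, 33, 40, 47])
predicted_age    = np.array([1.3, 4.6, 10.8, 17.1, 26.0, 32.2, 41.5, 45.9])
print(evaluate(actual_age_weeks, predicted_age))
```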

Keywords: margarine shelf-life, predictive modelling, neural networks, oil oxidation

Procedia PDF Downloads 197
849 Efficient Compact Micro Dielectric Barrier Discharge (DBD) Plasma Reactor for Ozone Generation for Industrial Application in Liquid and Gas Phase Systems

Authors: D. Kuvshinov, A. Siswanto, J. Lozano-Parada, W. Zimmerman

Abstract:

Ozone is well known as a powerful oxidant with a fast reaction rate. Ozone-based processes leave no by-products, as non-reacted ozone reverts to the original oxygen molecule. Therefore, the application of ozone is widely accepted as one of the main directions for the development of sustainable and clean technologies. A number of technologies require ozone to be delivered to specific points of a production network or reactor construction. Due to space constraints and the high reactivity and short lifetime of ozone, the use of ozone generators, even at bench-top scale, is practically limited. This requires the development of a mini/micro-scale ozone generator which can be directly incorporated into production units. Our report presents a feasibility study of a new micro-scale reactor for ozone generation (MROG). Data on MROG calibration and indigo decomposition at different operating conditions are presented. At selected operating conditions, with a residence time of 0.25 s, the process of ozone generation is not limited by reaction rate, and the amount of ozone produced is a function of the power applied. It was shown that the MROG is capable of producing ozone at voltage levels starting from 3.5 kV, with an ozone concentration of 5.28E-6 mol/L at 5 kV. This is in line with data presented in a numerical investigation of the MROG. It was shown that, in comparison to a conventional ozone generator, the MROG has lower power consumption at low voltages and atmospheric pressure. The MROG construction makes it applicable for emerged and dry systems. With its robust, compact design, the MROG can be used as an incorporated unit in production lines of high complexity.

Keywords: dielectric barrier discharge (DBD), micro reactor, ozone, plasma

Procedia PDF Downloads 338
847 Hyperparameter Optimization of Deep Convolutional Neural Networks for Pavement Distress Classification

Authors: Oumaima Khlifati, Khadija Baba

Abstract:

Pavement distress is the main factor responsible for the deterioration of road structure durability, vehicle damage, and reduced driver comfort. Transportation agencies spend a high proportion of their funds on pavement monitoring and maintenance. The auscultation of pavement distress has traditionally been based on manual surveys, which are extremely time-consuming, labor-intensive, and require domain expertise. Therefore, automatic distress detection is needed to reduce the cost of manual inspection and avoid more serious damage by implementing the appropriate remediation actions at the right time. Inspired by recent deep learning applications, this paper proposes an algorithm for automatic road distress detection and classification using a Deep Convolutional Neural Network (DCNN). In this study, the types of pavement distress are classified as transverse or longitudinal cracking, alligator cracking, pothole, and intact pavement. The dataset used in this work is composed of public asphalt pavement images. In order to learn the structure of the different types of distress, the DCNN models are trained and tested as a multi-label classification task. In addition, to obtain the highest accuracy for our model, we adjust the structural optimization hyperparameters, such as the number of convolution and max-pooling layers, the number and size of filters, loss functions, activation functions, and the optimizer, as well as fine-tuning hyperparameters that include batch size and learning rate. The optimization of the model is executed by checking all feasible combinations and selecting the best-performing one. After the model is optimized, performance metrics are calculated, describing the training and validation accuracies, precision, recall, and F1 score.
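A small convolutional classifier with a grid search over a few hyperparameters can illustrate the idea. The architecture, image size, and parameter grid below are illustrative assumptions, not the optimized network reported in the paper; dataset loading is omitted.

```python
# Minimal sketch of a small CNN for the five pavement categories, with a toy grid search
# over two hyperparameters (illustrative only; requires TensorFlow/Keras).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES, IMG_SIZE = 5, 128

def build_model(n_filters=32, kernel_size=3, learning_rate=1e-3):
    model = models.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)),
        layers.Conv2D(n_filters, kernel_size, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(n_filters * 2, kernel_size, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# Exhaustive search over a few structural and fine-tuning hyperparameters
for n_filters in (16, 32):
    for lr in (1e-3, 1e-4):
        model = build_model(n_filters=n_filters, learning_rate=lr)
        # model.fit(train_images, train_labels, batch_size=32, epochs=20,
        #           validation_data=(val_images, val_labels))   # dataset loading omitted
        print(f"built model: filters={n_filters}, lr={lr}, params={model.count_params()}")
```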

Keywords: pavement distress, hyperparameters, automatic classification, deep learning

Procedia PDF Downloads 93
847 Partisan Agenda Setting in Digital Media World

Authors: Hai L. Tran

Abstract:

Previous research on agenda setting effects has often focused on the top-down influence of the media at the aggregate level, while overlooking the capacity of audience members to select media and content to fit their individual dispositions. The decentralized characteristics of online communication and digital news create more choices and greater user control, thereby enabling each audience member to seek out a unique blend of media sources, issues, and elements of messages and to mix them into a coherent individual picture of the world. This study examines how audiences use media differently depending on their prior dispositions, thereby making sense of the world in ways that are congruent with their preferences and cognitions. The current undertaking is informed by theoretical frameworks from two distinct lines of scholarship. According to the ideological migration hypothesis, individuals choose to live in communities with ideologies like their own to satisfy their need to belong. One tends to move away from Zip codes that are incongruent and toward those that are more aligned with one's ideological orientation. This geographical division along ideological lines has been documented in social psychology research. As an extension of agenda setting, the agendamelding hypothesis argues that audiences seek out information in attractive media and blend them into a coherent narrative that fits with a common agenda shared by others who think as they do and communicate with them about issues of public concern. In other words, individuals, through their media use, identify themselves with a group/community that they want to join. Accordingly, the present study hypothesizes that because ideology plays a role in pushing people toward a physical community that fits their need to belong, it also leads individuals to receive an idiosyncratic blend of media and be influenced by such selective exposure in deciding what issues are more relevant. Consequently, the individualized focus of media choices impacts how audiences perceive political news coverage and what they know about political issues. The research project utilizes recent data from The American Trends Panel survey conducted by Pew Research Center to explore the nuanced nature of agenda setting at the individual level and amid heightened polarization. Hypothesis testing is performed with both nonparametric and parametric procedures, including regression and path analysis. This research attempts to explore the media-public relationship from a bottom-up approach, considering the ability of active audience members to select among media in a larger process that entails agenda setting. It helps encourage agenda-setting scholars to further examine effects at the individual, rather than aggregate, level. In addition to theoretical contributions, the study's findings are useful for media professionals in building and maintaining relationships with the audience considering changes in market share due to the spread of digital and social media.

Keywords: agenda setting, agendamelding, audience fragmentation, ideological migration, partisanship, polarization

Procedia PDF Downloads 59
846 Monitoring of Indoor Air Quality in Museums

Authors: Olympia Nisiforou

Abstract:

The cultural heritage of each country represents a unique and irreplaceable witness of the past. Nevertheless, on many occasions, such heritage is extremely vulnerable to natural disasters and reckless behaviors. Even if such exhibits are now located in museums, they still receive insufficient protection due to improper environmental conditions. These external changes can negatively affect the condition of the exhibits and contribute to inefficient maintenance over time. Hence, it is imperative to develop an innovative, low-cost system to monitor indoor air quality systematically, since conventional methods are quite expensive and time-consuming. The present study gives an insight into the indoor air quality of the National Byzantine Museum of Cyprus. In particular, systematic measurements of particulate matter, bio-aerosols, the concentration of targeted chemical pollutants (including volatile organic compounds (VOCs)), temperature, relative humidity, and lighting conditions, as well as microbial counts, have been performed using conventional techniques. Measurements showed that most of the monitored physicochemical parameters did not vary significantly across the various sampling locations. Seasonal fluctuations of ammonia were observed, showing higher concentrations in the summer and lower in winter. It was found that the outdoor environment does not significantly affect indoor air quality in terms of VOCs and nitrogen oxides (NOx). A cutting-edge portable Gas Chromatography-Mass Spectrometry (GC-MS) system (TORION T-9) was used to identify and measure the concentrations of specific volatile and semi-volatile organic compounds. A large number of different VOCs and SVOCs were found, such as benzene, toluene, xylene, ethanol, hexadecane, and acetic acid, as well as some more complex compounds such as 3-ethyl-2,4-dimethyl-isopropyl alcohol, 4,4'-biphenylene-bis-(3-aminobenzoate), and trifluoro-2,2-dimethylpropyl ester. Apart from the permanent indoor/outdoor sources (i.e., wooden frames, painted exhibits, carpets, the ventilation system, and outdoor air) of the above organic compounds, the concentrations of some of them within the areas of the museum were found to increase when large groups of visitors were simultaneously present at a specific place within the museum. A high presence of particulate matter (PM), fungi, and bacteria was found in the museum areas where carpets were present, but low colony counts were found in rooms where artworks are exhibited. The measurements mentioned above were used to validate an innovative low-cost air-quality monitoring system that has been developed within the present work. The developed system is able to monitor the average concentrations (on a bidaily basis) of several pollutants and presents several innovative features, including prompt alerting in case the average concentrations of monitored pollutants exceed the limit values defined by the user.
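The alerting feature described above amounts to comparing bidaily averages against user-defined limits. The sketch below is a minimal illustration of that logic; the pollutant names and limit values are hypothetical placeholders, not the system's actual configuration.

```python
# Minimal sketch of limit-based alerting on bidaily average concentrations
# (pollutant names and limits are hypothetical placeholders).
limits = {"PM2.5_ugm3": 25.0, "TVOC_ppb": 300.0, "NH3_ppb": 50.0}   # user-defined limits

bidaily_averages = {"PM2.5_ugm3": 31.2, "TVOC_ppb": 120.0, "NH3_ppb": 64.5}

def alerts(averages: dict, limit_values: dict) -> list:
    """Return (pollutant, value, limit) tuples for every exceeded limit."""
    return [(p, v, limit_values[p]) for p, v in averages.items()
            if v > limit_values.get(p, float("inf"))]

for pollutant, value, limit in alerts(bidaily_averages, limits):
    print(f"ALERT: {pollutant} average {value} exceeds limit {limit}")
```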

Keywords: exhibitions, indoor air quality, VOCs, pollution

Procedia PDF Downloads 123
845 Reducing the Computational Cost of a Two-way Coupling CFD-FEA Model via a Multi-scale Approach for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Kevin Tinkham, Ella Quigley

Abstract:

Structural integrity is a key performance parameter for cladding products, especially concerning fire performance. Cladding products such as PIR-based sandwich panels are tested rigorously, in line with industrial standards. Physical fire tests are necessary to ensure customer safety but give little information about the critical behaviours that could help develop new materials. Numerical modelling is a tool that can help investigate a fire's behaviour further by replicating the fire test. However, fire is an interdisciplinary problem: it is a chemical reaction that behaves as a fluid and impacts structural integrity. An analysis using both Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) is therefore needed to capture all aspects of a fire performance test. One method is a two-way coupling analysis that imports the updated thermal data, arising from the fire's behaviour, into the FEA solver over a series of iterations. In our recent work with Tata Steel U.K., a two-way coupling methodology implemented in a program called FDS-2-Abaqus was shown to predict a BS 476-22 furnace test with a reasonable degree of accuracy. The test demonstrated the fire performance of Tata Steel U.K.'s Trisomet product, a polyisocyanurate (PIR) based sandwich panel used for cladding. Previous work demonstrated the limitations of the current version of the program, the main one being the computational cost of modelling three Trisomet panels, totalling an area of 9 m². The computational cost increases substantially with the intention to scale up to an LPS 1181-1 test, which involves a total panel surface area of 200 m². The FDS-2-Abaqus program is developed further within this paper to overcome this obstacle and better accommodate Tata Steel U.K.'s PIR sandwich panels. The new developments aim to reduce the computational cost and the error margin relative to experimental data. One avenue explored is a multi-scale approach in the form of Reduced Order Modeling (ROM). This approach allows the user to include refined details of the sandwich panels, such as the overlapping joints, without a computationally costly mesh size. Comparative studies will be made between the new implementations and the previous study completed using the original FDS-2-Abaqus program. Validation of the study will come from physical experiments in line with governing-body standards such as BS 476-22 and LPS 1181-1. The physical experimental data include the panels' gas and surface temperatures and mechanical deformation. Conclusions are drawn, noting the impact of the new implementations and discussing the feasibility of scaling up further to a whole warehouse.
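
The two-way coupling described above is, at its core, an iterative hand-off: the CFD side advances the fire simulation, the updated thermal field is passed as a load to the FEA side, and the resulting deformation is fed back to the CFD side until successive iterations agree. The toy sketch below illustrates only that loop structure; the one-line "solvers", coefficients, and tolerance are illustrative assumptions and do not represent FDS, Abaqus, or the FDS-2-Abaqus program.

```python
# Toy illustration of a two-way CFD-FEA coupling loop for fire analysis.
# The "solvers" are placeholder one-line models, not FDS or Abaqus calls;
# coefficients and tolerances are illustrative assumptions only.

def cfd_thermal_step(gas_temp_c, panel_deflection_mm):
    """Stand-in for the CFD side: surface temperature rises with gas temperature
    and with the gap opened up by panel deflection (more hot-gas ingress)."""
    return 0.8 * gas_temp_c + 5.0 * panel_deflection_mm

def fea_structural_step(surface_temp_c):
    """Stand-in for the FEA side: deflection grows with the thermal load."""
    return 0.01 * surface_temp_c

def two_way_coupling(gas_temp_c, max_iterations=50, tolerance=1e-4):
    deflection_mm = 0.0
    for iteration in range(1, max_iterations + 1):
        surface_temp_c = cfd_thermal_step(gas_temp_c, deflection_mm)  # CFD -> thermal field
        new_deflection_mm = fea_structural_step(surface_temp_c)       # thermal load -> FEA
        if abs(new_deflection_mm - deflection_mm) < tolerance:        # converged hand-off
            return surface_temp_c, new_deflection_mm, iteration
        deflection_mm = new_deflection_mm                             # geometry fed back to CFD
    return surface_temp_c, deflection_mm, max_iterations

temp, defl, iters = two_way_coupling(gas_temp_c=800.0)
print(f"Converged after {iters} iterations: surface {temp:.1f} C, deflection {defl:.2f} mm")
```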

Keywords: fire testing, numerical coupling, sandwich panels, thermo fluids

Procedia PDF Downloads 79
844 Nondestructive Acoustic Microcharacterisation of Gamma Irradiation Effects on Sodium Oxide Borate Glass X2Na2O-X2B2O3 by Acoustic Signature

Authors: Ibrahim Al-Suraihy, Abdellaziz Doghmane, Zahia Hadjoub

Abstract:

In this work, we discuss the elastic properties obtained by using acoustic microscopes to measure Rayleigh and longitudinal wave velocities, at microscopic resolution, in non-irradiated and irradiated sodium borate glasses X2Na2O-X2B2O3 with 0 ≤ x ≤ 27 (mol %). The acoustic material signatures were first measured, from which the characteristic surface velocities were determined. Longitudinal and shear ultrasonic velocities were measured in sodium borate glass samples of different compositions before and after irradiation with γ-rays. Results showed that the effect of increasing sodium oxide content on the ultrasonic velocity appeared more clearly than that of γ-radiation. It was found that, as the Na2O content increases, longitudinal velocities vary from 3832 to 5636 m/s in the irradiated samples and from 4010 to 5836 m/s in the samples irradiated with the higher dose of 10, whereas shear velocities vary from 2223 to 3269 m/s in the irradiated samples and from 2326 m/s at the low dose to 3385 m/s at the higher dose of 10. The effect of increasing sodium oxide content on ultrasonic velocity was very clear. The increase in velocity was attributed to the gradual increase in the rigidity of the glass, and hence to a strengthening of the network, due to the gradual change of boron atoms from three-fold to four-fold coordination with oxygen atoms. The ultrasonic velocity data of the glass samples were used to determine the elastic moduli. It was found that ultrasonic velocity, elastic modulus, and microhardness increase with increasing sodium oxide content and increasing γ-radiation dose.
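
The elastic moduli mentioned above follow from the measured velocities and the sample density through the standard isotropic relations (shear modulus G = ρv_s², longitudinal modulus L = ρv_l², and Young's modulus derived from both). The sketch below applies those textbook formulas to the highest velocities quoted in the abstract; the density value is an assumed placeholder for illustration, not a figure reported by the authors.

```python
# Standard isotropic elastic relations applied to ultrasonic velocities.
# Velocities are taken from the abstract; the density is an assumed
# placeholder value for illustration, not a figure reported by the authors.

def elastic_moduli(v_long_m_s, v_shear_m_s, density_kg_m3):
    G = density_kg_m3 * v_shear_m_s**2                    # shear modulus (Pa)
    L = density_kg_m3 * v_long_m_s**2                     # longitudinal modulus (Pa)
    nu = (v_long_m_s**2 - 2 * v_shear_m_s**2) / (2 * (v_long_m_s**2 - v_shear_m_s**2))
    E = 2 * G * (1 + nu)                                  # Young's modulus (Pa)
    K = L - 4 * G / 3                                     # bulk modulus (Pa)
    return {"G_GPa": G / 1e9, "E_GPa": E / 1e9, "K_GPa": K / 1e9, "Poisson": nu}

# Example: highest quoted velocities with an assumed glass density of 2400 kg/m^3.
print(elastic_moduli(v_long_m_s=5836.0, v_shear_m_s=3385.0, density_kg_m3=2400.0))
```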

Keywords: mechanical properties of X2Na2O-X2B2O3, acoustic signature, SAW velocities, additives, gamma-radiation dose

Procedia PDF Downloads 396
843 Incorporation of Growth Factors onto Hydrogels via Peptide Mediated Binding for Development of Vascular Networks

Authors: Katie Kilgour, Brendan Turner, Carly Catella, Michael Daniele, Stefano Menegatti

Abstract:

In vivo, the extracellular matrix (ECM) provides biochemical and mechanical cues that instruct resident cells to form complex tissues with the characteristics needed to develop and support vascular networks. In vitro, the development of vascular networks can be guided by biochemical patterning of substrates, i.e., by spatially distributing and displaying peptides and growth factors to prompt cell adhesion, differentiation, and proliferation. We have developed a technique utilizing peptide ligands that specifically bind vascular endothelial growth factor (VEGF), erythropoietin (EPO), or angiopoietin-1 (ANG1) to distribute growth factors to cells in a spatiotemporally controlled manner. This allows for the controlled release of each growth factor, ultimately enhancing the formation of a vascular network. Our engineered tissue constructs (ETCs) are fabricated from gelatin methacryloyl (GelMA), which is an ideal substrate for tailored stiffness and bio-functionality, and are covalently patterned with growth-factor-specific peptides. These peptides mimic growth factor receptors, facilitating the non-covalent binding of the growth factors to the ETC and allowing for facile uptake by the cells. In the absence of cells, we have demonstrated the binding affinity of VEGF, EPO, and ANG1 to their respective peptides and the ability of each to be patterned onto a GelMA substrate. The ability to organize growth factors on an ETC provides the functionality needed to develop organized vascular networks. Our results demonstrate a method for incorporating biochemical cues into ETCs that enables spatial and temporal control of growth factors. Future efforts will investigate the cellular response by evaluating gene expression, quantifying angiogenic activity, and measuring the rate of growth factor consumption.

Keywords: growth factor, hydrogel, peptide, angiogenesis, vascular, patterning

Procedia PDF Downloads 164
842 Re-Integrating Historic Lakes into the City Fabric in the Case of Vandiyur Lake, Madurai

Authors: Soumya Pugal

Abstract:

The traditional lake system of an ancient town is a network of water-holding blue spaces, built more than 2000 years ago by the rulers of ancient cities and maintained for centuries by the original communities. These blue spaces form a micro-watershed wherein each individual tank has its own catchment, tank bed area, and command area. The lakes are connected through a common sluice, with the upstream tank feeding the downstream tank. The lakes used to be of socio-economic significance in those times, but the rapid growth of the city, as well as changes in the systems of ownership of the lakes, have turned them into the backyard of urban development. Madurai is one such historic city facing the challenge of balancing the social, ecological, and economic requirements of its people with respect to the traditional lake system. To find a solution to the problems caused by the neglect of a city's vital ecological systems, the theory of transformative placemaking through water-sensitive urban design has been explored. This approach re-invents the relationship between people and urban lakes to suit modern aspirations while respecting the environment. The thesis aims to develop strategies to guide development along the major urban lake of Vandiyur, equipping the lake to meet the city's growing recreational requirements and renewing the connection between people and water. The intent of the design is to understand the ecological and social structures of the lake and to find ways to use the lake to produce social cohesion within the community and to balance the city's economic and ecological requirements through transformative placemaking and water-sensitive urban design.

Keywords: urban lakes, urban blue spaces, placemaking, revitalisation of lakes, urban cohesion

Procedia PDF Downloads 75
841 The Image of Saddam Hussein and Collective Memory: The Semiotics of Ba'ath Regime's Mural in Iraq (1980-2003)

Authors: Maryam Pirdehghan

Abstract:

During the Ba'ath Party's rule in Iraq, propaganda was used to justify and promote Saddam Hussein's image in the collective memory as the greatest Arab leader. Consequently, urban walls were routinely covered with images of Saddam. Through these images, the regime aimed to evoke meanings in public opinion that would supposedly strengthen Saddam’s power and reconstruct facts to legitimize his political ideology. Nonetheless, Saddam was not always portrayed with common and explicit elements; in certain periods of his rule, the paintings depicted him in an unusual context in which various historical and contemporary elements were combined into a narrative background. Therefore, an understanding of the implied socio-political references of these elements is required to fully elucidate the impact of these images on forming the memory and collective unconscious of the Iraqi people. To obtain such an understanding, one needs to address the following questions: a) How was Saddam Hussein portrayed in murals during his rule? b) What elements and mythical-historical narratives are found in the paintings? c) Which of Saddam's political views were impressed upon the collective memory through the murals? Employing visual semiotics, this study reveals that during Saddam Hussein's regime the paintings were initially simple portraits but gradually transformed into narrative images characterized by a complex network of historical, mythical, and religious elements. These elements demonstrate the transformation of a secular-nationalist politician into a Muslim ruler who tried to instill three major policies in domestic and international relations, i.e., the Arabization of Iraq together with the propagation of pan-Arabism ideology (first period), the implementation of an anti-Israel policy (second period), and the implementation of an anti-American-British policy (last period).

Keywords: Ba'ath Party, Saddam Hussein, mural, Iraq, propaganda, collective memory

Procedia PDF Downloads 326
840 The Use of Unmanned Aerial System (UAS) in Improving the Measurement System on the Example of Textile Heaps

Authors: Arkadiusz Zurek

Abstract:

The potential of using drones is visible in many areas of logistics, especially for monitoring and controlling many processes. The technologies implemented over the last decade open up possibilities that companies had until now not even considered, such as warehouse inventories. Unmanned aerial vehicles are no longer seen as a revolutionary tool for Industry 4.0, but rather as tools in the daily work of factories and logistics operators. The research problem is to develop a method for measuring, by drone, the weight of goods in a selected link of the clothing supply chain. The purpose of this article, however, is to analyze the causes of errors in traditional measurements and then to identify adverse events related to the use of drones for the inventory of a heap of textiles intended for production purposes. On this basis, it will be possible to develop guidelines to eliminate the causes of these events in the drone-based measurement process. In a real environment, work was carried out to determine the volume and weight of textiles, including, among other things, weighing a textile sample to determine the average density of the assortment, establishing a local geodetic network, terrestrial laser scanning, and a photogrammetric survey flight using an unmanned aerial vehicle. From the analysis of the measurement data obtained in the facility, the volume and weight of the assortment, and the accuracy of their determination, were established. This article presents how such heaps are currently measured and what adverse events occur, describes the existing use of photogrammetric techniques of this type, so far performed by outdoor drones for the inventory of wind farms or construction sites, and compares them with the measurement system applied to the aforementioned textile heap inside a large-format facility.
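
The weight determination described above reduces to a simple chain: derive an average bulk density from a weighed sample, estimate the heap volume from the laser-scanning or photogrammetric point cloud, and multiply the two, with the uncertainty of each factor propagating into the result. The sketch below illustrates that arithmetic only; all numerical values are hypothetical placeholders, not measurements from the study.

```python
# Weight-from-volume estimate for a textile heap, with simple uncertainty propagation.
# All numbers are hypothetical placeholders, not values measured in the study.
import math

def heap_weight_kg(volume_m3, density_kg_m3):
    return volume_m3 * density_kg_m3

def relative_uncertainty(rel_volume, rel_density):
    """For a product of two quantities, relative errors add in quadrature."""
    return math.sqrt(rel_volume**2 + rel_density**2)

# Average density from a weighed sample (hypothetical): 12.4 kg in 0.05 m^3.
density = 12.4 / 0.05                  # kg/m^3
# Heap volume from the photogrammetric / laser-scanned point cloud (hypothetical).
volume = 38.2                          # m^3

weight = heap_weight_kg(volume, density)
rel_err = relative_uncertainty(rel_volume=0.03, rel_density=0.05)  # assumed 3% and 5%
print(f"Estimated heap weight: {weight:.0f} kg +/- {weight * rel_err:.0f} kg")
```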

Keywords: drones, unmanned aerial system, UAS, indoor system, security, process automation, cost optimization, photogrammetry, risk elimination, industry 4.0

Procedia PDF Downloads 86