Search results for: open circuit test

11845 Assessing Vertical Distribution of Soil Organic Carbon Stocks in Westleigh Soil under Shrub Encroached Rangeland, Limpopo Province, South Africa

Authors: Abel L. Masotla, Phesheya E. Dlamini, Vusumuzi E. Mbanjwa

Abstract:

Accurate quantification of the vertical distribution of soil organic carbon (SOC) in relation to land cover transformations associated with shrub encroachment is crucial, because deeper-lying horizons have been shown to have a greater capacity to sequester SOC. Despite this, in-depth soil carbon dynamics remain poorly understood, especially in arid and semi-arid rangelands. The objective of this study was to quantify and compare the vertical distribution of soil organic carbon stocks (SOCs) in shrub-encroached and open grassland sites. To achieve this, soil samples were collected vertically at 10 cm depth intervals at both sites. The results showed that SOC was on average 19% and 13% greater in the topsoil and subsoil, respectively, under shrub-encroached grassland compared to open grassland. In both topsoil and subsoil, higher SOCs were found under shrub-encroached grassland (4.53 kg m⁻² and 3.90 kg m⁻²) relative to open grassland (4.39 kg m⁻² and 3.67 kg m⁻²). These results demonstrate that deeper soil horizons play a critical role in the storage of SOC in savanna grasslands.
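
For readers reproducing such stock calculations, layer stocks are conventionally derived from carbon concentration, bulk density, and layer thickness. The snippet below is a minimal sketch of that standard computation; the function name and all numeric values are illustrative assumptions, not data from this study.

```python
# Minimal sketch: per-layer soil organic carbon (SOC) stock from carbon
# concentration, bulk density, and layer thickness. Example values are
# illustrative assumptions, not data from this study.

def soc_stock_kg_m2(soc_g_per_kg, bulk_density_kg_m3, thickness_m):
    """SOC stock of one layer in kg C per square metre."""
    return (soc_g_per_kg / 1000.0) * bulk_density_kg_m3 * thickness_m

# Hypothetical 10 cm intervals: (SOC concentration g/kg, bulk density kg/m3)
layers = [(14.0, 1300.0), (11.5, 1350.0), (9.0, 1400.0), (7.5, 1450.0)]

stocks = [soc_stock_kg_m2(c, bd, 0.10) for c, bd in layers]
print("per-layer stocks (kg m^-2):", [round(s, 2) for s in stocks])
print("total 0-40 cm stock (kg m^-2):", round(sum(stocks), 2))
```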

Keywords: savanna grasslands, shrub-encroachment, soil organic carbon, vertical distribution

Procedia PDF Downloads 141
11844 The Model of Open Cooperativism: The Case of Open Food Network

Authors: Vangelis Papadimitropoulos

Abstract:

This paper is part of the research program “Techno-Social Innovation in the Collaborative Economy”, funded by the Hellenic Foundation for Research and Innovation (H.F.R.I.) for the years 2022-2024. The paper showcases the Open Food Network (OFN) as an open-source digital platform supporting short food supply chains in local agricultural production and consumption. The paper outlines the research hypothesis, the theoretical framework, and the methodology of research, as well as the findings and conclusions. Research hypothesis: the model of open cooperativism as a vehicle for systemic change in the agricultural sector. Theoretical framework: the research reviews the OFN as an illustrative case study of the three-zoned model of open cooperativism. The OFN is considered a paradigmatic case of the model of open cooperativism inasmuch as it produces commons, it consists of multiple stakeholders including ethical market entities, and it is variously supported by local authorities across the globe, the latter prefiguring, in miniature, the role of a partner state. Methodology: the research employs Ernesto Laclau and Chantal Mouffe’s discourse analysis (elements, floating signifiers, nodal points, discourses, logics of equivalence and difference) to analyse the breadth of empirical data gathered through literature review, digital ethnography, a survey, and in-depth interviews with core OFN members. Discourse analysis classifies OFN floating signifiers, nodal points, and discourses into four themes: value proposition, governance, economic policy, and legal policy. Findings: OFN floating signifiers align around the following nodal points and discourses: “digital commons”, “short food supply chains”, “sustainability”, “local”, “the elimination of intermediaries” and “systemic change”. The current research identifies a lack of common ground on what the discourse of “systemic change” signifies on the premises of the OFN’s value proposition. The lack of a common mission may be detrimental to the formation of a common strategy that would perhaps be deemed necessary to bring about systemic change in agriculture. Conclusions: drawing on Laclau and Mouffe’s discourse theory of hegemony, the research introduces a chain of equivalence by aligning discourses such as “agro-ecology”, “commons-based peer production”, “partner state” and “ethical market entities” under the model of open cooperativism, juxtaposed against the current hegemony of neoliberalism, which articulates discourses such as “market fundamentalism”, “privatization”, “green growth” and “the capitalist state” to promote corporatism and entrepreneurship. The research makes the case that for the OFN to further agroecology and challenge the current hegemony of industrial agriculture, it is vital that it opens up its supply chains into equivalent sectors of the economy, civil society, and politics to form a chain of equivalence linking together ethical market entities, the commons and a partner state around the model of open cooperativism.

Keywords: sustainability, the digital commons, open cooperativism, innovation

Procedia PDF Downloads 74
11843 A Rapid Reinforcement Technique for Columns by Carbon Fiber/Epoxy Composite Materials

Authors: Faruk Elaldi

Abstract:

Numerous concrete columns and beams exist in our cities. These columns are mostly exposed to aggressive environmental conditions and earthquakes, and are often deteriorated by sand, wind, humidity and other external effects over time, so these beams and columns eventually need to be repaired. Within the scope of this study, concrete column samples were designed and fabricated to be strengthened either with carbon fiber reinforced composite materials or with conventional concrete encapsulation, and they were then put into the axial compression test to determine load-carrying performance before column failure. In the first stage of this study, the concrete column design and mold designs were completed for a certain load-carrying capacity. Later, the columns were exposed to environmental deterioration in order to reduce their load-carrying capacity. To reinforce these damaged columns, two methods were applied: “concrete encapsulation” and “wrapping with carbon fiber/epoxy” material. In the second stage of the study, the reinforced columns were subjected to the axial compression test and the results obtained were analyzed. Cost and load-carrying performance comparisons were made, and it was found that even though the carbon fiber/epoxy reinforcement method is more expensive, it provides higher load-carrying capacity and reduces the reinforcement processing period.

Keywords: column reinforcement, composite, earthquake, carbon fiber reinforced

Procedia PDF Downloads 184
11842 The Effect of Substrate Temperature on the Structural, Optical, and Electrical Properties of Nano-Crystalline Tin-Doped Cadmium Telluride Thin Films for Photovoltaic Applications

Authors: Eman A. Alghamdi, A. M. Aldhafiri

Abstract:

It has been found that inducing an isolated dopant close to the middle of the bandgap, by occupying the Cd position in the CdTe lattice structure, is an efficient way of reducing the non-radiative recombination rate and increasing the solar efficiency. Building on our laboratory results, this work was carried out to determine the effect of substrate temperature on CdTe0.6Sn0.4 prepared by the thermal evaporation technique for photovoltaic applications. Various substrate temperatures (25°C, 100°C, 150°C, 200°C, 250°C and 300°C) were applied. Sn-doped CdTe thin films were deposited on glass substrates at different substrate temperatures from CdTe and SnTe powders by the thermal evaporation technique. The structural properties of the prepared samples were determined using Raman spectroscopy and X-ray diffraction. Spectroscopic ellipsometry and spectrophotometric measurements were conducted to extract the optical constants as a function of substrate temperature. The structural properties of the grown films show mixed hexagonal and cubic structures, and a phase change has been reported. Scanning electron microscopy (SEM) revealed that a homogeneous film with larger grain size was obtained at a substrate temperature of 250°C. The conductivity measurements were recorded as a function of substrate temperature. The open-circuit voltage was improved by controlling the substrate temperature, owing to the improvement of fundamental material issues such as recombination and low carrier concentration. All the results are explained and discussed on the basis of the influence of the Sn dopant and the substrate temperature on the structural, optical and photovoltaic characteristics.

Keywords: CdTe, conductivity, photovoltaic, ellipsometry

Procedia PDF Downloads 133
11841 Design of a Laboratory Test for Investigating Permanent Deformation of Asphalt

Authors: Esmaeil Ahmadinia, Frank Bullen, Ron Ayers

Abstract:

Many concerns have been raised in recent years about the adequacy of existing creep test methods for evaluating the rut resistance of asphalt mixes. Many researchers believe the main reason for the creep tests being unable to duplicate field results is a lack of realistic confinement for laboratory specimens. In-situ asphalt under axle loads is surrounded by a mass of asphalt, which provides stress-strain generated confinement. However, most existing creep tests are largely unconfined in nature. It has been hypothesised that by providing a degree of confinement representative of field conditions in a creep test, it could be possible to establish a better correlation between field and laboratory results. In this study, a new methodology is explored in which confinement for asphalt specimens is provided. The proposed methodology is founded on the current Australian test method, adapted to provide simulated field conditions through the provision of sample confinement.

Keywords: asphalt mixture, creep test, confinements, permanent deformation

Procedia PDF Downloads 324
11840 Pushing the Boundary of Parallel Tractability for Ontology Materialization via Boolean Circuits

Authors: Zhangquan Zhou, Guilin Qi

Abstract:

Materialization is an important reasoning service for applications built on the Web Ontology Language (OWL). To make materialization efficient in practice, current research focuses on deciding the tractability of an ontology language and designing parallel reasoning algorithms. However, some well-known large-scale ontologies, such as YAGO, have been shown to have good parallel reasoning performance even though they are expressed in ontology languages that are not parallelly tractable, i.e., the reasoning is inherently sequential in the worst case. This motivates us to study the problem of parallel tractability of ontology materialization from a theoretical perspective. That is, we aim to identify the ontologies for which materialization is parallelly tractable, i.e., in the complexity class NC. Since NC is defined in terms of Boolean circuits, which are widely used to investigate parallel computing problems, we first transform the problem of materialization into the evaluation of Boolean circuits, and then study the problem of parallel tractability based on circuits. In this work, we focus on datalog rewritable ontology languages. We use Boolean circuits to identify two classes of datalog rewritable ontologies (called parallelly tractable classes) such that materialization over them is parallelly tractable. We further investigate the parallel tractability of materialization of a datalog rewritable OWL fragment, DHL (Description Horn Logic). Based on the above results, we analyze real-world datasets and show that many ontologies expressed in DHL belong to the parallelly tractable classes.
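
As background to the datalog-rewritable setting the abstract refers to, materialization amounts to computing the fixpoint of a rule set over a fact set, and the rounds of a naive parallel evaluation correspond loosely to the depth of the associated Boolean circuit. The sketch below is an illustrative naive fixpoint in Python; the rules and facts are invented examples, not taken from the paper.

```python
# Minimal sketch of datalog-style materialization by naive fixpoint
# iteration. Each round can fire all rules in parallel; the number of
# rounds loosely corresponds to the depth of an evaluation circuit.
# Rules and facts below are invented examples, not from the paper.

facts = {("subClassOf", "Cat", "Mammal"),
         ("subClassOf", "Mammal", "Animal"),
         ("type", "tom", "Cat")}

def apply_rules(known):
    """One parallel round: derive everything that follows in one step."""
    new = set()
    for (p1, a, b) in known:
        for (p2, c, d) in known:
            # Transitivity: A subClassOf B, B subClassOf C => A subClassOf C
            if p1 == p2 == "subClassOf" and b == c:
                new.add(("subClassOf", a, d))
            # Instance propagation: x type A, A subClassOf B => x type B
            if p1 == "type" and p2 == "subClassOf" and b == c:
                new.add(("type", a, d))
    return new - known

rounds = 0
while True:
    delta = apply_rules(facts)
    if not delta:
        break
    facts |= delta
    rounds += 1

print(f"materialized {len(facts)} facts in {rounds} rounds")
```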

Keywords: ontology materialization, parallel reasoning, datalog, Boolean circuit

Procedia PDF Downloads 271
11839 Employing KNIME-Based and Open-Source Tools to Identify AMI and VER Metabolites from UPLC-MS Data

Authors: Nouf Alourfi

Abstract:

This study examines the metabolism of amitriptyline (AMI) and verapamil (VER) using a KNIME-based method. KNIME is an open-source data-analytics platform; the improved workflow integrates a number of open-source metabolomics tools, such as CFMID and MetFrag, to provide standard data visualisations, predict candidate metabolites, assess them against experimental data, and produce reports on identified metabolites. The use of this workflow is demonstrated by employing three types of liver microsomes (human, rat, and guinea pig) to study the in vitro metabolism of the two drugs (AMI and VER). The workflow is used to create and process UPLC-MS (Orbitrap) data. The formulas and structures of these drugs' metabolites can be assigned automatically. The key metabolic routes for amitriptyline are hydroxylation, N-dealkylation, N-oxidation, and conjugation, while N-demethylation, O-demethylation, N-dealkylation, and conjugation are the primary metabolic routes for verapamil. The identified metabolites are consistent with those published, demonstrating the solidity of the workflow technique and the usefulness of computational tools like KNIME in supporting the integration and interoperability of emerging novel software packages in the metabolomics area.
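
A core step in such workflows is matching predicted candidate metabolite masses against measured UPLC-MS features within a mass tolerance. The snippet below is a generic, illustrative sketch of that matching step; the m/z values and tolerance are assumptions for demonstration, not output of the KNIME workflow.

```python
# Illustrative sketch: matching predicted metabolite m/z values against
# measured UPLC-MS peaks within a ppm tolerance. Values are invented
# examples, not output of the KNIME workflow described above.

PPM_TOL = 5.0

predicted = {  # hypothetical candidate metabolites: name -> [M+H]+ m/z
    "AMI (parent)": 278.1903,
    "nortriptyline (N-demethylation)": 264.1747,
    "10-hydroxy-AMI (hydroxylation)": 294.1852,
}

measured_peaks = [278.1900, 294.1857, 301.1410]  # hypothetical m/z list

def ppm_error(observed, theoretical):
    return (observed - theoretical) / theoretical * 1e6

for name, mz in predicted.items():
    hits = [(peak, ppm_error(peak, mz)) for peak in measured_peaks
            if abs(ppm_error(peak, mz)) <= PPM_TOL]
    for peak, err in hits:
        print(f"{name}: matched {peak:.4f} ({err:+.1f} ppm)")
```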

Keywords: KNIME, CFMID, MetFrag, data analysis, metabolomics

Procedia PDF Downloads 121
11838 Multi-Label Approach to Facilitate Test Automation Based on Historical Data

Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally

Abstract:

The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which is, in most cases, at odds with the time constraints provided for the quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning for the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For test case generation, this approach exploits historical data of test automation projects. The identified patterns are the foundation to predict the implementation of unknown test case specifications. Based on this support, a test engineer solely has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge by means of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems. The most prominent EC is ‘Subset Accuracy’. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still at 60%, which is better than the current state-of-the-art results. The prediction quality is expected to increase for larger systems with respective historical data. Consequently, this technique facilitates the time reduction for establishing test automation and is thereby independent of the application domain and project. As a work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
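
To make the multi-label formulation concrete, the sketch below maps test step texts to sets of automation components and evaluates subset accuracy, where a prediction counts only if the entire label set matches. The data, label names, and model choice are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch of the multi-label idea: predict which test automation
# components implement a given test step specification. The data, label
# names, and model choice are illustrative assumptions, not the authors'
# setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.metrics import accuracy_score

steps = [
    "open ignition and check dashboard warning lamp",
    "send CAN message and verify response id",
    "open ignition and send CAN message",
]
components = [["IgnitionCtrl", "LampCheck"],
              ["CanSend", "CanVerify"],
              ["IgnitionCtrl", "CanSend"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(components)
X = TfidfVectorizer().fit_transform(steps)

clf = OneVsRestClassifier(LogisticRegression()).fit(X, Y)

# Subset accuracy: a prediction counts only if the *entire* label set
# matches -- evaluated here on the training data for illustration only.
print("subset accuracy:", accuracy_score(Y, clf.predict(X)))
```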

Keywords: machine learning, multi-class, multi-label, supervised learning, test automation

Procedia PDF Downloads 134
11837 Ageing Deterioration of High-Density Polyethylene Cable Spacer under Salt Water Dip Wheel Test

Authors: P. Kaewchanthuek, R. Rawonghad, B. Marungsri

Abstract:

This paper presents the experimental results of high-density polyethylene cable spacers for 22 kV distribution systems under a salt water dip wheel test based on IEC 62217. The anti-tracking and anti-erosion strength of the cable spacer surface was investigated. During the test, dry band arcs and corona discharges were observed on the cable spacer surface. After 30,000 cycles of the salt water dip wheel test, obvious surface erosion and tracking were observed, especially on the ground end. Chemical analysis results by Fourier transform infrared spectroscopy showed chemical changes due to oxidation and carbonization reactions on the tested cable spacer. An increase in C=O and C=C bonds confirmed the occurrence of these reactions.

Keywords: cable spacer, HDPE, ageing of cable spacer, salt water dip wheel test

Procedia PDF Downloads 381
11836 Genetic Association of SIX6 Gene with Pathogenesis of Glaucoma

Authors: Riffat Iqbal, Sidra Ihsan, Andleeb Batool, Maryam Mukhtar

Abstract:

Glaucoma is a group of optic neuropathies characterized by progressive degeneration of retinal ganglion cells. It is a clinically and genetically heterogeneous illness comprising several distinct forms, each with various causes and severities. Primary open-angle glaucoma (POAG) is the most common type of glaucoma. This study investigated the genetic association of single nucleotide polymorphisms (SNPs; rs10483727 and rs33912345) at the SIX1/SIX6 locus with primary open-angle glaucoma in the Pakistani population. The SIX6 gene plays an important role in ocular development and has been associated with the morphology of the optic nerve. A total of 100 patients clinically diagnosed with glaucoma and 100 control individuals aged over 40 were enrolled in the study. Genomic DNA was extracted by the organic extraction method. SNP genotyping was done by PCR-based restriction fragment length polymorphism (RFLP) and sequencing. Significant genetic associations were observed for rs10483727 (risk allele T) and rs33912345 (risk allele C) with POAG. Hence, it was concluded that the SIX6 gene is genetically associated with the pathogenesis of glaucoma in Pakistan.

Keywords: genotyping, Pakistani population, primary open-angle glaucoma, SIX6 gene

Procedia PDF Downloads 187
11835 Determining Optimal Number of Trees in Random Forests

Authors: Songul Cinaroglu

Abstract:

Background: Random Forest is an efficient, multi-class machine learning method used for classification, regression and other tasks. The method operates by constructing each tree from a different bootstrap sample of the data. Determining the number of trees in random forests is an open question in the literature for studies about improving the classification performance of random forests. Aim: The aim of this study is to analyze whether there is an optimal number of trees in Random Forests and how the performance of Random Forests differs as the number of trees increases, using sample health data sets in the R programming environment. Method: In this study we analyzed the performance of Random Forests as the number of trees grows, doubling the number of trees at every iteration using the “randomForest” package in R. For determining the minimum and optimal number of trees we performed McNemar's test and computed the Area Under the ROC Curve, respectively. Results: At the end of the analysis it was found that as the number of trees grows, it does not always mean that the performance of the forest is better than that of forests with fewer trees. In other words, a larger number of trees only increases computational costs without increasing performance. Conclusion: Although the general practice in using random forests is to generate a large number of trees in order to obtain high performance, this study shows that increasing the number of trees does not always improve performance. Future studies can compare different kinds of data sets and different performance measures to test whether Random Forest performance changes as the number of trees increases or not.
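
The experiment design is easy to mirror in other environments; the sketch below (in Python with scikit-learn rather than the R package used in the study) doubles the number of trees at every iteration and reports out-of-sample AUC, with a synthetic dataset standing in for the health data sets.

```python
# Illustrative sketch: how out-of-sample AUC changes as the number of
# trees doubles at every iteration. Dataset and parameter values are
# assumptions for demonstration, not the study's health data sets.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for n_trees in [8, 16, 32, 64, 128, 256, 512]:
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    rf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
    print(f"{n_trees:4d} trees: AUC = {auc:.4f}")
```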

Keywords: classification methods, decision trees, number of trees, random forest

Procedia PDF Downloads 396
11834 Lifetime Assessment for Test Strips of POCT Device through Accelerated Degradation Test

Authors: Jinyoung Choi, Sunmook Lee

Abstract:

In general, a single parameter, i.e., temperature, is used as the accelerating parameter to assess the accelerated stability of Point-of-Care Testing (POCT) diagnostic devices. However, humidity also plays an important role in deteriorating strip performance, since major components of test strips are proteins such as enzymes. Four different temperature/humidity conditions were used to assess the lifetime of the strips. Degradation of the test strips was studied through the accelerated stability test, and the lifetime was assessed using commercial POCT products. The life distribution of the strips, obtained by monitoring the failure time of the test strips under each stress condition, revealed that the Weibull distribution was the most appropriate distribution for describing the life distribution of the strips used in the present study. The shape parameters were calculated to be 0.9395 and 0.9132 for low and high concentrations, respectively. The lifetime prediction was made by adopting the Peck equation as the stress-life relationship model, and the B10 life was calculated to be 70.09 and 46.65 hours for low and high concentrations, respectively.
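
For readers unfamiliar with the quantities involved, the B10 life follows directly from the fitted Weibull parameters, and a Peck-type model translates test conditions to use conditions. The sketch below illustrates both calculations; apart from the shape parameter quoted in the abstract, all numeric values (scale parameter, exponents, conditions) are illustrative assumptions.

```python
# Minimal sketch: B10 life from a fitted Weibull distribution, and a
# Peck-type temperature/humidity acceleration factor. All numeric values
# except the quoted shape parameter are illustrative assumptions.
import math

def b10_life(eta, beta):
    """Time by which 10% of units fail: t = eta * (-ln(0.9))**(1/beta)."""
    return eta * (-math.log(0.9)) ** (1.0 / beta)

def peck_af(rh_use, rh_test, t_use_k, t_test_k, n=2.7, ea_ev=0.7):
    """Peck acceleration factor between test and use conditions."""
    k_boltz = 8.617e-5  # Boltzmann constant, eV/K
    return ((rh_test / rh_use) ** n *
            math.exp(ea_ev / k_boltz * (1 / t_use_k - 1 / t_test_k)))

beta = 0.9395   # shape parameter quoted for the low concentration
eta = 760.0     # hypothetical scale parameter (hours)
print(f"B10 at stress: {b10_life(eta, beta):.1f} h")
print(f"Peck AF 60C/90%RH -> 25C/50%RH: "
      f"{peck_af(50, 90, 298.15, 333.15):.1f}x")
```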

Keywords: accelerated degradation, diagnostic device, lifetime assessment, POCT

Procedia PDF Downloads 416
11833 Investigation of Wind Farm Interaction with Ethiopian Electric Power’s Grid: A Case Study at Ashegoda Wind Farm

Authors: Fikremariam Beyene, Getachew Bekele

Abstract:

Ethiopia is currently on the move with various projects to raise the amount of power generated in the country. The progress observed in recent years indicates this fact clearly and indisputably. The rural electrification program, the modernization of the power transmission system, and the development of wind farms are some of the main accomplishments worth mentioning. As is well known, wind power is currently embraced globally as one of the most important sources of energy, mainly for its environmentally friendly characteristics, and because, once installed, it is a source available free of charge. However, the integration of a wind power plant with an existing network has many challenges that need to be given serious attention. In Ethiopia, a number of wind farms are either installed or under construction, and a series of wind farms is planned to be installed in the near future. Ashegoda Wind Farm (13.2°, 39.6°), which is the subject of this study, is the first large-scale wind farm under construction, with a capacity of 120 MW. The first phase (30 MW) of the 120 MW project has been completed and is expected to be connected to the grid soon. This paper is concerned with the investigation of the wind farm's interaction with the national grid under transient operating conditions. The main concern is the fault ride-through (FRT) capability of the system when the grid voltage drops to exceedingly low values because of a short circuit fault, and also the active and reactive power behavior of the wind turbines after the fault is cleared. On the wind turbine side, detailed dynamic modelling of a variable speed wind turbine of 1 MW capacity running with a squirrel cage induction generator and full-scale power electronics converters is done and analyzed using the simulation software DIgSILENT PowerFactory. On the Ethiopian Electric Power side, after having collected sufficient data for the analysis, the grid network is modeled. In the model, the fault ride-through (FRT) capability of the plant is studied by applying a 3-phase short circuit on the grid terminal near the wind farm. The results show that the Ashegoda wind farm can ride through the voltage dip within a short time and that the active and reactive power performance of the wind farm is also promising.

Keywords: squirrel cage induction generator, active and reactive power, DIgSILENT PowerFactory, fault ride-through capability, 3-phase short circuit

Procedia PDF Downloads 176
11832 Creation of a Test Machine for the Scientific Investigation of Chain Shot

Authors: Mark McGuire, Eric Shannon, John Parmigiani

Abstract:

Timber harvesting increasingly involves mechanized equipment. This has increased the efficiency of harvesting but has also introduced worker-safety concerns. One such concern arises from the use of harvesters. During operation, harvesters subject saw chain to large dynamic mechanical stresses. These stresses can, under certain conditions, cause the saw chain to fracture. The high speed of harvester saw chain can cause the resulting open chain loop to fracture a second time due to the dynamic loads placed upon it as it travels through space. If a second fracture occurs, it can result in a projectile consisting of one to several chain links. This projectile is referred to as a chain shot. It has speeds similar to a bullet but typically has greater mass, and it is a significant safety concern. Numerous examples exist of chain shots penetrating bullet-proof barriers and causing severe injury and death. Improved harvester-cab barriers can help prevent injury; however, a comprehensive scientific understanding of chain shot is required to consistently reduce or prevent it. Obtaining this understanding requires a test machine with the capability to cause chain shot to occur under carefully controlled conditions and to accurately measure the response. Worldwide, few such test machines exist. Those that do focus on validating the ability of barriers to withstand a chain shot impact rather than on obtaining a scientific understanding of the chain shot event itself. The purpose of this paper is to describe the design, fabrication, and use of a test machine capable of a comprehensive scientific investigation of chain shot. The capabilities of this machine are to test all commercially available saw chains and bars at chain tensions and speeds meeting and exceeding those typically encountered in harvester use, and to accurately measure the corresponding key technical parameters. The test machine was constructed inside a standard shipping container. This provides space for both an operator station and a test chamber. In order to contain the chain shot under any possible test conditions, the test chamber was lined with a base layer of AR500 steel followed by an overlay of HDPE. To accommodate varying bar orientations and fracture-initiation sites, the entire saw chain drive unit and bar mounting system is modular and capable of being located anywhere in the test chamber. The drive unit consists of a high-speed electric motor with a flywheel. Standard Ponsse harvester head components are used for bar mounting and chain tensioning. Chain lubrication is provided by a separate peristaltic pump. Chain fracture is initiated per ISO standard 11837. Measured parameters include shaft speed, motor vibration, bearing temperatures, motor temperature, motor current draw, hydraulic fluid pressure, chain force at fracture, and high-speed camera images. Results show that the machine is capable of consistently causing chain shot. Measurement output shows fracture location and the force associated with fracture as a function of saw chain speed and tension. Use of this machine will result in a scientific understanding of chain shot and consequently improved products and greater harvester operator safety.

Keywords: chain shot, safety, testing, timber harvesters

Procedia PDF Downloads 153
11831 An Experimental Study on the Effect of Brain-Break in the Classroom on Elementary School Students’ Selective Attention

Authors: Hui Liu, Xiaozan Wang, Jiarong Zhong, Ziming Shao

Abstract:

Introduction: Related research shows that students often fail to concentrate on what the teacher is saying in the classroom. The d2 attention test is a time-limited test of selective attention that can be used to evaluate individual selective attention. Purpose: To use the d2 attention test tool to measure the difference between the attention levels of the experimental class and the control class before and after Brain-Break, and to explore the effect of Brain-Break in the classroom on students' selective attention. Methods: According to the principle of no difference in pre-test data, two classes in the fourth grade of Shenzhen Longhua Central Primary School were selected. After 20 minutes of class in the third period in the morning and the third period in the afternoon, an approximately 3-minute Brain-Break intervention was performed in the experimental class for 10 weeks; no intervention was performed in the control class. Before and after the experiment, the d2 attention test tool was used to test the attention levels of the students in both classes. The paired sample t-test and independent sample t-test in SPSS 23.0 were used to test the change in the attention levels of the two classes over the 10 weeks. This article only presents results with significant differences. Results: The independent sample t-test results showed that after ten weeks of Brain-Break, the omission errors (E1 t = -2.165, p = 0.042), concentration performance (CP t = 1.866, p = 0.05), and the error percentage (Epercent t = -2.375, p = 0.029) in the experimental class showed significant differences compared with the control class. The students' error level decreased and their concentration increased. Conclusions: Adding Brain-Break interventions in the classroom can effectively improve the attention level of fourth-grade primary school students to a certain extent; in particular, it can improve concentration and decrease the error rate in tasks. This new sports learning model is worth promoting.
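
For illustration, the comparison described above reduces to standard t-tests; the sketch below shows the equivalent computation with SciPy rather than SPSS, on invented score arrays that are not the study's data.

```python
# Illustrative sketch of the statistical comparison described above:
# independent-samples t-test between experimental and control scores.
# The score arrays are invented examples, not the study's data.
from scipy import stats

experimental_cp = [182, 175, 190, 168, 201, 188, 179, 185]  # hypothetical
control_cp      = [170, 162, 175, 158, 180, 169, 164, 171]  # hypothetical

t, p = stats.ttest_ind(experimental_cp, control_cp)
print(f"independent samples: t = {t:.3f}, p = {p:.3f}")

# A paired comparison (pre vs. post within one class) would instead use:
# t, p = stats.ttest_rel(pre_scores, post_scores)
```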

Keywords: cultural class, micromotor, attention, D2 test

Procedia PDF Downloads 134
11830 [Keynote Talk]: Knowledge Codification and Innovation Success within Digital Platforms

Authors: Wissal Ben Arfi, Lubica Hikkerova, Jean-Michel Sahut

Abstract:

This study examines interfirm networks in the digital transformation era and, in particular, how tacit knowledge codification affects innovation success within digital platforms. One of the most important features of digital transformation and innovation process outcomes is the emergence of digital platforms, as an interfirm network, at the heart of open innovation. This research aims to illuminate how digital platforms influence inter-organizational innovation through virtual team interactions and knowledge sharing practices within an interfirm network. Consequently, it contributes to the strategic management literature on new product development (NPD), open innovation, industrial management, and the management of emerging interfirm networks. The empirical findings show, on the one hand, that knowledge conversion may be enhanced, especially by socialization, which seems to be the most important phase, as it plays a crucial role in holding the virtual team members together. On the other hand, in the process of socialization, tacit knowledge codification is crucial because it provides the structure needed for the interfirm network actors to interact and act to reach common goals, which favors the emergence of open innovation. Finally, our results offer several conditions, necessary but not always sufficient, for interfirm managers involved in NPD and innovation, concerning strategies to shape increasingly interconnected and borderless markets and business collaborations. In the digital transformation era, the need for adaptive and innovative business models as well as new and flexible network forms is becoming more significant than ever. Supported by technological advancements and digital platforms, companies can benefit from increased market opportunities and create new markets for their innovations through alliances and collaborative strategies, as a mode of reducing or eliminating environmental uncertainty or entry barriers. Consequently, an efficient and well-structured interfirm network is essential to create network capabilities, ensure tacit knowledge sharing, enhance organizational learning, and foster open innovation success within digital platforms.

Keywords: interfirm networks, digital platform, virtual teams, open innovation, knowledge sharing

Procedia PDF Downloads 133
11829 Digital Forensics Compute Cluster: A High Speed Distributed Computing Capability for Digital Forensics

Authors: Daniel Gonzales, Zev Winkelman, Trung Tran, Ricardo Sanchez, Dulani Woods, John Hollywood

Abstract:

We have developed a distributed computing capability, Digital Forensics Compute Cluster (DFORC2), to speed up the ingestion and processing of digital evidence that is resident on computer hard drives. DFORC2 parallelizes evidence ingestion and file processing steps. It can be run on a standalone computer cluster or in the Amazon Web Services (AWS) cloud. When running in a virtualized computing environment, its cluster resources can be dynamically scaled up or down using Kubernetes. DFORC2 is an open source project that uses Autopsy, Apache Spark and Kafka, and other open source software packages. It extends the proven open source digital forensics capabilities of Autopsy to compute clusters and cloud architectures, so digital forensics tasks can be accomplished efficiently by a scalable array of cluster compute nodes. In this paper, we describe DFORC2 and compare it with a standalone version of Autopsy when both are used to process evidence from hard drives of different sizes.
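
The general pattern behind such systems, distributing per-file processing across cluster workers, can be illustrated with a few lines of PySpark; the sketch below hashes evidence files in parallel. The path and the processing step are assumptions for demonstration, not DFORC2's actual code.

```python
# Illustrative sketch of the general pattern DFORC2-style systems rely
# on: distributing per-file processing (here, hashing) across Spark
# workers. The path and processing step are assumptions, not DFORC2 code.
import hashlib
from pyspark import SparkContext

sc = SparkContext(appName="evidence-ingest-sketch")

def sha256_of(path_and_bytes):
    path, data = path_and_bytes
    return (path, hashlib.sha256(data).hexdigest())

# binaryFiles yields (path, contents) pairs; partitions are processed
# on different cluster nodes in parallel.
evidence = sc.binaryFiles("hdfs:///evidence/disk_image_extracted/*")
digests = evidence.map(sha256_of).collect()

for path, digest in digests[:5]:
    print(path, digest)
sc.stop()
```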

Keywords: digital forensics, cloud computing, cyber security, Spark, Kubernetes, Kafka

Procedia PDF Downloads 394
11828 Component Based Testing Using Clustering and Support Vector Machine

Authors: Iqbaldeep Kaur, Amarjeet Kaur

Abstract:

Software reusability is an important part of software development, and component-based software development in the context of software testing has gained a lot of practical importance in the field of software engineering, both from academic researchers and from a software development industry perspective. Finding test cases for efficient reuse is one of the important problems addressed by researchers. Clustering reduces the search space and enables the reuse of test cases by grouping similar entities according to requirements, ensuring reduced time complexity as it reduces the search time for retrieving test cases. In this research paper we propose an approach for the reusability of test cases based on unsupervised learning, using k-means clustering together with a Support Vector Machine. We have designed an algorithm for clustering requirement and test case documents according to their tf-idf vector space; the output is a set of highly cohesive pattern groups.
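
The clustering step can be illustrated concretely: the sketch below embeds test case documents in tf-idf space and groups them with k-means, so that retrieval only needs to search the cluster matching a new requirement. The documents and the number of clusters are illustrative assumptions, not the paper's data set.

```python
# Minimal sketch of the clustering step: embedding test case documents
# in tf-idf space and grouping them with k-means. Documents and k are
# illustrative assumptions, not the paper's data set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

test_cases = [
    "verify login with valid credentials",
    "verify login fails with wrong password",
    "measure page load time under heavy traffic",
    "measure response time of search endpoint",
]

X = TfidfVectorizer(stop_words="english").fit_transform(test_cases)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for doc, cluster in zip(test_cases, labels):
    print(cluster, doc)
# Retrieval then only searches the cluster matching a new requirement,
# shrinking the search space.
```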

Keywords: software testing, reusability, clustering, k-mean, SVM

Procedia PDF Downloads 431
11827 A Meaning-Making Approach to Understand the Relationship between the Physical Built Environment of the Heritage Sites including the Intangible Values and the Design Development of the Public Open Spaces: Case Study Liverpool Pier Head

Authors: May Newisar, Richard Kingston, Philip Black

Abstract:

Heritage-led regeneration developments have been considered one of the cornerstones of the economic and social revival of historic towns and cities in the UK. However, this approach has proved deficient within the development of the Liverpool World Heritage site, due to the conflict between sustaining the tangible and intangible values and achieving the intended economic developments. Accordingly, the development of such areas is influenced by a top-down approach which treats heritage as a consumable experience and urban regeneration as the economic development for it. This neglects the heritage site's characteristics and values, as well as the design criteria for public open spaces that overlap with heritage sites. Currently, knowledge regarding the relationship between the physical built environment of heritage sites, including the intangible values, and the design development of public open spaces is limited. Public open spaces have been studied from different perspectives, such as increasing walkability, serving as a source of social cohesion, providing a good quality of life, and understanding users' perception. Heritage sites, meanwhile, have been discussed heavily in terms of how to maintain the physical environment, understand the causes of threats, and how sites should be protected, in addition to users' experiences and motivations for visiting such areas. Furthermore, new approaches, such as the historic urban landscape approach, have tried to close this gap. This approach focuses on the entire human environment with all its tangible and intangible qualities. However, this research aims to understand the relationship between heritage sites and public open spaces and how the overlap of the design and development of both could be used as a quality to enhance heritage sites and improve users' experience. A meaning-making approach will be used in order to understand and articulate how the development of the Liverpool World Heritage site and its value could influence and shape the design of the public open space Pier Head in order to attract a different level of tourists, to be used as a tool for economic development. Consequently, this will help in bridging the gap between planning and conservation area policies through an understanding of how flexible the system is in adopting alternative approaches for the design and development strategies for those areas.

Keywords: historic urban landscape, environmental psychology, urban governance, identity

Procedia PDF Downloads 133
11826 Automatic Verification Technology of Virtual Machine Software Patch on IaaS Cloud

Authors: Yoji Yamato

Abstract:

In this paper, we propose an automatic verification technology for software patches for user virtual environments on IaaS Cloud, to decrease the verification costs of patches. These days, IaaS services have spread, and many users can customize virtual machines on IaaS Cloud like their own private servers. Regarding software patches for the OS or middleware installed on virtual machines, users need to adopt and verify these patches by themselves, a task which increases users' operating costs. Our proposed method replicates user virtual environments, extracts verification test cases for the user virtual environments from a test case DB, distributes patches to virtual machines in the replicated environments, and conducts those test cases automatically on the replicated environments. We have implemented the proposed method on OpenStack using Jenkins and confirmed its feasibility. Using the implementation, we confirmed the effectiveness of the test case creation efforts through our proposed idea of 2-tier abstraction of software functions and test cases. We also evaluated the automatic verification performance of environment replications, test case extractions, and test case conductions.
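
The overall flow can be summarized as a small conceptual sketch, shown below with trivial stand-in helpers; none of the function names reflect the paper's actual OpenStack/Jenkins implementation, and each stub merely marks where the real replication, extraction, patching, and test execution would occur.

```python
# Conceptual sketch of the verification flow described above. Every
# helper below is a trivial stand-in invented for illustration; none of
# these names reflect the paper's actual OpenStack/Jenkins code.

def replicate_environment(env_id):
    """Stand-in for replicating a user's virtual environment."""
    return {"id": env_id + "-replica",
            "software": ["os-x.y", "middleware-a"], "patches": []}

def extract_test_cases(test_case_db, replica):
    """2-tier idea: test cases are registered per software function, so
    the replica's software list selects the relevant cases."""
    return [c for sw in replica["software"] for c in test_case_db.get(sw, [])]

def apply_patch(replica, patch):
    """Stand-in for distributing a patch to the replicated VMs."""
    replica["patches"].append(patch)

def run_test(replica, case):
    """Stand-in for conducting one test case on the replica."""
    return True

test_case_db = {"os-x.y": ["boot-check", "login-check"],
                "middleware-a": ["api-smoke-test"]}

replica = replicate_environment("user-42")
cases = extract_test_cases(test_case_db, replica)
apply_patch(replica, "os-x.y-patch-123")
results = {case: run_test(replica, case) for case in cases}
print("patch verified:", all(results.values()), results)
```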

Keywords: OpenStack, cloud computing, automatic verification, Jenkins

Procedia PDF Downloads 491
11825 A Memristive Device with Intrinsic Rectification Behavior and Performance of Crossbar Arrays

Authors: Yansong Gao, Damith C.Ranasinghe, Siad F. Al-Sarawi, Omid Kavehei, Derek Abbott

Abstract:

The passive crossbar array is in principle the simplest functional electrical circuit; together with a memristive device at each cross-point, it holds great promise for future high-density, non-volatile memories. However, the greatest problem of crossbar arrays is sneak path currents. In this paper, we investigate one type of memristive device with intrinsic rectification behavior to address sneak path currents. First, a SPICE behavioral model of the memristive device, written in the Verilog-A language, is presented and fitted to experimental data published in the literature. Next, systematic performance simulations of the crossbar array, which uses the self-rectifying memristive device as the storage element at each cross-point, are conducted, covering read margin and power consumption with respect to different crossbar sizes, interconnect resistance, HRS/LRS ratio (High Resistance State/Low Resistance State), rectification ratio, and different read schemes. Subsequently, trade-offs among read margin, power consumption, and read schemes are analyzed to provide guidelines for circuit design. Finally, a performance comparison between memristive devices with and without intrinsic rectification behavior is given to show the worth of this intrinsic rectification behavior.
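
To give a feeling for why rectification matters, the sketch below evaluates a simplified textbook-style worst-case read-margin model, in which each sneak path crosses three cells and rectification multiplies the resistance of reverse-biased cells. This is not the paper's Verilog-A model, and all numbers are assumptions.

```python
# Illustrative worst-case read-margin model for an N x N passive
# crossbar, assuming each sneak path crosses three cells (one forward-,
# two reverse-biased) and rectification multiplies reverse resistance
# by the rectification ratio. A simplified textbook-style model, not
# the paper's Verilog-A model; all numbers are assumptions.

def parallel(a, b):
    return a * b / (a + b)

def read_voltage(r_sel, n, r_lrs, rect_ratio, v_read=1.0, r_load=1e5):
    # (N-1)^2 three-cell sneak paths, two cells reverse-biased each
    r_path = r_lrs + 2 * rect_ratio * r_lrs
    r_sneak = r_path / (n - 1) ** 2
    r_eff = parallel(r_sel, r_sneak)
    return v_read * r_load / (r_load + r_eff)

R_LRS, HRS_RATIO = 1e5, 100.0
for rect in (1.0, 1e4):            # without vs. with self-rectification
    for n in (16, 64, 256):
        margin = (read_voltage(R_LRS, n, R_LRS, rect)
                  - read_voltage(HRS_RATIO * R_LRS, n, R_LRS, rect))
        print(f"rect={rect:g}, {n}x{n}: read margin = {margin:.3f} V")
```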

Keywords: memristive device, memristor, crossbar, RRAM, read margin, power consumption

Procedia PDF Downloads 436
11824 2 Stage CMOS Regulated Cascode Distributed Amplifier Design Based On Inductive Coupling Technique in Submicron CMOS Process

Authors: Kittipong Tripetch, Nobuhiko Nakano

Abstract:

This paper proposes one-stage and two-stage CMOS Complementary Regulated Cascode Distributed Amplifier (CRCDA) designs based on inductive and transformer coupling techniques. Usually, a distributed amplifier is based on inductor coupling between the gates of the MOSFETs and between the drains of the MOSFETs. This paper proposes a new idea: coupling the differential primary windings of a transformer between the gates of the first-stage and second-stage MOSFETs of the regulated cascode amplifier, and coupling the differential secondary windings of the transformer between the drains of the first-stage and second-stage MOSFETs. This paper also proposes a polynomial model of the silicon transformer's passive equivalent circuit from Nanyang Technological University, which is used to extract the frequency response of the transformer. Cadence simulation results are used to verify the validity of the transformer polynomial model, which can then be used to design distributed amplifiers without Cadence. The four parameters of the scattering matrix of the proposed two-port circuit are derived as functions of the four parameters of the impedance matrix.
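
The final step mentioned above, expressing the scattering parameters in terms of the impedance parameters, follows the standard conversion S = (Z − Z₀I)(Z + Z₀I)⁻¹ for a common real reference impedance Z₀. The sketch below applies it to an invented two-port Z matrix.

```python
# Illustrative sketch: converting the impedance (Z) matrix of a two-port
# to its scattering (S) matrix for a common real reference impedance Z0,
# via S = (Z - Z0*I)(Z + Z0*I)^-1. The numeric Z matrix is invented.
import numpy as np

def z_to_s(Z, z0=50.0):
    I = np.eye(Z.shape[0])
    return (Z - z0 * I) @ np.linalg.inv(Z + z0 * I)

# Hypothetical two-port impedance matrix (ohms)
Z = np.array([[75.0 + 10j, 5.0 + 1j],
              [5.0 + 1j, 60.0 - 20j]])

S = z_to_s(Z)
print(f"S11 = {S[0, 0]:.3f}, S21 = {S[1, 0]:.3f}")
```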

Keywords: CMOS regulated cascode distributed amplifier, silicon transformer modeling with polynomial, low power consumption, distributed amplification technique

Procedia PDF Downloads 513
11823 Novel Poly Schiff Bases as Corrosion Inhibitors for Carbon Steel in Sour Petroleum Conditions

Authors: Shimaa A. Higazy, Olfat E. El-Azabawy, Ahmed M. Al-Sabagh, Notaila M. Nasser, Eman A. Khamis

Abstract:

In this work, two novel Schiff base polymers (PSB1 and PSB2) with extra-high protective barrier features were facilely prepared via polycondensation reactions. They were applied for the first time as effective corrosion inhibitors in the sour corrosive media of petroleum environments containing hydrogen sulfide (H₂S) gas. For studying the polymers' inhibitive action on carbon steel, numerous corrosion testing methods, including potentiodynamic polarization (PDP), open circuit potential, and electrochemical impedance spectroscopy (EIS), were employed at various temperatures (298-328 K) in oil well formation water with H₂S concentrations of 100, 400, and 700 ppm as aggressive media. The activation energy (Ea) and other thermodynamic parameters were computed to describe the mechanism of adsorption. The corrosion morphology and the composition of the steel samples' surfaces were analyzed by field emission scanning electron microscopy and energy-dispersive X-ray analysis. PSB2 inhibited sour corrosion more effectively than PSB1 when subjected to electrochemical testing. At a concentration of 100 ppm, PSB2 exhibited inhibition efficiencies of 82.18% and 81.14% at 298 K in the PDP and EIS measurements, respectively, while at 328 K the inhibition efficiencies were 61.85% and 67.4% at the same dosage and measurements. These poly Schiff bases exhibited fascinating performance as corrosion inhibitors in sour environments and provide a great corrosion inhibition platform for a sustainable future environment.
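
The inhibition efficiencies quoted above are conventionally obtained from the corrosion current density (PDP) or the charge-transfer resistance (EIS). The sketch below shows both standard formulas; the numeric inputs are invented examples, not the paper's measurements.

```python
# Illustrative sketch of how inhibition efficiency is conventionally
# computed from the measurements named above: corrosion current density
# (PDP) or charge-transfer resistance (EIS). Numbers are invented
# examples, not the paper's data.

def ie_from_icorr(i_blank, i_inhibited):
    """IE% from potentiodynamic polarization corrosion currents."""
    return (1 - i_inhibited / i_blank) * 100.0

def ie_from_rct(r_blank, r_inhibited):
    """IE% from EIS charge-transfer resistances."""
    return (1 - r_blank / r_inhibited) * 100.0

print(f"PDP: {ie_from_icorr(120.0, 21.4):.1f} %")  # uA/cm^2, hypothetical
print(f"EIS: {ie_from_rct(85.0, 450.0):.1f} %")    # ohm*cm^2, hypothetical
```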

Keywords: Schiff base polymers, corrosion inhibitors, sour corrosive media, potentiodynamic polarization, H₂S concentrations

Procedia PDF Downloads 102
11822 Comparison between Bernardi’s Equation and Heat Flux Sensor Measurement as Battery Heat Generation Estimation Method

Authors: Marlon Gallo, Eduardo Miguel, Laura Oca, Eneko Gonzalez, Unai Iraola

Abstract:

The heat generation of an energy storage system is an essential topic when designing a battery pack and its cooling system. Heat generation estimation is used together with thermal models to predict battery temperature in operation and to adapt the design of the battery pack and the cooling system to these thermal needs, guaranteeing safety and correct operation. In the present work, a comparison is presented between the use of a heat flux sensor (HFS) for indirect measurement of heat losses in a cell and the widely used, simplified version of Bernardi's equation for estimation. First, a Li-ion cell is thermally characterized with an HFS to measure the thermal parameters that are used in a first-order lumped thermal model. These parameters are the equivalent thermal capacity and the equivalent thermal resistance of a single Li-ion cell. Static tests (when no current is flowing through the cell) and dynamic tests (making current flow through the cell) are conducted, in which the HFS is used to measure heat exchanged between the cell and the ambient, so that thermal capacity and resistance, respectively, can be calculated. An experimental platform records current, voltage, ambient temperature, surface temperature, and HFS output voltage. Second, an equivalent circuit model is built in a Matlab-Simulink environment. This allows a comparison between the heat generation predicted by Bernardi's equation and the HFS measurements. Data post-processing is required to extrapolate the heat generation from the HFS measurements, as the sensor records the heat released to the ambient and not the heat generated within the cell. Finally, the cell temperature evolution is estimated with the lumped thermal model (using both the HFS and Bernardi's equation total heat generation) and compared against experimental temperature data (measured with a T-type thermocouple). At the end of this work, a critical review of the results obtained and the possible reasons for mismatch are reported. The results show that indirectly measuring the heat generation with the HFS gives a more precise estimation than Bernardi's simplified equation. On the one hand, when using Bernardi's simplified equation, the estimated heat generation differs from cell temperature measurements during charges at high current rates. Additionally, for low-capacity cells, where a small change in capacity has a great influence on the terminal voltage, the estimated heat generation shows a high dependency on the State of Charge (SoC) estimation, and therefore on the open circuit voltage calculation (as it is SoC dependent). On the other hand, when indirectly measuring the heat generation with the HFS, the resulting error is a maximum of 0.28ºC in the temperature prediction, in contrast with 1.38ºC for Bernardi's simplified equation. This illustrates the limitations of Bernardi's simplified equation for applications where precise heat monitoring is required. For higher current rates, Bernardi's equation estimates more heat generation and, consequently, a higher predicted temperature. Bernardi's equation accounts for no losses after cutting the charging or discharging current. However, the HFS measurement shows that after cutting the current the cell continues generating heat for some time, increasing the error of Bernardi's equation.
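
For reference, the simplified Bernardi equation referred to throughout reduces the heat generation to the irreversible term Q = I(OCV − V), neglecting the reversible entropic contribution. The sketch below evaluates it at one hypothetical operating point; the cell values are illustrative assumptions.

```python
# Minimal sketch of the simplified Bernardi heat-generation equation
# used in the comparison above: Q = I*(OCV - V), i.e., irreversible
# losses only, with the entropic (reversible) term neglected. Values
# are illustrative assumptions, not the paper's cell data.

def bernardi_heat_w(current_a, terminal_v, ocv_v):
    """Simplified Bernardi equation: heat in watts (positive = heating)."""
    return current_a * (ocv_v - terminal_v)

# Hypothetical 1C discharge operating point of a small cell
I, V, OCV = 2.5, 3.55, 3.70
print(f"estimated heat generation: {bernardi_heat_w(I, V, OCV):.3f} W")

# Note the limitation observed in the paper: once I = 0 this estimate
# is exactly zero, whereas the HFS still records heat for some time.
```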

Keywords: lithium-ion battery, heat flux sensor, heat generation, thermal characterization

Procedia PDF Downloads 393
11821 Reproductive Performance of Dairy Cows at Different Parities: A Case Study in Enrekang Regency, Indonesia

Authors: Muhammad Yusuf, Abdul Latief Toleng, Djoni Prawira Rahardja, Ambo Ako, Sahiruddin Sahiruddin, Abdi Eriansyah

Abstract:

The objective of this study was to determine the reproductive performance of dairy cows at different parities. A total of 60 dairy Holstein-Friesian cows with parities one to three from five small farms raised by farmers were used in the study. All cows were confined in a tie-stall barn with rubber on the concrete floor. The herds were visited twice for a survey with the help of a questionnaire. The reproductive parameters used in the study were days open, calving interval, and services per conception (S/C). The results of this study showed that the mean (±SD) days open of cows in parity 2 was slightly longer than that of cows in parity 3 (228.2±121.5 vs. 205.5±144.5; P=0.061). No cows conceived within 85 days postpartum in parity 3, in comparison to 13.8% of cows in parity 2. However, the proportions of cows that conceived within 150 days postpartum in parity 2 and parity 3 were 30.1% and 36.4%, respectively. Likewise, by 210 days after calving, the proportion of cows that had conceived was higher in parity 3 than in parity 2 (72.8% vs. 44.8%; P<0.05). The mean (±SD) calving intervals of cows in parity 2 and parity 3 were 508.2±121.5 and 495.5±144.1 days, respectively. The proportion of cows with calving intervals of 400 and 450 days was higher in parity 3 than in parity 2 (23.1% vs. 17.2% and 53.9% vs. 31.0%). Cows in parity 1 had a significantly (P<0.01) lower number of S/C in comparison to cows in parity 2 and parity 3 (1.6±1.2 vs. 3.5±3.4 and 3.3±2.1). It can be concluded that the reproductive performance of the cows is affected by parity.

Keywords: dairy cows, parity, days open, calving interval, service per conception

Procedia PDF Downloads 258
11820 Innovating and Disrupting Higher Education: The Evolution of Massive Open Online Courses

Authors: Nabil Sultan

Abstract:

A great deal has been written on Massive Open Online Courses (MOOCs) since 2012 (considered by some as the year of the MOOC). The emergence of MOOCs generated a great deal of interest among academics and technology experts as well as ordinary people. Some of the authors who wrote on MOOCs perceived them as the next big thing that would disrupt education; others saw them as another fad that would go away once it ran its course (as most fads often do). But MOOCs did not turn out to be a fad, and they are still around. Most importantly, they evolved into something that is beginning to look like a viable business model. This paper explores this phenomenon within the theoretical frameworks of disruptive innovations and jobs to be done, as developed by Clayton Christensen and his colleagues, and its implications for the future of higher education (HE).

Keywords: MOOCs, disruptive innovations, higher education, jobs theory

Procedia PDF Downloads 271
11819 Using Open Source Data and GIS Techniques to Overcome Data Deficiency and Accuracy Issues in the Construction and Validation of Transportation Network: Case of Kinshasa City

Authors: Christian Kapuku, Seung-Young Kho

Abstract:

An accurate representation of the transportation system serving a region is one of the important aspects of transportation modeling. Such representation often requires developing an abstract model of the system elements, which in turn requires a substantial amount of data, surveys, and time. However, in some cases, such as in developing countries, data deficiencies and time and budget constraints do not always allow such accurate representation, leaving room for assumptions that may negatively affect the quality of the analysis. With the emergence of open source data on the Internet, especially in mapping technologies, as well as advances in Geographic Information Systems, opportunities to tackle these issues have arisen. The objective of this paper is therefore to demonstrate such an application through the practical case of developing the transportation network for the city of Kinshasa. GIS geo-referencing was used to construct the digitized map of Transportation Analysis Zones using available scanned images. Centroids were then dynamically placed at the center of activities using an activity density map. Next, the road network with its characteristics was built using OpenStreetMap data and other official road inventory data by intersecting their layers and cleaning up unnecessary links such as residential streets. The accuracy of the final network was then checked by comparing it with satellite images from Google and Bing. For validation, the final network was exported into Emme3 to check for potential network coding issues. Results show a high accuracy between the built network and the satellite images, which can mostly be attributed to the use of open source data.
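
As an indication of how accessible such open data has become, the sketch below pulls a drivable street network for Kinshasa from OpenStreetMap with the OSMnx package and filters out minor links; this is a minimal sketch of the idea, not the authors' actual toolchain.

```python
# Illustrative sketch: downloading an open-source road network for
# Kinshasa with OSMnx and exporting major links for further cleanup in
# GIS. A minimal sketch of the idea, not the authors' actual toolchain.
import osmnx as ox

# Download the drivable street network from OpenStreetMap
G = ox.graph_from_place("Kinshasa, Democratic Republic of the Congo",
                        network_type="drive")

# Convert to GeoDataFrames (nodes, edges) and drop residential streets
nodes, edges = ox.graph_to_gdfs(G)
major = edges[edges["highway"].isin(["primary", "secondary", "trunk"])]

major.to_file("kinshasa_major_roads.gpkg", driver="GPKG")
print(f"{len(edges)} raw links, {len(major)} major links kept")
```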

Keywords: geographic information system (GIS), network construction, transportation database, open source data

Procedia PDF Downloads 168
11818 Test Rig Development for Up-to-Date Experimental Study of Multi-Stage Flash Distillation Process

Authors: Marek Vondra, Petr Bobák

Abstract:

Vacuum evaporation is a reliable and well-proven technology with a wide application range, frequently used in the food, chemical and pharmaceutical industries. Recently, numerous remarkable studies have been carried out to investigate the utilization of this technology in the area of wastewater treatment. One of the most successful applications of the vacuum evaporation principle is connected with seawater desalination. Since the 1950s, multi-stage flash distillation (MSF) has been the leading technology in this field, and it is still irreplaceable in many respects, despite a rapid increase in cheaper reverse-osmosis-based installations in recent decades. MSF plants are conveniently operated in countries with fluctuating seawater quality and at locations where a sufficient amount of waste heat is available. Nowadays, most MSF research is connected with the utilization of alternative heat sources and with hybridization, i.e., the merging of different types of desalination technologies. Some studies are concerned with the basic principles of the static flash phenomenon, but only a few scientists have lately focused on the fundamentals of continuous multi-stage evaporation. Limited measurement possibilities at operating plants and insufficiently equipped experimental facilities may be the reasons. The aim of the presented study was to design, construct and test an up-to-date test rig with an advanced measurement system which provides real-time monitoring of all the important operational parameters under various conditions. The whole system consists of a conventionally designed MSF unit with 8 evaporation chambers; a versatile heating circuit for different kinds of feed water (e.g., seawater, waste water); a sophisticated system for acquisition and real-time visualization of all the related quantities (temperature, pressure, flow rate, weight, conductivity, pH, water level, power input); access to a wide spectrum of operational media (salt, fresh and softened water, steam, natural gas, compressed air, electrical energy); and integrated transparent features which enable direct visual monitoring of selected physical mechanisms (water evaporation in the chambers, water level right before the brine and distillate pumps). Thanks to the adjustable process parameters, it is possible to operate the test unit at the desired operational conditions. This allows researchers to carry out statistical design and analysis of experiments. Valuable results obtained in this manner could be further employed in simulations and process modeling. First experimental tests confirm the correctness of the presented approach and promise interesting outputs in the future. The presented experimental apparatus enables flexible and efficient research of the whole MSF process.
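
As background for sizing such a rig, the distillate flashed in each stage follows from a per-stage energy balance, m_d = m_b·cp·ΔT/h_fg. The sketch below evaluates it for the rig's 8 chambers; all numeric values are illustrative assumptions, not the test rig's design data.

```python
# Illustrative sketch of the per-stage energy balance behind MSF sizing:
# distillate flashed in a stage follows from the brine flow, its
# temperature drop, and the latent heat: m_d = m_b * cp * dT / h_fg.
# All numbers are illustrative assumptions, not the rig's design data.

CP_BRINE = 4.0e3    # J/(kg*K), approximate specific heat of brine
H_FG = 2.33e6       # J/kg, approximate latent heat near 60 degC

def stage_distillate(m_brine_kg_s, delta_t_k):
    return m_brine_kg_s * CP_BRINE * delta_t_k / H_FG

m_b = 0.5               # kg/s of circulating brine (assumed)
drop_per_stage = 2.5    # K flash-down per stage (assumed)
per_stage = stage_distillate(m_b, drop_per_stage)
total = 8 * per_stage   # the rig has 8 evaporation chambers
print(f"per stage: {per_stage*3600:.1f} kg/h, "
      f"8 stages: {total*3600:.1f} kg/h")
```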

Keywords: design of experiment, multi-stage flash distillation, test rig, vacuum evaporation

Procedia PDF Downloads 388
11817 Experimental Evaluation of UDP in Wireless LAN

Authors: Omar Imhemed Alramli

Abstract:

Like the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP) is a transport protocol in the transport layer of the Open Systems Interconnection (OSI) model, or of the TCP/IP network model. Unlike TCP, the evaluation of UDP aspects could not be carried out using the pcattcp tool on the Windows operating system platform. The study was therefore carried out to find a tool which supports the evaluation of UDP aspects. After collecting information about different tools, the iperf tool was chosen and run under Cygwin, installed both on a Windows XP platform and on Windows XP in a VirtualBox virtual machine on a single computer. Iperf was used for the experimental evaluation of UDP and to observe what happens while packets are sent between the host and the guest over wired and wireless networks. Many test scenarios were executed, and the major UDP aspects, such as jitter, packet loss, and throughput, were evaluated.
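
The kind of measurement iperf performs for UDP can be illustrated with plain sockets: numbered datagrams are sent and counted on arrival to estimate packet loss. The self-contained localhost sketch below uses invented parameters and is a stand-in illustration, not the iperf implementation.

```python
# Minimal sketch of the kind of measurement iperf performs for UDP:
# sending sequence-numbered datagrams and counting arrivals to estimate
# packet loss. Localhost demo with invented parameters; a stand-in
# illustration, not the iperf implementation.
import socket, struct, threading, time

HOST, PORT, N_PACKETS = "127.0.0.1", 50007, 1000
received = set()

def receiver():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    sock.settimeout(1.0)
    try:
        while True:
            data, _ = sock.recvfrom(2048)
            received.add(struct.unpack("!I", data[:4])[0])
    except socket.timeout:
        sock.close()

t = threading.Thread(target=receiver)
t.start()
time.sleep(0.1)

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(N_PACKETS):
    sender.sendto(struct.pack("!I", seq) + b"x" * 500, (HOST, PORT))
sender.close()
t.join()

loss = (N_PACKETS - len(received)) / N_PACKETS * 100
print(f"received {len(received)}/{N_PACKETS} datagrams, "
      f"loss = {loss:.2f} %")
```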

Keywords: TCP, UDP, IPERF, wireless LAN

Procedia PDF Downloads 357
11816 Orthodontic Treatment Using CAD/CAM System

Authors: Cristiane C. B. Alves, Livia Eisler, Gustavo Mota, Kurt Faltin Jr., Cristina L. F. Ortolani

Abstract:

The correct positioning of brackets is essential for the success of orthodontic treatment. The indirect bracket placing technique has the main objective of eliminating the positioning errors which commonly occur in the direct bracket bonding technique. The objective of this study is to demonstrate that the exact positioning of brackets is of extreme relevance for the success of treatment. The present work reports the case of an adult female patient who attended the clinic with the complaint of having been in orthodontic treatment for more than 5 years without noticing any progress. From the intra-oral clinical examination and documentation analysis, a class III malocclusion, an anterior open bite, and the absence of all third molars and of the first upper and lower bilateral premolars were observed. For the treatment, the indirect bonding technique with self-ligating ceramic brackets was applied. The preparation of the trays was done after intraoral digital scanning and the printing of models with a 3D printer. Brackets were positioned virtually, using specialized software. After twelve months of treatment, correction of the malocclusion was observed, as well as closure of the anterior open bite. It is concluded that adequate and precise positioning of brackets is necessary for a successful treatment.

Keywords: anterior open-bite, CAD/CAM, orthodontics, malocclusion, angle class III

Procedia PDF Downloads 195