Search results for: maximal data sets
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25280

24020 Motif Search-Aided Screening of the Pseudomonas syringae pv. maculicola Genome for Genes Encoding Tertiary Alcohol Ester Hydrolases

Authors: M. L. Mangena, N. Mokoena, K. Rashamuse, M. G. Tlou

Abstract:

Tertiary alcohol ester (TAE) hydrolases are a group of esterases (EC 3.1.1.-) that catalyze the kinetic resolution of TAEs; as a result, they are sought after for the production of optically pure tertiary alcohols (TAs), which are useful as building blocks for a number of biologically active compounds. What sets these enzymes apart is the presence of a GGG(A)X motif in the active site, which appears to be the main reason behind their activity towards the sterically demanding TAEs. The genome of Pseudomonas syringae pv. maculicola (Psm) comprises a multitude of genes that encode esterases. We therefore hypothesize that some of these genes encode TAE hydrolases. In this study, Psm was screened for TAE hydrolase activity using the linalyl acetate (LA) plate assay, and a positive reaction was observed. The genome of Psm was then screened for esterases with a GGG(A)X motif using a motif search tool, and two potential TAE hydrolase genes (PsmEST1 and 2; 1100 and 1000 bp, respectively) were identified. PsmEST1 was amplified by PCR and sequenced for confirmation. Analysis of the sequence data with the SignalP 4.1 server revealed that the protein carries a signal peptide (22 amino acid residues) at the N-terminus. Primers specific for the gene encoding the mature protein (without the signal peptide) were designed to contain NdeI and XhoI restriction sites for directional cloning of the PCR products into pET28a. The gene was expressed in E. coli JM109 (DE3), and the clones were screened for TAE hydrolase activity using the LA plate assay. A positive clone was selected, the protein was overexpressed and purified using nickel affinity chromatography, and the activity of the esterase towards LA was confirmed using thin layer chromatography.

Keywords: hydrolases, tertiary alcohol esters, tertiary alcohols, screening, Pseudomonas syringae pv. maculicola genome, esterase activity, linalyl acetate

Procedia PDF Downloads 342
24019 The Prospective Assessment of Zero-Energy Dwellings

Authors: Jovana Dj. Jovanovic, Svetlana M. Stevovic

Abstract:

The highest priority of so-called passive houses is to meet an appropriate energy demand. Every material and layer incorporated into a dwelling has a certain quantity of energy stored in it. Passive houses combine optimized insulation levels with minimal thermal bridges, minimal air leakage through the building, utilization of passive solar and internal gains, and good air circulation supported by a mechanical ventilation system. The focus of this paper is on passive house features, benefits and targets, their feasibility, and the energy demands set during each project. Numerous passive house standards underline the very significant role of zero-energy dwellings in modern sustainable development. It is clear that the performance of both newly built and existing housing stock must be addressed if energy objectives are to be met worldwide. This article examines passive house features in light of the many passive house projects already launched.

Keywords: benefits, energy demands, passive houses, sustainable development

Procedia PDF Downloads 317
24018 Decentralized Control of Interconnected Systems with Non-Linear Unknown Interconnections

Authors: Haci Mehmet Guzey, Levent Acar

Abstract:

In this paper, a novel decentralized controller is developed for linear systems with nonlinear unknown interconnections. A decoupled linear model is assigned to each subsystem. Using the difference between the actual and model state dynamics, the problem is formulated as an inverse problem. The interconnection dynamics are then approximated by using Galerkin's expansion method for inverse problems. Two different sets of orthogonal basis functions are utilized to approximate the interconnection dynamics. The approximated interconnections are utilized in the controller to cancel the interconnections and decouple the subsystems. Subsequently, the interconnected systems behave as a collection of decoupled systems.
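
As a minimal illustration of the basis-function idea (not the authors' implementation), the sketch below estimates an unknown scalar interconnection term from the residual between actual and model dynamics by least-squares projection onto Legendre polynomials; the interconnection g, the noise level, and the one-dimensional setting are all invented for the example.

```python
import numpy as np

# Unknown interconnection g(x) acting on one scalar subsystem (invented).
g_true = lambda x: 0.4 * np.sin(2 * x) + 0.1 * x**2

# Residual between actual and model dynamics:
# x_dot_actual = a*x + g(x), x_dot_model = a*x  =>  residual = g(x) (+ noise).
x = np.linspace(-1.0, 1.0, 200)
residual = g_true(x) + 0.01 * np.random.default_rng(0).normal(size=x.size)

# Galerkin-style expansion on the first N Legendre polynomials,
# fitted by least squares (the projection step of the method).
N = 6
Phi = np.polynomial.legendre.legvander(x, N - 1)     # 200 x N basis matrix
coeffs, *_ = np.linalg.lstsq(Phi, residual, rcond=None)

g_hat = np.polynomial.legendre.legval(x, coeffs)
print("max approximation error:", np.max(np.abs(g_hat - g_true(x))))
# A decentralized controller would subtract this estimate from its input
# channel to cancel the interconnection and recover decoupled dynamics.
```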

Keywords: decentralized control, inverse problems, large scale systems, nonlinear interconnections, basis functions, system identification

Procedia PDF Downloads 522
24017 Determination of the Phosphate Activated Glutaminase Localization in the Astrocyte Mitochondria Using Kinetic Approach

Authors: N. V. Kazmiruk, Y. R. Nartsissov

Abstract:

Phosphate activated glutaminase (GA, E.C. 3.5.1.2) plays a key role in glutamine/glutamate homeostasis in the mammalian brain, catalyzing the hydrolytic deamidation of glutamine to glutamate and ammonium ions. GA is mainly localized in mitochondria, where it exists in a catalytically active form on the inner mitochondrial membrane (IMM) and in a soluble form, which is supposed to be dormant. At present, the exact localization of the membrane glutaminase active site remains a controversial and unresolved issue. The first hypothesis, called c-side localization, suggests that the catalytic site of GA faces the inter-membrane space, so that the products of the deamidation reaction have immediate access to cytosolic metabolism. According to the alternative m-side localization hypothesis, GA orients towards the matrix, making glutamate and ammonium directly available for tricarboxylic acid cycle metabolism in mitochondria. In our study, we used a multi-compartment kinetic approach to simulate the metabolism of glutamate and glutamine in the astrocytic cytosol and mitochondria. We used the physiologically important ratio between the glutamine concentration inside the mitochondrial matrix, [Gln]mit, and that in the cytosol, [Gln]cyt, as a marker for the precise functioning of the system. Since this ratio directly depends on the flow parameters of the mitochondrial glutamine carrier (MGC), the key observation was the dependence of the [Gln]mit/[Gln]cyt ratio on the maximal velocity of the MGC at different initial concentrations of mitochondrial glutamate. Another important task was to observe a similar dependence at different inhibition constants of the soluble GA. The simulation results confirmed the experimental c-side localization hypothesis, in which the glutaminase active site faces the outer surface of the IMM. Moreover, in the case of such localization of the enzyme, a 3-fold decrease in ammonium production was predicted.
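
The following is a minimal two-compartment sketch of the kind of kinetic model described, with Michaelis-Menten kinetics for the MGC and the glutaminase sink; all rate constants and concentrations are illustrative assumptions, not the authors' parameter set. It scans the near-steady-state [Gln]mit/[Gln]cyt ratio against the maximal velocity of the MGC.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-compartment toy model; every constant below is illustrative.
def model(t, y, vmax_mgc):
    gln_cyt, gln_mit = y
    supply = 0.5 * (2.0 - gln_cyt)               # relaxation to 2 mM cytosolic Gln
    mgc = vmax_mgc * gln_cyt / (0.2 + gln_cyt)   # carrier flux into the matrix
    ga = 1.0 * gln_mit / (0.5 + gln_mit)         # glutaminase sink in the matrix
    return [supply - mgc, mgc - ga]

for vmax in (0.2, 0.5, 1.0, 2.0):
    sol = solve_ivp(model, (0.0, 500.0), [1.0, 0.5], args=(vmax,), rtol=1e-8)
    gln_cyt, gln_mit = sol.y[:, -1]              # near-steady-state values
    print(f"Vmax_MGC = {vmax:.1f}  ->  [Gln]mit/[Gln]cyt = {gln_mit / gln_cyt:.2f}")
```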

Keywords: glutamate metabolism, glutaminase, kinetic approach, mitochondrial membrane, multi-compartment modeling

Procedia PDF Downloads 106
24016 Analyzing Tools and Techniques for Classification in Educational Data Mining: A Survey

Authors: D. I. George Amalarethinam, A. Emima

Abstract:

Educational Data Mining (EDM) is one of the newest topics to emerge in recent years, and it is concerned with developing methods for analyzing various types of data gathered from educational settings. EDM methods and techniques with machine learning algorithms are used to extract meaningful and usable information from huge databases. For scientists and researchers, realistic applications of machine learning in the EDM sector offer new frontiers and present new problems. One of the most important research areas in EDM is predicting student success. Prediction algorithms and techniques must be developed to forecast students' performance, which helps tutors and institutions boost student performance. This paper examines various classification techniques used in prediction methods and data mining tools used in EDM.
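
As a small illustration of the classification pipelines surveyed here, the sketch below trains a decision tree to predict pass/fail from synthetic student features; the attendance, assignment, and quiz attributes are invented stand-ins for real EDM data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 500
# Synthetic student records: attendance %, assignment average, quiz average.
X = rng.uniform([40, 0, 0], [100, 100, 100], size=(n, 3))
y = (0.3 * X[:, 0] + 0.4 * X[:, 1] + 0.3 * X[:, 2] > 60).astype(int)  # pass/fail

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```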

Keywords: classification technique, data mining, EDM methods, prediction methods

Procedia PDF Downloads 107
24015 Cadmium and Lead Extraction from Environmental Samples with Complexes Matrix by Nanomagnetite Solid-Phase and Determine Their Trace Amounts

Authors: Hossein Tavallali, Mohammad Ali Karimi, Gohar Deilamy-Rad

Abstract:

In this study, a new type of alumina-coated magnetite nanoparticles (Fe3O4/Al2O3 NPs) with sodium dodecyl sulfate-1-(2-pyridylazo)-2-naphthol (SDS-PAN) has been successfully synthesized as a new solid-phase extraction (SPE) sorbent and applied for the preconcentration and separation of Cd and Pb in environmental samples. Compared with conventional SPE methods, the advantages of this new magnetic mixed hemimicelles solid-phase extraction procedure (MMHSPE) include easy preparation and regeneration of sorbents, short sample pretreatment times, high extraction yields, and high breakthrough volumes. It shows great analytical potential for the preconcentration of Cd and Pb compounds from large-volume water samples. Due to the high surface area of these new sorbents and their excellent adsorption capacity after surface modification with SDS-PAN, a satisfactory concentration factor and extraction recoveries can be obtained with only 0.05 g of Fe3O4/Al2O3 NPs. The metals were eluted directly with 3 mL of 2 mol L-1 HNO3 and determined by flame atomic absorption spectrometry (FAAS). Various parameters influencing the separation and preconcentration of the trace metals, such as the amount of PAN, pH value, sample volume, standing time, desorption solvent and maximal extraction volume, amount of sorbent, and concentration of eluent, were studied. The detection limits of this method for Cd and Pb were 0.3 and 0.7 ng mL−1, and the R.S.D.s were 3.4 and 2.8% (C = 28.00 ng mL-1, n = 6), respectively. The preconcentration factor of the modified nanoparticles was 166.6. The proposed method has been applied to the determination of these metal ions at trace levels in soil, river, tap, mineral, spring, and wastewater samples with satisfactory results.
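
The reported preconcentration factor is simply the ratio of sample volume to eluent volume; the quick check below reproduces the value of 166.6 under the assumption of a 500 mL sample volume, which is not stated in the abstract.

```python
# Preconcentration factor = sample volume / eluent volume.
# The abstract reports a factor of 166.6 with a 3 mL eluent; a 500 mL
# sample volume (an assumption, not stated above) reproduces it.
sample_volume_ml = 500.0   # assumed
eluent_volume_ml = 3.0     # 3 mL of 2 mol/L HNO3 (from the abstract)
print("preconcentration factor:", sample_volume_ml / eluent_volume_ml)  # ~166.7
```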

Keywords: alumina-coated magnetite nanoparticles, magnetic mixed hemimicelles solid-phase extraction, Cd and Pb, soil sample

Procedia PDF Downloads 304
24014 Improving Security in Healthcare Applications Using Federated Learning System with Blockchain Technology

Authors: Aofan Liu, Qianqian Tan, Burra Venkata Durga Kumar

Abstract:

Data security is of the utmost importance in healthcare, as sensitive patient information is constantly exchanged and analyzed by many different parties. The use of federated learning, which enables data to be evaluated locally on devices rather than being transferred to a central server, has emerged as a potential solution for protecting the privacy of user information. However, federated learning alone might not be adequate to protect against data breaches and unauthorized access. In this context, the application of blockchain technology could provide the system with extra protection. This study proposes a distributed federated learning system built on blockchain technology to enhance security in healthcare. It makes it possible for a wide variety of healthcare providers to collaborate on data analysis without raising concerns about the confidentiality of the data. The technical aspects of the system, including the design and implementation of distributed learning algorithms, consensus mechanisms, and smart contracts, are also investigated. The proposed technique is a workable alternative that addresses healthcare security concerns while also fostering collaborative research and data exchange.
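
A minimal sketch of the federated learning core that such a system builds on (federated averaging over a toy logistic model): three simulated clients train locally and share only model weights, which a server averages. The blockchain and smart-contract layers of the proposed system are out of scope here, and all data and hyperparameters are invented.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=20):
    """One client's local training: logistic regression via gradient descent.
    Raw data (X, y) never leaves the client; only weights are shared."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(1)
clients = []
for _ in range(3):  # three hospitals, each with private data
    X = rng.normal(size=(100, 2))
    y = (X @ np.array([1.5, -2.0]) + rng.normal(0, 0.5, 100) > 0).astype(float)
    clients.append((X, y))

w_global = np.zeros(2)
for rnd in range(10):  # federated rounds
    local_ws = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)  # FedAvg aggregation on the server
print("global weights:", w_global)
```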

Keywords: data privacy, distributed system, federated learning, machine learning

Procedia PDF Downloads 102
24013 Empowering Teachers to Bolster Vocational Education in Cameroon

Authors: Ambissah Asah Brigitte

Abstract:

This research is guided by observations of the types of education offered at the secondary level in Cameroon. The secondary education system in Cameroon comprises two types of education: General Education, and Technical and Vocational Education. Although General Education and Technical and Vocational Education are given equal importance by public authorities, General Education continues to thrive, enjoying the greatest enrolment. Meanwhile, Technical and Vocational Education has yet to reach the momentum expected to foster the country's full-fledged development, as specified in the National Development Strategy, the blueprint of State policies in Cameroon for the 2020-2030 decade. Vocational Education is credited with the ability to foster a country's development, since it teaches students the precise skills and knowledge needed to carry out a specific craft, technical skill, or trade. Yet formal training in Vocational Education for teachers remains weak in secondary education. This limits the ability of the educational system to nurture vocations and provide the country's economy with the manpower necessary to achieve development goals. This article analyses how concretely the institutional framework spurs vocational skills in secondary school teachers. It reviews the instruments instituting Vocational Education at the secondary level in Cameroon, then assesses their effective implementation on the ground. Questionnaires addressed to both active teachers and vocational education policy-makers serve to collect data, which are analysed using descriptive statistics. The final objective is to contribute to the debate on rethinking the role of teachers in bolstering Vocational Education, which is the cornerstone of industrial development. This is true everywhere in the world. In Cameroon, and in Africa in general, teachers must be empowered in this field with the specific sets of competencies they will need to pass on to learners. They equally need to be given opportunities to acquire and adapt their knowledge and teaching skills accordingly.

Keywords: vocational education, Cameroon, institutional framework, national development, competencies and skills

Procedia PDF Downloads 56
24012 A Concept of Data Mining with XML Document

Authors: Akshay Agrawal, Anand K. Srivastava

Abstract:

The increasing amount of XML datasets available to casual users increases the necessity of investigating techniques to extract knowledge from these data. Data mining is widely applied in the database research area in order to extract frequent correlations of values from both structured and semi-structured datasets. The increasing availability of heterogeneous XML sources has raised a number of issues concerning how to represent and manage these semi-structured data. In recent years, owing to the importance of managing these resources and extracting knowledge from them, many methods have been proposed to represent and cluster them in different ways.
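
One common way to compare heterogeneous XML documents for clustering is by structural similarity; the sketch below represents each document by its set of root-to-element tag paths and computes pairwise Jaccard similarities. The sample documents are invented, and real systems use richer similarity measures, so this is an illustrative baseline only.

```python
import xml.etree.ElementTree as ET
from itertools import combinations

def tag_paths(xml_text):
    """Represent a document by the set of root-to-element tag paths."""
    paths, stack = set(), []
    def walk(node):
        stack.append(node.tag)
        paths.add("/".join(stack))
        for child in node:
            walk(child)
        stack.pop()
    walk(ET.fromstring(xml_text))
    return paths

docs = {
    "a": "<book><title/><author/></book>",
    "b": "<book><title/><author/><year/></book>",
    "c": "<order><item/><price/></order>",
}
for (n1, p1), (n2, p2) in combinations(
        [(n, tag_paths(t)) for n, t in docs.items()], 2):
    jaccard = len(p1 & p2) / len(p1 | p2)   # structural similarity in [0, 1]
    print(n1, n2, round(jaccard, 2))
```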

Keywords: XML, similarity measure, clustering, cluster quality, semantic clustering

Procedia PDF Downloads 360
24011 Speed-Up Data Transmission by Using Bluetooth Module on Gas Sensor Node of Arduino Board

Authors: Hiesik Kim, YongBeum Kim

Abstract:

Internet of Things (IoT) applications are widely deployed and spreading worldwide, so techniques for speeding up local wireless data transmission must be developed. Bluetooth is a wireless data communication technique created by the Bluetooth Special Interest Group (SIG); it uses the 2.4 GHz frequency range and exploits frequency hopping to avoid collisions with other devices. For the experiment, equipment transmitting measured data was built using an Arduino open-source hardware board, a gas sensor, and a Bluetooth module, and an algorithm controlling the transmission speed is demonstrated. The transmission-speed experiment proceeded by developing an Android application that receives the measured data; the experimental results show that controlling this speed is feasible. In the future, the communication algorithm will need improvement, because a few errors occur when data are transmitted or received.
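
A host-side sketch of how the receiving end of such an experiment might measure transmission speed over a Bluetooth serial link using pySerial; the port name, baud rate, and line format are assumptions, and the paper's actual receiver is an Android application.

```python
import time
import serial  # pySerial; the Bluetooth module appears as a serial port

PORT = "/dev/rfcomm0"   # assumed port name; e.g. COM5 on Windows
with serial.Serial(PORT, baudrate=9600, timeout=2) as link:
    n_bytes, t0 = 0, time.time()
    for _ in range(100):                 # read 100 sensor lines
        line = link.readline()           # e.g. b"gas=312\n" (assumed format)
        n_bytes += len(line)
    elapsed = time.time() - t0
    print(f"throughput: {n_bytes / elapsed:.1f} B/s "
          f"({n_bytes} bytes in {elapsed:.1f} s)")
```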

Keywords: Arduino, Bluetooth, gas sensor, internet of things, transmission speed

Procedia PDF Downloads 470
24010 Development of a Robust Protein Classifier to Predict EMT Status of Cervical Squamous Cell Carcinoma and Endocervical Adenocarcinoma (CESC) Tumors

Authors: Zhenlin Ju, Christopher P. Vellano, Rehan Akbani, Yiling Lu, Gordon B. Mills

Abstract:

The epithelial–mesenchymal transition (EMT) is a process by which epithelial cells acquire mesenchymal characteristics, such as profound disruption of cell-cell junctions, loss of apical-basolateral polarity, and extensive reorganization of the actin cytoskeleton to induce cell motility and invasion. A hallmark of EMT is its capacity to promote metastasis, which is due in part to the activation of several transcription factors and subsequent downregulation of E-cadherin. Unfortunately, current approaches have yet to uncover robust protein marker sets that can classify tumors as possessing strong EMT signatures. In this study, we utilize reverse phase protein array (RPPA) data and consensus clustering methods to successfully classify a subset of cervical squamous cell carcinoma and endocervical adenocarcinoma (CESC) tumors into an EMT protein signaling group (EMT group). The overall survival (OS) of patients in the EMT group is significantly worse than that of patients in the other hormone and PI3K/AKT signaling groups. Using a shrinkage and selection method for linear regression (LASSO) together with training/test set and Monte Carlo resampling approaches, we identified a set of protein markers that predicts the EMT status of CESC tumors. We fit a logistic model to these protein markers and developed a classifier, which was fixed on the training set and validated on the testing set. The classifier robustly predicted the EMT status of the testing set with an area under the curve (AUC) of 0.975 by receiver operating characteristic (ROC) analysis. This method not only identifies a core set of proteins underlying an EMT signature in cervical cancer patients but also provides a tool to examine protein predictors that drive molecular subtypes in other diseases.
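
A minimal sketch of the analysis pattern described, with synthetic data standing in for the RPPA measurements: L1-penalized logistic regression under Monte Carlo train/test resampling selects frequently chosen markers, and a final logistic model is scored by ROC AUC. Sizes, the penalty strength, and the selection threshold are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for RPPA data: 200 tumors x 50 proteins, binary EMT label.
X, y = make_classification(n_samples=200, n_features=50, n_informative=8,
                           random_state=0)

selected = np.zeros(X.shape[1])
for seed in range(100):                            # Monte Carlo resampling
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=seed)
    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    selected += (lasso.fit(X_tr, y_tr).coef_[0] != 0)

markers = np.where(selected >= 60)[0]              # kept in >= 60% of the runs
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = LogisticRegression().fit(X_tr[:, markers], y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te[:, markers])[:, 1])
print(f"{len(markers)} markers, test AUC = {auc:.3f}")
```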

Keywords: consensus clustering, TCGA CESC, Silhouette, Monte Carlo LASSO

Procedia PDF Downloads 452
24009 Evaluating the Total Costs of a Ransomware-Resilient Architecture for Healthcare Systems

Authors: Sreejith Gopinath, Aspen Olmsted

Abstract:

This paper is based on our previous work that proposed a risk-transference-based architecture for healthcare systems to store sensitive data outside the system boundary, rendering the system unattractive to would-be bad actors. This architecture also allows a compromised system to be abandoned and a new system instance spun up in place to ensure business continuity without paying a ransom or engaging with a bad actor. This paper delves into the details of various attacks we simulated against the prototype system. In the paper, we discuss at length the time and computational costs associated with storing and retrieving data in the prototype system, abandoning a compromised system, and setting up a new instance with existing data. Lastly, we simulate some analytical workloads over the data stored in our specialized data storage system and discuss the time and computational costs associated with running analytics over data in a specialized storage system outside the system boundary. In summary, this paper discusses the total costs of data storage, access, and analytics incurred with the proposed architecture.

Keywords: cybersecurity, healthcare, ransomware, resilience, risk transference

Procedia PDF Downloads 121
24008 Optimum Structural Wall Distribution in Reinforced Concrete Buildings Subjected to Earthquake Excitations

Authors: Nesreddine Djafar Henni, Akram Khelaifia, Salah Guettala, Rachid Chebili

Abstract:

Reinforced concrete shear walls, vertical plate-like elements, play a pivotal role in efficiently managing a building's response to seismic forces. This study investigates how the performance of reinforced concrete buildings equipped with shear walls featuring different shear wall-to-frame stiffness ratios aligns with the requirements stipulated in the Algerian seismic code RPA99v2003, particularly in high-seismicity regions. Seven distinct 3D finite element models are developed and evaluated through nonlinear static analysis. Engineering Demand Parameters (EDPs) such as lateral displacement, inter-story drift ratio, shear force, and bending moment along the building height are analyzed. The findings reveal two predominant categories of induced responses: force-based and displacement-based EDPs. Furthermore, as the shear wall-to-frame stiffness ratio increases, force-based EDPs increase and displacement-based ones decrease. Examining the distribution of shear walls from both force and displacement perspectives, model G, with the highest stiffness ratio and stiffness concentrated at the building's center, intensifies the induced forces. This configuration necessitates additional reinforcement, leading to a conservative design approach. Conversely, model C, with the lowest stiffness ratio and stiffness distributed towards the periphery, minimizes the induced shear forces and bending moments, representing the optimal scenario with maximal performance and minimal strength requirements.

Keywords: dual RC buildings, RC shear walls, modeling, static nonlinear pushover analysis, optimization, seismic performance

Procedia PDF Downloads 41
24007 Exploring the Capabilities of Sentinel-1A and Sentinel-2A Data for Landslide Mapping

Authors: Ismayanti Magfirah, Sartohadi Junun, Samodra Guruh

Abstract:

Landslides are among the most frequent and devastating natural disasters in Indonesia, and many studies have been conducted on this phenomenon. However, landslide inventory mapping has received little attention. Natural conditions (dense forest cover) and limited human and economic resources are among the major obstacles to building landslide inventories in Indonesia. Considering the importance of landslide inventory data in susceptibility, hazard, and risk analysis, it is essential to generate inventories from the available resources, and the first step is to identify the landslides' locations. The availability of Sentinel-1A and Sentinel-2A data gives new insights into land monitoring. Free access, high spatial resolution, and short revisit times make these data among the most widely used open data sources in landslide mapping. Sentinel-1A and Sentinel-2A data have been used broadly for landslide detection and land use/land cover mapping. This study aims to generate a landslide map by integrating Sentinel-1A and Sentinel-2A data using a change detection method. The result will be validated by field investigation to produce a preliminary landslide inventory of the study area.
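
A minimal sketch of the change detection idea on synthetic rasters: difference two co-registered acquisitions (e.g., a pre- and post-event vegetation index derived from Sentinel-2) and threshold strong negative change as candidate landslide pixels. Array sizes, values, and the thresholding rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
before = rng.normal(0.35, 0.05, size=(100, 100))   # e.g. pre-event NDVI
after = before.copy()
after[40:60, 30:55] -= 0.25                        # vegetation loss at a slide
after += rng.normal(0, 0.02, size=after.shape)     # acquisition noise

diff = after - before
threshold = diff.mean() - 2 * diff.std()           # strong negative change
landslide_mask = diff < threshold
print("flagged pixels:", int(landslide_mask.sum()))
# The resulting mask would then be checked in the field to build the
# preliminary landslide inventory.
```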

Keywords: change detection method, landslide inventory mapping, Sentinel-1A, Sentinel-2A

Procedia PDF Downloads 152
24006 A DEA Model in a Multi-Objective Optimization with Fuzzy Environment

Authors: Michael Gidey Gebru

Abstract:

Most DEA models operate in a static environment with input and output parameters drawn from deterministic data. However, due to the ambiguity brought on by shifting market conditions, input and output data are not always precisely gathered in real-world scenarios. Fuzzy numbers can be used to address this kind of ambiguity in input and output data. Therefore, this work aims to extend crisp DEA to DEA in a fuzzy environment. In this study, the input and output data are regarded as triangular fuzzy numbers. The DEA model with a fuzzy environment is then solved using a multi-objective method to gauge the efficiency of the Decision Making Units. Finally, the developed DEA model is illustrated with an application on real data from 50 educational institutions.
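
For reference, the crisp building block that the fuzzy extension generalizes: an input-oriented CCR envelopment model solved as a linear program for each DMU. The data are invented, and a triangular fuzzy version would typically re-solve such LPs at several alpha-cuts or, as in the paper, within a multi-objective scheme.

```python
import numpy as np
from scipy.optimize import linprog

# Crisp input-oriented CCR model: min theta s.t. X@lam <= theta*x_o, Y@lam >= y_o.
X = np.array([[2.0, 3.0, 4.0, 5.0],     # inputs: rows = inputs, cols = DMUs
              [3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 2.0, 2.0]])    # outputs: rows = outputs

m, n = X.shape
for o in range(n):
    # Decision vector z = [theta, lambda_1..lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.hstack([-X[:, [o]], X])                    # X@lam - theta*x_o <= 0
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])   # -Y@lam <= -y_o
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(0, None)] * (n + 1))
    print(f"DMU {o + 1}: efficiency = {res.x[0]:.3f}")
```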

Keywords: efficiency, DEA, fuzzy, decision making units, higher education institutions

Procedia PDF Downloads 30
24005 Effect of Endurance Exercise Training on Blood Pressure in Elderly Female Patients with Hypertension

Authors: Elham Ahmadi

Abstract:

This study investigates the effect of moderate physical activity (60% of maximal heart rate, MHR) on blood pressure in elderly females with hypertension. Hypertension is a risk factor for cardiovascular disease that can be modified through physical activity. The purpose and significance of this study were to investigate the role of exercise as an alternative therapy, since some patients exhibit sensitivity or intolerance to some drugs. Initially, 65 hypertensive females (average age = 49.7 years; systolic blood pressure, SBP > 140 mmHg and/or diastolic blood pressure, DBP > 85 mmHg) and 25 hypertensive females as a control group (average age = 50.3 years; SBP > 140 mmHg and/or DBP > 85 mmHg) were selected. The subjects were grouped based on their age, duration of disease, physical activity, and drug consumption. Then, blood pressure and heart rate (HR) were measured in all of the patients using a sphygmomanometer (pre-test). The exercise sessions consisted of warm-up, aerobic activity, and cooling down (total duration of 20 minutes in the first session, rising to 55 minutes in the last session). Blood pressure was measured again at the end of the 12th session (mid-test) and after the final (24th) session (post-test). The control group did no exercise during the study. The results were analyzed using a t-test. Our results indicated that moderate physical activity was effective in lowering blood pressure by 6.4/5.6 mmHg for SBP and 2.4/4.3 mmHg for DBP in hypertensive patients, irrespective of age, duration of disease, and drug consumption (P < .005). The control group showed no changes in BP. Physical activity programs of moderate intensity (approximately 60% MHR), three days per week, can be used not only as a preventive measure for diastolic hypertension (DBP > 90 mmHg) but also as an alternative to drug therapy in the treatment of hypertension.
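
A small sketch of the quantitative pieces mentioned: the 60% MHR training target (using the common 220 − age estimate, an assumption since the abstract gives no formula) and a paired t-test on pre- vs post-intervention SBP with synthetic values.

```python
import numpy as np
from scipy import stats

# Training target at 60% of maximal heart rate (MHR), using the common
# 220 - age estimate (an assumption; the abstract does not give a formula).
age = 49.7
target_hr = 0.60 * (220 - age)
print(f"target HR: {target_hr:.0f} bpm")

# Paired t-test on pre- vs post-training SBP (synthetic values, mmHg).
rng = np.random.default_rng(7)
sbp_pre = rng.normal(148, 6, size=65)
sbp_post = sbp_pre - rng.normal(6.4, 3, size=65)   # mean drop ~6.4 mmHg
t, p = stats.ttest_rel(sbp_pre, sbp_post)
print(f"t = {t:.2f}, p = {p:.4f}")
```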

Keywords: endurance exercise, elderly female, hypertension, physical activity

Procedia PDF Downloads 83
24004 Data-Driven Decision Making: Justification of Not Leaving Class without It

Authors: Denise Hexom, Judith Menoher

Abstract:

Teachers and administrators across America are being asked to use data and hard evidence to inform practice as they begin the task of implementing the Common Core State Standards. Yet the courses they are taking in schools of education are not preparing teachers or principals to understand the data-driven decision making (DDDM) process, nor to utilize data in a more sophisticated fashion. DDDM has been around for quite some time; however, it has only recently become systematically and consistently applied in the field of education. This paper discusses the theoretical framework of DDDM; empirical evidence supporting the effectiveness of DDDM; a process a department in a school of education has utilized to implement DDDM; and recommendations to other schools of education that attempt to implement DDDM in their decision-making processes and in their students' coursework.

Keywords: data-driven decision making, institute of higher education, special education, continuous improvement

Procedia PDF Downloads 371
24003 Conception of a Predictive Maintenance System for Forest Harvesters from Multiple Data Sources

Authors: Lazlo Fauth, Andreas Ligocki

Abstract:

For cost-effective use of harvesters, expensive repairs and unplanned downtimes must be reduced as far as possible. The predictive detection of failing systems and the calculation of intelligent service intervals, necessary to avoid these factors, require in-depth knowledge of the machines' behavior. Such know-how requires permanent monitoring of the machine state from different technical perspectives. In this paper, three approaches are presented as they are currently pursued in the publicly funded project PreForst at Ostfalia University of Applied Sciences. These include the intelligent linking of workshop and service data, sensors on the harvester, and a special online hydraulic-oil condition monitoring system. Furthermore, the paper shows the potential as well as the challenges of using these data in the conception of a predictive maintenance system.

Keywords: predictive maintenance, condition monitoring, forest harvesting, forest engineering, oil data, hydraulic data

Procedia PDF Downloads 124
24002 Analyzing the Characteristics and Shifting Patterns of Creative Hubs in Bandung

Authors: Fajar Ajie Setiawan, Ratu Azima Mayangsari, Bunga Aprilia

Abstract:

The emergence of creative hubs around the world, including in Bandung, has been driven primarily by the need for collaborative-innovative spaces for creative industry activities such as the Maker Movement and the Coworking Movement. These activities pose challenges for identifying and formulating sets of indicators for modeling creative hubs in Bandung, so as to help stakeholders formulate strategies. This study intends to identify their characteristics. The research was conducted using a qualitative approach, comparing three concepts of creative hub categorization and integrating them into a single instrument to analyze 12 selected creative hubs. Our results showed three new functions of creative hubs in Bandung: (1) cultural, (2) retail business, and (3) community network. The results also suggest that creative hubs in Bandung are commonly established for networking and community activities. Another result shows a shift in creative hub patterns before and after the 2000s, which has also created a hybrid group of creative hubs.

Keywords: creative industry, creative hubs, Ngariung, Bandung

Procedia PDF Downloads 156
24001 A Tale of Seven Districts: Reviewing the Past, Present and Future of Patent Litigation Filings to Form a Two-Step Burden-Shifting Framework for 28 U.S.C. § 1404(a)

Authors: Timothy T. Hsieh

Abstract:

Current patent venue transfer laws under 28 U.S.C. § 1404(a), e.g., the Gilbert factors from Gulf Oil Corp. v. Gilbert, 330 U.S. 501 (1947), are too malleable in that they often lead to frequent mandamus orders from the U.S. Court of Appeals for the Federal Circuit ("Federal Circuit") overturning district court rulings on venue transfer motions. Thus, this paper proposes a more robust two-step burden-shifting framework that replaces the eight Gilbert factors. Moreover, a brief history of venue transfer patterns in the seven most active federal patent district courts is covered, with special focus devoted to the venue transfer orders of Judge Alan D. Albright of the U.S. District Court for the Western District of Texas. A comprehensive data summary of 45 case sets where the Federal Circuit ruled on writs of mandamus involving Judge Albright's transfer orders is subsequently provided, with coverage summaries of certain cases, including four precedential ones from the Federal Circuit. The proposed two-step burden-shifting framework is then applied to these venue transfer cases, as well as to the Federal Circuit mandamus orders ruling on those decisions. Finally, alternative approaches to remedying the frequent reversals on venue transfer are discussed, including potential legislative solutions, adjustments to common law frameworks for venue transfer, deference to the inherent powers of Article III U.S. district judges, and a unified federal patent district court. Overall, this paper seeks to offer a more robust and consistent two-step burden-shifting framework for venue transfer for the Federal Circuit to follow in administering mandamus orders, which might change somewhat in light of Western District of Texas Chief Judge Orlando Garcia's order redistributing Judge Albright's patent cases.

Keywords: patent law, venue, Judge Alan Albright, minimum contacts, Western District of Texas

Procedia PDF Downloads 85
24000 Sampled-Data Control for Fuel Cell Systems

Authors: H. Y. Jung, Ju H. Park, S. M. Lee

Abstract:

A sampled-data controller is presented for solid oxide fuel cell systems expressed by a sector-bounded nonlinear model. Sector-bounded nonlinear systems consist of a linear dynamical system in feedback connection with a nonlinearity satisfying certain sector-type constraints. The sampled-data control scheme is also very useful, since it can handle digital controllers, and increasing research effort has been devoted to sampled-data control systems with the development of modern high-speed computers. The proposed control law is obtained by solving a convex problem subject to several linear matrix inequalities. Simulation results are given to show the effectiveness of the proposed design method.
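
As a minimal illustration of the LMI machinery invoked here, the sketch below solves a Lyapunov feasibility LMI (find P ≻ 0 with AᵀP + PA ≺ 0) with CVXPY; the actual controller-synthesis LMIs in the paper are more involved, and the matrix A is invented.

```python
import cvxpy as cp
import numpy as np

# Stability certificate for x_dot = A x: find P > 0 with A'P + PA < 0.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2),                   # P positive definite
               A.T @ P + P @ A << -eps * np.eye(2)]    # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status, "\nP =\n", P.value)
```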

Keywords: sampled-data control, fuel cell, linear matrix inequalities, nonlinear control

Procedia PDF Downloads 556
23999 Further Analysis of Global Robust Stability of Neural Networks with Multiple Time Delays

Authors: Sabri Arik

Abstract:

In this paper, we study the global asymptotic robust stability of delayed neural networks with norm-bounded uncertainties. By employing Lyapunov stability theory and the homeomorphic mapping theorem, we derive some new types of sufficient conditions ensuring the existence, uniqueness, and global asymptotic stability of the equilibrium point for the class of neural networks with discrete time delays under parameter uncertainties and with respect to continuous and slope-bounded activation functions. An important aspect of our results is their low computational complexity, as the reported results can be verified by checking some properties of symmetric matrices associated with the uncertainty sets of the network parameters. The obtained results are shown to be a generalization of some previously published corresponding results. Some comparative numerical examples are also constructed to compare our results with closely related results from the literature.
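
Verification conditions of this type typically reduce to positive-definiteness checks on symmetric matrices built from the network parameters; a small numpy sketch with an invented matrix:

```python
import numpy as np

# Conditions of this kind typically require a symmetric matrix built from
# network parameters to be positive definite; S below is illustrative.
S = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])

eigvals = np.linalg.eigvalsh(S)        # eigvalsh: for symmetric matrices
print("eigenvalues:", np.round(eigvals, 3))
print("positive definite:", bool(np.all(eigvals > 0)))
```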

Keywords: neural networks, delayed systems, Lyapunov functionals, stability analysis

Procedia PDF Downloads 511
23998 Behavior of Square Reinforced-Concrete Columns Strengthened with Carbon Fiber Reinforced Polymers (CFRP) under Concentric Loading

Authors: Dana Abed, Mu`Tasim Abdel-Jaber, Nasim Shatarat

Abstract:

This study investigates the influence of cross-sectional size on the axial compressive capacity of carbon fiber reinforced polymer (CFRP) wrapped square reinforced concrete short columns. Three sets of columns were built for this purpose: 200×200×1200 mm, 250×250×1500 mm, and 300×300×1800 mm. Each set includes a control column and a column strengthened with one layer of CFRP sheets. All columns were tested under pure axial compression load. The results show that using CFRP sheets produced capacity enhancements of 37%, 32%, and 27% for the 200×200, 250×250, and 300×300 mm columns, respectively. The experimental program thus demonstrated that the percentage improvement in strength decreases with increasing cross-sectional size of the column.

Keywords: CFRP, columns, concentric loading, cross-sectional

Procedia PDF Downloads 271
23997 How Western Donors Allocate Official Development Assistance: New Evidence from a Natural Language Processing Approach

Authors: Daniel Benson, Yundan Gong, Hannah Kirk

Abstract:

Advancements in natural language processing techniques have led to increased data processing speeds and reduced the need for the cumbersome manual data processing that is often required when processing data from multilateral organizations for specific purposes. Using named entity recognition (NER) modeling and the Organisation for Economic Co-operation and Development (OECD) Creditor Reporting System database, we present the first geotagged dataset of OECD donor Official Development Assistance (ODA) projects on a global, subnational basis. The resulting data contain 52,086 ODA projects geocoded to subnational locations across 115 countries, worth a combined $87.9bn. This represents the first global OECD donor ODA project database with geocoded projects. We use these new data to revisit old questions of how 'well' donors allocate ODA to the developing world. This understanding is imperative for policymakers seeking to improve ODA effectiveness.
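
A hedged sketch of the pipeline's core steps: extract place names from a project description with spaCy NER and geocode them with geopy/Nominatim. The model name, example text, and geocoder choice are assumptions (and both the spaCy model and a network connection must be available); the paper's production pipeline is certainly more elaborate.

```python
import spacy
from geopy.geocoders import Nominatim

nlp = spacy.load("en_core_web_sm")          # small English NER model (assumed)
geocoder = Nominatim(user_agent="oda-geotagging-sketch")

description = ("Rehabilitation of rural water points in Dodoma Region, "
               "Tanzania, with training delivered in Dar es Salaam.")

doc = nlp(description)
places = [ent.text for ent in doc.ents if ent.label_ == "GPE"]
for place in places:
    loc = geocoder.geocode(place)           # queries OpenStreetMap
    if loc:
        print(f"{place}: ({loc.latitude:.3f}, {loc.longitude:.3f})")
```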

Keywords: international aid, geocoding, subnational data, natural language processing, machine learning

Procedia PDF Downloads 55
23996 Optimisation of Dyes Decolourisation by Bacillus aryabhattai

Authors: A. Paz, S. Cortés Diéguez, J. M. Cruz, A. B. Moldes, J. M. Domínguez

Abstract:

Synthetic dyes are extensively used in the paper, food, leather, cosmetics, pharmaceutical, and textile industries, and wastewater resulting from their production causes several environmental problems. Improper disposal of their effluents has adverse impacts not only on colour but also on water quality (total organic carbon, biological oxygen demand, chemical oxygen demand, suspended solids, salinity, etc.), on flora (inhibition of photosynthetic activity), on fauna (toxic, carcinogenic, and mutagenic effects), and on human health. The aim of this work is to optimize the decolourisation of different types of dyes by Bacillus aryabhattai. Initially, different types of dyes (Indigo Carmine, Coomassie Brilliant Blue, and Remazol Brilliant Blue R) and suitable culture media (Nutritive Broth, Luria Bertani Broth, and Trypticasein Soy Broth) were selected. Then, a central composite design (CCD) was employed to optimise and analyse the significance of each abiotic parameter. Three process variables (temperature, salt concentration, and agitation) were investigated in the CCD at three levels with two star points. A total of 23 experiments were carried out, consisting of 8 factorial experiments (coded to the usual ±1 notation), 6 axial experiments (on the axes at a distance of ±α from the centre), and 9 replicates (at the centre of the experimental domain). The experimental results show the efficiency of this strain in removing the tested dyes in the three media studied, although Trypticasein Soy Broth (TSB) was the most suitable medium. Indigo Carmine and Coomassie Brilliant Blue at the maximal tested concentration of 150 mg/l were completely decolourised, while acceptable removal was observed with the more recalcitrant dye Remazol Brilliant Blue R at a concentration of 50 mg/l.
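
The 23-run design described can be enumerated directly; the sketch below generates the coded design points (8 factorial, 6 axial at ±α, 9 centre replicates), assuming a rotatable α = (2^k)^(1/4) ≈ 1.682, which the abstract does not state explicitly.

```python
import itertools

k = 3                                   # coded variables: temperature, salt, agitation
alpha = (2 ** k) ** 0.25                # rotatable CCD: alpha = (2^k)^(1/4) ~ 1.682

factorial = [list(p) for p in itertools.product([-1, 1], repeat=k)]   # 8 runs
axial = []
for i in range(k):                      # 2 star points per variable: 6 runs
    for s in (-alpha, alpha):
        pt = [0.0] * k
        pt[i] = s
        axial.append(pt)
center = [[0.0] * k for _ in range(9)]  # 9 centre replicates

design = factorial + axial + center
print(len(design), "runs")              # 23, matching the abstract
```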

Keywords: Bacillus aryabhattai, dyes, decolourisation, central composite design

Procedia PDF Downloads 207
23995 Compressed Suffix Arrays to Self-Indexes Based on Partitioned Elias-Fano

Authors: Guo Wenyu, Qu Youli

Abstract:

A practical and simple self-indexing data structure, the Partitioned Elias-Fano (PEF) Compressed Suffix Array (CSA), is built in linear time from PEF indexes. The PEF-CSA is compared with two classical compressed indexing methods, the Ferragina and Manzini implementation (FMI) and Sad-CSA, on files of different types and sizes from the Pizza & Chili corpus. The PEF-CSA performs better on the existing data in terms of compression ratio and count and locate times, except on evenly distributed data such as protein data. The experiments show that the distribution of φ matters more to the compression ratio than the alphabet size: an unevenly distributed φ yields better compression, and the larger the number of hits, the longer the count and locate times.
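
For orientation, the plain (uncompressed) structure that a CSA encodes is sketched below: a suffix array with count queries by binary search. The quadratic construction is for clarity only; a real CSA such as PEF-CSA stores a compressed representation (e.g., of the φ function) rather than the array itself.

```python
import bisect

def suffix_array(text):
    """Plain suffix array (simple O(n^2 log n) build; fine for a sketch)."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def count(text, sa, pattern):
    """Count occurrences of pattern via binary search over sorted suffixes."""
    suffixes = [text[i:i + len(pattern)] for i in sa]   # stays sorted
    lo = bisect.bisect_left(suffixes, pattern)
    hi = bisect.bisect_right(suffixes, pattern)
    return hi - lo

text = "abracadabra"
sa = suffix_array(text)
print(sa)                        # [10, 7, 0, 3, 5, 8, 1, 4, 6, 9, 2]
print(count(text, sa, "abra"))   # 2
```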

Keywords: compressed suffix array, self-indexing, partitioned Elias-Fano, PEF-CSA

Procedia PDF Downloads 237
23994 Data, Digital Identity and Antitrust Law: An Exploratory Study of Facebook’s Novi Digital Wallet

Authors: Wanjiku Karanja

Abstract:

Facebook has monopoly power in the social networking market. It has grown and entrenched this monopoly power through the capture of its users' data value chains. However, antitrust law's consumer welfare roots have prevented it from effectively addressing the role of data capture in Facebook's market dominance. These regulatory blind spots are augmented in Facebook's proposed Diem cryptocurrency project and its Novi digital wallet. Novi, Diem's digital identity component, would enable Facebook to collect an unprecedented volume of consumer data. Consequently, Novi has seismic implications for internet identity, as the network effects of Facebook's large user base could establish it as the de facto internet identity layer. Moreover, the large tracts of data Facebook would collect through Novi would further entrench its market power. As such, the attendant lock-in effects of this project would be very difficult to reverse. Urgent regulatory action is therefore required to prevent this expansion of Facebook's data resources and monopoly power. This research thus highlights the importance of data capture to competition and market health in the social networking industry. It utilizes interviews with key experts to empirically interrogate the impact of Facebook's data capture and control of its users' data value chains on its market power. This inquiry is contextualized against Novi's expansive effect on Facebook's data value chains. It thus addresses the novel antitrust issues arising at the nexus of Facebook's monopoly power and the privacy of its users' data. It also explores the impact of platform design principles, specifically data portability and data interoperability, in mitigating Facebook's anti-competitive practices. This study finds that Facebook is a powerful monopoly that dominates the social media industry to the detriment of potential competitors. Facebook derives its power from its size, its annexation of the consumer data value chain, and its control of its users' social graphs. Additionally, the platform design principles of data interoperability and data portability are not a panacea for restoring competition in the social networking market; their success depends on the establishment of robust technical standards and regulatory frameworks.

Keywords: antitrust law, data protection law, data portability, data interoperability, digital identity, Facebook

Procedia PDF Downloads 111
23993 Recommendations for Data Quality Filtering of Opportunistic Species Occurrence Data

Authors: Camille Van Eupen, Dirk Maes, Marc Herremans, Kristijn R. R. Swinnen, Ben Somers, Stijn Luca

Abstract:

In ecology, species distribution models are commonly implemented to study species-environment relationships. These models increasingly rely on opportunistic citizen science data when high-quality species records collected through standardized recording protocols are unavailable. While these opportunistic data are abundant, uncertainty is usually high, e.g., due to observer effects or a lack of metadata. Data quality filtering is often used to reduce these types of uncertainty in an attempt to increase the value of studies relying on opportunistic data. However, filtering should not be performed blindly. In this study, recommendations are developed for the data quality filtering of opportunistic species occurrence data used as input for species distribution models. Using an extensive database of 5.7 million citizen science records from 255 species in Flanders, the impact on model performance was quantified by applying three data quality filters, and these results were linked to species traits. More specifically, presence records were filtered based on record attributes that provide information on the observation process or post-entry data validation, and changes in the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were analyzed using the Maxent algorithm with and without filtering. Controlling for sample size enabled us to study the combined impact of data quality filtering, i.e., the simultaneous impact of an increase in data quality and a decrease in sample size. Further, the variation among species in their response to data quality filtering was explored by clustering species based on four traits often related to data quality: commonness, popularity, difficulty, and body size. The findings show that model performance is affected by i) the quality of the filtered data, ii) the proportional reduction in sample size caused by filtering and the remaining absolute sample size, and iii) a species 'quality profile' resulting from a species classification based on the four traits related to data quality. The findings resulted in recommendations on when and how to filter volunteer-generated and opportunistically collected data. This study confirms that correctly processed citizen science data can make a valuable contribution to ecological research and species conservation.

Keywords: citizen science, data quality filtering, species distribution models, trait profiles

Procedia PDF Downloads 182
23992 Data Quality Enhancement with String Length Distribution

Authors: Qi Xiu, Hiromu Hota, Yohsuke Ishii, Takuya Oda

Abstract:

Recently, the amount of collectable manufacturing data has been increasing rapidly. At the same time, mega recalls are becoming a serious social problem. Under such circumstances, there is a growing need to prevent mega recalls through defect analysis, such as root cause analysis and anomaly detection, utilizing manufacturing data. However, the time needed to classify strings in manufacturing data by traditional methods is too long to meet the requirements of quick defect analysis. Therefore, we present the String Length Distribution Classification (SLDC) method to correctly classify strings in a short time. This method learns character features, especially string length distributions, from fields such as Product ID and Machine ID in BOMs and asset lists. By applying the proposal to strings in actual manufacturing data, we verified that the classification time can be reduced by 80%. As a result, it can be estimated that the requirement of quick defect analysis can be fulfilled.
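
A minimal sketch of the idea behind length-distribution-based classification: learn the string-length distribution of each known field and assign a new string to the field under which its length is most likely. The field values and the smoothing floor are invented; the actual SLDC method learns richer character features.

```python
from collections import Counter

def length_model(examples):
    """Learn a string-length distribution from known values of one field."""
    counts = Counter(len(s) for s in examples)
    total = sum(counts.values())
    return {n: c / total for n, c in counts.items()}

def score(model, s, floor=1e-6):
    """Likelihood of a string under the field's length distribution."""
    return model.get(len(s), floor)

product_ids = ["PRD-00123", "PRD-04567", "PRD-99001"]   # invented field values
machine_ids = ["M12", "M07", "M33"]

models = {"product_id": length_model(product_ids),
          "machine_id": length_model(machine_ids)}

for s in ["PRD-12345", "M09"]:
    best = max(models, key=lambda f: score(models[f], s))
    print(f"{s!r} -> {best}")
```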

Keywords: string classification, data quality, feature selection, probability distribution, string length

Procedia PDF Downloads 307
23991 Measuring Digital Literacy in the Chilean Workforce

Authors: Carolina Busco, Daniela Osses

Abstract:

The development of digital literacy has become a fundamental element that allows for citizen inclusion, access to quality jobs, and a labor market capable of responding to the digital economy. No methodological instruments are available in Chile to measure the workforce's digital literacy and improve national policies on this matter. Thus, the objective of this research is to develop a survey to measure digital literacy in a sample of 200 Chilean workers. The dimensions considered in the instrument are sociodemographics, access to infrastructure, digital education, digital skills, and the ability to use e-government services. To achieve the research objective of developing a model of digital literacy indicators and a corresponding research instrument, along with an exploratory analysis of the data using factor analysis, we used an empirical, quantitative-qualitative, exploratory, non-probabilistic, and cross-sectional research design. The research instrument is a survey created to measure the variables that make up the conceptual map prepared from the bibliographic review. Before applying the survey, a pilot test was implemented, resulting in several adjustments to the phrasing of some items. A validation test was also applied with six experts, whose observations were incorporated into the final instrument. The survey contained 49 items divided into three sets of questions: i) sociodemographic data; ii) a four-value Likert scale ranked according to the level of agreement; and iii) multiple-choice questions complementing the dimensions. Data collection occurred between January and March 2022. For the factor analysis, we used the answers to 12 Likert-scale items. The KMO measure showed a value of 0.626, indicating a medium level of correlation; Bartlett's test yielded a significance value of less than 0.05, and Cronbach's alpha was 0.618. Taking all factor selection criteria into account, we decided to include and analyze four factors that together explain 53.48% of the accumulated variance. We identified the following factors: i) access to infrastructure and opportunities to develop digital skills at the workplace or educational establishment (15.57%), ii) ability to solve everyday problems using digital tools (14.89%), iii) online tools used to stay connected with others (11.94%), and iv) residential Internet access and speed (11%). The quantitative results were discussed within six focus groups selected using heterogeneous criteria related to the most relevant variables identified in the statistical analysis: upper-class school students, middle-class university students, Ph.D. professors, low-income working women, elderly individuals, and rural workers. The digital divide and its social and economic correlates are evident in the results of this research. In Chile, the items that explain the acquisition of digital tools focus on access to infrastructure, which acts as the first filter on the development of digital skills. Therefore, as expressed in the literature review, the advance of these skills differs radically when sociodemographic variables are considered. This increases socioeconomic distances and exclusion criteria, putting those who do not have these skills at a disadvantage and forcing them to seek the assistance of others.
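
A small sketch of the reliability and dimensionality checks reported: Cronbach's alpha from its standard formula and an exploratory factor analysis with scikit-learn on synthetic Likert-style data. The KMO and Bartlett statistics quoted above would come from analogous computations, omitted here, and all data are invented.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
# Synthetic stand-in for 200 respondents x 12 Likert items (values 1-4).
latent = rng.normal(size=(200, 4))
loadings = rng.uniform(0.4, 0.9, size=(4, 12))
items = np.clip(np.round(2.5 + latent @ loadings
                         + rng.normal(0, 0.8, (200, 12))), 1, 4)

def cronbach_alpha(X):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                          / X.sum(axis=1).var(ddof=1))

print(f"Cronbach's alpha: {cronbach_alpha(items):.3f}")

fa = FactorAnalysis(n_components=4, random_state=0).fit(items)
strength = np.sum(fa.components_**2, axis=1)   # rough loading strength per factor
print("factor strengths:", np.round(strength, 2))
```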

Keywords: digital literacy, digital society, workforce digitalization, digital skills

Procedia PDF Downloads 60