Search results for: data block
24676 The Influence of Housing Choice Vouchers on the Private Rental Market
Authors: Randy D. Colon
Abstract:
Through a freedom of information request, data pertaining to Housing Choice Voucher (HCV) households have been obtained from the Chicago Housing Authority, including rent price and number of bedrooms per HCV household, community area, and zip code from 2013 to the first quarter of 2018. Similar data pertaining to the private rental market will be obtained through public records found through the United States Department of Housing and Urban Development. The datasets will be analyzed through statistical and mapping software to investigate the potential link between HCV households and distorted rent prices. Quantitative data will be supplemented by qualitative data to investigate the lived experience of Chicago residents. Qualitative data will be collected in Chicago's Englewood neighborhood through participation in neighborhood meetings and informal interviews with residents and community leaders, and will be used to gain insight into the lived experience of community leaders and residents of the Englewood neighborhood in relation to housing, the rental market, and HCV. While there is an abundance of quantitative data on this subject, this qualitative data is necessary to capture the lived experience of local residents affected by a changing rental market. This topic reflects concerns voiced by members of the Englewood community, and this study aims to keep the community relevant in its findings.
Keywords: Chicago, housing, housing choice voucher program, housing subsidies, rental market
Procedia PDF Downloads 117
24675 The Dynamic Metadata Schema in Neutron and Photon Communities: A Case Study of X-Ray Photon Correlation Spectroscopy
Authors: Amir Tosson, Mohammad Reza, Christian Gutt
Abstract:
Metadata stands at the forefront of advancing data management practices within research communities, with particular significance in the realms of neutron and photon scattering. This paper introduces a groundbreaking approach, the dynamic metadata schema, within the context of X-ray Photon Correlation Spectroscopy (XPCS). XPCS, a potent technique for unravelling nanoscale dynamic processes, serves as an illustrative use case to demonstrate how dynamic metadata can revolutionize data acquisition, sharing, and analysis workflows. This paper explores the challenges encountered by the neutron and photon communities in navigating intricate data landscapes and highlights the prowess of dynamic metadata in addressing these hurdles. Our proposed approach empowers researchers to tailor metadata definitions to the evolving demands of experiments, thereby facilitating streamlined data integration, traceability, and collaborative exploration. Through tangible examples from the XPCS domain, we showcase how embracing dynamic metadata standards bestows advantages, enhancing data reproducibility, interoperability, and the diffusion of knowledge. Ultimately, this paper underscores the transformative potential of dynamic metadata, heralding a paradigm shift in data management within the neutron and photon research communities.
Keywords: metadata, FAIR, data analysis, XPCS, IoT
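The schema idea the abstract describes can be sketched as a base set of required fields that each experiment extends at run time. The field names below (e.g. `detector_distance_m`) are illustrative assumptions, not a community standard.

```python
# A minimal sketch of a "dynamic" metadata schema: a shared base schema of
# required fields plus experiment-specific extensions registered at run time.
# Field names are hypothetical, not drawn from any published standard.

BASE_SCHEMA = {"sample_id": str, "facility": str, "timestamp": str}

def make_schema(extra_fields):
    """Extend the base schema with experiment-specific field definitions."""
    schema = dict(BASE_SCHEMA)
    schema.update(extra_fields)
    return schema

def validate(record, schema):
    """True if every schema field is present with the declared type."""
    return all(k in record and isinstance(record[k], t) for k, t in schema.items())

# An XPCS acquisition adds dynamics-specific fields to the shared core.
xpcs_schema = make_schema({"detector_distance_m": float, "exposure_time_s": float})
rec = {"sample_id": "S1", "facility": "demo", "timestamp": "2023-05-01T12:00:00",
       "detector_distance_m": 5.0, "exposure_time_s": 0.1}
print(validate(rec, xpcs_schema))  # True
```

The point of the design is that downstream tools validate against whatever schema the experiment registered, rather than a fixed, one-size-fits-all field list.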
Procedia PDF Downloads 60
24674 Exploring SSD-Suitable Allocation Schemes in Compliance with Workload Patterns
Authors: Jae Young Park, Hwansu Jung, Jong Tae Kim
Abstract:
Whether data has been well parallelized is an important factor in Solid-State Drive (SSD) performance. SSD parallelism is affected by the allocation scheme, which is directly connected to SSD performance. The representative allocation schemes are dynamic allocation and static allocation. Dynamic allocation is more adaptive in exploiting write-operation parallelism, while static allocation is better for read-operation parallelism. Therefore, it is hard to select the appropriate allocation scheme when the workload mixes read and write operations. We simulated a number of mixed data patterns and analyzed the results to guide the choice for better performance. The results show that static allocation is more suitable when the data arrival interval is long enough for prior operations to finish and the environment is continuously read-intensive, while dynamic allocation performs best on write performance and random data patterns.
Keywords: dynamic allocation, NAND flash based SSD, SSD parallelism, static allocation
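The trade-off between the two schemes can be illustrated with a toy model of channel selection; the channel count and timings below are illustrative, not the authors' simulation setup.

```python
# Toy sketch contrasting static and dynamic allocation over N parallel
# channels. Static allocation fixes the channel by logical address (good
# for read parallelism of striped sequential data); dynamic allocation
# picks whichever channel is free first (good for write parallelism).

N_CHANNELS = 4

def static_channel(logical_page):
    """Static allocation: the channel is determined by the address alone,
    so sequential reads of striped data hit all channels predictably."""
    return logical_page % N_CHANNELS

def dynamic_channel(busy_until):
    """Dynamic allocation: a write goes to the channel that frees up
    first, which maximizes write parallelism under bursty traffic."""
    return min(range(N_CHANNELS), key=lambda c: busy_until[c])

print(static_channel(7))              # channel 3, fixed by the address
print(dynamic_channel([0, 5, 0, 2]))  # channel 0, the first idle channel
```

Under a read-heavy workload with long arrival intervals the static mapping keeps data placement predictable; under random or write-heavy traffic the dynamic policy avoids waiting on busy channels.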
Procedia PDF Downloads 338
24673 The Effects of Sewage Sludge Usage and Manure on Some Heavy Metals Uptake in Savory (Satureja Hortensis L.)
Authors: Abbas Hani
Abstract:
In recent decades, with the development of technology and the scarcity of food sources, the use of sewage sludge in the production of human food has become inevitable. The various sources of municipal and industrial sewage sludge that are produced can supply the required plant nutrients. In the soils of the arid and semi-arid climate of central Iran, which are strongly affected by poor water drainage and iron and zinc deficiencies, the use of sewage sludge is helpful. Therefore, the aim of this study is to investigate the effect of sewage sludge and manure application on Ni and Zn uptake by savory. An experiment in a randomized complete block design with three replications was performed. Sewage sludge treatments consisted of four levels: control, 15, 30 and 80 tons per hectare; manure was applied at four levels: control, 20, 40 and 80 tons per hectare. Results showed that the wet and dry weights were not affected by sewage sludge use, while manure had a significant effect on them. The effect of sewage sludge on the cadmium and lead concentrations was significant. Interactions of sewage sludge and manure on dry weight values were not significant. Mean comparison analysis showed that increasing the amount of sewage sludge had no significant effect on cadmium concentration, which decreased as sewage sludge usage increased, probably due to increased plant growth and the resulting dilution of these elements in the plant.
Keywords: savory, lead, cadmium, sewage sludge, manure
Procedia PDF Downloads 420
24672 Social Data Aggregator and Locator of Knowledge (STALK)
Authors: Rashmi Raghunandan, Sanjana Shankar, Rakshitha K. Bhat
Abstract:
Social media contributes a vast amount of data and information about individuals to the internet. This project greatly reduces the need for manual analysis of large and diverse social media profiles by filtering out and combining the useful information from various profiles and eliminating irrelevant data. It differs from existing social media aggregators in that it does not provide a consolidated view of various profiles; instead, it provides consolidated information derived from the subject’s posts and other activities. It also allows analysis and analytics across multiple profiles. We strive to provide a query system that returns a natural-language answer to questions when a user does not wish to go through the entire profile. The information provided can be filtered according to the different use cases it serves.
Keywords: social network, analysis, Facebook, LinkedIn, git, big data
Procedia PDF Downloads 442
24671 Data Integrity between Ministry of Education and Private Schools in the United Arab Emirates
Authors: Rima Shishakly, Mervyn Misajon
Abstract:
Education is similar to other businesses and industries: achieving data integrity is essential in order to provide meaningful support to all stakeholders in the educational sector. Efficient data collection, flow, processing, storage and retrieval are vital in order to deliver successful solutions to the different stakeholders. The Ministry of Education (MOE) in the United Arab Emirates (UAE) has adopted ‘Education 2020’, a series of five-year plans designed to introduce advanced education management information systems. As part of this program, in 2010 the MOE implemented Student Information Systems (SIS) to manage and monitor students’ data and the information flow between the MOE and international private schools in the UAE. This paper discusses data integrity concerns between the MOE and private schools, clarifies the data integrity issues and indicates the challenges that private schools in the UAE face.
Keywords: education management information systems (EMIS), student information system (SIS), United Arab Emirates (UAE), Ministry of Education (MOE), Knowledge and Human Development Authority (KHDA), Abu Dhabi Education Council (ADEC)
Procedia PDF Downloads 220
24670 Effects of Different Sowing Dates on Oil Yield of Castor (Ricinus communis L.)
Authors: Özden Öztürk, Gözde Pınar Gerem, Ayça Yenici, Burcu Haspolat
Abstract:
Castor (Ricinus communis L.) is one of the important non-edible oilseed crops, having immense industrial and medicinal value. Oil yield per unit area is the ultimate target in growing oilseed plants, and sowing date is one of the important factors with a clear role in the production of active substances, particularly in oilseeds. This study was conducted to evaluate the effect of sowing date on the seed and oil yield of castor in Central Anatolia, Turkey, in 2011. The field experiment was set up in a completely randomized block design with three replications. The Black Diamond-2 castor cultivar was used as plant material. The treatments were four sowing dates: May 10, May 25, June 10 and June 25. Seed yield, oil content and oil yield were investigated. Results showed that the effect of sowing date was significant on all of the characteristics. In general, delayed sowing resulted in decreased seed yield, oil content and oil yield. The highest seed yield, oil content and oil yield (2523.7 kg ha-1, 51.18% and 1292.2 kg ha-1, respectively) were obtained from the first sowing date (May 10), while the lowest (1550 kg ha-1, 43.67% and 677.3 kg ha-1, respectively) were recorded for the latest sowing date (June 25). Therefore, early May can be recommended as an appropriate sowing date in the studied location and similar climates for achieving a high oil yield of castor.
Keywords: castor bean, Ricinus communis L., sowing date, seed yield, oil content
Procedia PDF Downloads 381
24669 Application of Groundwater Model for Optimization of Denitrification Strategies to Minimize Public Health Risk
Authors: Mukesh A. Modi
Abstract:
High nitrate concentration in the groundwater of unconfined aquifers has been a serious public health risk at a global scale. Various anthropogenic activities on the agricultural and urban land of alluvial soils have been observed to be responsible for the increase of nitrate in groundwater. The present study was designed to identify suitable denitrification strategies to minimize the effects of high nitrate in groundwater near the Mahi River of Vadodara block, Gujarat. Data from 11 wells of the Jal Jeevan Mission, Ministry of Jal Shakti, along with 3 observation wells of the Gujarat Water Resources Development Corporation, were used, covering a duration of 21 years. The MODFLOW and MT3DMS codes were used to simulate the solute transport phenomena and were applied to the optimization. The current research goes one step further by optimizing various denitrification strategies through simulation of the model. The in-situ and ex-situ denitrification strategies, viz. NAS (No Action Scenario), CAS (Crop Alternation Scenario), PS (Phytoremediation Scenario), and CAS + PS (Crop Alternation Scenario + Phytoremediation Scenario), were selected for the optimization. The groundwater model identifies the most suitable denitrification strategy considering the hydrogeological characteristics at the targeted well.
Keywords: groundwater, high nitrate, MODFLOW, MT3DMS, optimization, denitrification strategy
Procedia PDF Downloads 30
24668 Towards a Balancing Medical Database by Using the Least Mean Square Algorithm
Authors: Kamel Belammi, Houria Fatrim
Abstract:
An imbalanced data set, a problem often found in real-world applications, can have a seriously negative effect on the classification performance of machine learning algorithms, and there have been many attempts at dealing with the classification of imbalanced data sets. In medical diagnosis classification, we often face an imbalanced number of data samples between the classes, with not enough samples in the rare classes. In this paper, we propose a learning method based on a cost-sensitive extension of the Least Mean Square (LMS) algorithm that penalizes the errors of different samples with different weights, together with some rules of thumb to determine those weights. After the balancing phase, we apply different classifiers (support vector machine (SVM), k-nearest neighbor (KNN) and multilayer neural networks (MNN)) to the balanced data set. We have also compared the results obtained before and after the balancing method.
Keywords: multilayer neural networks, k-nearest neighbor, support vector machine, imbalanced medical data, least mean square algorithm, diabetes
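The core idea can be sketched as an LMS update whose error term is scaled by a per-class weight, so misfitting a rare-class sample costs more. The inverse-frequency weight rule below is one common rule of thumb; the paper's exact weights and features are not given here, so everything in this block is illustrative.

```python
import numpy as np

# Minimal sketch of a cost-sensitive LMS filter: each sample's error is
# scaled by its class weight before the weight-vector update, so the rare
# class is penalized more heavily. Illustrative, not the authors' code.

def inverse_frequency_weights(y):
    """Rule of thumb: weight each class by the inverse of its frequency."""
    n = len(y)
    return {c: n / (2.0 * np.sum(y == c)) for c in (0, 1)}

def cost_sensitive_lms(X, y, class_weights, mu=0.01, epochs=200):
    """LMS with per-class error weighting (sample-by-sample updates)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            err = y_i - w @ x_i
            w += mu * class_weights[int(y_i)] * err * x_i  # weighted LMS step
    return w
```

With weights of 1 for both classes this reduces to the standard LMS recursion; raising the minority-class weight is what biases the fit toward the rare class.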
Procedia PDF Downloads 531
24667 New Formulation of FFS 3-Layered Blown Films Containing Toughened Polypropylene and Plastomer with Superior Properties
Authors: S. Talebnezhad, S. Pourmahdian, D. Soudbar, M. Khosravani, J. Merasi
Abstract:
Adding toughened polypropylene and plastomer to an FFS 3-layered blown film formulation resulted in superior dart impact and MD tear resistance along with acceptable tensile properties in TD and MD. The optimum loading of toughened polypropylene and plastomer in each layer depends on the miscibility of polypropylene in the polyethylene medium, mechanical properties, welding characteristics at bag tops and bottoms, and the friction coefficient of the film surfaces. Film property tests and the efficiency of FFS machinery during processing at industrial scale showed that loading about 4% plastomer and 16% toughened polypropylene (reactor grade) in the middle layer, and 0-1% plastomer and 5-19% toughened polypropylene in the other layers, yields the optimum characteristics in a formulation based on a 1-butene LLDPE grade with an MFR of 0.9 and an LDPE grade with an MFI of 0.3. Both the plastomer and the toughened polypropylene had an MFI below 1, and the TiO2 and processing-aid masterbatch loading was 2%. The friction coefficient test results also showed that the anti-block masterbatch could be omitted from the formulation when adding toughened polypropylene, due to the partial miscibility of PP in PE, which makes the film surfaces somewhat bristly.
Keywords: FFS 3 layered blown film, toughened polypropylene, plastomer, dart impact, tear resistance
Procedia PDF Downloads 408
24666 Data Protection, Data Privacy, Research Ethics in Policy Process Towards Effective Urban Planning Practice for Smart Cities
Authors: Eugenio Ferrer Santiago
Abstract:
The growing complexities of the modern world, with high-end gadgets, software applications, scams, identity theft, and Artificial Intelligence (AI), leave the “uninformed” weak and vulnerable to becoming victims of cybercrime. AI is not a new thing in our daily lives; the principles of database management, logical programming, and garbage-in, garbage-out are all connected to AI. The Philippines has in place legal safeguards against the abuse of cyberspace, but self-regulation by key industry players and self-protection by individuals are primordial to the success of these initiatives. Data protection, data privacy, and research ethics must work hand in hand during the policy process in the course of urban planning practice in different environments. This paper focuses on the interconnection of data protection, data privacy, and research ethics in coming up with clear-cut policies against perpetrators in urban planning professional practice, relevant to sustainable communities and smart cities. The paper uses an expository methodology under qualitative research, drawing on secondary data from related literature, interviews/blogs, and World Wide Web resources. Its claims and recommendations will help policymakers and implementers in the policy cycle, and it contributes to the body of knowledge as a simple treatise and communication channel to the reading community and future researchers, to validate the claims and start an intellectual discourse for better knowledge generation for the good of all in the near future.
Keywords: data privacy, data protection, urban planning, research ethics
Procedia PDF Downloads 57
24665 Review of the Road Crash Data Availability in Iraq
Authors: Abeer K. Jameel, Harry Evdorides
Abstract:
Iraq is a middle-income country where road crashes are considered one of the leading causes of death. To control road risk, the Iraqi Ministry of Planning, General Statistical Organization, started to organise a collection system for traffic accident data with details related to causes and severity. These data are published as an annual report. In this paper, a review of the available crash data in Iraq is presented. The available data represent the rate of accidents at an aggregated level, classified by accident type, road user details, crash severity, vehicle type, causes and number of casualties. The review is organised according to the types of models used in road safety studies and research, and according to the road safety data required in road construction tasks. The available data are also compared with the road safety dataset published in the United Kingdom as an example of a developed country. It is concluded that the data in Iraq are suitable for descriptive and exploratory models, aggregated-level comparison analysis, and evaluating and monitoring the progress of overall traffic safety performance. However, important traffic safety studies require disaggregated data and details related to the factors affecting the likelihood of traffic crashes. Some studies require spatial geographic details, such as the location of accidents, which is essential for ranking roads according to their level of safety and naming the most dangerous roads in Iraq; this requires a tactical plan to control the issue. Global road safety agencies interested in solving this problem in low- and middle-income countries have designed road safety assessment methodologies based on road attribute data only; therefore, this research recommends using one of these methodologies.
Keywords: road safety, Iraq, crash data, road risk assessment, The International Road Assessment Program (iRAP)
Procedia PDF Downloads 253
24664 Eliciting and Confirming Data, Information, Knowledge and Wisdom in a Specialist Health Care Setting - The Wicked Method
Authors: Sinead Impey, Damon Berry, Selma Furtado, Miriam Galvin, Loretto Grogan, Orla Hardiman, Lucy Hederman, Mark Heverin, Vincent Wade, Linda Douris, Declan O'Sullivan, Gaye Stephens
Abstract:
Healthcare is a knowledge-rich environment. This knowledge, while valuable, is not always accessible outside the borders of individual clinics. This research aims to address part of this problem (at a study site) by constructing a maximal data set (knowledge artefact) for motor neurone disease (MND). This data set is proposed as an initial knowledge base for a concurrent project to develop an MND patient data platform. It represents the domain knowledge at the study site for the duration of the research (12 months). A knowledge elicitation method was also developed from the lessons learned during this process: the WICKED method. WICKED is an anagram of the initials of the words "eliciting and confirming data, information, knowledge, wisdom", but it is also a reference to the concept of wicked problems, which are complex and challenging, as is eliciting expert knowledge. The method was evaluated at a second site, and benefits and limitations were noted. Benefits include that the method provided a systematic way to manage data, information, knowledge and wisdom (DIKW) from various sources, including healthcare specialists and existing data sets. Limitations include the time required and the fact that the data set produced only represents the DIKW known during the research period. Future work is underway to address these limitations.
Keywords: healthcare, knowledge acquisition, maximal data sets, action design science
Procedia PDF Downloads 354
24663 Tool for Metadata Extraction and Content Packaging as Endorsed in OAIS Framework
Authors: Payal Abichandani, Rishi Prakash, Paras Nath Barwal, B. K. Murthy
Abstract:
Information generated from various computerization processes is a potentially rich source of knowledge for its designated community. Passing this information from generation to generation without modifying its meaning is a challenging activity. To preserve and archive data for future generations, it is essential to prove the authenticity of the data. This can be achieved by extracting metadata from the data, which can prove authenticity and create trust in the archived data. A subsequent challenge is technology obsolescence; metadata extraction and standardization can be used effectively to tackle this problem. Metadata can broadly be categorized at two levels, i.e., technical and domain. Technical metadata provides the information that can be used to understand and interpret the data record, but this level of metadata alone is not sufficient to create trustworthiness. We have developed a tool which extracts and standardizes the technical as well as domain-level metadata. This paper describes the different features of the tool and how we developed it.
Keywords: digital preservation, metadata, OAIS, PDI, XML
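Technical-level metadata extraction of the kind described above can be sketched with the Python standard library; the XML element names below are hypothetical, not a published schema, and a fixity digest is included because the abstract ties metadata to proving authenticity.

```python
import hashlib
import os
import xml.etree.ElementTree as ET

# Illustrative sketch: extract technical metadata from a file and serialize
# it as XML, in the spirit of OAIS Preservation Description Information.
# Element names ("technicalMetadata", "sha256", ...) are assumptions.

def technical_metadata(path):
    st = os.stat(path)
    with open(path, "rb") as fh:
        digest = hashlib.sha256(fh.read()).hexdigest()  # fixity / authenticity
    root = ET.Element("technicalMetadata")
    ET.SubElement(root, "fileName").text = os.path.basename(path)
    ET.SubElement(root, "sizeBytes").text = str(st.st_size)
    ET.SubElement(root, "sha256").text = digest
    return ET.tostring(root, encoding="unicode")
```

Domain-level metadata would be layered on top of this record, since the technical fields alone, as the abstract notes, are not enough to establish trustworthiness.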
Procedia PDF Downloads 391
24662 The Trigger-DAQ System in the Mu2e Experiment
Authors: Antonio Gioiosa, Simone Doanti, Eric Flumerfelt, Luca Morescalchi, Elena Pedreschi, Gianantonio Pezzullo, Ryan A. Rivera, Franco Spinella
Abstract:
The Mu2e experiment at Fermilab aims to measure the charged-lepton flavour-violating neutrino-less conversion of a negative muon into an electron in the field of an aluminum nucleus. With the expected experimental sensitivity, Mu2e will improve on the previous limit by four orders of magnitude. The Mu2e data acquisition (DAQ) system provides hardware and software to collect digitized data from the tracker, calorimeter, cosmic ray veto, and beam monitoring systems. Mu2e’s trigger and data acquisition system (TDAQ) uses otsdaq as its solution. Developed at Fermilab, otsdaq uses the artdaq DAQ framework and the art analysis framework, under the hood, for event transfer, filtering, and processing. Otsdaq is an online DAQ software suite with a focus on flexibility and scalability, providing a multi-user, web-based interface accessible through the Chrome or Firefox web browser. The detector readout controllers (ROCs) of the tracker and calorimeter stream zero-suppressed data continuously to the data transfer controller (DTC). Data are then read over the PCIe bus to a software filter algorithm that selects events, which are finally combined with the data flux that comes from the cosmic ray veto system (CRV).
Keywords: trigger, DAQ, Mu2e, Fermilab
Procedia PDF Downloads 154
24661 An Improved Parallel Algorithm of Decision Tree
Authors: Jiameng Wang, Yunfei Yin, Xiyu Deng
Abstract:
Parallel optimization is one of the important research topics in data mining at this stage. Taking the parallelization of Classification and Regression Trees (CART) as an example, this paper proposes a parallel data mining algorithm based on SSP-OGini-PCCP. Aiming at the problem of choosing the best CART segmentation point, this paper designs an S-SP model without data association; and in order to calculate the Gini index efficiently, a parallel OGini calculation method is designed. In addition, in order to improve the efficiency of the pruning algorithm, a synchronous PCCP pruning strategy is proposed. The optimal segmentation calculation, the Gini index calculation, and the pruning algorithm, all important components of parallel data mining, are studied in depth. A distributed cluster simulation system based on Spark was constructed to test the data mining methods based on SSP-OGini-PCCP. Experimental results show that this method can increase the search efficiency of the best segmentation point by an average of 89%, increase the search efficiency of the Gini segmentation index by 3853%, and increase pruning efficiency by 146% on average; as the size of the data set increases, the performance of the algorithm remains stable, which meets the requirements of contemporary massive data processing.
Keywords: classification, Gini index, parallel data mining, pruning ahead
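The quantity being parallelized is the standard CART split score: the weighted Gini impurity of the two child nodes. The serial form below is just the textbook definition, not the paper's OGini method.

```python
# Sketch of the Gini computation CART uses to score a candidate split:
# the impurity of each child node, weighted by the fraction of samples
# that fall into it. The paper parallelizes this step; this is the
# plain serial definition for reference.

def gini_impurity(labels):
    """1 - sum of squared class proportions; 0 for a pure node."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def split_gini(values, labels, threshold):
    """Weighted child impurity for the split `value <= threshold`."""
    left = [y for x, y in zip(values, labels) if x <= threshold]
    right = [y for x, y in zip(values, labels) if x > threshold]
    n = len(labels)
    return (len(left) / n) * gini_impurity(left) + (len(right) / n) * gini_impurity(right)

# A perfect split separates the classes and scores 0.
print(split_gini([1, 2, 8, 9], [0, 0, 1, 1], threshold=5))  # 0.0
```

Choosing the best segmentation point means evaluating this score for every candidate threshold, which is exactly the work the S-SP model and OGini distribute across workers.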
Procedia PDF Downloads 121
24660 Numerical Multi-Scale Modeling of Rubber Friction on Rough Pavements Using Finite Element Method
Authors: Ashkan Nazari, Saied Taheri
Abstract:
Knowledge of tire-pavement interaction plays a crucial role in designing safer and more reliable tires, and characterizing the tire-pavement frictional interaction leads to a better understanding of vehicle performance in braking and acceleration. In this work, we devise a multi-scale simulation approach to incorporate the effect of pavement surface asperities at different length scales. We construct two- and three-dimensional Finite Element (FE) models to simulate the interaction between a rubber block and a rough pavement surface with asperities at different scales. To achieve this, the road profile is scanned with a laser profilometer and the obtained asperities are implemented in FE software (ABAQUS) at micro and macro length scales. Hysteresis friction, which arises from the dissipative nature of rubber, is the main component of the friction force and is therefore the subject of study in this work. Using different scales not only assists in characterizing the pavement asperities in sufficient detail but is also highly effective in preventing the extreme local deformations and stress gradients that cause divergence in FE simulations. The simulation results will be validated against experimental results as well as results reported in the literature.
Keywords: friction, finite element, multi-scale modeling, rubber
Procedia PDF Downloads 135
24659 Climate Change and the Invasive Alien Species of Western Himalayan State of India
Authors: Yashasvi Thakur, Vikas K. Sharma
Abstract:
The fragile Himalayan ecosystems are sensitive to environmental stresses, including the direct and indirect impacts of climate stresses. A total of 297 naturalized alien plant species belonging to 65 families have already been reported in the Indian Himalayan Region (IHR). Of these, the maximum number of species occurs in Himachal Pradesh (232; 78.1%), followed by Jammu & Kashmir (192; 64.6%) and Uttarakhand (181; 60.9%). The present study reports the spread of invasive and existing weed species such as Ageratum conyzoides, Bidens pilosa, Chromolaena odorata, Lantana camara, Broussonetia papyrifera, Oxalis corniculata, Galinsoga parviflora and Panicum maximum to such an extent that they are not only invading agricultural fields but also replacing native plant species and degrading the quality of existing grassland. Moreover, the degradation of grassland has led to a dry fodder shortage for livestock in the lower Shivalik ranges of the state of Himachal Pradesh and has also encouraged the extensive use of herbicides. This article provides a mapping of the current spread of some of these species at the block level to allow the development of appropriate management strategies and policy planning for addressing issues pertaining to plant invasion, agricultural fields, and grasslands across the IHR states.
Keywords: climate change, invasive alien species, agriculture, grassland, IHR
Procedia PDF Downloads 73
24658 Synthesis of Biofuels of New Generation
Authors: Selena Gutiérrez, Araceli Martínez
Abstract:
One of the most important scientific and technological challenges worldwide is to have a sustainable energy source that is friendly to the environment and widely available. Currently, 85% of the energy used comes from fossil sources. Another important environmental problem is that several rubber products (tires, gloves, hoses, among others) are discarded practically without any treatment; in nature, the degradation of such products will take at least 500 years, and in 2009 worldwide rubber production was about 23.6 million tons. To address these problems, our research focuses on an alternative synthesis of biofuels in a two-step approach: the metathesis degradation of industrial rubber (as a model of rubber waste), followed by transesterification of the oligomers. Thus, cis-1,4-polybutadiene (Mn= 9.1x10^5, Mw/Mn= 2.2) and styrene-butadiene block copolymers with 30% (Mn= 1.61x10^5; Mw/Mn= 1.3) and 21% wt styrene (Mn= 1.92x10^5; Mw/Mn= 1.4) were degraded via metathesis with soybean oil as chain transfer agent (CTA) and green solvent, using [(PCy3)2Cl2Ru=CHPh] and [(1,3-diphenyl-4,5-dihydroimidazol-2-ylidene)(PCy3)Ru=CHPh] catalysts. Afterwards, the products were transesterified by basic homogeneous catalysis. Before transesterification, the polystyrene microblocks (Mn= 16,761; Mw/Mn= 1.2) were isolated. Finally, the biofuels obtained (BO) were purified and characterized, and showed properties similar to standard biodiesel (SB) (norms EN 14214-03 and ASTM D6751-02), i.e. (SB / BO): molecular weight [Daltons] (570 / 543-596), density [g/cm3] (0.86-0.90 / 0.88), kinematic viscosity [mm2/s] (1.90-6.0 / 3.5-4.5), iodine number (97 / 97-98) and cetane number (Min. 47 / 56-58).
Keywords: biofuels, industrial rubber, metathesis, vegetable oils
Procedia PDF Downloads 257
24657 Addressing Supply Chain Data Risk with Data Security Assurance
Authors: Anna Fowler
Abstract:
When considering assets that may need protection, the mind turns to homes, cars, and investment funds; in most cases, the protection of those assets can be covered through security systems and insurance. Data is not the first thought that comes to mind as needing protection, even though data is at the core of most supply chain operations. It includes trade secrets, personally identifiable information (PII), and consumer data that can be used to enhance the overall experience. Data is considered a critical element of success for supply chains and should be one of the most critical areas to protect. In the supply chain industry, there are two major misconceptions about protecting data: (i) “We do not manage or store confidential/personally identifiable information (PII).” (ii) “We rely on third-party vendor security.” These misconceptions can significantly derail organizational efforts to adequately protect data across environments. The first misconception is dangerous as it implies the organization does not have proper data literacy: enterprise employees zero in on the aspect of PII while neglecting trade secret theft and the complete breakdown of information sharing. Building on the first, the second misconception forges the ideology that reliance on third-party vendor security will absolve the company of security risk. Instead, third-party risk has grown over the last two years and is one of the major causes of data security breaches. It is important to understand that a holistic approach to protecting data is needed, and it should not consist merely of purchasing a Data Loss Prevention (DLP) tool: a tool is not a solution. To protect supply chain data, start by providing data literacy training to all employees and negotiating the security component of contracts with vendors to include data literacy training for individuals and teams that may access company data. It is also important to understand the origin of the data and its movement, including risk identification, and to ensure that processes effectively incorporate data security principles. Evaluate and select DLP solutions to address specific concerns and use cases in conjunction with data visibility. These approaches are part of a broader solutions framework called Data Security Assurance (DSA). The DSA framework looks at all of the processes across the supply chain, including their corresponding architecture and workflows, employee data literacy, governance and controls, integration between third- and fourth-party vendors, DLP as a solution concept, and policies related to data residency. Within cloud environments, this framework is crucial for the supply chain industry to avoid regulatory implications and third/fourth-party risk.
Keywords: security by design, data security architecture, cybersecurity framework, data security assurance
Procedia PDF Downloads 88
24656 Data Security: An Enhancement of E-mail Security Algorithm to Secure Data Across State Owned Agencies
Authors: Lindelwa Mngomezulu, Tonderai Muchenje
Abstract:
Over the decades, e-mail has provided easy, fast, and timely communication, enabling businesses and state-owned agencies to communicate with their stakeholders and their own employees in real time. Moreover, since the launch of Microsoft Office 365 and many other cloud-based e-mail services, many businesses have been migrating from on-premises e-mail services to the cloud, and since the beginning of the Covid-19 pandemic in particular, e-mail utilization has increased significantly, which in turn has led to an increase in cyber-attacks. In that regard, e-mail security has become very important in e-mail transportation to ensure that a message reaches its recipient without its data integrity being compromised. This work classifies features that enhance e-mail security against increasingly sophisticated cyber-attacks, recognizing that attacks advance along with the technology. Therefore, to maximize data integrity, the security of e-mails must also be maximized, for example through enhanced e-mail authentication. Successful enhancement of e-mail security may lessen the frequency of information theft via e-mail, so that the data of South African state-owned agencies is not compromised.
Keywords: e-mail security, cyber-attacks, data integrity, authentication
Procedia PDF Downloads 135
24655 Semi-Supervised Outlier Detection Using a Generative and Adversary Framework
Authors: Jindong Gu, Matthias Schubert, Volker Tresp
Abstract:
In many outlier detection tasks, only training data belonging to one class, i.e., the positive class, is available. The task is then to predict a new data point as belonging either to the positive class or to the negative class, in which case the data point is considered an outlier. For this task, we propose a novel corrupted Generative Adversarial Network (CorGAN). In the adversarial process of training CorGAN, the Generator generates outlier samples for the negative class, and the Discriminator is trained to distinguish the positive training data from the generated negative data. The proposed framework is evaluated using an image dataset and a real-world network intrusion dataset. Our outlier-detection method achieves state-of-the-art performance on both tasks.
Keywords: one-class classification, outlier detection, generative adversary networks, semi-supervised learning
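The decision logic described above (score a new point against positive training data versus generated negatives) can be illustrated with a deliberately simplified, pure-Python stand-in. This is not the CorGAN architecture: the trained Generator is replaced by a fixed broad sampler and the Discriminator by a nearest-neighbour vote, and all names and values are hypothetical.

```python
import random

random.seed(0)

# Positive (normal) training data: 1-D points clustered around 1.0.
positives = [1.0 + random.uniform(-0.1, 0.1) for _ in range(200)]

# Stand-in "generator": samples negative (outlier) candidates away from
# the positive cluster. In CorGAN this is a trained network instead.
negatives = [random.uniform(0.0, 0.8) if random.random() < 0.5
             else random.uniform(1.2, 2.0) for _ in range(200)]

def discriminate(x, positives, negatives):
    """Stand-in 'discriminator': label x by its nearest training point."""
    d_pos = min(abs(x - p) for p in positives)
    d_neg = min(abs(x - n) for n in negatives)
    return "inlier" if d_pos <= d_neg else "outlier"

print(discriminate(1.02, positives, negatives))
print(discriminate(1.90, positives, negatives))
```

A point near the positive cluster is labelled an inlier, while a point far from it is labelled an outlier; the adversarial training in the paper exists precisely to make the generated negatives harder than this fixed sampler.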
Procedia PDF Downloads 151
24654 Testing the Change in Correlation Structure across Markets: High-Dimensional Data
Authors: Malay Bhattacharyya, Saparya Suresh
Abstract:
The correlation structure associated with a portfolio is subject to variation across time. Studying structural breaks in the time-dependent correlation matrix associated with a collection of assets has been a subject of interest for better understanding market movements, portfolio selection, etc. The current paper proposes a methodology for testing the change in the time-dependent correlation structure of a portfolio in high-dimensional data using techniques of the generalized inverse, singular value decomposition, and multivariate distribution theory, which has not been addressed so far. The asymptotic properties of the proposed test are derived. The performance and validity of the method are also tested on a real data set. The proposed test performs well for detecting the change in the dependence of global markets in the context of high-dimensional data.
Keywords: correlation structure, high dimensional data, multivariate distribution theory, singular value decomposition
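As a toy illustration of the problem being tested (not the paper's actual statistic, which relies on the generalized inverse, SVD, and multivariate distribution theory), one can compare sample correlations over two time windows and flag a structural break when the difference is large; the threshold here is an arbitrary placeholder.

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two assets: strongly correlated in window 1, anti-correlated in window 2.
a1 = [1.0, 2.0, 3.0, 4.0, 5.0]
b1 = [1.1, 2.0, 2.9, 4.2, 5.1]
a2 = [1.0, 2.0, 3.0, 4.0, 5.0]
b2 = [5.0, 4.1, 3.0, 1.9, 1.0]

r1 = pearson(a1, b1)
r2 = pearson(a2, b2)
stat = abs(r1 - r2)        # naive change statistic; placeholder threshold below
print(stat > 1.0)
```

In high dimensions the full correlation matrices can be rank-deficient, which is why the paper turns to generalized inverses and SVD rather than elementwise comparisons like this one.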
Procedia PDF Downloads 123
24653 Development and Evaluation of a Portable Ammonia Gas Detector
Authors: Jaheon Gu, Wooyong Chung, Mijung Koo, Seonbok Lee, Gyoutae Park, Sangguk Ahn, Hiesik Kim, Jungil Park
Abstract:
In this paper, we present a portable ammonia gas detector for performing gas safety management efficiently. The display of the detector is separate from its body; the display module receives the measured data from the detector over ZigBee. The detector has a rechargeable Li-ion battery that can be used for 11-12 hours, and a Bluetooth module for sending the data to a PC or smart devices. The data are sent to a server and can be accessed using a web browser or a mobile application. The detection range is 0-100 ppm.
Keywords: ammonia, detector, gas, portable
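As a hedged sketch of the detector's data path (the ADC resolution, linear calibration, and field names below are hypothetical assumptions, not from the paper), a raw sensor count can be mapped into the stated 0-100 ppm range and packaged for transfer to the display module or server:

```python
import json

ADC_MAX = 1023          # hypothetical 10-bit ADC
RANGE_PPM = 100.0       # detector's stated measurement range: 0-100 ppm

def counts_to_ppm(counts):
    """Linearly map a raw ADC count into 0-100 ppm (assumed calibration)."""
    ppm = (counts / ADC_MAX) * RANGE_PPM
    return max(0.0, min(RANGE_PPM, ppm))  # clamp to the sensor's range

def make_payload(counts, timestamp):
    """Package one reading as JSON, e.g. for ZigBee/Bluetooth transfer."""
    return json.dumps({"ts": timestamp,
                       "nh3_ppm": round(counts_to_ppm(counts), 1)})

print(make_payload(512, "2018-01-01T00:00:00"))
```

A real device would replace the linear map with the sensor manufacturer's calibration curve.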
Procedia PDF Downloads 413
24652 Development of a Shape Based Estimation Technology Using Terrestrial Laser Scanning
Authors: Gichun Cha, Byoungjoon Yu, Jihwan Park, Minsoo Park, Junghyun Im, Sehwan Park, Sujung Sin, Seunghee Park
Abstract:
The goal of this research is to estimate structural shape change using terrestrial laser scanning. This study develops a data reduction and shape change estimation algorithm for large-capacity scan data. The point cloud of scan data is converted to voxels and sampled. A shape estimation technique is studied to detect changes in structural patterns, such as skyscrapers, bridges, and tunnels, based on large point cloud data. The point cloud analysis applies the octree data structure to speed up the post-processing for change detection. The point cloud data serve as relative representative values of shape information and are used as a model for detecting point cloud changes in a data structure. The shape estimation model aims to develop a technology that can detect not only normal but also immediate structural changes in the event of disasters such as earthquakes, typhoons, and fires, thereby preventing major accidents caused by aging and disasters. The study is expected to improve the efficiency of structural health monitoring and maintenance.
Keywords: terrestrial laser scanning, point cloud, shape information model, displacement measurement
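The voxel conversion and sampling step described above can be sketched in pure Python. This is a minimal stand-in for the paper's octree pipeline (the voxel size and point format are assumptions): points are binned by voxel index and each occupied voxel is reduced to its centroid, shrinking the cloud before change detection.

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Reduce a point cloud to one centroid per occupied voxel."""
    bins = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        bins[key].append((x, y, z))
    centroids = []
    for pts in bins.values():
        n = len(pts)
        centroids.append(tuple(sum(c) / n for c in zip(*pts)))
    return centroids

cloud = [(0.1, 0.1, 0.1), (0.2, 0.15, 0.05), (5.0, 5.0, 5.0)]
reduced = voxel_downsample(cloud, voxel_size=1.0)
print(len(reduced))   # two occupied voxels remain
```

An octree refines this idea hierarchically, subdividing only occupied cells, which is what makes post-processing of very large scans tractable.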
Procedia PDF Downloads 233
24651 A Non-Invasive Blood Glucose Monitoring System Using Near-Infrared Spectroscopy with Remote Data Logging
Authors: Bodhayan Nandi, Shubhajit Roy Chowdhury
Abstract:
This paper presents the development of a portable blood glucose monitoring device based on near-infrared spectroscopy. The system supports Internet connectivity through WiFi and uploads the time series of patients' glucose concentration to a server. In addition, the server is given sufficient intelligence to predict the future pathophysiological state of a patient given the current and past pathophysiological data. This makes it possible to prognosticate an approaching critical condition well before it actually occurs. The server hosts web applications that allow authorized users to monitor the data remotely.
Keywords: non invasive, blood glucose concentration, microcontroller, IoT, application server, database server
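A minimal sketch of the server-side prediction idea follows. The abstract does not specify the predictive model, so a simple linear extrapolation stands in for the server's "intelligence", and the threshold and function names are hypothetical:

```python
def predict_next(series):
    """Naively extrapolate the next reading from the last two samples."""
    if len(series) < 2:
        return series[-1] if series else None
    return series[-1] + (series[-1] - series[-2])

def is_approaching_critical(series, high=180.0):
    """Flag a trend heading above a (hypothetical) critical threshold."""
    nxt = predict_next(series)
    return nxt is not None and nxt > high

glucose_mgdl = [120.0, 140.0, 165.0]          # uploaded time series
print(predict_next(glucose_mgdl))             # 190.0
print(is_approaching_critical(glucose_mgdl))  # True
```

A production system would use a proper time-series model over the full pathophysiological history rather than two points.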
Procedia PDF Downloads 215
24650 Proposal to Increase the Efficiency, Reliability and Safety of the Centre of Data Collection Management and Their Evaluation Using Cluster Solutions
Authors: Martin Juhas, Bohuslava Juhasova, Igor Halenar, Andrej Elias
Abstract:
This article deals with the possibility of increasing the efficiency, reliability, and safety of a system for teledosimetric data collection management and evaluation, as part of the complex study for the activity "Research of data collection, their measurement and evaluation with mobile and autonomous units" within the project "Research of monitoring and evaluation of non-standard conditions in the area of nuclear power plants". Possible weaknesses in the existing system are identified, and a study of available cluster solutions, with the possibility of deploying them in the analysed system, is presented.
Keywords: teledosimetric data, efficiency, reliability, safety, cluster solution
Procedia PDF Downloads 513
24649 Efficient Storage in Cloud Computing by Using Index Replica
Authors: Bharat Singh Deora, Sushma Satpute
Abstract:
Cloud computing is based on resource sharing. Like other shareable resources, storage can be shared. We can combine storage resources from different locations and maintain a central index table of storage details. Storage combined from different places can form a unified data store that is operated from one location and is very economical. Proper storage of data should improve data reliability, availability, and bandwidth utilization. Contents can also be moved from one store to another according to need.
Keywords: cloud computing, cloud storage, IaaS, PaaS, SaaS
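The central index table can be sketched as a small mapping from object name to its storage locations. This is an illustrative toy (the class, method names, and store names are hypothetical), not a concrete cloud implementation:

```python
class CentralIndex:
    """Toy central index: one table tracks where each object is stored."""

    def __init__(self):
        self.table = {}   # object name -> list of storage locations

    def put(self, name, location):
        """Record that a copy of `name` lives at `location`."""
        self.table.setdefault(name, []).append(location)

    def locate(self, name):
        """Return all known replica locations for an object."""
        return self.table.get(name, [])

    def move(self, name, src, dst):
        """Relocate an object's copy from one store to another."""
        locs = self.table.get(name, [])
        if src in locs:
            locs[locs.index(src)] = dst

index = CentralIndex()
index.put("report.csv", "eu-store-1")
index.put("report.csv", "us-store-2")
index.move("report.csv", "us-store-2", "ap-store-3")
print(index.locate("report.csv"))
```

Operating all reads and writes through one such table is what lets geographically scattered storage behave as a single, centrally managed store.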
Procedia PDF Downloads 340
24648 Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning
Authors: T. Bryan , V. Kepuska, I. Kostnaic
Abstract:
A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using “basis vectors” that are learned from the audio data itself. The basis vectors are shown to achieve higher data compression and better signal-to-noise enhancement than the Gabor and gammatone “seed atoms” that were used to generate them. The basis vectors are the input weights of a Sparse AutoEncoder (SAE) that is trained using “envelope samples” of windowed segments of the audio data. Envelope samples are extracted by identifying, via matching pursuit, audio data segments that are locally coherent with the Gabor or gammatone seed atoms; they are formed by taking the Kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) versus data compression curves are generated for the seed atoms as well as for the basis vectors learned from Gabor and gammatone seed atoms. SNR data compression curves are generated for speech signals as well as for early American music recordings. The basis vectors are shown to have higher denoising capability at data compression rates ranging from 90% to 99.84% for speech as well as music. Envelope samples are displayed as images by folding the time series into column vectors; this display method is used to compare the output of the SAE with the envelope samples that produced it. The basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the highest-denoising basis vectors.
Keywords: sparse dictionary learning, autoencoder, sparse autoencoder, basis vectors, atomic decomposition, envelope sampling, envelope samples, Gabor, gammatone, matching pursuit
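The matching pursuit step, which the abstract uses to find audio segments locally coherent with seed atoms, can be sketched in pure Python on a toy signal and a tiny hand-made dictionary. The atoms below are illustrative unit vectors, not actual Gabor or gammatone atoms:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def matching_pursuit(signal, atoms, n_iter=2):
    """Greedily decompose signal into (coefficient, atom index) pairs."""
    residual = list(signal)
    decomposition = []
    for _ in range(n_iter):
        # Pick the atom most correlated with the current residual.
        best = max(range(len(atoms)), key=lambda i: abs(dot(residual, atoms[i])))
        coeff = dot(residual, atoms[best])
        decomposition.append((coeff, best))
        # Subtract the selected component and continue on the residual.
        residual = [r - coeff * a for r, a in zip(residual, atoms[best])]
    return decomposition, residual

# Toy dictionary of two orthonormal "atoms".
atoms = [normalize([1.0, 1.0, 0.0, 0.0]), normalize([0.0, 0.0, 1.0, -1.0])]
signal = [2.0, 2.0, 3.0, -3.0]   # an exact mix of the two atoms

decomp, residual = matching_pursuit(signal, atoms)
print([round(c, 3) for c, _ in decomp])
print(max(abs(r) for r in residual) < 1e-9)
```

Because the toy signal is an exact mix of the dictionary atoms, two pursuit iterations drive the residual to zero; on real audio the coherent segments recovered this way are what feed the Kronecker-product envelope sampling.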
Procedia PDF Downloads 249
24647 Deep Supervision Based-Unet to Detect Buildings Changes from VHR Aerial Imagery
Authors: Shimaa Holail, Tamer Saleh, Xiongwu Xiao
Abstract:
Building change detection (BCD) from satellite imagery is an essential topic in urbanization monitoring, agricultural land management, and the updating of geospatial databases. Recently, deep-learning-based change detection methods have made significant progress and achieved impressive results. However, they remain insensitive to changes in buildings with complex spectral differences, and the extracted features are not discriminative enough, resulting in incomplete buildings and irregular boundaries. To overcome these problems, we propose in this paper a dual Siamese network based on the Unet model with the addition of a deep supervision (DS) strategy. This network consists of a backbone (encoder) based on ImageNet pre-training, a fusion block, and feature pyramid networks (FPN) to enhance the step-by-step information of the changing regions and obtain a more accurate BCD map. To train the proposed method, we created a new dataset (EGY-BCD) of high-resolution, multi-temporal aerial images captured over New Cairo in Egypt. The experimental results showed that the proposed method is effective and performs well on the EGY-BCD dataset in terms of overall accuracy, F1-score, and mIoU, which were 91.6%, 80.1%, and 73.5%, respectively.
Keywords: building change detection, deep supervision, semantic segmentation, EGY-BCD dataset
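A hedged, minimal illustration of the BCD task and its F1 evaluation follows. This is emphatically not the paper's Siamese Unet: it is plain pixel differencing on toy "images", with an arbitrary threshold, shown only to make the task and the reported metric concrete.

```python
def change_map(img_a, img_b, threshold=0.5):
    """Mark pixels whose absolute intensity difference exceeds threshold."""
    return [[1 if abs(a - b) > threshold else 0 for a, b in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]

def f1_score(pred, truth):
    """F1 over binary change maps (the metric reported by the paper)."""
    tp = fp = fn = 0
    for rp, rt in zip(pred, truth):
        for p, t in zip(rp, rt):
            tp += p and t
            fp += p and not t
            fn += (not p) and t
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

before = [[0.1, 0.1], [0.1, 0.9]]
after  = [[0.1, 0.9], [0.1, 0.9]]   # one pixel became "building"
truth  = [[0, 1], [0, 0]]

pred = change_map(before, after)
print(pred)                   # [[0, 1], [0, 0]]
print(f1_score(pred, truth))  # 1.0
```

Pixel differencing is exactly what fails on complex spectral differences, which motivates learning discriminative features with the dual Siamese encoder and deep supervision instead.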
Procedia PDF Downloads 118