Search results for: depth-wise separable convolutional neural network for light-weight GAN architecture for EDGE devices
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9525

1575 Utilizing Dowel-Laminated Mass Timber Components in Residential Multifamily Structures: A Case Study

Authors: Theodore Panton

Abstract:

As cities in the United States experience critical housing shortages, mass timber presents an opportunity to address the crisis in housing supply while taking advantage of the carbon-positive benefits of sustainably forested wood fiber. Mass timber, however, currently has a low level of adoption in residential multifamily structures, owing to the risk-averse nature of the construction financing and Architecture/Engineering/Contracting (AEC) communities, as well as various agency approval challenges. This study demonstrates how mass timber can be used within the cost and feasibility parameters of a typical multistory residential structure and ultimately address the need for dense urban housing. It uses The Garden District, a mixed-use market-rate housing project in Woodinville, Washington, as a case study to illuminate the potential of mass timber in this application. The Garden District is currently in the final stages of permit approval and will commence construction in 2023; when completed, it will be the tallest dowel-laminated timber (DLT) residential structure in the United States. The case study includes economic, technical, and design reference points to demonstrate the relevance of this system and its ability to deliver “triple bottom line” results. In terms of results, the study establishes scalable and repeatable approaches to project design and delivery of mass timber in multifamily residential uses and includes economic data, technical solutions, and a summary of end-user advantages. It discusses third-party-tested systems for satisfying acoustical requirements within dwelling units, a key to resolving the use of mass timber in multistory residential applications. Lastly, the study compares the mass timber solution with a comparable cold-formed steel (CFS) system with a similar program, which indicates a net carbon savings of over three million tons over the life cycle of the building.

Keywords: DLT, dowel-laminated timber, mass timber, market-rate multifamily

Procedia PDF Downloads 102
1574 The Role of Acoustical Design within Architectural Design in the Early Design Phase

Authors: O. Wright, N. Perkins, M. Donn, M. Halstead

Abstract:

This research responded to anecdotal evidence suggesting that inefficiencies in the architect-acoustician relationship may lead to ineffective acoustic design decisions. The acoustician consulted believed that he was approached too late in the design phase. The architect consulted valued acoustical qualities yet struggled to interpret common measurement parameters. A preliminary investigation of these opinions indicated a gap in the current New Zealand architectural discourse and currently informs a 2016 Master of Architecture (Prof) thesis. Little meaningful information about acoustic intervention in the early design phase could be found in past literature. In the information that was sourced, authors focus on software as an incorporation tool without investigating why the flaws in the relationship exist in the first place. To explore this relationship further, a survey was designed. It underwent three phases to ensure its consistency and was delivered to a group of 51 acousticians from one international acoustics company. The results were then separated between New Zealand and offshore respondents to identify trends. The survey results suggest that 75% of acousticians meet the architect fewer than five times per project. Instead of regular contact, a mediated method is adopted through a mix of telecommunication and written reports. Acousticians tend to be introduced later into New Zealand building projects than into the corresponding offshore projects. This delay corresponds to an increase in remedial action for each of the building types in the survey except auditoria and office buildings. Thirty-one participants have had their specifications challenged by an architect. Furthermore, 71% of the acousticians believe that architects do not have the knowledge to understand why the acoustic specifications are in place. The issues raised in this investigation align with the colloquial evidence expressed by the two consultants. They identify a larger gap in the industry where acoustics is treated remedially rather than identified as a possible design driver. Further research through design is suggested to understand the role of acoustics within architectural design and potential tools for its inclusion during, not after, the design process.

Keywords: architectural acoustics, early-design, interdisciplinary communication, remedial response

Procedia PDF Downloads 239
1573 Speed Breaker/Pothole Detection Using Hidden Markov Models: A Deep Learning Approach

Authors: Surajit Chakrabarty, Piyush Chauhan, Subhasis Panda, Sujoy Bhattacharya

Abstract:

A large proportion of roads in India are not well maintained as per the laid-down public safety guidelines, leading to loss of direction control and fatal accidents. We propose a technique to detect speed breakers and potholes using mobile sensor data captured from multiple vehicles and to provide a profile of the road. This would, in turn, help in monitoring roads and could revolutionize digital maps. Incorporating randomness in the model formulation for the detection of speed breakers and potholes is crucial due to the substantial heterogeneity observed in data obtained through a mobile application from multiple vehicles driven by different drivers. This is accomplished with Hidden Markov Models, whose hidden state sequence is found for each time step given the observable sequence and is then fed as input to an LSTM network with peephole connections. Precision scores of 0.96 and 0.63 are obtained for classifying bumps and potholes, respectively, a significant improvement over the machine-learning-based models. Further visualization of bumps and potholes is done by converting the time series to images using Markov Transition Fields, where a significant demarcation between bumps and potholes is observed.
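The first stage described above, recovering the hidden state sequence of an HMM from sensor observations, can be sketched with a small Viterbi decoder. The two road states, their transition/emission probabilities, and the quantized accelerometer readings below are all invented for illustration; the decoded sequence is what would then be fed to the LSTM.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden state sequence for `obs` (log-domain)."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max(
                (V[-2][p] + math.log(trans_p[p][s]) + math.log(emit_p[s][o]), p)
                for p in states
            )
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Invented two-state road model: readings are quantized vertical accelerations.
states = ["smooth", "bump"]
start_p = {"smooth": 0.9, "bump": 0.1}
trans_p = {"smooth": {"smooth": 0.8, "bump": 0.2},
           "bump":   {"smooth": 0.6, "bump": 0.4}}
emit_p = {"smooth": {"low": 0.9, "high": 0.1},
          "bump":   {"low": 0.2, "high": 0.8}}

decoded = viterbi(["low", "low", "high", "low"], states, start_p, trans_p, emit_p)
# decoded -> ["smooth", "smooth", "bump", "smooth"]
```

The isolated "high" reading is attributed to a bump; a per-time-step state sequence like this is a natural input encoding for a downstream LSTM classifier.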

Keywords: deep learning, hidden Markov model, pothole, speed breaker

Procedia PDF Downloads 129
1572 Regional Low Gravity Anomalies Influencing High Concentrations of Heavy Minerals on Placer Deposits

Authors: T. B. Karu Jayasundara

Abstract:

Regions of low gravity and gravity anomalies both influence heavy mineral concentrations on placer deposits. Economically important heavy minerals are likely to have higher levels of deposition in low-gravity regions of placer deposits. This can be observed in the coastal regions of southern Asia, particularly in Sri Lanka and peninsular India, areas located in the lowest-gravity region of the world. About 70 kilometers of the east coast of Sri Lanka is covered by a high percentage of ilmenite deposits, and the southwest coast of the island contains a monazite placer deposit. These are among the largest placer deposits in the world. In India, the heavy mineral industry has a good market. On the other hand, based on the coastal placer deposits recorded, the high-gravity region located around Papua New Guinea has no such heavy mineral deposits. In low-gravity regions, with the help of other depositional environmental factors, the grains have more time and space to float in the sea, which helps bring high concentrations of heavy mineral deposits to the coast. The effect of low and high gravity can be demonstrated by using heavy mineral separation devices. The Wilfley heavy mineral separating table is one of these; it is extensively used in industry and in laboratories for heavy mineral separation. The horizontally oscillating Wilfley table helps to separate heavy and light mineral grains into different fractions with the use of water. In this experiment, the low and high angles of the Wilfley table represent low and high gravity, respectively. A sample mixture of heavy and light mineral grains of grain size <0.85 mm was used for this experiment. The high and low angles of the table were 60 and 20, respectively. The fractions separated by the table were again separated into heavy and light minerals with the use of a heavy liquid with a specific gravity of 2.85. The fractions of separated heavy and light minerals were used for drawing two-dimensional graphs. The graphs show that the low-gravity stage has a higher percentage of heavy minerals collected in the upper area of the table than the high-gravity stage. The results of the experiment can be used for the comparison of regional low- and high-gravity levels of heavy minerals. If there are any heavy mineral deposits in high-gravity regions, these deposits will occur far away from the coast, within the continental shelf.

Keywords: anomaly, gravity, influence, mineral

Procedia PDF Downloads 185
1571 Groundwater Monitoring Using a Community Science Approach

Authors: Shobha Kumari Yadav, Yubaraj Satyal, Ajaya Dixit

Abstract:

In addressing groundwater depletion, it is important to develop an evidence base that can be used in assessing the state of its degradation. Groundwater data are limited compared to meteorological data, which impedes groundwater use and management planning. Monitoring of groundwater levels provides an information base for assessing the condition of aquifers and their responses to water extraction, land-use change, and climatic variability. It is important to maintain a network of spatially distributed, long-term monitoring wells to support a groundwater management plan. Monitoring involving the local community is a cost-effective approach that generates real-time data for effectively managing groundwater use. This paper presents the relationship between rainfall and spring flow, which are the main sources of freshwater for drinking, household consumption, and agriculture in the hills of Nepal. The supply and withdrawal of water from springs depend upon the local hydrology and meteorological characteristics, such as rainfall, evapotranspiration, and interflow. The study offers evidence of the use of scientific methods and community-based initiatives for managing groundwater and springsheds. The approach presents a method that can be replicated in other parts of the country for maintaining the integrity of springs.

Keywords: citizen science, groundwater, water resource management, Nepal

Procedia PDF Downloads 186
1570 District 10 in Tehran: Urban Transformation and the Survey Evidence of Loss in Place Attachment in High Rises

Authors: Roya Morad, W. Eirik Heintz

Abstract:

The identity of a neighborhood is inevitably shaped by the architecture and the people of that place. Conventionally, the streets within each neighborhood served as a semi-public, semi-private extension of the private living spaces. The street as a design element formed a hybrid condition that was neither totally public nor private, and it encouraged social interactions. Thus, by creating a sense of community, one of the most basic human needs, belonging, was met. Similar to other major global cities, Tehran has undergone serious urbanization. Developing into a capital city of high rises has resulted in an increase in urban density. Although allocating more residential units to each neighborhood was a critical response to the population boom and the limited land area of the city, it also created a crisis in terms of social communication and place attachment. District 10 in Tehran is the neighborhood that has undergone the most urban transformation among the capital's 22 districts and currently has the highest population density. This paper explores how the active streets in District 10 have changed into their current condition of high rises with a lack of meaningful social interaction amongst their inhabitants. A residential building can be thought of as a large group of people. One would think that as the number of people increases, the opportunities for social communication would increase as well. However, according to the survey, there is an inverse relationship between the two. As the number of people in a residential building increases, the quality of each acquaintance is reduced, and the depth of relationships between people tends to decrease. This comes from the anonymity of being part of a crowd and the lack of social spaces that characterizes most high-rise apartment buildings. Without a sense of community, attachment to a neighborhood decreases. The paper further explores how the neighborhood can participate in fulfilling one's need for social interaction and focuses on the qualitative aspects of alternative spaces that can redevelop the sense of place attachment within the community.

Keywords: high density, place attachment, social communication, street life, urban transformation

Procedia PDF Downloads 111
1569 Typification and Determination of Antibiotic Resistance Rates of Stenotrophomonas Maltophilia Strains Isolated from Intensive Care Unit Patients in a University Practice and Research Hospital

Authors: Recep Kesli, Gulsah Asik, Cengiz Demir, Onur Turkyilmaz

Abstract:

Objective: Stenotrophomonas maltophilia (S. maltophilia) has recently emerged as an important nosocomial microorganism. Treatment of invasive infections caused by this organism is problematic because it is usually resistant to a wide range of commonly used antimicrobials. We aimed to evaluate clinical isolates of S. maltophilia with respect to sampling sites and antimicrobial resistance. Method: During a two-year period (October 2013 to September 2015), eighteen samples were collected from intensive care unit (ICU) patients hospitalized at Afyon Kocatepe University, ANS Practice and Research Hospital. Identification of the bacteria was performed by conventional methods and an automated identification system, VITEK 2 (bioMérieux, Marcy l'Étoile, France). Antibacterial resistance tests were performed by the Kirby-Bauer disc diffusion method (Oxoid, England) following the recommendations of the CLSI. Results: Eighteen S. maltophilia strains were identified as the causative agents of different infections. The main type of infection was lower respiratory tract infection (83.4%); three patients (16.6%) had bloodstream infections. While none of the 18 S. maltophilia strains were found to be resistant to trimethoprim-sulfamethoxazole (TMP-SXT) and levofloxacin, eight strains (66.6%) were found to be resistant to ceftazidime. Conclusion: The isolation of S. maltophilia strains resistant to TMP-SXT is vital. In order to prevent or minimize infections due to S. maltophilia, the following precautions should be employed: avoidance of inappropriate antibiotic use and of the prolonged implementation of foreign devices, reinforcement of hand hygiene practices, and the application of appropriate infection control practices. Microbiology laboratories may also play an important role in controlling S. maltophilia infections by continuously monitoring prevalence; the provision of local antibiotic resistance pattern data and the performance of synergy studies may also help to guide appropriate antimicrobial therapy choices.

Keywords: Stenotrophomonas maltophilia, trimethoprim-sulfamethoxazole, antimicrobial resistance, Stenotrophomonas spp.

Procedia PDF Downloads 235
1568 Determination of Measurement Uncertainty of the Diagnostic Meteorological Model CALMET

Authors: Nina Miklavčič, Urška Kugovnik, Natalia Galkina, Primož Ribarič, Rudi Vončina

Abstract:

Today, the need for weather predictions is deeply rooted in the everyday life of people as well as in industry. Forecasts influence final decision-making processes in multiple areas, from agriculture and the prevention of natural disasters to air traffic regulation and national-level solutions to health, security, and economic problems. In Slovenia, alongside other existing applications, weather forecasts are used for the prognosis of electrical current transmission through power lines. Meteorological parameters are one of the key factors that need to be considered in estimations of a reliable supply of electrical energy to consumers. And as for any other measured value, knowledge about measurement uncertainty is also critical for a secure and reliable supply of energy. The estimation of measurement uncertainty grants us a more accurate interpretation of data, better quality of the end results, and even the possibility of improving weather forecast models. In this article, we focus on the estimation of the measurement uncertainty of the diagnostic microscale meteorological model CALMET. For the purposes of our research, we used a network of meteorological stations spread over the area of interest, which enables a side-by-side comparison of measured meteorological values with the values calculated with the help of CALMET, and the measurement uncertainty estimation as a final result.
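The side-by-side comparison described above reduces, in its simplest form, to computing the bias and spread of station-minus-model residuals. The following sketch uses invented wind-speed pairs (the abstract gives no numbers); a full uncertainty budget would add further components.

```python
import math

def model_uncertainty(measured, modeled):
    """Bias and standard uncertainty of model-vs-station residuals."""
    residuals = [m - c for m, c in zip(measured, modeled)]
    n = len(residuals)
    bias = sum(residuals) / n
    spread = math.sqrt(sum((r - bias) ** 2 for r in residuals) / (n - 1))
    return bias, spread

# Hypothetical wind-speed pairs (m/s): station measurements vs. CALMET output.
bias, u = model_uncertainty([3.1, 4.0, 2.7, 3.6], [2.9, 4.4, 2.5, 3.4])
# bias -> 0.05 m/s, u -> 0.30 m/s
```

The bias indicates a systematic offset of the model, while the spread of the residuals serves as an estimate of its standard uncertainty.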

Keywords: uncertainty, meteorological model, meteorological measurement, CALMET

Procedia PDF Downloads 61
1567 Interference Management in Long Term Evolution-Advanced System

Authors: Selma Sbit, Mohamed Bechir Dadi, Belgacem Chibani Rhaimi

Abstract:

Incorporating Home eNodeB (HeNB) in cellular networks, e.g., Long Term Evolution Advanced (LTE-A), is beneficial for extending coverage and enhancing capacity at low cost, especially within non-line-of-sight (NLOS) environments such as homes. A HeNB, or femtocell, is a small, low-powered base station which provides radio coverage to mobile users in an indoor environment. This deployment results in a heterogeneous network where the available spectrum becomes shared between two layers. Therefore, a problem of Inter-Cell Interference (ICI) appears. This issue is the main challenge in LTE-A. To deal with this challenge, various techniques based on frequency, time, and power control have been proposed. This paper deals with the impact of carrier aggregation and higher-order MIMO (Multiple Input Multiple Output) schemes on LTE-Advanced performance. Simulation results show the advantages of these schemes for the system capacity (4×10⁹ b/s when the bandwidth B = 100 MHz and MIMO 8x8 is applied for SINR = 30 dB), the maximum theoretical peak data rate (more than 4 Gbps for B = 100 MHz when MIMO 8x8 is used), and the spectral efficiency (15 b/s/Hz and 30 b/s/Hz when MIMO 4x4 and MIMO 8x8 are applied, respectively, for SINR = 30 dB).
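As a rough sanity check on the figures above, the idealized Shannon capacity of an n-stream MIMO link and the peak-rate arithmetic can be sketched as follows. This is an illustrative upper bound, not the paper's simulation model; practical LTE-A spectral efficiencies (the 15 and 30 b/s/Hz cited above) sit well below the Shannon bound because of overhead and finite modulation orders.

```python
import math

def shannon_se(sinr_db, n_streams=1):
    """Idealized spectral efficiency (b/s/Hz) of n parallel spatial streams."""
    sinr = 10 ** (sinr_db / 10)
    return n_streams * math.log2(1 + sinr)

def peak_rate(bandwidth_hz, se_bps_per_hz):
    """Peak data rate (b/s) = aggregated bandwidth x spectral efficiency."""
    return bandwidth_hz * se_bps_per_hz

# Figures from the abstract: B = 100 MHz (carrier aggregation),
# 30 b/s/Hz with 8x8 MIMO.
rate = peak_rate(100e6, 30.0)  # 3e9 b/s = 3 Gbps
```

Note that 100 MHz × 30 b/s/Hz gives 3 Gbps; peak rates above 4 Gbps as reported would imply additional assumptions (e.g., a larger aggregated bandwidth), so these numbers are purely illustrative.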

Keywords: capacity, carrier aggregation, LTE-Advanced, MIMO (Multiple Input Multiple Output), peak data rate, spectral efficiency

Procedia PDF Downloads 238
1566 Microfluidized Fiber Based Oleogels for Encapsulation of Lycopene

Authors: Behic Mert

Abstract:

This study reports a facile approach to structuring soft solids from a microfluidized lycopene-rich plant material and oil. First, a carotenoid-rich plant material (pumpkin in this study) was processed with a high-pressure microfluidizer to release lycopene molecules; then an emulsion was formed by mixing the processed plant material and oil. While the lipid-soluble carotenoid molecules were allowed to dissolve in the oil phase in the emulsion state, the fiber of the plant material provided the network required for emulsion stabilization. Additional hydrocolloids (gelatin, xanthan, and pectin) up to 0.5% were also used to reinforce the emulsion stability, and their impact on the final product properties was evaluated via rheological, textural, and oxidation studies. Finally, water was removed from the emulsion phase by drying in a tray dryer at 40°C for 36 hours, and subsequent shearing resulted in soft solid (oleogel) structures. The microstructure of these systems was revealed by cryo-scanning electron microscopy. The effect of the hydrocolloids on the total lycopene and surface lycopene contents was also evaluated. The surface lycopene was lowest in the gelatin-containing oleogels and highest in the pectin-containing oleogels. This study outlines a novel emulsion-based structuring method that can be used to encapsulate lycopene without the need for its separate extraction.

Keywords: lycopene, encapsulation, fiber, oleogel

Procedia PDF Downloads 251
1565 An Enhanced Hybrid Backoff Technique for Minimizing the Occurrence of Collision in Mobile Ad Hoc Networks

Authors: N. Sabiyath Fatima, R. K. Shanmugasundaram

Abstract:

In Mobile Ad-hoc Networks (MANETs), every node acts as both transmitter and receiver. The existing backoff models do not accurately forecast the performance of the wireless network, and they experience elevated packet collisions. Every time a collision happens, the station's contention window (CW) is doubled until it reaches its maximum value. The main objective of this paper is to reduce collisions by means of the Contention Window Multiplicative Increase Decrease Backoff (CWMIDB) scheme. The intention of increasing the CW is to shrink the collision probability by distributing transmissions over a larger window of time. Within wireless ad hoc networks, the CWMIDB algorithm dynamically controls the contention window of the nodes experiencing collisions. During packet communication, the backoff counter is uniformly selected from the range [0, CW-1]. Here, CW denotes the contention window, and its value depends on the number of unsuccessful transmissions that have occurred for the packet. On the initial transmission attempt, CW is set to its minimum value (CWmin); if the transmission attempt fails, the value is doubled, and it is set back to the minimum on a successful transmission. CWMIDB is simulated in the NS2 environment, and its performance is compared with the Binary Exponential Backoff algorithm. The simulation results show an improvement in transmission probability compared to that of the existing backoff algorithm.
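The contention-window logic above can be sketched in a few lines. The abstract does not give CWMIDB's parameters, so the window bounds and the multiplicative-decrease-on-success rule (as opposed to BEB's reset to CWmin) are illustrative assumptions.

```python
import random

CW_MIN, CW_MAX = 16, 1024  # assumed bounds, not specified in the abstract

def next_cw(cw, collided, factor=2):
    """Multiplicative increase on collision, multiplicative decrease on success."""
    if collided:
        return min(cw * factor, CW_MAX)
    return max(cw // factor, CW_MIN)

def backoff_slots(cw):
    """Backoff counter drawn uniformly from [0, CW-1]."""
    return random.randint(0, cw - 1)

cw = CW_MIN
cw = next_cw(cw, collided=True)   # 32
cw = next_cw(cw, collided=True)   # 64
cw = next_cw(cw, collided=False)  # 32: halved, not reset to CW_MIN as in BEB
```

Halving on success, rather than resetting, keeps a recently congested node's window large enough to avoid an immediate burst of new collisions.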

Keywords: backoff, contention window, CWMIDB, MANET

Procedia PDF Downloads 260
1564 Microseismicity of the Tehran Region Based on Three Seismic Networks

Authors: Jamileh Vasheghani Farahani

Abstract:

The main purpose of this research is to show the currently active faults and active tectonics of the area using three seismic networks in the Tehran region: (1) the Tehran Disaster Mitigation and Management Organization (TDMMO), (2) the Broadband Iranian National Seismic Network Center (BIN), and (3) the Iranian Seismological Center (IRSC). In this study, we analyzed microearthquakes that occurred in Tehran city and its surroundings using these networks from 1996 to 2015. We found some active faults and trends in the region. There is a 200-year history of historical earthquakes in Tehran. Historical and instrumental seismicity show that the east of Tehran is more active than the west. The Mosha fault in the north of Tehran is one of the active faults of the central Alborz. Other major faults in the region are the Kahrizak, Eyvanekey, Parchin, and North Tehran faults. An important seismicity region is the intersection of the Mosha and North Tehran fault systems (Kalan village in Lavasan), which shows a cluster of microearthquakes. According to the historical and microseismic events analyzed in this research, there is a seismic gap in the southeast of Tehran. An empirical relationship is used to assess Mmax based on the rupture length. There is a probability of occurrence of a strong earthquake of magnitude 7.0 to 7.5 in the region, based on the assessed capability of the major faults, such as the Parchin and Eyvanekey faults, and on the historical earthquakes.
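The abstract does not name the empirical rupture-length relationship used for Mmax. One commonly used choice is the Wells and Coppersmith (1994) regression relating surface rupture length to moment magnitude, sketched here purely as an assumption:

```python
import math

def mmax_from_rupture_length(srl_km):
    """Mw from surface rupture length (km), Wells & Coppersmith (1994),
    all-slip-type coefficients: Mw = 5.08 + 1.16 * log10(SRL)."""
    return 5.08 + 1.16 * math.log10(srl_km)

# Under this regression, capable ruptures of roughly 50-100 km map to
# Mw of about 7.0-7.4, consistent with the 7.0-7.5 range cited above.
```

Whichever regression the authors actually applied, the structure is the same: a log-linear fit of magnitude against fault rupture length.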

Keywords: Iran, major faults, microseismicity, Tehran

Procedia PDF Downloads 353
1563 Disentangling the Sources and Context of Daily Work Stress: Study Protocol of a Comprehensive Real-Time Modelling Study Using Portable Devices

Authors: Larissa Bolliger, Junoš Lukan, Mitja Lustrek, Dirk De Bacquer, Els Clays

Abstract:

Introduction and Aim: Chronic workplace stress and its health-related consequences like mental and cardiovascular diseases have been widely investigated. This project focuses on the sources and context of psychosocial daily workplace stress in a real-world setting. The main objective is to analyze and model real-time relationships between (1) psychosocial stress experiences within the natural work environment, (2) micro-level work activities and events, and (3) physiological signals and behaviors in office workers. Methods: An Ecological Momentary Assessment (EMA) protocol has been developed, partly building on machine learning techniques. Empatica® wristbands will be used for real-life detection of stress from physiological signals; micro-level activities and events at work will be based on smartphone registrations, further processed according to an automated computer algorithm. A field study including 100 office-based workers with high-level problem-solving tasks like managers and researchers will be implemented in Slovenia and Belgium (50 in each country). Data mining and state-of-the-art statistical methods – mainly multilevel statistical modelling for repeated data – will be used. Expected Results and Impact: The project findings will provide novel contributions to the field of occupational health research. While traditional assessments provide information about global perceived state of chronic stress exposure, the EMA approach is expected to bring new insights about daily fluctuating work stress experiences, especially micro-level events and activities at work that induce acute physiological stress responses. The project is therefore likely to generate further evidence on relevant stressors in a real-time working environment and hence make it possible to advise on workplace procedures and policies for reducing stress.

Keywords: ecological momentary assessment, real-time, stress, work

Procedia PDF Downloads 139
1562 Exploring Social Impact of Emerging Technologies from Futuristic Data

Authors: Heeyeul Kwon, Yongtae Park

Abstract:

Despite their highly touted benefits, emerging technologies have unleashed pervasive concerns regarding unintended and unforeseen social impacts. Thus, those wishing to create safe and socially acceptable products need to identify such side effects and mitigate them prior to market proliferation. Various methodologies in the field of technology assessment (TA), namely Delphi, impact assessment, and scenario planning, have been widely employed in such circumstances. However, the literature faces a major limitation in its sole reliance on participatory workshop activities, missing the availability of a massive untapped source of futuristic information flooding through the Internet. This research thus seeks to gain insights into the utilization of futuristic data, future-oriented documents from the Internet, as a supplementary method to generate social impact scenarios whilst capturing the perspectives of experts from a wide variety of disciplines. To this end, network analysis is conducted on the social keywords extracted from the futuristic documents by text mining, which is then used as a guide to produce a comprehensive set of detailed scenarios. Our proposed approach facilitates harmonized depictions of the possible hazardous consequences of emerging technologies and thereby makes decision makers more aware of, and responsive to, broad qualitative uncertainties.
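The core of the keyword network analysis described above is a co-occurrence count over documents. A minimal sketch, with invented keyword sets standing in for the mined futuristic documents:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(documents):
    """Weighted edge list: how often two keywords co-occur in a document."""
    edges = Counter()
    for keywords in documents:
        for a, b in combinations(sorted(set(keywords)), 2):
            edges[(a, b)] += 1
    return edges

# Hypothetical keyword sets extracted from future-oriented documents.
docs = [["privacy", "surveillance"],
        ["bias", "privacy", "surveillance"],
        ["bias", "unemployment"]]
edges = cooccurrence_edges(docs)
# Heavily weighted edges (e.g. privacy-surveillance) would seed the
# scenario clusters in a full network analysis.
```

Sorting the deduplicated keywords makes each pair canonical, so (a, b) and (b, a) accumulate on the same edge.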

Keywords: emerging technologies, futuristic data, scenario, text mining

Procedia PDF Downloads 480
1561 Corridor Densification Option as a Means for Restructuring South African Cities

Authors: T. J. B. van Niekerk, J. Viviers, E. J. Cilliers

Abstract:

Substantial efforts have been made in South Africa, stemming from the historic political change in 1994, to remedy the inequality and injustice resulting from a dispensation in which spatial patterns were largely based on racial segregation. Spatially distorted patterns predominantly originated from colonialism at the beginning of the twentieth century, leaving a physical imprint on South African cities in terms of architecture, urban layout, and planning that frequently reflected European norms and standards. As a consequence of physical and land-use barriers and well-established dual cities, attempts to address spatial injustices, apart from limited occurrences in metropolitan areas, have gravely failed. The need to intercept incessant segregated growth, combined with urban sprawl, is becoming increasingly evident. Intervention is a prerequisite to duly address the impact of colonial planning and its legacy, still prevalent in most urban areas. In 1998, the National Department of Transport prepared the “Moving South Africa” strategy, presenting the Corridor Densification Option Model for the first time, as it was deemed better suited to existing South African urban tenure patterns than more familiar planning approaches. Urban planners are progressively contemplating the Corridor Densification Option Model and its attributes, beyond its transportation emphasis, as an alternative approach to address spatial imbalances and to attain the physical integration of contemporary urban forms. To attain a clearer understanding of the Corridor Densification Option Model, its rationale was analysed in greater detail. This research further investigated provisional applications of the model in spatially segregated cities and illustrated that viable options exist to employ it effectively. The research revealed that the application of the model will, however, depend on the occurrence of specific characteristics in spatially segregated cities that warrant its use.

Keywords: corridor densification option model, spatially segregated settlements, integration, urban restructuring

Procedia PDF Downloads 202
1560 The Challenges of Digital Crime Nowadays

Authors: Bendes Ákos

Abstract:

Digital evidence will be the most widely used type of evidence in the future. With the development of the modern world, more and more new types of crimes have evolved and transformed. For this reason, it is extremely important to examine these types of crimes in order to get a comprehensive picture of them, with which we can help the authorities work. In 1865, with early technologies, people were able to forge a picture of a quality that is not even recognized today. With the help of today's technology, authorities receive a lot of false evidence. Officials are not able to process such a large amount of data, nor do they have the necessary technical knowledge to get a real picture of the authenticity of the given evidence. The digital world has many dangers. Unfortunately, we live in an age where we must protect everything digitally: our phones, our computers, our cars, and all the smart devices that are present in our personal lives and this is not only a burden on us, since companies, state and public utilities institutions are also forced to do so. The training of specialists and experts is essential so that the authorities can manage the incoming digital evidence at some level. When analyzing evidence, it is important to be able to examine it from the moment it is created. Establishing authenticity is a very important issue during official procedures. After the proper acquisition of the evidence, it is essential to store it safely and use it professionally. After the proper acquisition of the evidence, it is essential to store it safely and use it professionally. Otherwise, they will not have sufficient probative value and in case of doubt, the court will always decide in favor of the defendant. One of the most common problems in the world of digital data and evidence is doubt, which is why it is extremely important to examine the above-mentioned problems. 
The most effective way to avoid digital crimes is to prevent them, for which proper education and knowledge are essential. The aim is to present the dangers inherent in the digital world and the new types of digital crimes. After comparing Hungarian investigative techniques with international practice, proposals for modernization will be given. Sufficiently stable yet flexible legislation is needed that can follow the rapid changes in the world, providing an appropriate framework rather than regulating after the fact. It is also important to be able to distinguish between digital and digitalized evidence, as their degrees of probative force differ greatly. The aim of the research is to promote effective international cooperation and uniform legal regulation in the world of digital crimes.

Keywords: digital crime, digital law, cyber crime, international cooperation, new crimes, skepticism

Procedia PDF Downloads 49
1559 Effect of Natural Molecular Crowding on the Structure and Stability of DNA Duplex

Authors: Chaudhari S. G., Saxena, S.

Abstract:

We systematically and quantitatively investigated the effect of glucose, as a model natural molecular crowding agent, on the structure and thermodynamics of three Watson-Crick base-paired duplexes (named D1, D2, and D3) of different base compositions and lengths. Structural analyses demonstrated that duplexes D1 and D2 folded into the B-form with different cations in the absence and presence of glucose, while duplex D3 folded into a mixed A- and B-form. Moreover, we demonstrated that each duplex was more stable in the absence of glucose and marginally destabilized in its presence, because glucose acts as a weak structure breaker on the tetrahedral network of water. In the absence of glucose, the values of ΔG°25 for duplex D1 were -13.56, -13.76, -12.46, and -12.36 kcal/mol; for duplex D2, -13.64, -12.93, -12.86, and -12.30 kcal/mol; and for duplex D3, -10.05, -11.76, -9.91, and -9.70 kcal/mol in the presence of Na+, K+, Na+ + Mg++, and K+ + Mg++, respectively. At a high concentration of glucose (1:10000), ΔG°25 increased: for duplex D1 to -12.47, -12.37, -11.96, and -11.55 kcal/mol; for duplex D2 to -12.37, -11.47, -11.98, and -11.01 kcal/mol; and for duplex D3 to -8.47, -9.17, -9.16, and -8.66 kcal/mol. Our results show that the structure and stability of a DNA duplex depend on the structure of the molecular crowding agent present in its close vicinity. In this study, we have taken the hydration of a simple sugar as an essential model for understanding interactions between hydrophilic groups and interfacial water molecules and their effect on hydrogen-bonded DNA duplexes. On the basis of these relatively simple building blocks, we hope to gain some insight, more generally, into the properties of sugar-water-salt systems with DNA duplexes.
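The reported free energies quantify the destabilization directly. For example, for duplex D1 in the presence of Na+:

```latex
\Delta\Delta G^{\circ}_{25}
  = \Delta G^{\circ}_{25}(\text{glucose})
  - \Delta G^{\circ}_{25}(\text{no glucose})
  = -12.47 - (-13.56)
  = +1.09\ \text{kcal/mol}
```

Applying the same subtraction to the other ionic conditions of D1 gives +1.39, +0.50, and +0.81 kcal/mol, i.e., a consistently positive but modest ΔΔG°25, in line with the marginal destabilization described in the abstract.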

Keywords: natural molecular crowding, DNA Duplex, structure of DNA, bioengineering and life sciences

Procedia PDF Downloads 455
1558 Mesoscopic Defects of Forming and Induced Properties on the Impact of a Composite Glass/Polyester

Authors: Bachir Kacimi, Fatiha Teklal, Arezki Djebbar

Abstract:

Forming processes induce residual deformations in the reinforcement and sometimes lead to mesoscopic defects, which are more recurrent than macroscopic defects during the manufacture of complex structural parts. This study deals with the influence of fabric shear and buckle defects, which appear during composite draping processes, on the impact behavior of a glass fiber reinforced polymer. To achieve this aim, we produced several specimens with different amplitudes of deformation (shear) and defects in the fabric using a specific bench. The specimens were manufactured using contact molding and tested at several impact energies. The results and measurements made on the tested specimens were compared to those of the healthy material. The results showed that the buckle defects have a negative effect on the elastic parameters and revealed more extensive damage, with a significant out-of-plane mode, relative to the healthy composite material. This effect is the consequence of local fiber impoverishment and a disorganization of the fibrous network, with a reorientation of the fibers following the out-of-plane buckling of the yarns in the area where the defects are located. For the material with calibrated shear of the reinforcement, the increased local fiber content due to the shear deformations and the contribution of the transverse yarns to stiffness led to an increase in mechanical properties.

Keywords: defects, forming, impact, induced properties, textiles

Procedia PDF Downloads 128
1557 In vitro Characterization of Mice Bone Microstructural Changes by Low-Field and High-Field Nuclear Magnetic Resonance

Authors: Q. Ni, J. A. Serna, D. Holland, X. Wang

Abstract:

The objective of this study is to develop Nuclear Magnetic Resonance (NMR) techniques to enhance bone-related research on normal and disuse (biglycan knockout) mice bone in vitro by using both low-field and high-field NMR simultaneously. It is known that the total amplitude of the T₂ relaxation envelopes, measured by the Carr-Purcell-Meiboom-Gill (CPMG) NMR spin echo train, represents the liquid phase inside the pores. Therefore, the NMR CPMG magnetization amplitude can be converted to a volume of water after calibration against the NMR signal amplitude of a known volume of water. In this study, the distribution of mobile water and the porosity are determined using the low-field (20 MHz) CPMG relaxation technique, and the pore size distribution is determined by a computational inversion of the relaxation data. It is also known that the total proton intensity of magnetization from the NMR free induction decay (FID) signal is due to the water present inside the pores (mobile water), the water that has undergone hydration with the bone (bound water), and the protons in the collagen and mineral matter (solid-like protons). Therefore, the components of total mobile and bound water within bone can be determined by the low-field NMR free induction decay technique. Furthermore, the bound water in the solid phase (mineral and organic constituents), especially the dominant component, calcium hydroxyapatite (Ca₁₀(OH)₂(PO₄)₆), can be determined using high-field (400 MHz) magic angle spinning (MAS) NMR. Because the MAS technique reduces the inhomogeneous and susceptibility broadening of the NMR spectral linewidth in a liquid-solid mix, we can, in particular, conduct further research into the ¹H and ³¹P elements and environments of bone materials to identify the locations of bound water, such as the OH⁻ group within the minerals and bone architecture. 
We hypothesize that combining low-field NMR and high-field magic angle spinning NMR can provide a more complete interpretation of water distribution, particularly of bound water, and that these data are important for assessing bone quality and predicting the mechanical behavior of bone.
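The amplitude-to-volume calibration described above can be sketched as follows. The two-pool decay model and the function names are illustrative assumptions, not the authors' actual processing code:

```python
import math

def cpmg_echo_train(echo_times, pools):
    """Model a CPMG echo train as a sum of exponential decays,
    one (amplitude, T2) pair per water population."""
    return [sum(amp * math.exp(-t / t2) for amp, t2 in pools)
            for t in echo_times]

def total_amplitude(pools):
    """The amplitude extrapolated to t = 0 is the sum of the pool
    amplitudes, proportional to the total mobile water in the pores."""
    return sum(amp for amp, _ in pools)

def water_volume(sample_amplitude, cal_amplitude, cal_volume_ml):
    """Convert an NMR amplitude to a water volume using a calibration
    measurement on a known volume of water."""
    return sample_amplitude * cal_volume_ml / cal_amplitude
```

For instance, if a 2.0 mL water standard gives a calibration amplitude of 100 units, a bone sample amplitude of 25 units corresponds to 0.5 mL of mobile water.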

Keywords: bone, mice bone, NMR, water in bone

Procedia PDF Downloads 159
1556 Polyacrylates in Poly (Lactic Acid) Matrix, New Biobased Polymer Material

Authors: Irena Vuković-Kwiatkowska, Halina Kaczmarek

Abstract:

Poly (lactic acid) is a well-known polymer, often called a green material because of its origin in renewable resources and its biodegradability. This biopolymer is very often used in the packaging industry. A disadvantage of poly (lactic acid) is its poor resistance to gas permeation: the permeability of gases and vapor through films used for packages and bottles should generally be very low to prolong a product's shelf life. We propose an innovative method of modifying the gas barrier of PLA using electromagnetic radiation in the ultraviolet range. Poly (lactic acid) (PLA) and multifunctional acrylate monomers were mixed in different compositions, and the final films were obtained by a photochemical reaction (photocrosslinking). We tested the permeability of these films to water vapor and carbon dioxide, and their resistance to UV radiation was also studied. The samples were conditioned in activated sludge and in natural soil to test their biodegradability. This innovative method of PLA modification expands its range of uses and can reduce the future costs of waste management that result from consuming materials such as PET and HDPE. Implementing our material for packaging will contribute to protecting the environment from the harmful effects of materials made from PET or other plastics that are extremely difficult to biodegrade.

Keywords: interpenetrating polymer network, packaging films, photocrosslinking, polyacrylates dipentaerythritol pentaacrylate DPEPA, poly (lactic acid), polymer biodegradation

Procedia PDF Downloads 463
1555 Integrated Waste-to-Energy Approach: An Overview

Authors: Tsietsi J. Pilusa, Tumisang G. Seodigeng

Abstract:

This study evaluates the benefits of advanced waste management practices in unlocking waste-to-energy opportunities within the solid waste industry. The key drivers of sustainable waste management practices, specifically with respect to packaging waste-to-energy technology options, are discussed. The success of a waste-to-energy system depends significantly on the appropriateness of the available technologies, including those that are well established as well as those that are less so. There are hard and soft interventions to be considered when packaging an integrated waste treatment solution. Technology compatibility with variation in feedstock (waste) quality and quantity remains a key factor. These factors influence technology reliability in terms of production efficiency and product consistency, which in turn drives the supply and demand network. Waste treatment technologies rely on waste material as feedstock; because the feedstock varies in quality and quantity depending on several factors, the technology can fail as a result. It is therefore critical to design an advanced waste treatment technology in an integrated approach to minimize the possibility of technology failure due to unpredictable feedstock quality and quantities, conversion efficiencies, and inconsistent product yield or quality. An integrated waste-to-energy approach offers a secure system design that considers sustainable waste management practices.

Keywords: emerging markets, evaluation tool, interventions, waste treatment technologies

Procedia PDF Downloads 256
1554 Learning, Teaching and Assessing Students’ ESP Skills via Exe and Hot Potatoes Software Programs

Authors: Naira Poghosyan

Abstract:

In the knowledge society, the content of studies, the methods used, and the requirements for an educator's professionalism regularly undergo change. It follows that in the knowledge society the aim of education is not only to train professionals for a certain field but also to help students become aware of cultural values, form mutual human relationships, collaborate, be open, adapt to new situations, creatively express their ideas, and accept responsibility and challenge. From this viewpoint, the development of communicative language competence requires a thorough, coordinated approach to ensure proper comprehension and memorization of subject-specific words starting from the high school level. On the other hand, ESP (English for Specific Purposes) teachers and practitioners are increasingly faced with the task of developing and exploiting new ways of assessing their learners' literacy while learning and teaching ESP. The presentation will highlight the latest achievements in this field. The author will present some practical methodological issues and principles associated with learning, teaching, and assessing the ESP skills of learners using the two software programs eXe 2.0 and Hot Potatoes 6. On the one hand, the author will display the advantages of the two programs as self-learning and self-assessment interactive tools in the course of academic study and professional development of CLIL learners; on the other hand, she will comprehensively shed light upon some methodological aspects of working out appropriate ways of selecting, introducing, and consolidating subject-specific materials via eXe 2.0 and Hot Potatoes 6. The author will then distinguish ESP courses by the general nature of the learners' specialty, identifying three large categories: EST (English for Science and Technology), EBE (English for Business and Economics), and ESS (English for the Social Sciences). 
The cornerstone of the presentation will be the subject titled "The methodology of teaching ESP in non-linguistic institutions", in which a unique case of teaching ESP in Architecture and Construction via eXe 2.0 and Hot Potatoes 6 will be presented, exemplifying how introduction, consolidation, and assessment can be used as a basis for feedback to ESP learners in a particular professional field.

Keywords: ESP competences, ESP skill assessment/ self-assessment tool, eXe 2.0 / HotPotatoes software program, ESP teaching strategies and techniques

Procedia PDF Downloads 365
1553 3-D Strain Imaging of Nanostructures Synthesized via CVD

Authors: Sohini Manna, Jong Woo Kim, Oleg Shpyrko, Eric E. Fullerton

Abstract:

CVD techniques have emerged as a promising approach to the formation of a broad range of nanostructured materials. The realization of many practical applications will require efficient and economical synthesis techniques that preferably avoid the need for templates or costly single-crystal substrates and also afford process adaptability. Towards this end, we have developed a single-step route for the reduction-type synthesis of nanostructured Ni materials using a thermal CVD method. By tuning the CVD growth parameters, we can synthesize morphologically dissimilar nanostructures, including single-crystal cubes and Au nanostructures, which form atop untreated amorphous SiO2||Si substrates. An understanding of the new properties that emerge in these nanostructured materials, and of their relationship to function, will enable a broad range of magnetostrictive devices as well as catalysis, fuel cell, sensor, and battery applications based on high-surface-area transition-metal nanostructures. We use the coherent X-ray diffractive imaging (CXDI) technique to obtain 3-D images and strain maps of individual nanocrystals. CXDI is a technique that provides the overall shape of a nanostructure and its lattice distortion based on the combination of highly brilliant coherent X-ray sources and a phase retrieval algorithm. We observe a fine interplay between the reduction of surface energy and internal stress, which plays an important role in the morphology of nanocrystals. The strain distribution is influenced by the metal-substrate and metal-air interfaces, which arise due to differences in thermal expansion. We find that the lattice strain at the surface of the octahedral gold nanocrystal agrees quantitatively with the predictions of the Young-Laplace equation but exhibits a discrepancy near the nanocrystal-substrate interface. 
The strain on the bottom side of the Ni nanocube, which is in contact with the substrate surface, is compressive. This is caused by the dissimilar thermal expansion coefficients of the Ni nanocube and the Si substrate. Research at UCSD was supported by NSF DMR Award #1411335.

Keywords: CVD, nanostructures, strain, CXDI

Procedia PDF Downloads 378
1552 Kirigami Designs for Enhancing the Electromechanical Performance of E-Textiles

Authors: Braden M. Li, Inhwan Kim, Jesse S. Jur

Abstract:

One of the fundamental challenges in the electronic textile (e-textile) industry is the mismatch in compliance between rigid electronic components and the soft textile platforms onto which they are integrated. To address these problems, various printing technologies using conductive inks have been explored in an effort to improve electromechanical performance without sacrificing the innate properties of the printed textile. However, current printing methods deposit densely layered coatings onto textile surfaces with low through-plane wetting, resulting in poor electromechanical properties. This work presents an inkjet printing technique in conjunction with unique Kirigami cut designs to address these issues for printed smart textiles. By utilizing particle-free reactive silver inks, our inkjet process produces conformal, micron-thick silver coatings that surround the individual fibers of the printed smart textile. This results in a highly conductive (0.63 Ω sq⁻¹) printed e-textile that also retains the innate properties of the textile material, including stretchability, flexibility, breathability, and fabric hand. Kirigami is the Japanese art of paper cutting. Through periodic cut designs, Kirigami imparts enhanced flexibility and delocalizes stress concentrations. Kirigami cut design parameters (i.e., cut spacing and length) were correlated with both the mechanical and electromechanical properties of the printed textiles. We demonstrate that designs with a higher cut-out ratio exponentially soften the textile substrate. Thus, our designs achieve a 30x improvement in overall stretchability, a 1000x decrease in elastic modulus, and minimal resistance change over strain regimes of 100-200% when compared to uncut designs. We also show minimal resistance change in our Kirigami-inspired printed devices after being stretched to 100% strain for 1000 cycles. 
Lastly, we demonstrate a Kirigami-inspired electrocardiogram (ECG) monitoring system that improves stretchability without sacrificing signal acquisition performance. Overall, this study identifies fundamental parameters affecting the performance of e-textiles and their scalability in the wearable technology industry.

Keywords: kirigami, inkjet printing, flexible electronics, reactive silver ink

Procedia PDF Downloads 125
1551 Artificial Bee Colony Optimization for SNR Maximization through Relay Selection in Underlay Cognitive Radio Networks

Authors: Babar Sultan, Kiran Sultan, Waseem Khan, Ijaz Mansoor Qureshi

Abstract:

In this paper, a novel idea for enhancing the performance of the secondary network is proposed for Underlay Cognitive Radio Networks (CRNs). In underlay CRNs, primary users (PUs) impose strict interference constraints on the secondary users (SUs). The proposed scheme is based on Artificial Bee Colony (ABC) optimization for relay selection and power allocation, to handle this primary challenge of underlay CRNs. ABC is a simple, population-based optimization algorithm that attains a global optimum solution by combining local search methods (employed and onlooker bees) with global search methods (scout bees). The proposed two-phase relay selection and power allocation algorithm aims to maximize the signal-to-noise ratio (SNR) at the destination while operating in underlay mode. The proposed algorithm has low computational complexity, and its performance is verified through simulation results for different numbers of potential relays, different interference threshold levels, and different transmit power thresholds for the selected relays.
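The employed/onlooker/scout structure described above can be sketched for a generic continuous allocation problem. Here a toy fitness function stands in for the destination SNR; all names, parameters, and the single-coordinate perturbation rule are illustrative, not the authors' implementation:

```python
import random

def abc_maximize(fitness, dim, lo, hi, n_food=8, limit=10, iters=200, seed=1):
    """Artificial bee colony: employed and onlooker bees perform local
    search around food sources; scout bees re-seed exhausted sources."""
    rng = random.Random(seed)

    def neighbor(i):
        # Perturb one coordinate of source i relative to a random peer k.
        k = rng.choice([x for x in range(n_food) if x != i])
        j = rng.randrange(dim)
        cand = foods[i][:]
        cand[j] += rng.uniform(-1.0, 1.0) * (foods[i][j] - foods[k][j])
        cand[j] = min(max(cand[j], lo), hi)
        return cand

    def try_improve(i):
        cand = neighbor(i)
        cf = fitness(cand)
        if cf > fits[i]:
            foods[i], fits[i], trials[i] = cand, cf, 0
        else:
            trials[i] += 1

    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fits = [fitness(f) for f in foods]
    trials = [0] * n_food
    best_i = max(range(n_food), key=lambda i: fits[i])
    best, best_fit = foods[best_i][:], fits[best_i]

    for _ in range(iters):
        for i in range(n_food):            # employed bees: one per source
            try_improve(i)
        floor = min(fits)                  # onlooker bees: roulette selection
        weights = [f - floor + 1e-12 for f in fits]
        for _ in range(n_food):
            try_improve(rng.choices(range(n_food), weights=weights)[0])
        for i in range(n_food):            # scout bees: abandon stale sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fits[i], trials[i] = fitness(foods[i]), 0
        i_best = max(range(n_food), key=lambda i: fits[i])
        if fits[i_best] > best_fit:
            best, best_fit = foods[i_best][:], fits[i_best]
    return best, best_fit
```

In the paper's setting, the decision vector would encode the candidate relay and its transmit power, and the fitness would be the destination SNR penalized by the PU interference constraint.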

Keywords: artificial bee colony, underlay spectrum sharing, cognitive radio networks, amplify-and-forward

Procedia PDF Downloads 561
1550 Legal Regulation of Personal Information Data Transmission Risk Assessment: A Case Study of the EU’s DPIA

Authors: Cai Qianyi

Abstract:

In the midst of the global digital revolution, the flow of data poses security threats that call China's existing legislative framework for protecting personal information into question. As a preliminary procedure for risk analysis and prevention, the risk assessment of personal data transmission lacks detailed supporting guidelines. Existing provisions reveal unclear responsibilities for network operators and weakened rights for data subjects. Furthermore, the regulatory system's weak operability and a lack of industry self-regulation heighten data transmission hazards. This paper compares the regulatory pathways for data transmission risks in China and Europe from the perspectives of legal framework and content. It draws on the "Data Protection Impact Assessment Guidelines" to empower multiple stakeholders, including data processors, controllers, and subjects, while also defining their obligations. In conclusion, this paper intends to address China's digital security shortcomings by developing a more mature regulatory framework and industry self-regulation mechanisms, resulting in a win-win situation for personal data protection and the development of the digital economy.

Keywords: personal information data transmission, risk assessment, DPIA, internet service provider

Procedia PDF Downloads 38
1549 Copper Phthalocyanine Nanostructures: A Potential Material for Field Emission Display

Authors: Uttam Kumar Ghorai, Madhupriya Samanta, Subhajit Saha, Swati Das, Nilesh Mazumder, Kalyan Kumar Chattopadhyay

Abstract:

Organic semiconductors have attracted considerable interest in the last few decades for their significant contributions to fields such as solar cells, non-volatile memory devices, field effect transistors, and light emitting diodes. The most important advantages of organic materials are their mechanical flexibility, light weight, and low-temperature deposition techniques. Recently, with the advancement of nanoscience and technology, one-dimensional organic and inorganic nanostructures such as nanowires, nanorods, and nanotubes have gained tremendous interest due to their very high aspect ratio and large surface area for electron transport. Among them, self-assembled organic nanostructures such as copper and zinc phthalocyanines have shown good transport properties and thermal stability due to their π-conjugated bonds and π-π stacking, respectively. Field emission properties of inorganic and carbon-based nanostructures are widely reported in the literature, but there are few reports on the cold cathode emission characteristics of organic semiconductor nanostructures. In this work, the authors report the field emission characteristics of chemically and physically synthesized copper phthalocyanine (CuPc) nanostructures such as nanowires, nanotubes, and nanotips. The as-prepared samples were characterized by X-ray diffraction (XRD), ultraviolet-visible spectroscopy (UV-Vis), Fourier transform infrared spectroscopy (FTIR), field emission scanning electron microscopy (FESEM), and transmission electron microscopy (TEM). The field emission characteristics were measured in our home-designed field emission setup. The registered turn-on field and local field enhancement factor are found to be less than 5 V/μm and greater than 1000, respectively. The field emission behavior is also stable over 200 minutes. The experimental results are further verified theoretically using a finite displacement method as implemented in the ANSYS Maxwell simulation package. 
The obtained results strongly indicate that CuPc nanostructures are potential candidates as electron emitters for field-emission-based display device applications.

Keywords: organic semiconductor, phthalocyanine, nanowires, nanotubes, field emission

Procedia PDF Downloads 487
1548 Metabolomics Profile Recognition for Cancer Diagnostics

Authors: Valentina L. Kouznetsova, Jonathan W. Wang, Igor F. Tsigelny

Abstract:

Metabolomics has become a rising field of research for various diseases, particularly cancer. Increases or decreases in metabolite concentrations in the human body are indicative of various cancers, and further elucidation of metabolic pathways and their significance in cancer research may greatly spur medicinal discovery. We analyzed the metabolomics profiles of lung cancer. Thirty-three metabolites were selected as significant. These metabolites are involved in 37 metabolic pathways identified by the MetaboAnalyst software. The top pathways are the glyoxylate and dicarboxylate pathway (its hubs are formic acid and glyoxylic acid) and the citrate cycle pathway, followed by the taurine and hypotaurine pathway (whose hubs are taurine and sulfoacetaldehyde) and the glycine, serine, and threonine pathway (whose hubs are glycine and L-serine). We studied the interactions of the metabolites with the proteins involved in cancer-related signaling networks and developed an approach to using metabolomics biomarkers in cancer diagnostics. Our analysis showed that a significant part of the lung-cancer-related metabolites interact with the main cancer-related signaling pathways present in this network: the PI3K–mTOR–AKT, RAS–RAF–ERK1/2, and NFKB pathways. These results can be employed to elucidate the signaling networks of cancer-related proteins from metabolomics profiles.
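Pathway over-representation of the kind MetaboAnalyst reports is commonly scored with a hypergeometric test; the sketch below works under that assumption (the tool's exact scoring and reference set may differ):

```python
from math import comb

def enrichment_pvalue(n_total, n_in_pathway, n_hits, n_hits_in_pathway):
    """P(X >= k): probability that at least n_hits_in_pathway of the
    n_hits significant metabolites fall in a pathway containing
    n_in_pathway of the n_total reference metabolites, by chance."""
    upper = min(n_in_pathway, n_hits)
    return sum(
        comb(n_in_pathway, i) * comb(n_total - n_in_pathway, n_hits - i)
        for i in range(n_hits_in_pathway, upper + 1)
    ) / comb(n_total, n_hits)
```

A pathway whose hubs (e.g., formic acid and glyoxylic acid) absorb many of the 33 significant metabolites would receive a small p-value and rank near the top of the list.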

Keywords: cancer, metabolites, metabolic pathway, signaling pathway

Procedia PDF Downloads 383
1547 Treating Voxels as Words: Word-to-Vector Methods for fMRI Meta-Analyses

Authors: Matthew Baucum

Abstract:

With the increasing popularity of fMRI as an experimental method, psychology and neuroscience can greatly benefit from advanced techniques for summarizing and synthesizing large amounts of data from brain imaging studies. One promising avenue is automated meta-analysis, in which natural language processing methods are used to identify the brain regions consistently associated with certain semantic concepts (e.g., “social”, “reward”) across large corpora of studies. This study builds on this approach by demonstrating how, in fMRI meta-analyses, individual voxels can be treated as vectors in a semantic space and evaluated for their “proximity” to terms of interest. In this technique, a low-dimensional semantic space is built from brain imaging study texts, allowing the words in each text to be represented as vectors (where words that frequently appear together are near each other in the semantic space). Consequently, each voxel in a brain mask can be represented as a normalized vector sum of all of the words in the studies that showed activation in that voxel. The entire brain mask can then be visualized in terms of each voxel’s proximity to a given term of interest (e.g., “vision”, “decision making”) or collection of terms (e.g., “theory of mind”, “social”, “agent”), as measured by the cosine similarity between the voxel’s vector and the term vector (or the average of multiple term vectors). Analysis can also proceed in the opposite direction, allowing word cloud visualizations of the nearest semantic neighbors for a given brain region. This approach allows for continuous, fine-grained metrics of voxel-term associations and relies on state-of-the-art “open vocabulary” methods that go beyond mere word counts. 
An analysis of over 11,000 neuroimaging studies from an existing meta-analytic fMRI database demonstrates that this technique can be used to recover known neural bases for multiple psychological functions, suggesting this method’s utility for efficient, high-level meta-analyses of localized brain function. While automated text analytic methods are no replacement for deliberate, manual meta-analyses, they seem to show promise for the efficient aggregation of large bodies of scientific knowledge, at least on a relatively general level.
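The voxel-as-vector construction described above can be sketched as follows, with tiny hand-made word vectors standing in for a learned semantic space (the function names and toy embedding are illustrative assumptions):

```python
import math

def normalize(v):
    """Scale a vector to unit length (unchanged if all-zero)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n else v

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return sum(a * b for a, b in zip(normalize(u), normalize(v)))

def voxel_vector(activating_study_texts, word_vecs):
    """A voxel's vector is the normalized sum of the vectors of every
    word appearing in the studies that reported activation there."""
    dim = len(next(iter(word_vecs.values())))
    acc = [0.0] * dim
    for text in activating_study_texts:
        for word in text.lower().split():
            if word in word_vecs:
                acc = [a + b for a, b in zip(acc, word_vecs[word])]
    return normalize(acc)
```

For example, a voxel whose activating studies mention “reward” twice as often as “social” ends up with a vector whose cosine similarity to the “reward” term vector exceeds its similarity to the “social” term vector.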

Keywords: FMRI, machine learning, meta-analysis, text analysis

Procedia PDF Downloads 432
1546 Design of Direct Power Controller for a High Power Neutral Point Clamped Converter Using Real-Time Simulator

Authors: Amin Zabihinejad, Philippe Viarouge

Abstract:

In this paper, direct power control (DPC) strategies are investigated in order to control a high-power AC/DC converter with a time-variable load. The converter is composed of a three-level, three-phase neutral point clamped (NPC) converter as rectifier and an H-bridge four-quadrant current-control converter. In high-power applications, the controller must not only regulate the desired outputs but also reduce the level of distortion injected into the network by the converter. For this reason, and because of the nonlinearity of the power electronic converter, conventional controllers cannot achieve appropriate responses. In this research, precise mathematical analysis was employed to design an appropriate controller for the time-variable load. A DPC controller is proposed and simulated using Matlab/Simulink. In order to verify the simulation results, a real-time simulator, OPAL-RT, was employed. The dynamic response and stability of the high-power NPC with variable load are investigated and compared with conventional types using the real-time simulator. The results show that the DPC controller is more stable and has more precise outputs than the conventional controller.
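Direct power control acts on the instantaneous active and reactive powers, which in the stationary αβ frame are computed from Clarke-transformed phase voltages and currents. A minimal sketch of that front end follows; the hysteresis comparators and switching table of a full DPC loop are omitted, and the sign convention for q is one common choice among several:

```python
import math

def clarke(xa, xb, xc):
    """Amplitude-invariant Clarke transform of a three-phase quantity."""
    alpha = (2.0 / 3.0) * (xa - 0.5 * xb - 0.5 * xc)
    beta = (2.0 / 3.0) * (math.sqrt(3.0) / 2.0) * (xb - xc)
    return alpha, beta

def instantaneous_powers(v_abc, i_abc):
    """Active power p and reactive power q from alpha-beta components;
    the 3/2 factor compensates the amplitude-invariant scaling."""
    va, vb = clarke(*v_abc)
    ia, ib = clarke(*i_abc)
    p = 1.5 * (va * ia + vb * ib)
    q = 1.5 * (vb * ia - va * ib)
    return p, q
```

For a balanced system with unit-amplitude voltage and an in-phase unit-amplitude current, this yields p = 1.5 and q = 0; shifting the current by 90° moves all of the power into q, which is exactly the quantity a DPC hysteresis controller drives toward its reference.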

Keywords: direct power control, three level rectifier, real time simulator, high power application

Procedia PDF Downloads 503