Search results for: Dense Networks
1922 Efficient Video Compression Technique Using Convolutional Neural Networks and Generative Adversarial Network
Authors: P. Karthick, K. Mahesh
Abstract:
Video has become an increasingly significant component of our everyday digital communication. With the advancement of higher-resolution content and displays, its significant volume poses serious obstacles to receiving, distributing, compressing, and displaying video content of high quality. In this paper, we propose a first attempt at an end-to-end deep video compression model that jointly optimizes all video compression components. The video compression method involves splitting the video into frames, comparing the images using convolutional neural networks (CNN) to remove duplicates, substituting the single image for the duplicate images by recognizing and detecting minute changes using a generative adversarial network (GAN), and recording them with long short-term memory (LSTM). Instead of the complete image, the small changes generated using the GAN are substituted, which helps in frame-level compression. Pixel-wise comparison is performed using K-nearest neighbours (KNN) over the frame, clustered with K-means, and singular value decomposition (SVD) is applied to every frame in the video for all three color channels (red, green, blue) to decrease the dimension of the utility matrix [R, G, B] by extracting its latent factors. Video frames are packed with parameters with the aid of a codec and converted to video format, and the results are compared with the original video. Repeated experiments on several videos with different sizes, durations, frames per second (FPS), and quality demonstrate a significant resampling rate. On average, the result produced had approximately a 10% deviation in quality and more than a 50% reduction in size when compared with the original video.
Keywords: video compression, K-means clustering, convolutional neural network, generative adversarial network, singular value decomposition, pixel visualization, stochastic gradient descent, frame per second extraction, RGB channel extraction, self-detection and deciding system
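The per-channel SVD step described in this abstract can be sketched as a low-rank approximation: keep only the top-k singular values of one color channel. The matrix size, rank, and function name below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch (not the paper's code): compress one color channel
# by truncating its singular value decomposition to rank k.
def low_rank_channel(channel, k):
    U, s, Vt = np.linalg.svd(channel, full_matrices=False)
    # Reconstruct from the k largest singular values (latent factors).
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
frame_red = rng.random((64, 64))           # stand-in for a 64x64 red channel
approx = low_rank_channel(frame_red, k=8)  # rank-8 approximation
print(approx.shape)  # (64, 64)
```

Storing the truncated U, s, and Vt needs k(2n + 1) numbers instead of n² for an n×n channel, which is where the dimensionality reduction of the utility matrix comes from.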
Procedia PDF Downloads 190
1921 Advancements in Dielectric Materials: A Comprehensive Study on Properties, Synthesis, and Applications
Authors: M. Mesrar, T. Lamcharfi, Nor-S. Echatoui, F. Abdi
Abstract:
The solid-state reaction method was used to synthesize ferroelectric systems with lead-free properties, specifically (1-x-y)(Na₀.₅Bi₀.₅)TiO₃-xBaTiO₃-y(K₀.₅Bi₀.₅)TiO₃. To achieve a pure perovskite phase, the optimal calcination temperature was determined to be 1000°C for 4 hours. X-ray diffraction (XRD) analysis identified the presence of the morphotropic phase boundary (MPB) in the (1-x-y)NBT-xBT-yKBT ceramics for specific molar compositions, namely (0.95NBT-0.05BT, 0.84NBT-0.16KBT, and 0.79NBT-0.05BT-0.16KBT). To enhance densification, the sintering temperature was set at 1100°C for 4 hours. Scanning electron microscopy (SEM) images exhibited homogeneous distribution and dense packing of the grains in the ceramics, indicating a uniform microstructure. These materials exhibited favorable characteristics, including high dielectric permittivity, low dielectric loss, and diffuse phase transition behavior. The ceramics composed of 0.79NBT-0.05BT-0.16KBT exhibited the highest piezoelectric constant (d33 = 148 pC/N) and electromechanical coupling factor (kp = 0.292) among all compositions studied. This enhancement in piezoelectric properties can be attributed to the presence of the morphotropic phase boundary (MPB) in the material. This study presents a comprehensive approach to improving the performance of lead-free ferroelectric systems of composition 0.79(Na₀.₅Bi₀.₅)TiO₃-0.05BaTiO₃-0.16(K₀.₅Bi₀.₅)TiO₃.
Keywords: solid-state method, (1-x-y)NBT-xBT-yKBT, morphotropic phase boundary, Raman spectroscopy, dielectric properties
Procedia PDF Downloads 56
1920 Body Armours in Amazonian Fish
Authors: Fernando G. Torres, Donna M. Ebenstein, Monica Merino
Abstract:
Most fish are covered by a protective external armour. The characteristics of these armours depend on the individual elements that form them, such as scales, scutes or dermal plates. In this work, we assess the properties of two different types of protective elements: scales from A. gigas and dermal plates from P. pardalis. A. gigas and P. pardalis are two Amazonian fish with a rather prehistoric aspect. They have large scales and dermal plates that form two different types of protective body armours. Although both scales and dermal plates are formed by collagen and hydroxyapatite, their structures display remarkable differences. The structure and composition of the samples were assessed by means of X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR) and differential scanning calorimetry (DSC). Morphology studies were carried out using scanning electron microscopy (SEM). Nanoindentation tests were performed to measure the reduced moduli of A. gigas scales and P. pardalis plates. The similarities and differences between scales and dermal plates are discussed based on the experimental results. Both protective armours are designed to be lightweight, flexible and tough. A. gigas scales are light laminated composites, while P. pardalis dermal plates show a sandwich-like structure with dense outer layers and a porous inner matrix. It seems that the armour of P. pardalis is more suited to a bottom-dwelling fish and allows for protection against predators. The scales from A. gigas are more adapted to give protection to a swimming fish. The information obtained from these studies is also important for the development of bioinspired nanocomposites, with potential applications in the biomedical field.
Keywords: Pterygoplichthys pardalis, dermal plates, Arapaima gigas, fish scales
Procedia PDF Downloads 393
1919 Noise and Thermal Analyses of Memristor-Based Phase Locked Loop Integrated Circuit
Authors: Naheem Olakunle Adesina
Abstract:
The memristor is considered one of the promising candidates for nanoelectronic engineering and applications. Owing to its high compatibility with CMOS, nanoscale size, and low power consumption, the memristor has been employed in the design of commonly used circuits such as the phase-locked loop (PLL). In this paper, we designed a memristor-based loop filter (LF) together with the other components of a PLL. Following this, we evaluated the noise-rejection feature of the loop filter by comparing the noise levels of its input and output signals. Our SPICE simulation results showed that the memristor behaves like a linear resistor at high frequencies. The results also showed that the loop filter blocks the high-frequency components from the phase frequency detector so as to provide a stable control voltage to the voltage controlled oscillator (VCO). In addition, we examined the effects of temperature on the performance of the designed phase locked loop circuit. A critical temperature, at which the VCO frequency drifts as a result of variations in control voltage, is identified. In conclusion, the memristor is a suitable choice for nanoelectronic systems owing to its small area, low power consumption, dense nature, high switching speed, and endurance. The proposed memristor-based loop filter, together with the other components of the phase locked loop, can be designed using a memristive emulator and EDA tools in current CMOS technology and simulated.
Keywords: fast Fourier transform, hysteresis curve, loop filter, memristor, noise, phase locked loop, voltage controlled oscillator
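As a back-of-the-envelope illustration of the low-pass behavior this abstract describes: if the memristor at high frequency acts as a linear resistance R in a first-order RC loop filter, the cutoff frequency is f_c = 1/(2πRC). The component values below are assumptions for illustration, not values from the paper.

```python
import math

# Hypothetical first-order RC loop filter: above f_c the filter
# attenuates phase-frequency-detector noise before it reaches the VCO.
def cutoff_hz(r_ohm, c_farad):
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

fc = cutoff_hz(10e3, 1e-9)  # assumed 10 kOhm memristor resistance, 1 nF
print(round(fc))  # 15915 (about 16 kHz)
```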
Procedia PDF Downloads 192
1918 Forecast Financial Bubbles: Multidimensional Phenomenon
Authors: Zouari Ezzeddine, Ghraieb Ikram
Abstract:
Starting from the results of the academic literature, which highlight the limitations of previous studies, this article shows why the prediction of financial bubbles is a multidimensional problem. It proposes a new modeling framework for predicting financial bubbles that links a set of variables spread over several dimensions, reflecting the multidimensional character of the phenomenon, and takes into account the preferences of financial actors. A multicriteria anticipation of the emergence of bubbles in international financial markets helps to guard against a possible crisis.
Keywords: classical measures, predictions, financial bubbles, multidimensional, artificial neural networks
Procedia PDF Downloads 581
1917 Older Consumer’s Willingness to Trust Social Media Advertising: A Case of Australian Social Media Users
Authors: Simon J. Wilde, David M. Herold, Michael J. Bryant
Abstract:
Social media networks have become the hotbed for advertising activities, owing mainly to their increasing consumer/user base and, secondly, to the ability of marketers to accurately measure ad exposure and consumer-based insights on such networks. More than half of the world’s population (4.8 billion) now uses social media (60%), with 150 million new users having come online within the last 12 months (to June 2022). As the use of social media networks by users grows, key business strategies used for interacting with these potential customers have matured, especially social media advertising. Unlike other traditional media outlets, social media advertising is highly interactive and digital channel specific. Social media advertisements are clearly targetable, providing marketers with an extremely powerful marketing tool. Yet despite the measurable benefits afforded to businesses engaged in social media advertising, recent controversies (such as the relationship between Facebook and Cambridge Analytica in 2018) have only heightened the role trust and privacy play within these social media networks. Using a web-based quantitative survey instrument, survey participants were recruited via a reputable online panel survey site. Respondents to the survey represented social media users from all states and territories within Australia. Completed responses were received from a total of 258 social media users. Survey respondents represented all core age demographic groupings, including Gen Z/Millennials (18-45 years = 60.5% of respondents) and Gen X/Boomers (46-66+ years = 39.5% of respondents). An adapted ADTRUST scale, using a 20-item, 7-point Likert scale, measured trust in social media advertising. The ADTRUST scale has been shown to be a valid measure of trust in advertising within traditional media, such as broadcast media and print media, and, more recently, the Internet (as a broader platform).
The adapted scale was validated through exploratory factor analysis (EFA), resulting in a three-factor solution. These three factors were named Reliability, Usefulness and Affect, and Willingness to Rely On. Factor scores (weighted measures) were then calculated for these factors. Factor scores are estimates of the scores survey participants would have received on each of the factors had they been measured directly, with the following results recorded (Reliability = 4.68/7; Usefulness and Affect = 4.53/7; and Willingness to Rely On = 3.94/7). Further statistical analysis (independent samples t-test) determined the difference in factor scores between the factors when age (Gen Z/Millennials vs. Gen X/Boomers) was utilized as the independent, categorical variable. The results showed the difference in mean scores across all three factors to be statistically significant (p<0.05) for these two core age groupings: (1) Gen Z/Millennials Reliability = 4.90/7 vs. Gen X/Boomers Reliability = 4.34/7; (2) Gen Z/Millennials Usefulness and Affect = 4.85/7 vs. Gen X/Boomers Usefulness and Affect = 4.05/7; and (3) Gen Z/Millennials Willingness to Rely On = 4.53/7 vs. Gen X/Boomers Willingness to Rely On = 3.03/7. The results clearly indicate that older social media users lack trust in the quality of information conveyed in social media ads when compared to younger, more social media-savvy consumers. This is especially evident with respect to Factor 3 (Willingness to Rely On), whose underlying variables reflect one’s behavioral intent to act based on the information conveyed in advertising.
These findings can be useful to marketers, advertisers, and brand managers in that the results highlight a critical need to design ‘authentic’ advertisements on social media sites to better connect with these older users in an attempt to foster positive behavioral responses from within this large demographic group – whose engagement with social media sites continues to increase year on year.
Keywords: social media advertising, trust, older consumers, internet studies
Procedia PDF Downloads 47
1916 Synthesis and Characterisation of Starch-PVP as Encapsulation Material for Drug Delivery System
Authors: Nungki Rositaningsih, Emil Budianto
Abstract:
Starch has been widely used as an encapsulation material for drug delivery systems. However, starch hydrogel is very easily degraded during metabolism in the human stomach. Modification of this material is needed to improve the encapsulation process in drug delivery systems, especially for gastrointestinal drugs. In this research, three modified starch-based hydrogels were synthesized: a crosslinked starch hydrogel and semi- and full-interpenetrating polymer network (IPN) starch hydrogels using poly(N-vinyl-pyrrolidone). A non-modified starch hydrogel was also synthesized as a control. All of these samples were compared as biomaterials and floating drug delivery systems, and their drug loading ability was tested. Biomaterial characterization comprised swelling tests, stereomicroscopy observation, differential scanning calorimetry (DSC), and Fourier transform infrared spectroscopy (FTIR). A buoyancy test and stereomicroscopy scanning were done for floating drug delivery characterization. Lastly, amoxicillin was used as a test drug and characterized with UV-Vis spectroscopy for drug loading observation. Preliminary observation showed that the full-IPN hydrogel has the densest and most elastic texture, followed by the semi-IPN, crosslinked, and non-modified hydrogels, in that order. The semi-IPN and crosslinked starch hydrogels have the most suitable properties and will not be degraded easily during metabolism. Therefore, both hydrogels could be considered promising candidates for encapsulation materials. Further analysis and issues will be discussed in the paper.
Keywords: biomaterial, drug delivery system, interpenetrating polymer network, poly(N-vinyl-pyrrolidone), starch hydrogel
Procedia PDF Downloads 256
1915 Incentive Policies to Promote Green Infrastructure in Urban Jordan
Authors: Zayed Freah Zeadat
Abstract:
The wellbeing of urban dwellers is strongly associated with the quality and quantity of green infrastructure. Nevertheless, urban green infrastructure is still lagging in many Arab cities, and Jordan is no exception. The capital city of Jordan, Amman, is becoming more densely urbanized, with limited green spaces. The unplanned urban growth in Amman has caused several environmental problems, such as urban heat islands, air pollution, and lack of green spaces. This study aims to investigate the most suitable drivers to leverage the implementation of urban green infrastructure in Jordan through qualitative and quantitative analysis. The qualitative research includes an extensive literature review of the drivers most commonly used internationally to promote urban green infrastructure implementation. The quantitative study employs a questionnaire survey to rank the suitability of each driver. Consultants, contractors, and policymakers were invited to fill in the research questionnaire according to their judgments and opinions. The Relative Importance Index was used to calculate the weighted average of all drivers, and the Kruskal-Wallis test to check the degree of agreement among groups. This study finds that research participants agreed that indirect financial incentives (i.e., tax reductions, reduction in stormwater utility fees, reduction of interest rates, density bonuses, etc.) are the most effective incentive policy, whilst granting sustainability certificates is the least effective driver for ensuring the widespread adoption of UGI elements in Jordan.
Keywords: urban green infrastructure, relative importance index, sustainable urban development, urban Jordan
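A minimal sketch of the Relative Importance Index used to rank the drivers above, in its usual form RII = ΣW / (A × N), where W are the respondents' ratings, A is the highest possible rating, and N is the number of respondents. The 5-point ratings below are illustrative, not the survey data.

```python
# Illustrative RII computation; ratings are made-up respondent scores.
def relative_importance_index(ratings, max_rating=5):
    # RII = sum(W) / (A * N): weighted average normalized to (0, 1].
    return sum(ratings) / (max_rating * len(ratings))

tax_incentive = [5, 4, 5, 3, 4]  # hypothetical scores for one driver
certificate   = [2, 3, 2, 3, 2]  # hypothetical scores for another
print(relative_importance_index(tax_incentive))  # 0.84
print(relative_importance_index(certificate))    # 0.48
```

Drivers are then ranked by descending RII, which is how the survey identifies indirect financial incentives as the most effective policy.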
Procedia PDF Downloads 160
1914 Preliminary Study of Hand Gesture Classification in Upper-Limb Prosthetics Using Machine Learning with EMG Signals
Authors: Linghui Meng, James Atlas, Deborah Munro
Abstract:
There is an increasing demand for prosthetics capable of mimicking natural limb movements and hand gestures, but precise movement control of prosthetics using only electrode signals continues to be challenging. This study considers the implementation of machine learning as a means of improving accuracy and presents an initial investigation into hand gesture recognition using models based on electromyographic (EMG) signals. EMG signals, which capture muscle activity, are used as inputs to machine learning algorithms to improve prosthetic control accuracy, functionality and adaptivity. Using logistic regression, a machine learning classifier, this study evaluates the accuracy of classifying two hand gestures from the publicly available Ninapro dataset using two time-series feature extraction algorithms: Time Series Feature Extraction (TSFE) and Convolutional Neural Networks (CNNs). Trials were conducted using varying numbers of EMG channels, from one to eight, to determine the impact of channel quantity on classification accuracy. The results suggest that although both algorithms can successfully distinguish between hand gesture EMG signals, CNNs outperform TSFE in extracting useful information, in both accuracy and computational efficiency. In addition, although more channels of EMG signals provide more useful information, they also require more complex and computationally intensive feature extractors and consequently do not perform as well as lower numbers of channels. The findings also underscore the potential of machine learning techniques in developing more effective and adaptive prosthetic control systems.
Keywords: EMG, machine learning, prosthetic control, electromyographic prosthetics, hand gesture classification, CNN, convolutional neural networks, TSFE, time series feature extraction, channel count, logistic regression, Ninapro, classifiers
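A toy version of the logistic-regression step can be sketched on a single made-up EMG feature (e.g. the mean absolute value of one channel). The data, feature, and training settings below are assumptions for illustration, not the authors' pipeline or the Ninapro data.

```python
import math
import random

# Two synthetic gesture classes, each described by one scalar feature.
random.seed(0)
gesture_a = [random.gauss(0.2, 0.05) for _ in range(50)]  # label 0
gesture_b = [random.gauss(0.8, 0.05) for _ in range(50)]  # label 1
data = [(x, 0) for x in gesture_a] + [(x, 1) for x in gesture_b]

# Batch gradient descent on the logistic loss for weights (w, b).
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    gw = gb = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
        gw += (p - y) * x
        gb += (p - y)
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

accuracy = sum(
    (1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5) == (y == 1) for x, y in data
) / len(data)
print(accuracy)  # well-separated classes give near-perfect accuracy
```

In the study itself the scalar feature is replaced by TSFE- or CNN-derived feature vectors from one to eight EMG channels, but the classifier on top is the same idea.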
Procedia PDF Downloads 43
1913 Molecular Dynamics Simulation of the Effect of the Solid Gas Interface Nanolayer on Enhanced Thermal Conductivity of Copper-CO2 Nanofluid
Authors: Zeeshan Ahmed, Ajinkya Sarode, Pratik Basarkar, Atul Bhargav, Debjyoti Banerjee
Abstract:
The use of CO2 in oil recovery and in CO2 capture and storage has been gaining traction in recent years. These applications involve heat transfer between CO2 and the base fluid, and hence there arises a need to improve the thermal conductivity of CO2 to increase process efficiency and reduce cost. One way to improve the thermal conductivity is through nanoparticle addition to the base fluid. The nanofluid model in this study consisted of copper (Cu) nanoparticles in varying concentrations with CO2 as the base fluid. No experimental data are available on the thermal conductivity of CO2-based nanofluids. Molecular dynamics (MD) simulations are an increasingly adopted tool for performing preliminary assessments of nanoparticle (NP)-fluid interactions. In this study, the effect of the formation of a nanolayer (or molecular layering) at the gas-solid interface on thermal conductivity is investigated using equilibrium MD simulations by varying NP diameter while keeping the volume fraction (1.413%) of the nanofluid constant, to check the effect of NP diameter on the nanolayer and thermal conductivity. A dense semi-solid fluid layer was seen to form at the NP-gas interface, its thickness increasing with particle diameter, and the layer moves with the NP's Brownian motion. Density distribution analysis has been done to examine the nanolayer and its thickness around the NP. These findings are extremely beneficial, especially to industries engaged in oil recovery, as increased thermal conductivity of CO2 will lead to enhanced oil recovery and thermal energy storage.
Keywords: copper-CO2 nanofluid, molecular dynamics simulation, molecular interfacial layer, thermal conductivity
Procedia PDF Downloads 340
1912 An Analysis of Twitter Use of Slow Food Movement in the Context of Online Activism
Authors: Kubra Sultan Yuzuncuyil, Aytekin İsman, Berkay Bulus
Abstract:
With the development of information and communication technologies, the forms of molding public opinion have changed. In the presence of the Internet, the notion of activism has been endowed with digital codes: activists have engaged the Internet in their campaigns and in the process of creating collective identity, and activist movements have been incorporating new communication technologies into their goals and opposition. Creating and managing activism through the Internet is called online activism. In this vein, the Slow Food Movement, which emerged around the philosophy of defending regional, fair, and sustainable food, has engaged the Internet in its activist campaign. This movement supports the idea that a new food system which allows strong connections between plate and planet is possible. In order to make its voice heard, it has utilized social networks and developed particular skills in the framework of online activism. This study analyzes the online activist skills the Slow Food Movement (SFM) develops and attempts to measure their effectiveness. To achieve this aim, it adopts the model proposed by Sivitandies and Shah and conducts both qualitative and quantitative content analysis of the social network use of the Slow Food Movement. The sample is the movement's official Twitter profile, analyzed over a three-month period (March-May 2017). It was found that SFM develops particular techniques that correspond to the model of Sivitandies and Shah. The most prominent skills in this regard were hyperlink abbreviation and the use of multimedia elements. On the other hand, there are inadequacies in hashtag and interactivity use. The importance of this study is that it highlights and discusses how online activism can be engaged in a social movement. It also reveals the current online activism skills of the SFM and their effectiveness, and makes suggestions to enhance the related abilities and strengthen the movement's voice on social networks.
Keywords: slow food movement, Twitter, internet, online activism
Procedia PDF Downloads 285
1911 Quality Assessment of New Zealand Mānuka Honeys Using Hyperspectral Imaging Combined with Deep 1D-Convolutional Neural Networks
Authors: Hien Thi Dieu Truong, Mahmoud Al-Sarayreh, Pullanagari Reddy, Marlon M. Reis, Richard Archer
Abstract:
New Zealand mānuka honey is a honeybee product derived mainly from Leptospermum scoparium nectar. The potent antibacterial activity of mānuka honey derives principally from methylglyoxal (MGO), in addition to the hydrogen peroxide and other lesser activities present in all honey. MGO is formed from dihydroxyacetone (DHA), unique to L. scoparium nectar. Mānuka honey also has an idiosyncratic phenolic profile that is useful as a chemical marker. Authentic mānuka honey is highly valuable, but almost all honey is formed from natural mixtures of nectars harvested by a hive over a time period. Once diluted by other nectars, mānuka honey irrevocably loses value. We aimed to apply hyperspectral imaging to honey frames before bulk extraction to minimise the dilution of genuine mānuka by other honey and ensure authenticity at the source. This technology is non-destructive and suitable for an industrial setting. Chemometrics using linear Partial Least Squares (PLS) and Support Vector Machine (SVM) models showed limited efficacy in interpreting the chemical footprints, due to large non-linear relationships between predictor and predictand in a large sample set, likely caused by honey quality variability across geographic regions. Therefore, an advanced modelling approach, one-dimensional convolutional neural networks (1D-CNN), was investigated for analysing hyperspectral data and extracting biochemical information from honey. The 1D-CNN model showed superior prediction of honey quality (R² = 0.73, RMSE = 2.346, RPD = 2.56) to PLS (R² = 0.66, RMSE = 2.607, RPD = 1.91) and SVM (R² = 0.67, RMSE = 2.559, RPD = 1.98). Classification of mono-floral mānuka honey from multi-floral and non-mānuka honey exceeded 90% accuracy for all models tried. Overall, this study reveals the potential of HSI and deep learning modelling for automating the evaluation of honey quality in frames.
Keywords: mānuka honey, quality, purity, potency, deep learning, 1D-CNN, chemometrics
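The figures of merit quoted above follow their standard chemometric definitions, RMSE = sqrt(mean squared error) and RPD = SD(reference values) / RMSE. The reference and predicted values below are made-up numbers for demonstration, not the honey data.

```python
import statistics

# Illustrative RMSE and RPD computation on hypothetical assay values.
def rmse(y_true, y_pred):
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)) ** 0.5

def rpd(y_true, y_pred):
    # Ratio of performance to deviation: sample SD of the reference
    # values divided by the prediction RMSE. RPD > 2 is commonly read
    # as a model usable for quantitative prediction.
    return statistics.stdev(y_true) / rmse(y_true, y_pred)

y_ref  = [10.0, 12.5, 15.0, 20.0, 25.0]  # hypothetical reference assays
y_pred = [11.0, 12.0, 16.0, 19.0, 24.0]  # hypothetical model predictions
print(round(rpd(y_ref, y_pred), 2))  # 6.53
```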
Procedia PDF Downloads 143
1910 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability
Authors: Chin-Chia Jane
Abstract:
In a transportation network, travel time refers to the transmission time from source node to destination node, whereas reliability refers to the probability of a successful connection from source node to destination node. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight, which is the travel time per unit of commodity going through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the demand requirement to the destination while the total transmission time is under the travel time limitation. This work is pioneering: whereas existing literature evaluates travel time reliability via a single optimal path, the proposed QoS measures the performance of the whole network system. To compute the QoS of transportation networks, we first transfer the extended network model into an equivalent min-cost max-flow network model. In the transferred network, each original arc has a new travel time weight which takes value 0. Each intermediate node is replaced by two nodes u and v and an arc directed from u to v. The newly generated nodes u and v are perfect nodes. The new direct arc has three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left.
The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, the QoS can be obtained directly by summing the probabilities of these reliable subsets. Computational experiments are conducted on a benchmark network which has 11 nodes and 21 arcs. Five travel time limitations and five demand requirements are set to compute the QoS value. For comparison, we test the exhaustive complete enumeration method. Computational results reveal that the proposed algorithm is much more efficient than the complete enumeration method. In this work, a transportation network is analyzed by an extended flow network model where each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network is an integration of customer demands, travel time, and the probability of connection. We present a decomposition algorithm to compute the QoS efficiently. Computational experiments conducted on a prototype network show that the proposed algorithm is superior to existing complete enumeration methods.
Keywords: quality of service, reliability, transportation network, travel time
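The QoS definition above (the probability that the surviving network can carry the demand within the travel-time limit) can be sketched with the complete-enumeration baseline on a toy network. The two parallel source-to-destination arcs and their weights below are assumptions for illustration; the paper's decomposition algorithm exists precisely to avoid this exhaustive enumeration on realistic networks.

```python
from itertools import product

# Toy network: two parallel source->destination arcs, each a binary
# random variable with (operation probability, capacity, travel time).
arcs = [
    (0.9, 3, 2.0),
    (0.8, 2, 3.0),
]

def qos(demand, time_limit):
    """Complete enumeration of all 2^n component states."""
    total = 0.0
    for state in product([0, 1], repeat=len(arcs)):
        prob = 1.0
        capacity = 0
        for up, (p, cap, t) in zip(state, arcs):
            prob *= p if up else (1 - p)
            # An arc contributes capacity only if it works AND its
            # travel time respects the limit.
            if up and t <= time_limit:
                capacity += cap
        if capacity >= demand:
            total += prob
    return total

print(round(qos(demand=2, time_limit=2.5), 4))  # 0.9 (only arc 1 is fast enough)
```

With the limit relaxed to 3.5, both arcs qualify and the QoS rises to 1 - 0.1 x 0.2 = 0.98, showing how the measure integrates demand, travel time, and reliability.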
Procedia PDF Downloads 223
1909 The Corrosion Resistance of P/M Alumix 431D Compacts
Authors: J. Kazior, A. Szewczyk-Nykiel, T. Pieczonka, M. Laska
Abstract:
Aluminium alloys are an important class of engineering materials for structural applications. This is due to the fact that these alloys have many interesting properties, namely, low density, a high ratio of strength to density, good thermal and electrical conductivity, good corrosion resistance, as well as extensive capabilities for shaping processes. In the case of classical PM technology, particular attention should be paid to the selection of appropriate parameters for the compacting and sintering processes and to keeping to them. The latter need arises from the high sensitivity of aluminium-based alloy powders to any fluctuation of technological parameters, in particular those related to the temperature-time profile and gas flow. Only then may the desired sintered compacts with residual porosity be produced. Besides high mechanical properties, other profitable properties of almost fully dense sintered components can be expected. Among them is corrosion resistance, rarely investigated for PM aluminium alloys. Thus, in the current study, the Alumix 431/D commercial, press-ready grade powder was used for this purpose. Sintered compacts made of it under different conditions (isothermal sintering temperature, gas flow rate) were subjected to corrosion experiments in 0.1 M and 0.5 M NaCl solutions. The potentiodynamic curves were used to establish the parameters characterising the corrosion resistance of sintered Alumix 431/D powder, namely, the corrosion potential, the corrosion current density, the polarization resistance, and the breakdown potential. The highest value of polarization resistance, the lowest value of corrosion current density, and the most positive corrosion potential were obtained for Alumix 431/D powder sintered at 600°C and at the highest protective gas flow rate.
Keywords: aluminium alloys, sintering, corrosion resistance, industry
Procedia PDF Downloads 352
1908 Increasing Power Transfer Capacity of Distribution Networks Using Direct Current Feeders
Authors: Akim Borbuev, Francisco de León
Abstract:
Economic and population growth in densely-populated urban areas introduce major challenges to distribution system operators, planners, and designers. To supply added loads, utilities are frequently forced to invest in new distribution feeders. However, this is becoming increasingly challenging due to space limitations and rising installation costs in urban settings. This paper proposes the conversion of critical alternating current (ac) distribution feeders into direct current (dc) feeders to increase the power transfer capacity by a factor as high as four. Current trends suggest that the return of dc transmission, distribution, and utilization is inevitable. Since a total system-level transformation to dc operation is not possible in a short period of time, owing to the huge investments needed and utility unreadiness, this paper recommends that feeders expected to exceed their limits in the near future be converted to dc. The increase in power transfer capacity is achieved through several key differences between ac and dc power transmission systems. First, it is shown that underground cables can be operated at a higher dc voltage than ac voltage for the same dielectric stress in the insulation. Second, cable sheath losses, due to induced voltages yielding circulating currents, which can be as high as the phase conductor losses under ac operation, are not present under dc. Finally, skin and proximity effects in conductors and sheaths do not exist in dc cables. The paper demonstrates that in addition to the increased power transfer capacity, utilities substituting dc feeders for ac feeders could benefit from significantly lower costs and reduced losses. Installing dc feeders is less expensive than installing new ac feeders, even when new trenches are not needed.
Case studies using the IEEE 342-Node Low Voltage Networked Test System quantify the technical and economic benefits of dc feeders.
Keywords: DC power systems, distribution feeders, distribution networks, power transfer capacity
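A back-of-envelope comparison illustrates the first mechanism (higher dc voltage at equal insulation stress). The feeder voltage, current limit, and power factor below are assumptions for illustration only; the paper's factor of up to four additionally counts the eliminated sheath, skin, and proximity losses, which this simple sketch ignores.

```python
import math

def ac_capacity_w(v_ll_rms, i_a, power_factor):
    """Three-phase ac circuit: P = sqrt(3) * V_LL * I * pf."""
    return math.sqrt(3) * v_ll_rms * i_a * power_factor

def dc_capacity_w(v_pole, i_a):
    """Bipolar dc link on two conductors: P = 2 * V_pole * I."""
    return 2 * v_pole * i_a

i_limit = 400.0          # assumed thermal current limit per conductor, A
v_ll = 13.8e3            # assumed 13.8 kV ac feeder voltage
# Run the dc pole at the ac peak phase voltage (same insulation stress level).
v_pole = v_ll / math.sqrt(3) * math.sqrt(2)

p_ac = ac_capacity_w(v_ll, i_limit, 0.9)
p_dc = dc_capacity_w(v_pole, i_limit)
gain = p_dc / p_ac       # modest gain from voltage alone, before loss effects
```

Even this conservative voltage-only conversion yields a gain above unity; removing sheath circulation losses and skin effect then allows higher current loading, compounding the benefit.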
Procedia PDF Downloads 132
1907 Early Diagnosis and Treatment of Cancer Using Synthetic Cationic Peptide
Authors: D. J. Kalita
Abstract:
Cancer is one of the prime causes of early death worldwide. Mutations of genes involved in DNA repair and damage response, such as the BRCA2 (Breast Cancer gene 2) gene, can be detected efficiently by PCR-RFLP for early breast cancer diagnosis, allowing a suitable method of treatment to be adopted. Host defense peptides can be used as a blueprint for the design and synthesis of novel anticancer drugs that avoid the side effects of conventional chemotherapy and chemoresistance. The change at nucleotide position 392 of a → c in cancer samples of dog mammary tumour in the BRCA2 (exon 7) gene leads to the creation of a new restriction site for the SsiI restriction enzyme. This SNP may be a marker for the detection of canine mammary tumour. A support vector machine (SVM) algorithm was used to design and predict the anticancer peptide from the mature functional peptide. MTT assay of the MCF-7 cell line after 48 hours of treatment showed an increase in the number of rounded cells compared with untreated control cells. The ability of the synthesized peptide to induce apoptosis in MCF-7 cells was further investigated by staining the cells with the fluorescent dye Hoechst stain solution, which allows evaluation of the nuclear morphology. Numerous cells with dense, pyknotic nuclei (brighter fluorescence) were observed in treated but not in control MCF-7 cells when viewed using an inverted phase-contrast microscope. Thus, PCR-RFLP is an attractive approach for early diagnosis, and synthetic cationic peptides can be used for the treatment of canine mammary tumour.
Keywords: cancer, cationic peptide, host defense peptides, breast cancer genes
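An SVM such as the one used here consumes numerical descriptors computed from each peptide sequence. The toy sketch below computes two descriptors typical of cationic host defense peptides, net charge and hydrophobic residue fraction; the sequence and the residue tables are illustrative assumptions, not data from this study.

```python
# Crude residue tables (assumed for illustration):
HYDROPHOBIC = set("AILMFWVY")
CHARGE = {"K": 1.0, "R": 1.0, "H": 0.1, "D": -1.0, "E": -1.0}

def descriptors(seq):
    """Physicochemical descriptors of the kind fed to an SVM classifier:
    approximate net charge near pH 7 and hydrophobic residue fraction."""
    net_charge = sum(CHARGE.get(aa, 0.0) for aa in seq)
    hydrophobic_fraction = sum(aa in HYDROPHOBIC for aa in seq) / len(seq)
    return net_charge, hydrophobic_fraction

# Hypothetical cationic, amphipathic 15-mer (not the study's peptide):
net, hydro = descriptors("KWKLFKKIGAVLKVL")
```

Cationic anticancer candidates typically combine a positive net charge with a substantial hydrophobic fraction, which is exactly the feature space in which an SVM can separate active from inactive sequences.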
Procedia PDF Downloads 95
1906 An Exploration of Cyberspace Security, Strategy for a New Era
Authors: Laxmi R. Kasaraneni
Abstract:
The Internet connects all networks, including the nation's critical infrastructure, which is used extensively not only by a nation's government and military to protect sensitive information and execute missions, but also by the primary infrastructure that provides services enabling modern conveniences such as education, potable water, electricity, natural gas, and financial transactions. It has become the central nervous system for the government, the citizens, and industry. When it is attacked, the effects can ripple far and wide, impacting not only citizens' well-being but also the nation's economy, civil infrastructure, and national security. Because these critical services may be targeted by malicious hackers during cyber warfare, it is imperative not only to protect them and mitigate any immediate or potential threats, but also to understand the current or potential impacts beyond the IT networks of the organization. The nation's IT infrastructure, now vital for communication, commerce, and control of our physical infrastructure, is highly vulnerable to attack. While existing technologies can address some vulnerabilities, fundamentally new architectures and technologies are needed to address the larger structural insecurities of an infrastructure developed in a more trusting time, when mass cyber attacks were not foreseen. This research is intended to improve the core functions of the Internet and critical-sector information systems by providing a clear path to create a safe, secure, and resilient cyber environment that helps stakeholders at all levels of government and the private sector work together to develop the cybersecurity capabilities that are key to our economy, national security, and public health and safety.
This research paper also emphasizes present and future cyber security threats, the capabilities and goals of cyber attackers, a strategic concept and steps to implement cybersecurity for maximum effectiveness, enabling technologies, some strategic assumptions and critical challenges, and the future of cyberspace.
Keywords: critical challenges, critical infrastructure, cyber security, enabling technologies, national security
Procedia PDF Downloads 299
1905 From Homogeneous to Phase Separated UV-Cured Interpenetrating Polymer Networks: Influence of the System Composition on Properties and Microstructure
Authors: Caroline Rocco, Feyza Karasu, Céline Croutxé-Barghorn, Xavier Allonas, Maxime Lecompère, Gérard Riess, Yujing Zhang, Catarina Esteves, Leendert van der Ven, Rolf van Benthem Gijsbertus de With
Abstract:
Acrylates are widely used in UV-curing technology. Their high reactivity can, however, limit their conversion due to early vitrification. In addition, free radical photopolymerization is known to be sensitive to oxygen inhibition, leading to tacky surfaces. Although epoxides can lead to full polymerization, they are sensitive to humidity and exhibit low polymerization rates. To overcome the intrinsic limitations of both classes of monomers, interpenetrating polymer networks (IPNs) can be synthesized. They consist of at least two cross-linked polymers which are permanently entangled. They can be obtained under thermally and/or light-induced polymerization in one- or two-step approaches. IPNs can display homogeneous to heterogeneous morphologies with various degrees of phase separation, strongly linked to monomer miscibility as well as synthesis parameters. In this presentation, we synthesize UV-cured methacrylate-epoxide based IPNs with different chemical compositions in order to gain a better understanding of their formation and phase separation. Miscibility before and during photopolymerization, reaction kinetics, as well as mechanical properties and morphology have been investigated. The key parameters controlling the morphology and the phase separation, namely monomer miscibility and synthesis parameters, have been identified. By monitoring stiffness changes at the film surface, atomic force acoustic microscopy (AFAM) gave, in conjunction with polymerization kinetic profiles and thermomechanical properties, explanations for and corroborated the miscibility predictions. When varying the methacrylate/epoxide ratio, it was possible to move from a miscible and highly interpenetrated IPN to a totally immiscible and phase-separated one.
Keywords: investigation of properties and morphology, kinetics, phase separation, UV-cured IPNs
Procedia PDF Downloads 372
1904 Computational Fluid Dynamics (CFD) Simulation Approach for Developing New Powder Dispensing Device
Authors: Revanth Rallapalli
Abstract:
Manually dispensing solids and powders can be difficult, as it requires gradually pouring and checking the amount on a scale. Current systems are manual and non-continuous in nature, are user-dependent, and make powder dispensation difficult to control. Recurrent dosing of powdered medicines in precise amounts, quickly and accurately, has been a long-standing challenge. Various new powder dispensing mechanisms are being designed to overcome these challenges; here, a battery-operated screw conveyor mechanism is being developed to address the problems above. Such inventions are evaluated numerically at the concept development level by employing computational fluid dynamics (CFD) of gas-solids multiphase flow systems. CFD has been very helpful in the development of such devices, saving time and money by reducing the number of prototypes and tests. Furthermore, this paper describes a simulation of powder dispensation from the trocar's end, in which the powder is treated as a secondary phase in air, using the Dense Discrete Phase Model incorporated with the Kinetic Theory of Granular Flow (DDPM-KTGF). Taking the powder volume fraction as 50%, the transportation of powder from the inlet side to the trocar's end is driven by the rotation of the screw conveyor. The performance is calculated for a 1 s time frame in an unsteady computation. This methodology will help designers develop and refine concepts that improve dispensation within a quick turnaround time.
Keywords: DDPM-KTGF, gas-solids multiphase flow, screw conveyor, unsteady
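Before any CFD run, the nominal throughput of such a screw conveyor can be sanity-checked with an ideal volumetric model: annular cross-section times axial advance per revolution times fill fraction. All dimensions and the bulk density below are assumed values, not the device's specification.

```python
import math

def dispense_rate_g_per_s(screw_d, shaft_d, pitch, rpm, fill, bulk_density):
    """Ideal screw-conveyor throughput.
    screw_d, shaft_d, pitch in m; bulk_density in kg/m^3; fill in [0, 1].
    Returns grams per second, ignoring slip and back-flow."""
    area = math.pi / 4 * (screw_d**2 - shaft_d**2)   # annular flow area, m^2
    q = area * pitch * (rpm / 60.0) * fill           # volumetric rate, m^3/s
    return q * bulk_density * 1000.0                 # mass rate, g/s

# Assumed miniature-dispenser geometry and a light pharmaceutical powder:
rate = dispense_rate_g_per_s(screw_d=0.010, shaft_d=0.004, pitch=0.008,
                             rpm=60, fill=0.5, bulk_density=600)
```

Such an estimate brackets the expected dose per second, against which the DDPM-KTGF prediction over the 1 s window can then be compared.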
Procedia PDF Downloads 184
1903 Thermally Stable Nanocrystalline Aluminum Alloys Processed by Mechanical Alloying and High Frequency Induction Heat Sintering
Authors: Hany R. Ammar, Khalil A. Khalil, El-Sayed M. Sherif
Abstract:
The as-received metal powders were used to synthesize bulk nanocrystalline Al, Al-10%Cu, and Al-10%Cu-5%Ti alloys using mechanical alloying and high frequency induction heat sintering (HFIHS). The current study investigated the influence of milling time and ball-to-powder weight ratio (BPR) on the microstructural constituents and mechanical properties of the processed materials. Powder consolidation was carried out using high frequency induction heat sintering, in which the processed metal powders were sintered into a dense and strong bulk material. The sintering conditions applied in this process were as follows: heating rate of 350°C/min; sintering time of 4 minutes; sintering temperature of 400°C; applied pressure of 750 kgf/cm² (≈73.5 MPa); cooling rate of 400°C/min; and the process was carried out under a vacuum of 10⁻³ Torr. The powders and the bulk samples were characterized using XRD and FEGSEM techniques. The mechanical properties were evaluated at temperatures of 25°C, 100°C, 200°C, 300°C, and 400°C to study the thermal stability of the processed alloys. The bulk nanocrystalline Al, Al-10%Cu, and Al-10%Cu-5%Ti alloys displayed extremely high hardness values even at elevated temperatures. The Al-10%Cu-5%Ti alloy displayed the highest hardness values at room and elevated temperatures, which is related to the presence of Ti-containing phases such as Al3Ti and AlCu2Ti; these phases are thermally stable and retain their high hardness at elevated temperatures up to 400°C.
Keywords: nanocrystalline aluminum alloys, mechanical alloying, hardness, elevated temperatures
Procedia PDF Downloads 457
1902 DeepLig: A de-novo Computational Drug Design Approach to Generate Multi-Targeted Drugs
Authors: Anika Chebrolu
Abstract:
Mono-targeted drugs can be of limited efficacy against complex diseases. Recently, multi-target drug design has emerged as a promising tool to fight these challenging diseases. However, the scope of current computational approaches to multi-target drug design is limited. DeepLig presents a de-novo drug discovery platform that uses reinforcement learning to generate and optimize novel, potent, and multi-targeted drug candidates against protein targets. DeepLig's model consists of two networks in interplay: a generative network and a predictive network. The generative network, a Stack-Augmented Recurrent Neural Network, utilizes a stack memory unit to remember and recognize molecular patterns when generating novel ligands from scratch. The generative network passes each newly created ligand to the predictive network, which then uses multiple Graph Attention Networks simultaneously to forecast the average binding affinity of the generated ligand towards multiple target proteins. With each iteration, given feedback from the predictive network, the generative network learns to optimize itself to create molecules with a higher average binding affinity towards multiple proteins. DeepLig was evaluated on its ability to generate multi-target ligands against two distinct proteins, against three distinct proteins, and against two distinct binding pockets on the same protein. In each test case, DeepLig was able to create a library of valid, synthetically accessible, and novel molecules with optimal and equipotent binding energies. We propose that DeepLig provides an effective approach to designing multi-targeted drug therapies that can potentially show higher success rates during in-vitro trials.
Keywords: drug design, multitargeticity, de-novo, reinforcement learning
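The generator-predictor interplay can be sketched as an optimization loop. In the stand-in below, the Stack-Augmented RNN is replaced by random single-site edits of a string and the Graph Attention Network predictor by a deterministic toy scoring function; both are our simplifications, shown purely to make the control flow concrete.

```python
import random

def predicted_affinity(ligand, target):
    """Stand-in for the GAT predictor: a deterministic toy score per target."""
    return sum((ord(c) * target) % 7 for c in ligand) / len(ligand)

def mean_affinity(ligand, targets):
    """The multi-target objective: average predicted affinity over all targets."""
    return sum(predicted_affinity(ligand, t) for t in targets) / len(targets)

def optimize(targets, alphabet="CNOS", length=8, steps=200, seed=0):
    """Hill-climbing stand-in for the policy-gradient loop: mutate the
    candidate and keep it only when the averaged predictor score improves."""
    rng = random.Random(seed)
    best = "".join(rng.choice(alphabet) for _ in range(length))
    best_score = mean_affinity(best, targets)
    for _ in range(steps):
        cand = list(best)
        cand[rng.randrange(length)] = rng.choice(alphabet)
        cand = "".join(cand)
        score = mean_affinity(cand, targets)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

ligand, score = optimize(targets=[2, 3])  # two hypothetical targets
```

The essential design point survives the simplification: the generator never sees the targets directly, only the scalar feedback averaged across them, which is what drives candidates toward equipotent multi-target binding.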
Procedia PDF Downloads 103
1901 Infrastructure Development – Stages in Development
Authors: Seppo Sirkemaa
Abstract:
Information systems infrastructure is the basis of business systems and processes in the company. It should be a reliable platform for business processes and activities, but it must also have the flexibility to accommodate changing business needs. The development of an infrastructure that is robust, reliable, and flexible is a challenge. Understanding technological capabilities and business needs is a key element in the development of successful information systems infrastructure.
Keywords: development, information technology, networks, technology
Procedia PDF Downloads 125
1900 Effects of Earthquake Induced Debris to Pedestrian and Community Street Network Resilience
Authors: Al-Amin, Huanjun Jiang, Anayat Ali
Abstract:
Reinforced concrete (RC) frames, especially ordinary RC frames, are prone to structural failure or collapse during seismic events, producing a large volume of debris that obstructs adjacent areas, including streets. These blocked areas severely impede post-earthquake resilience. This study uses finite element (FEM) simulation to investigate the amount of debris generated by the seismic collapse of an ordinary reinforced concrete moment frame building and its effects on the adjacent pedestrian and road network. A three-story ordinary reinforced concrete frame building, designed primarily for gravity loads and earthquake resistance, was selected for analysis. Sixteen different ground motions were applied and scaled up until total collapse of the tested building to evaluate the failure mode under various seismic events. Four types of collapse direction were identified through the analysis, namely aligned (positive and negative) and skewed (positive and negative), with aligned collapse being more predominant than skewed cases. The amount and distribution of debris around the collapsed building were assessed to investigate the interaction between collapsed buildings and adjacent street networks. An interaction was established between a building that collapsed in an aligned direction and the adjacent pedestrian walkway and narrow street of an unplanned old city. The FEM model was validated against an existing shaking table test. The presented results can be utilized to simulate the interdependency between the debris generated by the collapse of seismically vulnerable buildings and the resilience of street networks. These findings provide insights for better disaster planning and resilient infrastructure development in earthquake-prone regions.
Keywords: building collapse, earthquake-induced debris, ORC moment resisting frame, street network
Procedia PDF Downloads 88
1899 The Relationship between Representational Conflicts, Generalization, and Encoding Requirements in an Instance Memory Network
Authors: Mathew Wakefield, Matthew Mitchell, Lisa Wise, Christopher McCarthy
Abstract:
The properties of memory representations in artificial neural networks have cognitive implications. Distributed representations that encode instances as a pattern of activity across layers of nodes afford memory compression and enforce the selection of a single point in instance space. These encoding schemes also appear to distort the representational space, and they trade off the ability to validate that input information is within the bounds of past experience. In contrast, a localist representation, which encodes meaningful information in individual nodes of a network layer, affords less memory compression while retaining the integrity of the representational space. This allows the validity of an input to be determined. The validity (or familiarity) of an input, along with the capacity of localist representations for multiple instance selections, affords a memory sampling approach that dynamically balances the bias-variance trade-off. When the input is familiar, bias may be high, by referring only to the most similar instances in memory. When the input is less familiar, variance can be increased by referring to more instances that capture a broader range of features. Using this approach in a localist instance memory network, an experiment demonstrates a relationship between representational conflict, generalization performance, and memorization demand. Relatively small sampling ranges produce the best performance on a classic machine learning dataset of visual objects. Combining memory validity with conflict detection produces a reliable confidence judgement that can separate responses with high and low error rates. Confidence can also be used to signal the need for supervisory input. Using this judgement, the need for supervised learning as well as memory encoding can be substantially reduced with only a trivial detriment to classification performance.
Keywords: artificial neural networks, representation, memory, conflict monitoring, confidence
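A minimal sketch of the familiarity-gated sampling idea, assuming a plain Euclidean instance store and a fixed familiarity radius (both choices are ours, not the paper's): familiar inputs consult only the nearest instance (high bias), unfamiliar inputs widen the sample (more variance), and the vote share doubles as a crude confidence signal.

```python
import math

def classify(query, memory, base_k=1, max_k=5, radius=1.0):
    """Localist instance-memory sketch. Familiarity is the distance to the
    nearest stored instance; it gates how many instances are sampled.
    memory is a list of (feature_vector, label) pairs."""
    scored = sorted((math.dist(query, vec), label) for vec, label in memory)
    familiar = scored[0][0] <= radius
    k = base_k if familiar else max_k
    votes = {}
    for _, label in scored[:k]:
        votes[label] = votes.get(label, 0) + 1
    winner = max(votes, key=votes.get)
    confidence = votes[winner] / k      # vote share as a confidence judgement
    return winner, familiar, confidence

# Toy instance store with two classes:
memory = [((0, 0), "cat"), ((0.1, 0), "cat"),
          ((5, 5), "dog"), ((5, 6), "dog"), ((6, 5), "dog")]
```

A familiar query near (0, 0) answers from a single instance with full confidence; an ambiguous query at (3, 3) is flagged unfamiliar, samples all five instances, and returns a lower vote share, which is where a supervisory signal could be requested.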
Procedia PDF Downloads 130
1898 A Numerical Model for Simulation of Blood Flow in Vascular Networks
Authors: Houman Tamaddon, Mehrdad Behnia, Masud Behnia
Abstract:
An accurate study of blood flow is associated with an accurate vascular pattern and the geometrical properties of the organ of interest. Due to the complexity of vascular networks and poor accessibility in vivo, it is challenging to reconstruct the entire vasculature of any organ experimentally. The objective of this study is to introduce an innovative approach for the reconstruction of a full vascular tree from available morphometric data. Our method consists of implementing morphometric data for those parts of the vascular tree that are smaller than the resolution of medical imaging methods. This technique reconstructs the entire arterial tree down to the capillaries. Vessels greater than 2 mm are obtained from direct volume and surface analysis using contrast-enhanced computed tomography (CT). Vessels smaller than 2 mm are reconstructed from available morphometric and distensibility data and rearranged by applying Murray's law. Implementing morphometric data to reconstruct the branching pattern while simultaneously applying Murray's law at every vessel bifurcation leads to an accurate vascular tree reconstruction. The reconstruction algorithm generates the full arterial tree topography down to the first capillary bifurcation. The geometry of each order of the vascular tree is generated separately to minimize construction and simulation time. The node-to-node connectivity, along with the diameter and length of every vessel segment, is established, and order numbers, according to the diameter-defined Strahler system, are assigned. During the simulation, we used the averaged flow rate for each order to predict the pressure drop; once the pressure drop is predicted, the flow rate is corrected to match the computed pressure drop for each vessel. The final results for three cardiac cycles are presented and compared to the clinical data.
Keywords: blood flow, morphometric data, vascular tree, Strahler ordering system
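Murray's law, applied here at every bifurcation, states that the cube of the parent radius equals the sum of the cubes of the daughter radii. The sketch below checks the cube-law balance and gives the daughter radius for a symmetric split.

```python
def murray_residual(r_parent, daughters):
    """Murray's law at a bifurcation: r_parent^3 = sum of daughter radii cubed.
    Returns the imbalance (zero when the law is satisfied exactly)."""
    return r_parent**3 - sum(r**3 for r in daughters)

def symmetric_daughter(r_parent):
    """Even split: 2 * r_d^3 = r_parent^3, so r_d = r_parent * 2**(-1/3)."""
    return r_parent * 2 ** (-1 / 3)

r_d = symmetric_daughter(1.0)   # each daughter is ~0.794 of the parent radius
```

For an asymmetric bifurcation the same residual can be driven to zero by solving for the remaining daughter radius, which is how sub-resolution branching patterns can be filled in consistently down to the capillary level.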
Procedia PDF Downloads 276
1897 The Effect of Grading Characteristics on the Shear Strength and Mechanical Behavior of Granular Classes of Sand-Silt
Authors: Youssouf Benmeriem
Abstract:
The shear strength of sandy soils is considered an important parameter in studying the stability of civil engineering structures subjected to monotonic, cyclic, and earthquake loading conditions. The proposed research investigated the effect of grading characteristics on the shear strength and mechanical behavior of granular classes of sands mixed with silt in loose and dense states (Dr = 15% and 90%). The laboratory investigation aimed at understanding the extent to which the shear strength of sand-silt mixtures is affected by their gradation under static loading conditions. To clarify and evaluate the shear strength characteristics of sandy soils, a series of Casagrande shear box tests was carried out on different reconstituted samples of sand-silt mixtures with various gradations. The soil samples were tested under different normal stresses (100, 200 and 300 kPa). The results from this laboratory investigation were used to develop insight into the shear strength response of sand and sand-silt mixtures under monotonic loading conditions. Analysis of the obtained data revealed that the grading characteristics (D10, D50, Cu, ESR, and MGSR) have a significant influence on the shear strength response. It was found that shear strength can be correlated to the grading characteristics of the sand-silt mixture. The effective size ratio (ESR) and mean grain size ratio (MGSR) appear to be pertinent parameters for predicting the shear strength response of the sand-silt mixtures studied.
Keywords: grading characteristics, granular classes of sands, mechanical behavior, sand-silt, shear strength
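Grading characteristics such as D10, D60, and the uniformity coefficient Cu are read off the grain-size distribution curve. The sketch below interpolates them linearly in diameter from a hypothetical sieve curve; note that practice often interpolates on a log-diameter scale, which would shift the values slightly.

```python
def d_x(percent, curve):
    """Grain diameter at a given percent passing, by linear interpolation.
    curve: [(diameter_mm, percent_passing)] sorted by increasing diameter."""
    for (d0, p0), (d1, p1) in zip(curve, curve[1:]):
        if p0 <= percent <= p1:
            return d0 + (d1 - d0) * (percent - p0) / (p1 - p0)
    raise ValueError("percent passing outside the measured curve")

# Hypothetical sieve data (diameter in mm, percent passing):
curve = [(0.075, 5), (0.15, 20), (0.3, 45), (0.6, 70), (1.18, 90), (2.36, 100)]
d10 = d_x(10, curve)      # effective size D10
d60 = d_x(60, curve)
cu = d60 / d10            # uniformity coefficient Cu = D60 / D10
```

The same interpolation yields D30 and D50, from which the coefficient of curvature Cc = D30² / (D10 · D60) and ratios such as ESR can be formed.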
Procedia PDF Downloads 388
1896 Neural Reshaping: The Plasticity of Human Brain and Artificial Intelligence in the Learning Process
Authors: Seyed-Ali Sadegh-Zadeh, Mahboobe Bahrami, Sahar Ahmadi, Seyed-Yaser Mousavi, Hamed Atashbar, Amir M. Hajiyavand
Abstract:
This paper presents an investigation into the concept of neural reshaping, which is crucial for achieving strong artificial intelligence through the development of AI algorithms with very high plasticity. By examining the plasticity of both human and artificial neural networks, the study uncovers groundbreaking insights into how these systems adapt to new experiences and situations, ultimately highlighting the potential for creating advanced AI systems that closely mimic human intelligence. The uniqueness of this paper lies in its comprehensive analysis of the neural reshaping process in both human and artificial intelligence systems. This comparative approach enables a deeper understanding of the fundamental principles of neural plasticity, thus shedding light on the limitations and untapped potential of both human and AI learning capabilities. By emphasizing the importance of neural reshaping in the quest for strong AI, the study underscores the need for developing AI algorithms with exceptional adaptability and plasticity. The paper's findings have significant implications for the future of AI research and development. By identifying the core principles of neural reshaping, this research can guide the design of next-generation AI technologies that can enhance human and artificial intelligence alike. These advancements will be instrumental in creating a new era of AI systems with unparalleled capabilities, paving the way for improved decision-making, problem-solving, and overall cognitive performance. In conclusion, this paper makes a substantial contribution by investigating the concept of neural reshaping and its importance for achieving strong AI. 
Through its in-depth exploration of neural plasticity in both human and artificial neural networks, the study unveils vital insights that can inform the development of innovative AI technologies with high adaptability and potential for enhancing human and AI capabilities alike.
Keywords: neural plasticity, brain adaptation, artificial intelligence, learning, cognitive reshaping
Procedia PDF Downloads 55
1895 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin
Abstract:
Within the past decade, using convolutional neural networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One problem with the current technology is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. In addition, current gesture detection programs are trained on only one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language, and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is treated as an operator mapping an input in the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, identifying the alphanumeric that q represents and the language it comes from.
These inputs and outputs, along with the internal variables z ∈ Z, represent the system's current state, implying a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, i.e., subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix, and then whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
Keywords: convolutional neural networks, deep learning, shallow correctors, sign language
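A single-hyperplane toy version of this corrector pipeline (centering, Kaiser-style eigenvalue retention, whitening, thresholding) might look as follows. The clustering into pairwise positively correlated groups, each with its own hyperplane, is omitted, and the data are synthetic; this is a sketch of the mechanism, not the paper's implementation.

```python
import numpy as np

def fit_corrector(S, Y):
    """Centre on the bulk measurements S, whiten via PCA keeping components
    by a Kaiser-style rule, then place a threshold between the bulk and the
    error set Y along the mean error direction. Returns a flagging function."""
    mu = S.mean(axis=0)
    eig, V = np.linalg.eigh(np.cov((S - mu).T))
    keep = eig >= eig.mean()                    # Kaiser-style retention rule
    W = V[:, keep] / np.sqrt(eig[keep])         # whitening projection
    project = lambda X: (np.asarray(X) - mu) @ W
    n = project(Y).mean(axis=0)
    n = n / np.linalg.norm(n)                   # direction toward the errors
    theta = 0.5 * (project(S) @ n).max() + 0.5 * (project(Y) @ n).min()
    return lambda x: bool((project(x) @ n).item() > theta)  # True -> flag error

# Synthetic data: bulk predictions S near the origin, known errors Y far out.
S = np.array([[0, 0], [1, 0], [-1, 0], [0, 0.1],
              [0, -0.1], [0.5, 0.05], [-0.5, -0.05]])
Y = np.array([[4.0, 0.0], [5.0, 0.0]])
is_error = fit_corrector(S, Y)
```

The stochastic separation theorem is what justifies this construction in high dimension: a small set of errors is, with high probability, linearly separable from the bulk after such whitening, so a shallow corrector can be bolted onto the legacy network without retraining it.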
Procedia PDF Downloads 102
1894 Ecological Studies on Bulinus truncatus Snail the Intermediate Host of Schistosoma haematobium, in White Nile State, Sudan
Authors: Mohammed Hussein Eltoum Salih
Abstract:
This study was conducted in four villages, namely Jadeed, Alandraba, Um Gaar, and EL Shetabe, in the White Nile State, Sudan, to determine ecological factors: aquatic vegetation and the physical and chemical properties of the water in snail habitats. Bulinus truncatus, which acts as an intermediate host for S. haematobium, was collected from water bodies adjacent to the study villages, where residents were suspected to swim and humans come into contact with water for various purposes. Water samples from the stretches were collected and measured for parameters that are indicative of water quality, sustain the survival of snails, and would further confirm whether contact between humans and water had taken place. The parameters measured included water conductivity, pH, dissolved oxygen, calcium, and magnesium content. In addition, a single water sample from each contact site was collected for microbiological tests. The results revealed that the B. truncatus snails were few in number and free of infection, and that their collection sites were densely covered with different plant species, making them suitable for harbouring snails. Moreover, the microbial tests showed high bacterial contamination. Physical and chemical analysis of water samples from the contact sites revealed significant differences (p < 0.05) in water pH, calcium, and magnesium content between the sites of the study villages; these are discussed in relation to the factors suitable for the intermediate hosts and thus for the transmission of S. haematobium.
Keywords: health, parasitology, Schistosoma, snails
Procedia PDF Downloads 153
1893 Evaluation of Fracture Resistance and Moisture Damage of Hot Mix Asphalt Using Plastic Coated Aggregates
Authors: Malleshappa Japagal, Srinivas Chitragar
Abstract:
The use of waste plastic in pavements is becoming an important alternative worldwide, both for disposing of plastic and for improving the stability of pavements while addressing environmental issues. However, there are still concerns about the fatigue and fracture resistance of hot mix asphalt with added plastic waste (HMA-plastic mixes) and its moisture damage potential. The present study was undertaken to evaluate the fracture resistance of HMA-plastic mixes using the semi-circular bending (SCB) test, and their moisture damage potential by the indirect tensile strength (ITS) test using the retained tensile strength ratio (TSR). In this study, a dense-graded asphalt mix with 19 mm nominal maximum aggregate size was designed in the laboratory using the Marshall mix design method. Aggregates were coated with different percentages of waste plastic (0%, 2%, 3% and 4%) by weight of aggregate, and the fracture resistance and moisture damage were evaluated. The following parameters were estimated for the mixes: the J-integral (Jc), strain energy at failure, peak load at failure, and deformation at failure. It was found that the strain energy and peak load of all mixes decrease with an increase in notch depth, and that an increased percentage of plastic waste gave better fracture resistance. The moisture damage potential was evaluated by the tensile strength ratio (TSR). The experimental results showed an increased TSR value up to 3% addition of waste plastic in the HMA mix, which gives better performance; hence, the use of waste plastic in road construction is favorable.
Keywords: hot mix asphalt, semi-circular bending, Marshall mix design, tensile strength ratio
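The TSR criterion is computed directly from the indirect tensile strengths of the moisture-conditioned and unconditioned specimen subsets. The peak loads and specimen dimensions below are illustrative values chosen by us, not measurements from this study.

```python
import math

def its_kpa(peak_load_n, thickness_mm, diameter_mm):
    """Indirect tensile strength of a cylindrical specimen:
    ITS = 2P / (pi * t * D). N and mm inputs give MPa; scaled to kPa."""
    return 2 * peak_load_n / (math.pi * thickness_mm * diameter_mm) * 1000.0

# Illustrative Marshall-size specimens (101.6 mm diameter, 63.5 mm thick):
its_dry = its_kpa(11500, 63.5, 101.6)   # unconditioned subset
its_wet = its_kpa(9600, 63.5, 101.6)    # moisture-conditioned subset
tsr = 100.0 * its_wet / its_dry         # tensile strength ratio, %
```

A TSR at or above the commonly used 80% acceptance threshold indicates adequate moisture resistance, which is the basis for the abstract's conclusion about the 3% plastic content.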
Procedia PDF Downloads 310