Search results for: response surface method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 27074


9644 Nanotechnology-Based Treatment of Klebsiella pneumoniae Infections

Authors: Lucian Mocan, Teodora Mocan, Matea Cristian, Cornel Iancu

Abstract:

We present a method of nanoparticle-enhanced laser thermal ablation of Klebsiella pneumoniae infections using gold nanoparticles combined with a specific growth factor, and demonstrate its selective therapeutic efficacy. Ab (antibody solution) bound to GNPs (gold nanoparticles) was administered in vitro, and the specific delivery of the nano-bioconjugate into the microorganism was determined. The extent of necrosis was considerable following laser therapy, while normal cells were not seriously affected. Selective photothermal ablation of the infected tissue was obtained after the selective accumulation of Ab-bound GNPs in the bacteria following perfusion. These results may represent a major step in antibiotic therapy using nanolocalized thermal ablation by laser heating.

Keywords: gold nanoparticles, Klebsiella pneumoniae, nanoparticle functionalization, laser irradiation, antibody

Procedia PDF Downloads 418
9643 Ethno-Medical Potentials of Tacazzea apiculata Oliv. (Periplocaceae)

Authors: Abubakar Ahmed, Zainab Mohammed, Hadiza D. Nuhu, Hamisu Ibrahim

Abstract:

Introduction: The plant Tacazzea apiculata Oliv. (Periplocaceae) is widely distributed in tropical West Africa. It is claimed to have multiple uses in traditional medicine, among which are the treatment of hemorrhoids, inflammations and cancers. Methods: An ethno-botanical survey, conducted through interviews and the show-and-tell method of data collection, was carried out among the Hausa and Fulani tribes of northern Nigeria with the aim of documenting the numerous claims made by local people about the plant. Results: The results revealed that T. apiculata is relatively popular among herbalists (38.2%), nomads (14.8%) and fishermen (16.0%). The most important uses of the plant in traditional medicine are for inflammation (fidelity level: 25.7%) and hemorrhoids (fidelity level: 17.1%). Conclusion: These results suggest the relevance of T. apiculata in traditional medicine and its suitability as a candidate for drug development.

Keywords: ethno-botany, periplocaceae, Tacazzea apiculata, traditional medicine

Procedia PDF Downloads 508
9642 A World Map of Seabed Sediment Based on 50 Years of Knowledge

Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès

Abstract:

Production of a global sedimentological seabed map was initiated in 1995 to provide the necessary tool for searches of aircraft and boats lost at sea, to give sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This approach had already been pioneered a century earlier, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments along the French coasts, and then sediment maps of the continental shelves of Europe and North America. The current world sediment map was initiated from UNESCO's general map of the deep ocean floor and adapted using a unified sediment classification to represent all types of sediments: from beaches to the deep seabed, and from glacial deposits to tropical sediments. To allow good visualization and to suit the different applications, only the granularity of sediments is represented. Published seabed maps are reviewed; where they are of interest, the nature of the seabed is extracted from them, the sediment classification is transcribed, and the resulting map is integrated into the world map. Data also come from interpretations of Multibeam Echo Sounder (MES) imagery from large deep-ocean hydrographic surveys, which allow very high-quality mapping of areas that until then were represented as homogeneous. The third and principal source of data is the integration of regional maps produced specifically for this project. These regional maps are compiled using all the bathymetric and sedimentary data of a region, producing a regional synthesis map, with generalizations applied where the data are over-precise. Eighty-six regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map.
This work is ongoing and yields a new digital version every two years, incorporating newly produced maps. This article describes the choices made in terms of sediment classification, the scale of source data, and the zonation of data-quality variability. The map is the final step of a system comprising the Shom Sedimentary Database, enriched by more than one million point and surface data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000 and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress made in seabed characterization during the last decades. Thus, the arrival of new seafloor classification systems has improved the recent seabed maps, and compiling these new maps with those previously published allows a gradual enrichment of the world sedimentary map. Substantial work nevertheless remains on regions that are still based on data acquired more than half a century ago.

Keywords: marine sedimentology, seabed map, sediment classification, world ocean

Procedia PDF Downloads 230
9641 A Cloud-Based Federated Identity Management in Europe

Authors: Jesus Carretero, Mario Vasile, Guillermo Izquierdo, Javier Garcia-Blas

Abstract:

Currently, there is a so-called 'identity crisis' in cybersecurity, caused by the substantial security, privacy and usability shortcomings of existing identity management systems. Federated Identity Management (FIM) could be a solution to this crisis, as it facilitates the management of identity processes and policies among collaborating entities without enforcing global consistency, which is difficult to achieve where legacy ID systems exist. To address this problem, the Connecting Europe Facility (CEF) initiative proposed in 2014 a federated solution in anticipation of the adoption of Regulation (EU) N°910/2014, the so-called eIDAS Regulation. At present, a network of eIDAS Nodes is being deployed at the European level so that every citizen recognized by a member state is also recognized within the European trust network, enabling the consumption of services in other member states that until now were unavailable or whose provision was tedious. This is an ambitious approach, since it enables cross-border authentication of member state citizens without unifying the authentication method (eID Scheme) of the member state in question. However, this federation is currently managed by member states and initially applies only to citizens and public organizations. The goal of this paper is to present the results of a European project, named eID@Cloud, that focuses on the integration of eID into five cloud platforms belonging to authentication service providers of different EU member states, which act as Service Providers (SP) for private entities. We propose an initiative based on a private eID Scheme for both natural and legal persons. The methodology followed in the eID@Cloud project is that each Identity Provider (IdP) is subscribed to an eIDAS Node Connector, which requests authentication, and that Connector is in turn subscribed to an eIDAS Node Proxy Service, which issues authentication assertions.
To cope with high loads, load balancing is supported in the eIDAS Node. The eID@Cloud project is still ongoing, but some important outcomes are already available. First, we have deployed the federated identity nodes and tested them from the security and performance points of view. The pilot prototype has shown the feasibility of deploying this kind of system, ensuring good performance thanks to the replication of the eIDAS nodes and the load-balancing mechanism. Second, our solution avoids propagating identity data outside the native domain of the user or entity being identified, which avoids well-known cybersecurity problems such as network interception and man-in-the-middle attacks. Last but not least, the system allows any country or collectivity to be connected easily, permitting incremental development of the network and avoiding the difficult political negotiations required to agree on a single authentication format (which would be a major stopper).

Keywords: cybersecurity, identity federation, trust, user authentication

Procedia PDF Downloads 162
9640 Effect of Fuel Type on Design Parameters and Atomization Process for Pressure Swirl Atomizer and Dual Orifice Atomizer for High Bypass Turbofan Engine

Authors: Mohamed K. Khalil, Mohamed S. Ragab

Abstract:

Atomizers are used in many engineering applications, including diesel engines, petrol engines, spray combustion in furnaces, and gas turbine engines. They increase the specific surface area of the fuel, which achieves a high rate of fuel mixing and evaporation. In all combustion systems, reducing the mean drop size is a challenge with many advantages, since it leads to rapid and easier ignition, a higher volumetric heat release rate, a wider burning range, and lower exhaust concentrations of pollutant emissions. Pressure atomizers come in several design configurations, such as the swirl atomizer (simplex), dual orifice, spill return, plain orifice, duplex, and fan spray, of which simplex pressure atomizers are the most common. Among all atomizer types, pressure swirl atomizers form a special category owing to their quality of atomization, reliability of operation, simplicity of construction, and low energy expenditure. Their disadvantages are that they require very high injection pressure and have a low discharge coefficient, because the air core covers the majority of the atomizer orifice. The dual orifice atomizer was designed to overcome these problems. This paper proposes a detailed mathematical design procedure for both the pressure swirl (simplex) atomizer and the dual orifice atomizer, examines the effects of varying the fuel type, and makes a clear comparison between the two types. Using five fuels (JP-5, JA1, JP-4, diesel and biodiesel) as a case study, it reveals the effect of fuel type and fuel properties on atomizer design and spray characteristics, which in turn affect the combustion process parameters: Sauter Mean Diameter (SMD), spray cone angle and sheet thickness, with the discharge coefficient varying from 0.27 to 0.35 during takeoff for high-bypass turbofan engines.
The spray performance of the pressure swirl fuel injector was compared to that of the dual orifice fuel injector at the same differential pressure and discharge coefficient using Excel. The results were analyzed to form the final reliability assessment of fuel injectors in high-bypass turbofan engines. They show that the SMD of the dual orifice atomizer is larger than that of the pressure swirl atomizer, the film thickness (h) of the dual orifice atomizer is smaller, and the spray cone angle (α) of the pressure swirl atomizer is larger than that of the dual orifice atomizer.
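As an illustrative aside (not the paper's calculation): one widely cited empirical correlation for the SMD of a pressure swirl atomizer, attributed to Lefebvre, can be evaluated directly. The fuel and air properties below are assumed, roughly kerosene-like values, not the paper's data.

```python
# Empirical SMD correlation (Lefebvre) for a pressure-swirl atomizer,
# SI units throughout. All property values below are illustrative
# assumptions, not the paper's exact data.
def smd_pressure_swirl(sigma, mu_l, mdot_l, dp_l, rho_a):
    """SMD = 2.25 * sigma^0.25 * mu^0.25 * mdot^0.25 * dP^-0.5 * rho_air^-0.25."""
    return 2.25 * sigma**0.25 * mu_l**0.25 * mdot_l**0.25 * dp_l**-0.5 * rho_a**-0.25

# Assumed kerosene-like inputs: surface tension (N/m), liquid viscosity (Pa.s),
# fuel flow rate (kg/s), injection pressure drop (Pa), combustor air density (kg/m^3).
smd = smd_pressure_swirl(sigma=0.025, mu_l=0.0012, mdot_l=0.02, dp_l=1.0e6, rho_a=10.0)
print(round(smd * 1e6, 1), "micrometres")  # tens of micrometres is typical
```

Raising the pressure drop dp_l lowers the SMD (finer spray), consistent with the qualitative trends discussed in the abstract.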

Keywords: gas turbine engines, atomization process, Sauter mean diameter, JP-5

Procedia PDF Downloads 161
9639 Synthesis of Dispersion-Compensating Triangular Lattice Index-Guiding Photonic Crystal Fibers Using the Directed Tabu Search Method

Authors: F. Karim

Abstract:

In this paper, triangular lattice index-guiding photonic crystal fibers (PCFs) are synthesized to compensate for the chromatic dispersion of a single-mode fiber (SMF-28) over an 80 km optical link operating at 1.55 µm, using the directed tabu search algorithm. The hole-to-hole distance, circular air-hole diameter, solid-core diameter, ring number and PCF length parameters are optimized for this purpose. Three synthesized PCFs with different physical parameters are compared in terms of their objective function values, residual dispersions and compensation ratios.
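As a sketch of the optimization idea (not the authors' implementation), a tabu-style search over two discretized design parameters might look like the following. The toy quadratic objective stands in for the real residual-dispersion computation, and the target values (2.0, 1.2) are invented for illustration.

```python
import itertools

# Toy objective standing in for residual dispersion: minimized at the
# hypothetical design point pitch = 2.0 um, hole diameter = 1.2 um.
def objective(pitch_um, hole_um):
    return (pitch_um - 2.0) ** 2 + (hole_um - 1.2) ** 2

def tabu_search(start, steps=200, tabu_len=10):
    """Simple tabu search over a 2-D grid of design parameters (0.1 um moves)."""
    current = best = start
    tabu = []  # recently visited solutions are forbidden moves
    for _ in range(steps):
        neighbors = [(round(current[0] + dp, 1), round(current[1] + dh, 1))
                     for dp, dh in itertools.product((-0.1, 0.0, 0.1), repeat=2)
                     if (dp, dh) != (0.0, 0.0)]
        candidates = [n for n in neighbors if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=lambda n: objective(*n))  # best admissible move
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)  # expire the oldest tabu entry
        if objective(*current) < objective(*best):
            best = current
    return best

best = tabu_search((3.0, 0.5))
print(best)  # -> (2.0, 1.2)
```

The tabu list lets the search escape local basins by forbidding immediate returns to recently visited designs, which is the essential mechanism the directed tabu search builds on.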

Keywords: triangular lattice index-guiding photonic crystal fiber, dispersion compensation, directed tabu search, synthesis

Procedia PDF Downloads 424
9638 Marketing in Post-Pandemic Environment

Authors: Mohammad Mehdizadeh

Abstract:

COVID-19 forced marketers to change their marketing strategies, focusing less on reactive approaches and more on proactive ones, primarily social media. The next few years will be dominated by employee engagement and customer experience, leading businesses to focus more on long-term customer relationships. Marketers must employ a wide range of strategies in an ever-evolving online environment that is filled with both opportunities and dangers: an intimidating platform to use, yet one that constantly offers new and exciting opportunities for businesses and organizations as it evolves. In this article, we examine the effect of social networks on marketing in post-pandemic environments. A descriptive survey is used as the research method. The results show that social networks have a positive and significant impact on marketing in a post-pandemic environment. Among the social networks studied, Instagram, Facebook, and Twitter have the most positive effect on marketing advancement.

Keywords: COVID-19, customers, marketing, post-pandemic

Procedia PDF Downloads 82
9637 Impact of Financial Factors on Total Factor Productivity: Evidence from Indian Manufacturing Sector

Authors: Lopamudra D. Satpathy, Bani Chatterjee, Jitendra Mahakud

Abstract:

Rapid economic growth in terms of output and investment requires substantial growth in the Total Factor Productivity (TFP) of firms, which is an indicator of an economy's technological change. The strong empirical relationship between financial sector development and economic growth clearly indicates that firms' financing decisions affect their levels of output via their investment decisions, establishing a linkage between financial factors and firms' productivity growth. To achieve smooth and continuous economic growth over time, it is imperative to understand the financial channel, which serves as one of the vital channels. The theoretical argument behind this linkage is that when internal financial capital is insufficient for investment, firms rely upon external sources of finance. But due to frictions and information asymmetry, it is costlier for firms to raise external capital from the market, which in turn affects their investment sentiment and productivity. This financial position puts heavy pressure on their productive activities. Against this theoretical background, the present study analyzes the role of both external and internal financial factors (leverage, cash flow and liquidity) in determining the total factor productivity of firms in the manufacturing industry and its sub-industries, with a set of firm-specific control variables (size, age and disembodied technological intensity). Total factor productivity of the Indian manufacturing industry and its sub-industries is estimated using a semi-parametric approach, the Levinsohn-Petrin method. The relationship between financial factors and productivity growth is then established for 652 firms using a dynamic panel GMM method covering the period from 1997-98 to 2012-13.
The econometric analyses show that internal cash flow has a positive and significant impact on the productivity of the overall manufacturing sector. The other financial factors, leverage and liquidity, also play a significant role in determining the total factor productivity of the Indian manufacturing sector. The significant role of internal cash flow in determining firm-level productivity suggests that external finance is not easily available to Indian companies. Further, the negative impact of leverage on productivity could be due to the less developed bond market in India. These findings imply that policy makers should undertake reforms to develop the external bond market, through which financially constrained companies could raise capital cost-effectively and channel their investments into highly productive activities, helping to accelerate economic growth.
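To make the productivity notion concrete, a minimal sketch of TFP as a Cobb-Douglas residual is shown below. The elasticities and firm data are hypothetical, and the paper's Levinsohn-Petrin estimator additionally corrects for the simultaneity between input choices and productivity shocks, which this naive residual does not.

```python
import math

# TFP as the Cobb-Douglas residual: ln(TFP) = ln(Y) - a*ln(K) - b*ln(L).
# The elasticities and the firm-year data below are illustrative only.
A_K, A_L = 0.35, 0.65  # assumed output elasticities of capital and labour

def log_tfp(output, capital, labour):
    return math.log(output) - A_K * math.log(capital) - A_L * math.log(labour)

# Hypothetical firm-year observations: (output, capital, labour).
firms = [(120.0, 80.0, 50.0), (95.0, 60.0, 45.0), (200.0, 150.0, 70.0)]
for y, k, l in firms:
    print(round(log_tfp(y, k, l), 3))
```

A firm with a higher residual produces more output than its measured inputs predict, which is exactly the quantity the financial factors are regressed against.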

Keywords: dynamic panel, financial factors, manufacturing sector, total factor productivity

Procedia PDF Downloads 328
9636 Phage Display-Derived Vaccine Candidates for Control of Bovine Anaplasmosis

Authors: Itzel Amaro-Estrada, Eduardo Vergara-Rivera, Virginia Juarez-Flores, Mayra Cobaxin-Cardenas, Rosa Estela Quiroz, Jesus F. Preciado, Sergio Rodriguez-Camarillo

Abstract:

Bovine anaplasmosis is an infectious, tick-borne disease caused mainly by Anaplasma marginale; typical signs include anemia, fever, abortion, weight loss, decreased milk production, jaundice, and potentially death. Sick cattle can recover when antibiotics are administered; however, they usually remain carriers for life, posing a risk of infection for susceptible cattle. Anaplasma marginale is an obligate intracellular Gram-negative bacterium whose genetic composition is highly diverse among geographical isolates. There are currently no fully effective vaccines against bovine anaplasmosis, so economic losses due to the disease persist. Vaccine formulation is a hard task for several pathogens, including Anaplasma marginale, but peptide-based vaccines are a promising way to induce specific responses. Phage-displayed peptide libraries have proved to be one of the most powerful technologies for identifying specific ligands. Screening these peptide libraries is also a tool for studying interactions between proteins or peptides. It has allowed the identification of ligands recognized by polyclonal antiserums and has been successful in identifying relevant epitopes in chronic diseases and toxicological conditions. The protective immune response to bovine anaplasmosis includes high levels of immunoglobulin subclass G2 (IgG2) but not subclass IgG1. Therefore, IgG2 from the serum of protected cattle can be used to identify ligands, which can become part of an immunogen for cattle. In this work, the phage display random peptide library Ph.D.™-12 was incubated with IgG2 or with blood sera of bovines immunized against A. marginale as targets. After three rounds of biopanning, several candidates were selected for additional analysis. Subsequently, their reactivity with sera immunized against A. marginale, as well as with sera positive and negative for A. marginale, was evaluated by immunoassays.
A collection of recognized peptides tested by ELISA was generated. More than three hundred phage-peptides were evaluated separately against the molecules used during panning. At least ten different peptide sequences were determined from their nucleotide composition. In this approach, three phage-peptides were selected for their binding and affinity properties. For the development of vaccines or diagnostic reagents, it is important to evaluate the immunogenic and antigenic properties of the peptides. The in vitro and in vivo immunogenic behavior of the peptides, both as synthetic peptides and as phage-peptides, will be assayed to determine their vaccine potential. Acknowledgment: This work was supported by grant SEP-CONACYT 252577 given to I. Amaro-Estrada.

Keywords: bovine anaplasmosis, peptides, phage display, veterinary vaccines

Procedia PDF Downloads 136
9635 Analysis of Expression Data Using Unsupervised Techniques

Authors: M. A. I Perera, C. R. Wijesinghe, A. R. Weerasinghe

Abstract:

This study was conducted to review and identify the unsupervised techniques that can be employed to analyze gene expression data in order to identify better subtypes of tumors. Identifying cancer subtypes helps improve the efficacy and reduce the toxicity of treatments by providing clues for finding targeted therapeutics. The process of gene expression data analysis is described in three steps: preprocessing, clustering, and cluster validation. Feature selection is important since genomic data are high dimensional, with a large number of features compared to samples. Hierarchical clustering and K-means are often used in the analysis of gene expression data, and several cluster validation techniques are used to validate the resulting clusters. Heatmaps are an effective external validation method that allows comparison of the identified classes with clinical variables and visual analysis of the classes.
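To illustrate the clustering step, here is a minimal pure-Python K-means sketch on toy two-gene expression profiles. Real expression analyses involve many genes, feature selection, and library implementations; the data and initialization below are invented for the sketch.

```python
# Minimal K-means sketch on toy two-gene expression profiles.
def kmeans(points, k, iters=50):
    centers = list(points[:k])  # deterministic initialization for the sketch
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assignment step: attach each profile to its nearest center
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # update step: move each center to the mean of its cluster
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers, clusters

# Two well-separated toy "tumor subtypes".
data = [(1.0, 1.1), (0.9, 1.0), (1.2, 0.8), (8.0, 8.1), (7.9, 8.3), (8.2, 7.9)]
centers, clusters = kmeans(data, 2)
print(sorted(len(c) for c in clusters))  # -> [3, 3]
```

The same cluster assignments would then be cross-tabulated against clinical variables (e.g., in a heatmap) for external validation.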

Keywords: cancer subtypes, gene expression data analysis, clustering, cluster validation

Procedia PDF Downloads 145
9634 Particle Filter Implementation of a Non-Linear Dynamic Fall Model

Authors: T. Kobayashi, K. Shiba, T. Kaburagi, Y. Kurihara

Abstract:

For the elderly living alone, falls can be a serious problem in daily life. Some elderly people are unable to stand up without the assistance of a caregiver, and they may become unconscious after a fall, which can lead to serious aftereffects such as hypothermia, dehydration, and sometimes even death. We treat the subject as an inverted pendulum and model its angle from the equilibrium position and its angular velocity. As the model is non-linear, we implement the filtering with a particle filter, which can estimate the true states of a non-linear model. To evaluate the accuracy of the particle filter estimates, we calculate the root mean square error (RMSE) between the estimated angle/angular velocity and the true values generated by simulation. The experiments achieve a best-case RMSE of 0.0141 rad for the angle and 0.1311 rad/s for the angular velocity.
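A bootstrap particle filter of the kind described can be sketched as follows. The pendulum constants, noise levels and angle-only observation model are illustrative assumptions, not the paper's Doppler-sensor model.

```python
import math
import random

# Bootstrap particle filter for a falling inverted pendulum (sketch).
G_OVER_L, DT = 9.81, 0.01  # assumed g/L for a 1 m pendulum; 10 ms time step
rng = random.Random(42)

def step(theta, omega):
    """One Euler step of the non-linear pendulum dynamics."""
    return theta + DT * omega, omega + DT * G_OVER_L * math.sin(theta)

# Simulate the "true" fall from a small initial lean, with noisy angle readings.
truth, obs = [], []
theta, omega = 0.05, 0.0
for _ in range(100):
    theta, omega = step(theta, omega)
    truth.append(theta)
    obs.append(theta + rng.gauss(0.0, 0.02))

N = 500
particles = [(rng.gauss(0.05, 0.02), rng.gauss(0.0, 0.02)) for _ in range(N)]
estimates = []
for y in obs:
    # propagate each particle through the dynamics with process noise
    particles = [step(t + rng.gauss(0.0, 0.005), w + rng.gauss(0.0, 0.005))
                 for t, w in particles]
    # weight by the Gaussian likelihood of the observed angle
    weights = [math.exp(-(y - t) ** 2 / (2 * 0.02 ** 2)) for t, _ in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    estimates.append(sum(w * t for w, (t, _) in zip(weights, particles)))
    particles = rng.choices(particles, weights=weights, k=N)  # resample

rmse = math.sqrt(sum((e - t) ** 2 for e, t in zip(estimates, truth)) / len(truth))
print("angle RMSE (rad):", round(rmse, 4))
```

The propagate/weight/resample loop is the generic bootstrap filter; the paper's contribution lies in the fall model and the microwave Doppler observation model plugged into it.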

Keywords: fall, microwave Doppler sensor, non-linear dynamics model, particle filter

Procedia PDF Downloads 207
9633 Microwave Assisted Growth of Varied Phases and Morphologies of Vanadium Oxides Nanostructures: Structural and Optoelectronic Properties

Authors: Issam Derkaoui, Mohammed Khenfouch, Bakang M. Mothudi, Malik Maaza, Izeddine Zorkani, Anouar Jorio

Abstract:

Transition metal oxide nanoparticles with different morphologies have recently attracted a lot of attention owing to their distinctive geometries and promising electrical properties for various applications. In this paper, we discuss the effects of time and annealing on the structural and electrical properties of vanadium oxide nanoparticles (VO-NPs) prepared by a microwave method. To this end, transmission electron microscopy (TEM), X-ray diffraction (XRD), Raman spectroscopy, ultraviolet-visible absorbance spectra (UV-Vis) and electrical conductivity were investigated. The annealing state and the time are two crucial parameters for improving the optoelectronic properties. The use of these nanostructures is a promising route for the development of technological applications, especially energy storage devices.

Keywords: vanadium oxide, microwave, electrical conductivity, optoelectronic properties

Procedia PDF Downloads 189
9632 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability

Authors: Chin-Chia Jane

Abstract:

In a transportation network, travel time refers to the transmission time from source node to destination node, whereas reliability refers to the probability of a successful connection from source node to destination node. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight, which is the travel time per unit of commodity going through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the demand requirement to the destination while the total transmission time stays under the travel time limitation. This work is pioneering: whereas existing literature evaluates travel time reliability via a single optimization path, the proposed QoS focuses on the performance of the whole network system. To compute the QoS of transportation networks, we first transform the extended network model into an equivalent min-cost max-flow network model. In the transformed network, each arc has a new travel time weight of value 0. Each intermediate node is replaced by two nodes u and v and an arc directed from u to v; the newly generated nodes u and v are perfect nodes, and the new direct arc has three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left.
The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, the QoS can be obtained directly by summing their probabilities. Computational experiments are conducted on a benchmark network with 11 nodes and 21 arcs, using five travel time limitations and five demand requirements to compute the QoS value. For comparison, we also test the exhaustive complete enumeration method. Computational results reveal that the proposed algorithm is much more efficient than complete enumeration. In summary, a transportation network is analyzed by an extended flow network model in which each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network integrates customer demands, travel time, and the probability of connection, and we present a decomposition algorithm to compute it efficiently. The experiments on the prototype network show that the proposed algorithm is superior to existing complete enumeration methods.
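For a feel of what the QoS measure computes, here is a sketch of the complete-enumeration baseline (the method the decomposition algorithm outperforms) on a toy two-path network. All capacities, time weights and operation probabilities are invented for illustration, and the greedy cheapest-time routing is a simplification of the min-cost max-flow subproblem.

```python
from itertools import product

# Toy network: source -> u -> sink and source -> v -> sink.
# Each path: (capacity, travel time per unit through its node, node reliability).
PATHS = {
    "u": (5.0, 2.0, 0.9),
    "v": (5.0, 3.0, 0.8),
}
DEMAND, TIME_LIMIT = 8.0, 31.0

def qos():
    total = 0.0
    names = list(PATHS)
    for state in product((0, 1), repeat=len(PATHS)):  # all component state vectors
        prob = 1.0
        for bit, n in zip(state, names):
            p = PATHS[n][2]
            prob *= p if bit else (1.0 - p)
        # route the demand greedily over operating paths, cheapest time first
        up = sorted((n for bit, n in zip(state, names) if bit),
                    key=lambda n: PATHS[n][1])
        remaining, time = DEMAND, 0.0
        for n in up:
            cap, t, _ = PATHS[n]
            sent = min(cap, remaining)
            time += sent * t
            remaining -= sent
        if remaining <= 0 and time <= TIME_LIMIT:
            total += prob  # reliable state: demand met within the time limit
    return total

print(round(qos(), 4))  # -> 0.72 (only the both-paths-up state is reliable)
```

Enumeration visits all 2^n state vectors, which is exactly why the disjoint-subset decomposition in the paper scales so much better.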

Keywords: quality of service, reliability, transportation network, travel time

Procedia PDF Downloads 217
9631 Development of Construction Cost Optimization System Using Genetic Algorithm Method

Authors: Hyeon-Seung Kim, Young-Hwan Kim, Sang-Mi Park, Min-Seo Kim, Jong-Myeung Shin, Leen-Seok Kang

Abstract:

The project budget fixed at the planning stage may later be changed by an insufficient government budget or by design changes, especially for projects performed over a long period of time. If the actual construction budget is insufficient compared with the planned budget, the construction schedule should also be changed to match the changed budget. In that case, most project managers modify the planned construction schedule with a heuristic approach, without reasonable consideration of work priority. This study suggests an optimized methodology for modifying the construction schedule according to the changed budget. A genetic algorithm is used to optimize the modified construction schedule within the changed budget, and a simulation system of the construction cost histogram in accordance with the construction schedule was developed in a BIM (Building Information Modeling) environment.
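A toy genetic algorithm in the same spirit, choosing which planned activities to keep under a reduced budget while maximizing priority, might look like this. The activity data, operators and parameters are illustrative, not the paper's 5D BIM formulation.

```python
import random

# Toy GA: pick which planned activities to keep when the budget is cut,
# maximizing total work priority under the new budget (illustrative data).
ACTIVITIES = [(30, 8), (25, 6), (20, 7), (15, 3), (10, 4), (5, 2)]  # (cost, priority)
BUDGET = 60
rng = random.Random(1)

def fitness(bits):
    cost = sum(c for b, (c, _) in zip(bits, ACTIVITIES) if b)
    prio = sum(p for b, (_, p) in zip(bits, ACTIVITIES) if b)
    return prio if cost <= BUDGET else 0  # infeasible plans score zero

def evolve(pop_size=40, gens=80, mut=0.15):
    pop = [[rng.randint(0, 1) for _ in ACTIVITIES] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # elitist selection of the top half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(ACTIVITIES))  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if rng.random() < mut else g for g in child]  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("best plan priority:", fitness(best))
```

The paper's fitness would instead score candidate schedules by how well their simulated cost histogram fits the changed budget over time; the encode/select/crossover/mutate loop is the same.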

Keywords: 5D, BIM, GA, cost optimization

Procedia PDF Downloads 584
9630 Using the Theory of Reasoned Action and Parental Mediation Theory to Examine Cyberbullying Perpetration among Children and Adolescents

Authors: Shirley S. Ho

Abstract:

The advancement and development of social media have inadvertently brought about a new form of bullying, cyberbullying, that transcends the physical boundaries of space. Although extensive research has been conducted on cyberbullying, most studies have taken an overwhelmingly empirical angle; theories guiding cyberbullying research are few. Furthermore, very few studies have explored the association between parental mediation and cyberbullying, with the majority of existing studies focusing on cyberbullying victimization rather than perpetration. This study therefore investigates cyberbullying perpetration from a theoretical angle, focusing on the Theory of Reasoned Action and Parental Mediation Theory. More specifically, it examines the direct effects of attitude, subjective norms, descriptive norms, injunctive norms, active mediation and restrictive mediation on cyberbullying perpetration on social media among children and adolescents in Singapore, as well as the moderating role of age in the relationship between parental mediation and cyberbullying perpetration. A self-administered paper-and-pencil nationally representative survey was conducted, using multi-stage cluster random sampling to ensure that schools from all four regions of Singapore (North, South, East, and West) were equally represented. In all, 607 upper primary school children (Primary 4 to 6 students) and 782 secondary school adolescents participated, with an overall student response rate of 69.6%. An ordinary least squares hierarchical regression analysis was conducted to test the hypotheses and research questions. The results revealed that attitude and subjective norms were positively associated with cyberbullying perpetration on social media.
Descriptive norms and injunctive norms were not significantly associated with cyberbullying perpetration. The results also showed that both parental mediation strategies were negatively associated with cyberbullying perpetration on social media, and that age significantly moderated the relationship between both strategies and perpetration. The negative relationship between active mediation and cyberbullying perpetration was greater for children than for adolescents. Children who received high restrictive parental mediation were less likely to perform cyberbullying behaviors, while adolescents who received high restrictive parental mediation were more likely to engage in cyberbullying perpetration. The study reveals that parents should apply active and restrictive mediation differently for children and adolescents when trying to prevent cyberbullying perpetration. Active parental mediation was more effective at reducing cyberbullying perpetration among children than among adolescents. Younger children responded more positively to restrictive parental mediation strategies, but among adolescents, overly restrictive control increased cyberbullying perpetration, and adolescents exhibited less cyberbullying behavior under low restrictive strategies. The findings highlight that the Theory of Reasoned Action and Parental Mediation Theory are promising frameworks for examining cyberbullying perpetration, and the differing effectiveness of parental mediation strategies by age carries practical implications that may benefit educators and parents addressing their children's online risk.

Keywords: cyberbullying perpetration, theory of reasoned action, parental mediation, social media, Singapore

Procedia PDF Downloads 247
9629 [Keynote Talk]: Evidence Fusion in Decision Making

Authors: Mohammad Abdullah-Al-Wadud

Abstract:

In the current era of automation and artificial intelligence, systems increasingly depend on the decision-making capabilities of machines. Such systems and applications range from simple classifiers to sophisticated surveillance systems based on traditional sensors and related equipment, which are becoming more common in the internet of things (IoT) paradigm. However, the available data for such problems are usually imprecise and incomplete, which leads to uncertainty in decisions made by traditional probability-based classifiers. This calls for a robust fusion framework to combine the available information sources with some degree of certainty. The theory of evidence provides such a method for combining evidence from different, possibly unreliable, sources or observers. This talk will address the employment of the Dempster-Shafer theory of evidence in some practical applications.
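As an illustration of the combination rule the talk builds on, here is a minimal sketch of Dempster's rule for two mass functions over a small frame of discernment; the sensor masses below are invented for the example:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) via Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass assigned to disjoint hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict; sources cannot be combined")
    k = 1.0 - conflict  # renormalization constant
    return {s: v / k for s, v in combined.items()}

# two unreliable sensors over the frame {'intruder', 'animal'}
A = frozenset({'intruder'})
B = frozenset({'animal'})
AB = frozenset({'intruder', 'animal'})
m1 = {A: 0.6, AB: 0.4}          # sensor 1: 0.6 intruder, 0.4 undecided
m2 = {A: 0.5, B: 0.2, AB: 0.3}  # sensor 2
m = dempster_combine(m1, m2)
```

Note how mass committed to the whole frame (AB) models an observer's ignorance, which plain probabilities cannot express.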

Keywords: decision making, Dempster-Shafer theory, evidence fusion, incomplete data, uncertainty

Procedia PDF Downloads 419
9628 The Impact of Inconclusive Results of Thin Layer Chromatography for Marijuana Analysis and Its Implications on Forensic Laboratory Backlog

Authors: Ana Flavia Belchior De Andrade

Abstract:

Forensic laboratories all over the world face a great challenge to overcome waiting times and backlogs in many different areas. Many aspects contribute to this situation, such as increased drug complexity, growth in the number of exams requested, and funding cuts that limit laboratories' hiring capacity. Together, these facts pose an essential challenge for forensic chemistry laboratories: keeping both quality and response time within acceptable limits. In this paper we analyze how the backlog affects test results and, ultimately, the whole judicial system. Data from marijuana samples seized by the Federal District Civil Police in Brazil between 2013 and 2017 were tabulated, and the results analyzed and discussed. Over the last five years, the number of requested exams increased from 822 in February 2013 to 1,358 in March 2018, an increase of about 65%, or roughly 10% per year. Meanwhile, our data show that the number of performed exams did not grow at the same rate: output has plateaued because, with the current technology and analysis routine, the laboratory is running at full capacity. Marijuana detection is the most frequently requested exam, representing almost 70% of all exams. In this study, data from 7,110 (seven thousand one hundred and ten) marijuana samples were analyzed. Regarding waiting time, most exams (77%) were performed within 60 days of receipt, although some samples (0.65%) waited up to 30 months before being examined. When a marijuana exam is delayed, we observe a rise in inconclusive results from thin-layer chromatography (TLC). Our data show that if a marijuana sample is stored for more than 18 months, the inconclusive rate rises from 2% to 7%, and when storage exceeds 30 months, it increases to 13%. 
This is probably because Cannabis plants and preparations undergo oxidation during storage, resulting in a decrease in the content of Δ9-tetrahydrocannabinol (Δ9-THC). An inconclusive result triggers additional procedures (e.g., GC/MS analysis) that require at least two more working hours from our analysts and delay the report by at least one day. These extra procedures considerably increase the running cost of a forensic drug laboratory, especially when the backlog is significant, as inconclusive results tend to increase with waiting time. Financial aspects are not the only concern regarding backlogged cases; there are also social issues, as legal procedures can be delayed and prosecution of serious crimes can be unsuccessful. Delays may slow investigations and endanger public safety by giving criminals more time on the street to re-offend. This situation also implies a considerable cost to society: at some point, if an exam takes too long to be performed, an inconclusive result can turn into a negative one, and an offender can be acquitted on the basis of flawed expert evidence.
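The oxidative loss of Δ9-THC during storage is often approximated as first-order decay. The sketch below illustrates the idea with a purely hypothetical rate constant; the actual kinetics depend on storage conditions and are not reported in this study:

```python
import math

def thc_remaining(c0, k_per_month, months):
    """First-order decay: C(t) = C0 * exp(-k * t)."""
    return c0 * math.exp(-k_per_month * months)

# hypothetical values chosen only for illustration
c0 = 10.0   # initial THC content, % by weight
k = 0.05    # decay constant, 1/month (assumed, not measured here)
after_18 = thc_remaining(c0, k, 18)
after_30 = thc_remaining(c0, k, 30)
```

Under such a model, the longer a sample sits in the backlog, the closer its THC content falls toward the detection limit of TLC, which is consistent with the rising inconclusive rates reported above.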

Keywords: backlog, forensic laboratory, quality management, accreditation

Procedia PDF Downloads 119
9627 An Optimization Model for Maximum Clique Problem Based on Semidefinite Programming

Authors: Derkaoui Orkia, Lehireche Ahmed

Abstract:

This article explores the potential of a powerful optimization technique, namely semidefinite programming, for solving NP-hard problems. This approach provides tight relaxations of combinatorial and quadratic problems. In this work, we solve the maximum clique problem using such a relaxation. The clique problem is the computational problem of finding cliques in a graph and is widely acknowledged for its many applications to real-world problems. The numerical results show that a tight bound on the maximum clique can be computed in polynomial time using an algorithm based on semidefinite programming. We implement a primal-dual interior-point algorithm to solve the semidefinite relaxation of the problem, which can be solved in polynomial time.
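Solving the semidefinite relaxation itself requires a conic solver, but on small instances the clique number can be verified by brute force. The sketch below is that exponential baseline (useful for validating relaxation bounds on tiny graphs), not the authors' primal-dual interior-point method:

```python
from itertools import combinations

def clique_number(vertices, edges):
    """Brute-force maximum clique size; exponential time, tiny graphs only."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # try subset sizes from largest to smallest; first clique found is maximum
    for size in range(len(adj), 0, -1):
        for subset in combinations(adj, size):
            if all(v in adj[u] for u, v in combinations(subset, 2)):
                return size
    return 0

# 5-vertex graph: a triangle {0, 1, 2} plus a pendant path 2-3-4
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
omega = clique_number(range(5), edges)  # clique number is 3
```

An SDP bound such as the Lovász theta function always satisfies omega ≤ theta, so a baseline like this is handy for checking implementations.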

Keywords: semidefinite programming, maximum clique problem, primal-dual interior point method, relaxation

Procedia PDF Downloads 217
9626 Profit Comparison of Fisheries in East Aceh Regency, Aceh Province

Authors: Mawardati Mawardati

Abstract:

This research was carried out on traditional milkfish and shrimp pond cultivation from March to May 2018 in East Aceh District. The study aims to analyze the difference in profit between traditional milkfish cultivation and shrimp farming in East Aceh District, Aceh Province. The analytical methods used are profit analysis and an independent-samples t-test. The results showed a significant difference in profit between milkfish farming and shrimp farming in East Aceh District, Aceh Province: based on the analysis, the average profit from shrimp farming is higher than that from milkfish farming. Demand for shrimp, particularly for export, exceeds supply; thus the price of shrimp remains far higher than the price of milkfish.
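The independent-samples t-test used here can be reproduced as follows; the profit figures are synthetic placeholders, not the survey data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# hypothetical per-farm profits (arbitrary currency units), for illustration only
shrimp = rng.normal(loc=45.0, scale=8.0, size=30)
milkfish = rng.normal(loc=30.0, scale=6.0, size=30)

# Welch's variant does not assume equal variances between the two groups
t_stat, p_value = stats.ttest_ind(shrimp, milkfish, equal_var=False)
```

A p-value below the chosen significance level (commonly 0.05) supports the conclusion that the two mean profits differ.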

Keywords: comparative, profit, shrimp, milkfish

Procedia PDF Downloads 150
9625 Gendered Labelling and Its Effects on Vhavenda Women

Authors: Matodzi Rapalalani

Abstract:

Consistent with Spencer's (2018) classic labelling theory, labels influence the perceptions of both the individual and other members of society; that is, once labelled, the individual acts in ways that confirm the stereotypes attached to the label. This study therefore investigates the understanding of gendered labelling and its effects on Vhavenda women. Gender socialization and patriarchy have been viewed as the core causes of the problem. The literature covers the development of gendered labelling, its forms, and other aspects. A qualitative method of data collection was used in this study, and semi-structured interviews were conducted with a total of six participants, a small sample that allowed for in-depth analysis. Thematic analysis was used to interpret and analyze the data. Ethical issues such as confidentiality, informed consent, and voluntary participation were considered. Through the analysis and interpretation of the data, causes such as lack of Christian values, insecurities, and lust were identified, as well as effects such as frustration, increased divorce, and low self-esteem.

Keywords: gender, naming, Venda, women, African culture

Procedia PDF Downloads 88
9624 Explaining E-Learning Systems Usage in Higher Education Institutions: UTAUT Model

Authors: Muneer Abbad

Abstract:

This research explains e-learning usage at a university in Jordan. The unified theory of acceptance and use of technology (UTAUT) model has been used as the base model to explain this usage. UTAUT is a model of individual acceptance compiled mainly from different models of technology acceptance. This research is the initial part of a fuller account of users' acceptance of e-learning systems that will use the structural equation modelling (SEM) method, based on the UTAUT model. In this part, data have been collected and prepared for further analysis, and the main factors of the UTAUT model have been tested using exploratory factor analysis (EFA). The second phase will apply confirmatory factor analysis (CFA) and SEM to explain users' acceptance of e-learning systems.
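The EFA step can be sketched with scikit-learn's FactorAnalysis on synthetic survey data; the item structure and latent constructs below are invented for illustration, not the study's instrument:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n = 500
# two hypothetical latent constructs (e.g., performance and effort expectancy)
f1 = rng.normal(size=n)
f2 = rng.normal(size=n)
# six observed survey items, three loading on each factor, plus measurement noise
items = np.column_stack([
    0.8 * f1, 0.7 * f1, 0.9 * f1,
    0.8 * f2, 0.7 * f2, 0.9 * f2,
]) + 0.3 * rng.normal(size=(n, 6))

fa = FactorAnalysis(n_components=2).fit(items)
loadings = fa.components_.T  # shape: (n_items, n_factors)
```

In practice, researchers inspect the loading matrix (often after rotation) to check that items cluster onto the factors the theory predicts.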

Keywords: e-learning, moodle, adoption, Unified Theory of Acceptance and Use of Technology (UTAUT)

Procedia PDF Downloads 400
9623 Corrosion Analysis of Brazed Copper-Based Conductors in Particle Accelerator Water Cooling Circuits

Authors: A. T. Perez Fontenla, S. Sgobba, A. Bartkowska, Y. Askar, M. Dalemir Celuch, A. Newborough, M. Karppinen, H. Haalien, S. Deleval, S. Larcher, C. Charvet, L. Bruno, R. Trant

Abstract:

The present study investigates the corrosion behavior of copper (Cu) based conductors, predominantly brazed with Sil-Fos (a self-fluxing copper-based filler with silver and phosphorus), within various demineralized-water cooling circuits of particle accelerator components at CERN. The study covers a range of sample service times, from a few months to fifty years, and includes various accelerator components such as quadrupoles, dipoles, and bending magnets. The investigation comprises the established sample extraction procedure, the examination methodology including non-destructive testing, evaluation of the corrosion phenomena, identification of commonalities across the studied components, and analysis of the environmental influence. The systematic analysis included computed microtomography (CT) of the joints, which revealed distributed defects across all brazing interfaces. Some defects appeared to result from areas not wetted by the filler during the brazing operation, displaying round shapes, while others exhibited irregular contours and radial alignment, indicative of a network or interconnection. Subsequent dry cutting gave access to the conductors' inner surfaces and the brazed joints for further inspection by light and scanning electron microscopy (SEM) and chemical analysis via energy-dispersive X-ray spectroscopy (EDS). Analysis of the brazing away from affected areas identified the expected phases for a Sil-Fos alloy. In contrast, the affected locations displayed micrometric cavities propagating into the material, along with selective corrosion of the bulk Cu initiated at the conductor-braze interface. Analysis of the corrosion products highlighted the consistent presence of sulfur (up to 6% by weight), whose origin and role in corrosion initiation and propagation are being investigated further. 
This study is of prime importance for understanding the underlying factors contributing to recently identified water leaks and for evaluating the extent of the issue. Its primary objective is to provide essential insights for the repair of affected brazed joints where accessibility permits. Moreover, the study seeks to improve design and manufacturing practices for future components, ultimately enhancing the overall reliability and performance of magnet systems within CERN accelerator facilities.

Keywords: accelerator facilities, brazed copper conductors, demineralized water, magnets

Procedia PDF Downloads 44
9622 Construction Sustainability Improvement through Using Recycled Aggregates in Concrete Production

Authors: Zhiqiang Zhu, Khalegh Barati, Xuesong Shen

Abstract:

Due to the energy consumption caused by the construction industry, the public is paying more and more attention to the sustainability of buildings. With the advancement of research on recycled aggregates, it has become possible to replace natural aggregates with recycled aggregates and thereby reduce the energy consumption of materials during construction. The purpose of this paper is to quantitatively compare the emergy consumption of natural aggregate concrete (NAC) and recycled aggregate concrete (RAC). To do so, the emergy analysis method is adopted, since it can consistently account for different forms of energy and material flows. The main object of analysis is the direct and indirect emergy consumption of the stages in concrete production; indirect emergy therefore covers the production machinery and transportation vehicles as well. Finally, the emergy values required to produce the two concrete types are compared to determine whether RAC can reduce emergy consumption.
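Emergy accounting sums each input flow multiplied by its unit emergy value (UEV, in solar emjoules per unit of flow). The sketch below uses purely hypothetical UEVs and mix proportions, not the paper's inventory data:

```python
# hypothetical unit emergy values (seJ per kg), for illustration only
UEV = {
    'cement': 3.0e12,
    'natural_aggregate': 1.0e12,
    'recycled_aggregate': 0.4e12,  # assumed lower: avoids quarrying, reuses waste
    'water': 2.0e9,
}

def total_emergy(inputs_kg):
    """Sum emergy contributions: mass flow (kg) x unit emergy value (seJ/kg)."""
    return sum(mass * UEV[name] for name, mass in inputs_kg.items())

# illustrative mixes, kg per m3 of concrete
nac = {'cement': 350, 'natural_aggregate': 1900, 'water': 180}
rac = {'cement': 350, 'recycled_aggregate': 1900, 'water': 180}

em_nac = total_emergy(nac)
em_rac = total_emergy(rac)
saving = 1.0 - em_rac / em_nac  # relative emergy saving of RAC over NAC
```

A full analysis would extend the same sum over indirect flows (machinery, transport), exactly as the abstract describes.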

Keywords: sustainable construction, NAC, RAC, emergy, concrete

Procedia PDF Downloads 145
9621 Numerical Simulation of Filtration Gas Combustion: Front Propagation Velocity

Authors: Yuri Laevsky, Tatyana Nosova

Abstract:

The phenomenon of filtration gas combustion (FGC) was discovered experimentally in the early 1980s. It has a number of important applications in areas such as chemical technologies, fire and explosion safety, energy-saving technologies, and oil production. From the physical point of view, FGC may be defined as the propagation of a region of gaseous exothermic reaction in a chemically inert porous medium, as the gaseous reactants seep into the region of chemical transformation. The movement of the combustion front has different modes, and this investigation is focused on the low-velocity regime. The main characteristic of the process is the velocity of combustion front propagation, whose computation encounters substantial difficulties because of the strong heterogeneity of the process. The mathematical model of FGC is formed by the energy conservation laws for the temperature of the porous medium and the temperature of the gas, and the mass conservation law for the relative concentration of the reacting component of the gas mixture. The model is homogenized using the two-temperature approach, in which solid and gas phases are specified at each point of the continuous medium, with Newtonian heat exchange between them. The construction of the computational scheme is based on the principles of the mixed finite element method on a regular mesh. The approximation in time is performed by an explicit-implicit difference scheme. Special attention was given to determining the combustion front propagation velocity: straightforward computation of the velocity as a grid derivative leads to an extremely unstable algorithm. It is worth noting that the term 'front propagation velocity' makes sense for settled motion, for which analytical formulae linking velocity and equilibrium temperature hold. 
The numerical implementation of one such formula, leading to stable computation of the instantaneous front velocity, has been proposed. The resulting algorithm has been applied in a subsequent numerical investigation of the FGC process, in which the dependence of the main characteristics of the process on various physical parameters has been studied. In particular, the influence of the combustible gas mixture consumption on the front propagation velocity has been investigated. It has also been reaffirmed numerically that there is an interval of critical values of the interfacial heat transfer coefficient at which a breakdown occurs from slow combustion front propagation to rapid propagation. Approximate boundaries of this interval have been calculated for some specific parameters. All the results obtained are in full agreement with both experimental and theoretical data, confirming the adequacy of the model and the algorithm constructed. The availability of stable techniques to calculate the instantaneous velocity of the combustion wave makes it possible to consider a semi-Lagrangian approach to the solution of the problem.
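The idea of extracting a front velocity by tracking a level set of the solution (rather than differentiating grid values) can be illustrated on a much simpler 1D reaction-diffusion equation, the Fisher-KPP equation u_t = D u_xx + r u(1 - u), whose front speed is known analytically to approach 2*sqrt(D*r). This generic sketch is not the authors' two-temperature FGC model:

```python
import numpy as np

D, r = 1.0, 1.0
dx, dt = 0.1, 0.002                       # explicit scheme needs dt <= dx**2 / (2*D)
x = np.arange(0.0, 120.0, dx)
u = np.where(x < 5.0, 1.0, 0.0)           # step initial condition

def front_position(u, x, level=0.5):
    """Locate the front as the first grid point where u drops below `level`."""
    return x[np.argmax(u < level)]

n_steps = int(20.0 / dt)                  # integrate to t = 20
x_mid = None
for step in range(1, n_steps + 1):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u += dt * (D * lap + r * u * (1.0 - u))   # endpoints stay fixed at 1 and 0
    if step == n_steps // 2:              # record front position at t = 10
        x_mid = front_position(u, x)
x_end = front_position(u, x)
speed = (x_end - x_mid) / 10.0            # average front speed over t in [10, 20]
# theory: the asymptotic front speed is 2*sqrt(D*r) = 2.0
```

Differencing the threshold-crossing position over a finite window smooths out the grid-scale jumps that make pointwise derivative estimates unstable.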

Keywords: filtration gas combustion, low-velocity regime, mixed finite element method, numerical simulation

Procedia PDF Downloads 296
9620 Experimental Procedure of Identifying Ground Type by Downhole Test: A Case Study

Authors: Seyed Abolhassan Naeini, Maedeh Akhavan Tavakkoli

Abstract:

Evaluating the shear wave velocity (Vs) and primary wave velocity (Vp) is necessary to identify the ground type of a site, and identifying the soil type according to different codes can affect the dynamic analysis of geotechnical properties. This study aims to separate the underground layers at the project site based on the shear and primary wave velocities at different depths and to determine the dynamic elastic moduli from the shear wave velocity. Bandar Anzali is located in a tectonically very active area, and several active faults surround the study site. In this case study, a downhole test is conducted as a geophysical method to identify the ground type.
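Once Vs and Vp are known, the small-strain dynamic moduli follow from standard elasticity relations; the velocities and density below are illustrative values, not the Bandar Anzali measurements:

```python
def dynamic_moduli(vs, vp, rho):
    """Small-strain dynamic moduli from wave velocities and mass density.

    G = rho * Vs^2, Poisson's ratio from the Vp/Vs ratio, E = 2G(1 + nu).
    """
    G = rho * vs**2                               # shear modulus, Pa
    ratio2 = (vp / vs) ** 2
    nu = (ratio2 - 2.0) / (2.0 * (ratio2 - 1.0))  # dynamic Poisson's ratio
    E = 2.0 * G * (1.0 + nu)                      # dynamic Young's modulus, Pa
    return G, nu, E

# illustrative values for a medium-dense soil layer (hypothetical)
G, nu, E = dynamic_moduli(vs=250.0, vp=600.0, rho=1900.0)  # m/s, m/s, kg/m3
```

Seismic codes then classify the ground type from the time-averaged Vs over the upper profile (e.g., Vs30), which is why separating layers by velocity matters.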

Keywords: downhole, geophysics, shear wave velocity, case study

Procedia PDF Downloads 133
9619 Material Use and Life Cycle GHG Emissions of Different Electrification Options for Long-Haul Trucks

Authors: Nafisa Mahbub, Hajo Ribberink

Abstract:

Electrification of long-haul trucks has been discussed as a potential decarbonization strategy. These trucks will require large batteries because of their weight and long daily driving distances. Around 245 million battery electric vehicles are predicted to be on the road by 2035. This huge increase in the number of electric vehicles (EVs) will require intensive mining operations for metals and other materials to manufacture millions of batteries. These operations will add significant environmental burdens, and there is a substantial risk that the mining sector will not be able to meet the demand for battery materials, leading to higher prices. Since the battery is the most expensive component of an EV, technologies that enable electrification with smaller batteries have substantial potential to reduce material usage and the associated environmental and cost burdens. One such technology is the 'electrified road' (eroad), where vehicles receive power while they are driving, for instance through an overhead catenary (OC) wire (like trolleybuses and electric trains), through wireless (inductive) chargers embedded in the road, or by connecting to an electrified rail in or on the road surface. This study assessed the total material use and associated life cycle GHG emissions of two types of eroads (overhead catenary and in-road wireless charging) for long-haul trucks in Canada and compared them to electrification using stationary plug-in fast charging. As different electrification technologies require different amounts of materials for charging infrastructure and for the truck batteries, the study included the contributions of both to the total material use. A bottom-up model was developed to compare the three charging scenarios: plug-in fast chargers, overhead catenary, and in-road wireless charging. 
The investigated materials for the charging technologies and batteries were copper (Cu), steel (Fe), aluminium (Al), and lithium (Li). For the plug-in fast charging technology, scenarios ranging from overnight charging (350 kW) to megawatt charging (2 MW) were investigated. A 500 km stretch of highway (one lane of in-road charging per direction) was considered to estimate the material use for the overhead catenary and inductive charging technologies. The study assumed trucks needing an 800 kWh battery under the plug-in charger scenario but only a 200 kWh battery for the OC and inductive charging scenarios. Results showed that, overall, the inductive charging scenario has the lowest material use, followed by the OC and plug-in charger scenarios, respectively. Material use for the OC and plug-in charger scenarios was 50-70% higher than for the inductive charging scenario for the overall system, including the charging infrastructure and battery. The life cycle GHG emissions from the construction and installation of the charging technology materials were also investigated.
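The bottom-up logic (battery materials per truck plus infrastructure materials per kilometre of road) can be sketched as follows; all material intensities and fleet numbers are hypothetical placeholders, not the study's inventory:

```python
# hypothetical material intensities, for illustration only
BATTERY_KG_PER_KWH = {'Cu': 0.7, 'Fe': 0.5, 'Al': 1.1, 'Li': 0.1}
CATENARY_KG_PER_KM = {'Cu': 2000.0, 'Fe': 15000.0, 'Al': 500.0, 'Li': 0.0}

def scenario_materials(n_trucks, battery_kwh, km_infra, infra_kg_per_km):
    """Total material demand = truck batteries + roadside charging infrastructure."""
    return {
        m: n_trucks * battery_kwh * BATTERY_KG_PER_KWH[m]
           + km_infra * infra_kg_per_km.get(m, 0.0)
        for m in BATTERY_KG_PER_KWH
    }

# plug-in scenario: large 800 kWh batteries, no roadside eroad infrastructure
plug_in = scenario_materials(10_000, 800, 0, {})
# catenary scenario: small 200 kWh batteries, 500 km equipped in both directions
catenary = scenario_materials(10_000, 200, 2 * 500, CATENARY_KG_PER_KM)
```

Even with invented numbers, the structure shows the study's core trade-off: eroads add infrastructure materials per kilometre but shrink the battery material demand across the whole fleet.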

Keywords: charging technology, eroad, GHG emissions, material use, overhead catenary, plug in charger

Procedia PDF Downloads 49
9618 Angular Correlation and Independent Particle Model in Two-Electron Atomic Systems

Authors: Tokuei Sako

Abstract:

The ground and low-lying singly-excited states of He and He-like atomic ions have been studied by the full configuration interaction (FCI) method, focusing on the angular correlation between the two electrons in the studied systems. The two-electron angle density distribution, obtained by integrating the square modulus of the FCI wave function over all coordinates other than the interelectronic angle, shows a distinct trend between the singlet-triplet pair of states for different values of the nuclear charge Zn. Further, both the singlet and triplet distributions show an increasingly strong dependence on the interelectronic angle as Zn increases, in contrast to the well-known fact that the correlation energy approaches zero for increasing Zn. This seemingly contradictory observation has been rationalized on the basis of the recently introduced concept of so-called conjugate Fermi holes.
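Angular correlation is measured against the independent-particle baseline: two isotropic, uncorrelated electron directions give an interelectronic angle density proportional to sin θ, with a mean angle of 90°. A quick Monte Carlo check of that baseline (a generic sketch, not the FCI calculation itself):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
# two independent electrons with isotropic direction vectors (no angular correlation)
v1 = rng.normal(size=(n, 3))
v2 = rng.normal(size=(n, 3))
v1 /= np.linalg.norm(v1, axis=1, keepdims=True)
v2 /= np.linalg.norm(v2, axis=1, keepdims=True)

cos_theta = np.sum(v1 * v2, axis=1)
theta = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
mean_angle = theta.mean()  # ~90 degrees for the uncorrelated baseline
```

Deviations of the FCI angle density from this sin θ shape are precisely what signals angular correlation between the two electrons.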

Keywords: He-like systems, angular correlation, configuration interaction wave function, conjugate Fermi hole

Procedia PDF Downloads 408
9617 Exposure to Radon in Air in Tourist Caves in Bulgaria

Authors: Bistra Kunovska, Kremena Ivanova, Jana Djounova, Desislava Djunakova, Zdenka Stojanovska

Abstract:

The carcinogenic effects of radon, a radioactive noble gas, have been studied extensively and show a strong correlation between radon exposure and lung cancer occurrence, even at low radon levels. The major part of the natural radiation dose in humans is received by inhaling radon and its progeny, which originate from the decay chain of U-238. Indoor radon poses a substantial threat to human health when build-up occurs in confined spaces such as homes, mines, and caves; the risk increases with the duration of exposure and is proportional to both the radon concentration and the time of exposure. Tourist caves are a case of special environmental conditions that may be affected by high radon concentrations. They are a recognized danger in terms of radon exposure for cave workers (guides, employees working in shops built above the cave entrances, etc.), but due to the sensitive nature of the cave environment, high concentrations cannot be easily removed: forced ventilation of the air in caves is considered unthinkable due to possible harmful effects on the microclimate, flora, and fauna. The risks to human health posed by exposure to elevated radon levels in caves are not well documented; various studies around the world report very high radon concentrations in caves and high exposures of employees, but without a follow-up assessment of the overall impact on human health. This study was developed within a national project to assess the potential health effects caused by exposure to elevated levels of radon in buildings with public access, funded by the National Science Fund of Bulgaria in the framework of grant No КП-06-Н23/1/07.12.2018. The purpose of the work is to assess the radon levels in Bulgarian caves and the exposure of visitors and workers. 
The sample size, calculated for simple random selection from the 65 available caves (the sampling population) at a 95% confidence level and a margin of error of approximately 25%, was 13 caves. Radon concentrations in air were measured at specific locations in the caves using CR-39 nuclear track-etch detectors placed by members of the research team. Although all of the caves were formed in karst rocks, the radon levels differed considerably from each other (97–7575 Bq/m³). An assessment of the influence of cave orientation relative to the earth's surface (horizontal, inclined, vertical) on the radon concentration was performed. The health hazards and exposure risk caused by inhaling radon and its daughter products were evaluated for each surveyed cave. Reducing the time spent in the cave has been recommended in order to decrease the exposure of workers.
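The quoted sample size is consistent with Cochran's formula plus a finite population correction, assuming maximum variability (p = 0.5); a minimal sketch:

```python
import math

def sample_size(population, z=1.96, margin=0.25, p=0.5):
    """Cochran's sample-size formula with finite population correction.

    z: normal deviate for the confidence level (1.96 for 95%);
    margin: absolute margin of error on a proportion;
    p: assumed proportion (0.5 maximizes the required sample).
    """
    n0 = z**2 * p * (1.0 - p) / margin**2          # infinite-population size
    n = n0 / (1.0 + (n0 - 1.0) / population)       # finite population correction
    return math.ceil(n)

n_caves = sample_size(population=65)  # -> 13
```

With 65 caves, a 95% confidence level, and a 25% margin of error, the correction brings the required sample from about 15 down to 13.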

Keywords: tourist caves, radon concentration, exposure, Bulgaria

Procedia PDF Downloads 183
9616 Effect of Fault Depth on Near-Fault Peak Ground Velocity

Authors: Yanyan Yu, Haiping Ding, Pengjun Chen, Yiou Sun

Abstract:

Fault depth is an important parameter to be determined in ground motion simulation, and peak ground velocity (PGV) shows good prospects for application. Using a numerical simulation method, the variations in the distribution and peak value of near-fault PGV with fault depth were studied in detail, and the reasons for some of the observed phenomena are discussed. The simulation results show that the distribution characteristics of the PGV of the fault-parallel (FP) and fault-normal (FN) components are distinctly different, and that the PGV of the FN component is much larger than that of the FP component. With increasing fault depth, the region of strong FN-component PGV moves forward along the rupture direction, while the strong-PGV zone of the FP component gradually moves away from the fault trace in the direction perpendicular to the strike. However, for both the FN and FP components, the area of strong PGV and its peak value are rapidly reduced as fault depth increases. These results suggest that fault depth has a significant effect on both the FN and FP components of near-fault PGV.

Keywords: fault depth, near-fault, PGV, numerical simulation

Procedia PDF Downloads 341
9615 The Differences in Skill Performance Between Online and Conventional Learning Among Nursing Students

Authors: Nurul Nadrah

Abstract:

As a result of the COVID-19 pandemic, a movement control order was implemented, leading to the adoption of online learning as a substitute for conventional classroom instruction. This study aims to determine the differences in skill performance between online learning and conventional methods among nursing students. We employed a quasi-experimental design with purposive sampling, involving a total of 59 nursing students, with online learning as the intervention. The study found a significant difference in student skill performance between online learning and conventional methods. In conclusion, in times of hardship it is necessary to implement alternative pedagogical approaches, especially in critical fields like nursing, to ensure the uninterrupted progression of educational programs. This study suggests that online learning can be effectively employed as a means of imparting knowledge to nursing students during their training.

Keywords: nursing education, online learning, skill performance, conventional learning method

Procedia PDF Downloads 36