Search results for: intelligent computational techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8768

308 Flux-Gate vs. Anisotropic Magneto Resistance Magnetic Sensors Characteristics in Closed-Loop Operation

Authors: Neoclis Hadjigeorgiou, Spyridon Angelopoulos, Evangelos V. Hristoforou, Paul P. Sotiriadis

Abstract:

The increasing demand for accurate and reliable magnetic measurements over the past decades has paved the way for the development of different types of magnetic sensing systems as well as more advanced measurement techniques. Anisotropic Magneto Resistance (AMR) sensors have emerged as a promising solution for applications requiring high resolution, providing an ideal balance between performance and cost. However, certain issues of AMR sensors, such as non-linear response and measurement noise, are rarely discussed in the relevant literature. In this work, an analog closed-loop compensation system is proposed, developed and tested as a means to eliminate the non-linearity of the AMR response, reduce the 1/f noise and enhance the sensitivity of the magnetic sensor. Additional performance aspects, such as cross-axis and hysteresis effects, are also examined. The system was analyzed using an analytical model and a P-Spice model, considering both the sensor itself and the accompanying electronic circuitry. In addition, a commercial closed-loop architecture Flux-Gate sensor (calibrated and certified) has been used for comparison purposes. Three experimental setups have been constructed for the purposes of this work, one each for DC magnetic field, AC magnetic field and noise density measurements. The DC magnetic field measurements were conducted in a laboratory environment employing a cubic Helmholtz coil setup in order to calibrate and characterize the system under consideration. A high-accuracy DC power supply provided the operating current to the Helmholtz coils, and the results were recorded by a multichannel voltmeter. The AC magnetic field measurements were conducted with the same cubic Helmholtz coil setup in order to examine the effective bandwidth of both the proposed system and the Flux-Gate sensor. A voltage-controlled current source driven by a function generator was utilized for the Helmholtz coil excitation, and the result was observed on an oscilloscope. The third experimental apparatus incorporated an AC magnetic shielding construction composed of several layers of electrical steel that had been demagnetized prior to the experimental process. Each sensor was placed inside it alone and the response was captured by the oscilloscope. The preliminary experimental results indicate that the closed-loop AMR response presented a maximum deviation of 0.36% with respect to the ideal linear response, while the corresponding values for the open-loop AMR system and the Flux-Gate sensor reached 2% and 0.01%, respectively. Moreover, the noise density of the proposed closed-loop AMR sensor system remained almost as low as that of the AMR sensor itself, yet considerably higher than that of the Flux-Gate sensor. All relevant numerical data are presented in the paper.
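
As an illustration of the linearity figure of merit quoted above, here is a minimal sketch that computes the maximum deviation of a measured response from its best-fit line, as a percentage of full-scale span; the function and data are hypothetical, not the paper's procedure:

```python
import numpy as np

def max_linearity_deviation(field, output):
    """Maximum deviation of a sensor response from its best-fit line,
    expressed as a percentage of the full-scale output span."""
    slope, intercept = np.polyfit(field, output, 1)
    residuals = output - (slope * field + intercept)
    return 100.0 * np.max(np.abs(residuals)) / np.ptp(output)

# Hypothetical calibration sweep: applied field (uT) vs. sensor output (V)
field = np.linspace(-100, 100, 21)
output = 0.01 * field + 0.0005 * np.tanh(field / 50)  # mildly non-linear
print(max_linearity_deviation(field, output))
```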

Keywords: AMR sensor, chopper, closed loop, electronic noise, magnetic noise, memory effects, flux-gate sensor, linearity improvement, sensitivity improvement

Procedia PDF Downloads 404
307 Hybridization of Mathematical Transforms for Robust Video Watermarking Technique

Authors: Harpal Singh, Sakshi Batra

Abstract:

The widespread and easy access to multimedia content and the possibility of making numerous copies without loss of significant fidelity have raised the need for digital rights management. This problem can be effectively addressed by digital watermarking technology: the concept of embedding some sort of data or special pattern (watermark) in the multimedia content, so that this information can later prove ownership in case of a dispute, trace the marked document's dissemination, identify a misappropriating person or simply inform the user about the rights-holder. The primary motive of digital watermarking is to embed the data imperceptibly and robustly in the host information. A large number of watermarking techniques have been developed to embed copyright marks or data in digital images, video, audio and other multimedia objects. With the development of digital video-based innovations, the copyright dilemma for the multimedia industry has grown. Video watermarking has been proposed in recent years to address the issue of illicit copying and distribution of videos. It is the process of embedding copyright information in video bit streams. In practice, video watermarking schemes have to address serious challenges not faced by image watermarking schemes, such as real-time requirements in video broadcasting, the large volume of inherently redundant data between frames and the imbalance between motion and motionless regions, and they are particularly vulnerable to attacks, for example frame swapping, statistical analysis, rotation, noise, median and crop attacks. In this paper, an effective, robust and imperceptible video watermarking algorithm is proposed based on the hybridization of powerful mathematical transforms: the Fractional Fourier Transform (FrFT), the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD) using redundant wavelets. The scheme utilizes the various transforms to embed watermarks on different layers of a hybrid system. For this purpose, the video frames are partitioned into layers (RGB) and the watermark is embedded in two forms in the video frames, using SVD partitioning of the watermark and DWT sub-band decomposition of the host video, to facilitate copyright safeguarding as well as reliability. The FrFT orders are used as the encryption key, which makes the watermarking method more robust against various attacks. The fidelity of the scheme is enhanced by introducing key generation and a wavelet-based key-embedding watermarking scheme. Thus, the same key is required for both watermark embedding and extraction, and the key must therefore be shared between the owner and the verifier via some secure channel. The paper demonstrates performance using qualitative metrics, namely Peak Signal-to-Noise Ratio, Structural Similarity Index and correlation values, and also applies several attacks to prove robustness. Experimental results demonstrate that the proposed scheme can withstand a variety of video processing attacks while preserving imperceptibility.
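
A minimal sketch of a DWT-SVD embedding step of the kind described above, assuming a single grayscale layer, a Haar wavelet and an additive scaling factor alpha; this illustrates the sub-band/singular-value idea only, not the authors' full FrFT-encrypted scheme:

```python
import numpy as np
import pywt

def embed_watermark(frame, watermark, alpha=0.05):
    """Embed a watermark in the LL sub-band of a one-level DWT via SVD."""
    LL, (LH, HL, HH) = pywt.dwt2(frame.astype(float), "haar")
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    wm = np.resize(watermark.astype(float), S.shape)  # fit watermark to singular values
    S_marked = S + alpha * wm                          # additive embedding
    LL_marked = (U * S_marked) @ Vt                    # rebuild the LL sub-band
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

# Hypothetical usage: a 256x256 frame layer and a small binary watermark
frame = np.random.randint(0, 256, (256, 256))
watermark = np.random.randint(0, 2, (32, 32))
marked = embed_watermark(frame, watermark)
```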

Keywords: discrete wavelet transform, robustness, video watermarking, watermark

Procedia PDF Downloads 209
306 Selection and Preparation of High Performance, Natural and Cost-Effective Hydrogel as a Bio-Ink for 3D Bio-Printing and Organ on Chip Applications

Authors: Rawan Ashraf, Ahmed E. Gomaa, Gehan Safwat, Ayman Diab

Abstract:

Background: Three-dimensional (3D) bio-printing has become a versatile and powerful method for generating a variety of biological constructs, including bone or extracellular matrix scaffolds, endo- or epithelial and muscle tissue, as well as organoids. Aim of the study: To fabricate a low-cost DIY 3D bio-printer to produce 3D bio-printed products such as anti-microbial packaging or multi-organs on chips, and to demonstrate the alignment between two types of 3D printer technology (3D bio-printer and DLP) in the fabrication of multi-organ-on-a-chip (multi-OoC) devices. Methods: First, design and fabrication of the syringe unit for the modification of an off-the-shelf 3D printer; then preparation of a hydrogel based on the natural polymers sodium alginate and gelatin, followed by acquisition of the cell suspension and modeling of the desired 3D structure; preparation for 3D printing, after which cell-free and cell-laden hydrogels went through the printing process at room temperature under sterile conditions; and finally a post-printing curing process and study of the physical and chemical characteristics of the printed structure. The hard scaffold of the organ-on-chip devices was designed and fabricated using the DLP 3D printer, following approaches similar to microfluidics system fabrication. Results: The fabricated bio-ink was based on a hydrogel polymer mix of sodium alginate and gelatin at 15% and 0.5%, respectively. The 3D printing process was then conducted using a higher percentage of alginate-based hydrogels because of their viscosity and controllable crosslinking, unlike the thermal crosslinking of gelatin. The hydrogels were colored to simulate the representation of two types of cells. The adaptation of the hard scaffold, whether for the microfluidics system or the hard tissues, was achieved with the DLP 3D printer, incorporating natural bioactive essential oils with antimicrobial activity, followed by printing in situ three complex layers of soft hydrogel as a cell-free bio-ink to simulate a real-life tissue engineering process. The final product was a proof of concept for a rapid 3D cell-culturing approach that uses an engineered hard scaffold along with soft tissues; thus, several applications are offered as products of the current prototype, including the organ-on-chip as a successful integration of DLP and 3D bio-printing. Conclusion: Multiple designs for multi-organ-on-a-chip (multi-OoC) devices were produced in our study, with a main focus on the low-cost fabrication of such technology and its potential to revolutionize human health research and development. We describe circumstances in which multi-organ models are useful after briefly examining the requirement for full multi-organ models with a systemic component. Following that, we examined current multi-OoC platforms, such as integrated body-on-a-chip devices and modular approaches that use linked organ-specific modules.

Keywords: 3d bio-printer, hydrogel, multi-organ on chip, bio-inks

Procedia PDF Downloads 139
305 LaeA/1-Velvet Interplay in Aspergillus and Trichoderma: Regulation of Secondary Metabolites and Cellulases

Authors: Razieh Karimi Aghcheh, Christian Kubicek, Joseph Strauss, Gerhard Braus

Abstract:

Filamentous fungi are of considerable economic and social significance for human health and nutrition and in white biotechnology. These organisms are dominant producers of a range of primary metabolites such as citric acid, microbial lipids (biodiesel) and highly unsaturated fatty acids (HUFAs). In particular, they also produce important but structurally complex secondary metabolites with enormous therapeutic applications in the pharmaceutical industry, for example cephalosporin, penicillin, taxol, zeranol and ergot alkaloids. Fungal secondary metabolites significantly relevant to human health include not only antibiotics but also, for example, lovastatin, a well-known antihypercholesterolemic agent produced by Aspergillus terreus, or aflatoxin, a carcinogen produced by A. flavus. In addition to their roles in human health and agriculture, some fungi are industrially and commercially important: species of the ascomycete genus Hypocrea (teleomorph of Trichoderma) have been demonstrated to be efficient producers of highly active cellulolytic enzymes. This trait makes them effective in disrupting and depolymerizing lignocellulosic materials and thus applicable tools in biotechnological areas as diverse as clothes-washing detergents, animal feed, and pulp and fuel production. Fungal LaeA/LAE1 (Loss of aflR Expression A) homologs and their gene products act at the interface between secondary metabolism, cellulase production and development. Loss of the corresponding genes results in significant physiological changes, including loss of secondary metabolite production and of lignocellulose-degrading enzyme production. At the molecular level, the encoded proteins are presumably methyltransferases or demethylases which act directly or indirectly at heterochromatin and interact with velvet domain proteins. Velvet proteins bind to DNA and affect the expression of secondary metabolite (SM) genes and cellulases. The dynamic interplay between LaeA/LAE1, velvet proteins and additional interaction partners is key to understanding the coordination of the metabolic and morphological functions of fungi and is required for biotechnological control of the formation of desired bioactive products. Aspergilli and Trichoderma represent biotechnologically significant genera with notable differences in the LaeA/LAE1-velvet protein machinery and its target proteins. We therefore performed a comparative study of the interaction partners of this machinery and the dynamics of the various protein-protein interactions using robust proteomic and mass spectrometry techniques. This enhances our knowledge of the fungal coordination of secondary metabolism, cellulase production and development, and will thereby improve recombinant fungal strain construction for the production of industrial secondary metabolites or lignocellulose-hydrolyzing enzymes.

Keywords: cellulases, LaeA/1, proteomics, secondary metabolites

Procedia PDF Downloads 249
304 Attitudes of Gratitude: An Analysis of 30 Cancer Patient Narratives Published by Leading U.S. Cancer Care Centers

Authors: Maria L. McLeod

Abstract:

This study examines the ways in which cancer patient narratives are portrayed and framed on the websites of three leading U.S. cancer care centers: The University of Texas MD Anderson Cancer Center in Houston, Memorial Sloan Kettering Cancer Center in New York, and Seattle Cancer Care Alliance. Thirty patient stories, ten from each cancer center website blog, were analyzed using qualitative and quantitative textual analysis of unstructured data, documenting repeated use of specific metaphors and tropes while charting common themes and other elements of story structure and content. Patient narratives were coded using grounded theory as the basis for conducting emergent qualitative research. As part of a systematic, inductive approach to collecting and analyzing data, recurrent and unique themes were examined and compared in terms of positive and negative framing, patient agency, and institutional praise. All three of these cancer care centers are teaching hospitals with university affiliations that emphasize an evidence-based, scientific approach to treatment utilizing the latest research and cutting-edge techniques and technology. Thus, the use of anecdotal evidence presented in patient narratives could be perceived as being in conflict with this evidence-based model, as the patient stories are not an accurate representation of scientific outcomes related to developing cancer, cancer recurrence, or cancer outcomes. The representative patient narratives tend to exclude or downplay adverse responses to treatment, survival rates, integrative and/or complementary cancer treatments, cancer prevention and causes, and barriers to treatment, such as the limitations of insurance plans, the costs of treatment, and/or other issues related to access, potentially contributing to false narratives and inaccurate notions of cancer prevention, cancer care treatment, and the potential for a cure. Both quantitative and qualitative findings demonstrate that cancer patient stories featured on the blog sites of the nation's top cancer care centers deemphasize patient agency and, instead, emphasize deference and gratitude toward the institutions where the featured patients received treatment. Along these lines, language choices reflect positive framing of the cancer experience. Accompanying portrait photos of healthy-appearing subjects, as well as positively framed headlines, subheads, and pull quotes, function similarly, reflecting hopeful, transformative experiences and outcomes over hardship and suffering. Although patient narratives include real, factual scientific details and descriptions of actual events, the stories lack references to the more negative realities of cancer diagnosis and treatment. Instead, they emphasize the triumph of survival, in which the cancer care center, in the savior/hero role, enables the patient's success, represented as a cathartic medical journey.

Keywords: cancer framing, cancer stories, medical gaze, patient narratives

Procedia PDF Downloads 130
303 Case-Based Reasoning for Modelling Random Variables in the Reliability Assessment of Existing Structures

Authors: Francesca Marsili

Abstract:

The reliability assessment of existing structures with probabilistic methods is becoming an increasingly important and frequent engineering task. However, probabilistic reliability methods require exhaustive knowledge of the stochastic modeling of the variables involved in the assessment; at the moment, standards for the modeling of variables are absent, representing an obstacle to the dissemination of probabilistic methods. The framework according to which probability distribution functions (PDFs) are established is Bayesian statistics, which uses Bayes' theorem: a prior PDF for the considered parameter is established based on information derived from the design stage and on qualitative judgments drawn from the engineer's past experience; the prior model is then updated with the results of investigations carried out on the considered structure, such as material testing and the determination of action and structural properties. The application of Bayesian statistics raises two different kinds of problems: 1. The results of the updating depend on the engineer's previous experience; 2. The updating of the prior PDF can be performed only if the structure has been tested and quantitative data that can be statistically manipulated have been collected; performing tests is always an expensive and time-consuming operation, and furthermore, if the considered structure is an ancient building, destructive tests could compromise its cultural value and should therefore be avoided. In order to solve these problems, an interesting research path is to investigate Artificial Intelligence (AI) techniques that can be useful for automating the modeling of variables and for updating material parameters without performing destructive tests. Among these, one that attracts particular attention in relation to the object of this study is Case-Based Reasoning (CBR). In this application, cases are represented by existing buildings where material tests have already been carried out and updated PDFs for the material mechanical parameters have been computed through a Bayesian analysis. Each case is then composed of a qualitative description of the material under assessment and the posterior PDFs that describe its material properties. The problem to be solved is the definition of PDFs for material parameters involved in the reliability assessment of the considered structure. A CBR system is a good candidate for automating the modeling of variables because: 1. Engineers already draw an estimate of the material properties based on the experience collected during the assessment of similar structures, or based on similar cases collected in the literature or in databases; 2. Material tests carried out on structures can be easily collected from laboratory databases or from the literature; 3. The system will provide the user with a reliable probabilistic description of the variables involved in the assessment, which will also serve as a tool in support of the engineer's qualitative judgments. Automated modeling of variables can help spread the probabilistic reliability assessment of existing buildings in common engineering practice and help target the best interventions and further tests on the structure; CBR is a technique which may help to achieve this.
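
A minimal sketch of the Bayesian updating step described above, assuming a conjugate normal prior for a material parameter with known scatter; all numbers are hypothetical:

```python
import numpy as np

def update_normal_prior(mu0, sigma0, data, sigma_meas):
    """Conjugate normal update of a material-parameter mean.

    mu0, sigma0 : prior mean and standard deviation (design-stage judgment)
    data        : test results on the structure (e.g., core strengths, MPa)
    sigma_meas  : assumed known material/measurement scatter
    """
    n = len(data)
    prec_post = 1.0 / sigma0**2 + n / sigma_meas**2      # posterior precision
    mu_post = (mu0 / sigma0**2 + np.sum(data) / sigma_meas**2) / prec_post
    return mu_post, np.sqrt(1.0 / prec_post)

# Hypothetical case: prior 30 MPa +/- 5 MPa, three core tests on the building
print(update_normal_prior(30.0, 5.0, [27.1, 29.4, 28.2], 3.0))
```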

Keywords: reliability assessment of existing buildings, Bayesian analysis, case-based reasoning, historical structures

Procedia PDF Downloads 317
302 A Microwave Heating Model for Endothermic Reaction in the Cement Industry

Authors: Sofia N. Gonçalves, Duarte M. S. Albuquerque, José C. F. Pereira

Abstract:

Microwave technology has been gaining importance in contributing to decarbonization processes in high-energy-demand industries. Despite the several numerical models presented in the literature, a proper Verification and Validation exercise is still lacking. This is important and required to evaluate the accuracy and adequacy of the physical process model. Another issue concerns impedance matching, an important mechanism used in microwave experiments to increase electromagnetic efficiency. Such a mechanism is not available in current computational tools, thus requiring an external numerical procedure. A numerical model was implemented to study the continuous processing of limestone with microwave heating. This process requires the material to be heated to a temperature that prompts a highly endothermic reaction. Both a 2D and a 3D model were built in COMSOL Multiphysics to solve the two-way coupling between the Maxwell and energy equations, along with the coupling between the heat transfer phenomena and the endothermic reaction of the limestone. The 2D model was used to study and evaluate the required numerical procedure and also serves as a benchmark test, allowing other authors to implement impedance matching procedures. To achieve this goal, a controller built in MATLAB was used to continuously match the cavity impedance and predict the required energy for the system, thus successfully avoiding energy inefficiencies. The 3D model reproduces realistic results and therefore supports the main conclusions of this work. Limestone was modeled as a continuous flow under the transport of concentrated species, whose material and kinetic properties were taken from the literature. Verification and Validation of the coupled model were undertaken separately from the chemical kinetic model. The chemical kinetic model was found to correctly describe the chosen kinetic equation by comparing numerical results with experimental data. A solution verification was made for the electromagnetic interface, where second-order and fourth-order accurate schemes were found for linear and quadratic elements, respectively, with numerical uncertainty lower than 0.03%. Regarding the coupled model, it was demonstrated that the numerical error would diverge for the heat transfer interface with the mapped mesh. Results showed numerical stability for the triangular mesh, and the numerical uncertainty was less than 0.1%. This study evaluated the influence of limestone velocity, heat transfer and load on thermal decomposition and overall process efficiency. The velocity and heat transfer coefficient were studied with the 2D model, while different material loads were studied with the 3D model. Both models proved to be highly unstable when solving non-linear temperature distributions. High-velocity flows exhibited a propensity for thermal runaways, and the thermal efficiency tended to stabilize at the higher velocities and higher filling ratios. Microwave efficiency exhibited an optimal velocity for each heat transfer coefficient, pointing out that electromagnetic efficiency is a consequence of energy distribution uniformity. The 3D results indicated inefficient development of the electric field for low filling ratios. Thermal efficiencies higher than 90% were found for the higher loads, and microwave efficiencies of up to 75% were accomplished. The 80% fill ratio was demonstrated to be the optimal load, with an associated global efficiency of 70%.
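
The solution verification reported above rests on systematic grid refinement; here is a minimal sketch of the standard observed-order and Richardson/GCI-style uncertainty calculation (the function names and safety factor are assumptions, not taken from the paper):

```python
import numpy as np

def observed_order(f_coarse, f_medium, f_fine, r=2.0):
    """Observed order of accuracy from three solutions on grids refined
    by a constant ratio r, as used in solution verification studies."""
    return np.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / np.log(r)

def gci_uncertainty(f_medium, f_fine, p, r=2.0, Fs=1.25):
    """GCI-style numerical uncertainty estimate on the fine-grid value."""
    return Fs * abs(f_fine - f_medium) / (r**p - 1.0)

# Hypothetical field quantity on coarse/medium/fine grids
p = observed_order(101.3, 100.4, 100.1)
print(p, gci_uncertainty(100.4, 100.1, p))
```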

Keywords: multiphysics modeling, microwave heating, verification and validation, endothermic reactions modeling, impedance matching, limestone continuous processing

Procedia PDF Downloads 122
301 Accelerating Personalization Using Digital Tools to Drive Circular Fashion

Authors: Shamini Dhana, G. Subrahmanya VRK Rao

Abstract:

The fashion industry is advancing towards a mindset of zero waste, personalization, creativity, and circularity. The next generation is demanding the upcycling of clothing and materials into personalized fashion, and a digital tool is needed to accelerate the process towards mass customization. Dhana’s D/Sphere fashion technology platform uses digital tools to accelerate upcycling. In essence, advanced fashion garments can be designed and developed via reuse, repurposing, and recreation activities using existing fabric and circulating materials. The D/Sphere platform has the following objectives: to provide (1) an opportunity to develop modern fashion using existing, finished materials and clothing without chemicals or water consumption; (2) the potential for everyday customers and designers to use the medium of fashion for creative expression; (3) a solution to address the global textile waste generated by pre- and post-consumer fashion; (4) a solution to reduce carbon emissions and water and energy consumption with the participation of all stakeholders; (5) an opportunity for brands, manufacturers, and retailers to work towards zero-waste designs and an alternative revenue stream. Other benefits of this alternative approach include sustainability metrics, trend prediction, facilitation of disassembly and remanufacture, deep learning, and hyperheuristics for high accuracy. A design tool for mass personalization and customization utilizing existing circulating materials and deadstock, targeted at fashion stakeholders, will lower environmental costs, increase revenues through up-to-date upcycled apparel, produce less textile waste during the cut-sew-stitch process, and provide a real design solution for the end customer to be part of circular fashion. The broader impact of this technology will be a different mindset towards circular fashion, increasing the value of the product through multiple life cycles, finding alternatives towards zero waste, and reducing the textile waste that ends up in landfills. This technology platform will be of interest to brands and companies that have a responsibility to reduce their environmental impact and contribution to climate change as it pertains to the fashion and apparel industry. Today, over 70% of the output of the $3 trillion fashion and apparel industry ends up in landfills. To this extent, the industry needs such alternative techniques to address global textile waste while providing an opportunity to include all stakeholders and drive circular fashion with new personalized products. This type of modern systems thinking is currently being explored around the world by the private sector, organizations, research institutions, and governments. This technological innovation using digital tools has the potential to revolutionize the way we look at communication, capabilities, and collaborative opportunities amongst stakeholders in the development of new personalized and customized products, as well as its positive impacts on society, our environment, and global climate change.

Keywords: circular fashion, deep learning, digital technology platform, personalization

Procedia PDF Downloads 34
300 A Study of Seismic Design Approaches for Steel Sheet Piles: Hydrodynamic Pressures and Reduction Factors Using CFD and Dynamic Calculations

Authors: Helena Pera, Arcadi Sanmartin, Albert Falques, Rafael Rebolo, Xavier Ametller, Heiko Zillgen, Cecile Prum, Boris Even, Eric Kapornyai

Abstract:

Sheet pile systems can be an interesting solution for harbor and quay designs. However, current design methods lead to conservative approaches due to the lack of a specific basis of design. For instance, some design features still rely on pseudo-static approaches, although the problem is a dynamic one. Under this concern, the study particularly focuses on the definition of hydrodynamic water pressure and on the stability analysis of sheet pile systems under seismic loads. During a seismic event, seawater produces hydrodynamic pressures on structures. Current design methods introduce hydrodynamic forces by means of the Westergaard formulation and Eurocode recommendations, applying a constant hydrodynamic pressure on the front sheet pile during the entire earthquake. As a result, the hydrodynamic load may represent 20% of the total forces produced on the sheet pile. Nonetheless, some studies question that approach. Hence, this study assesses the soil-structure-fluid interaction of sheet piles under seismic action in order to evaluate whether current design strategies overestimate hydrodynamic pressures. For that purpose, this study performs various simulations with Plaxis 2D, a well-known geotechnical software package, and with CFD models, which treat fluid dynamic behavior. Since neither Plaxis nor CFD can resolve a coupled soil-fluid problem, the investigation imposes sheet pile displacements from Plaxis as input data for the CFD model. The CFD model then provides hydrodynamic pressures under seismic action, which fit theoretical Westergaard pressures when calculated using the acceleration at each moment of the earthquake. Thus, hydrodynamic pressures fluctuate during the seismic action instead of remaining constant, as design recommendations propose. Additionally, these findings show that hydrodynamic pressure contributes about 5% of the total load applied on the sheet pile due to its instantaneous nature. These results are in line with other studies that use added-mass methods for hydrodynamic pressures. Another important feature in sheet pile design is the assessment of overall geotechnical stability. This uses pseudo-static analysis, since dynamic analysis cannot provide a safety calculation, and consequently the seismic action must be estimated. One of its relevant factors is the selection of the seismic reduction factor. A large number of studies discuss its importance as well as its uncertainties. Moreover, current European standards do not propose a clear statement on this and recommend using a reduction factor equal to 1, which leads to conservative requirements when compared with more advanced methods. Under this situation, the study calibrates the seismic reduction factor by fitting results from pseudo-static to dynamic analyses. The investigation concludes that pseudo-static analyses could reduce the seismic action by 40-50%. These results are in line with studies from Japanese and European working groups, and it seems suitable to account for the flexibility of the sheet pile-soil system. Nevertheless, the calibrated reduction factor is subject to the particular conditions of each design case, and further research would contribute to specifying recommendations for selecting reduction factor values in the early stages of design. In conclusion, sheet pile design still has room for improvement in its methodologies and approaches, and designs could offer better seismic solutions thanks to advanced methods such as those presented in this study.
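
For reference, a minimal sketch of the classical Westergaard hydrodynamic pressure profile mentioned above (the well-known 7/8 formulation for a rigid vertical wall; the example values are hypothetical):

```python
import numpy as np

def westergaard_pressure(z, H, kh, gamma_w=9.81):
    """Westergaard (1933) hydrodynamic pressure at depth z (m) on a rigid
    vertical wall retaining water of depth H (m), for a horizontal seismic
    coefficient kh; gamma_w in kN/m^3. Returns pressure in kPa."""
    return 7.0 / 8.0 * kh * gamma_w * np.sqrt(H * z)

# Hypothetical quay: 10 m water depth, kh = 0.15, pressure at the base
print(westergaard_pressure(z=10.0, H=10.0, kh=0.15))  # ~12.9 kPa
```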

Keywords: computational fluid dynamics, hydrodynamic pressures, pseudo-static analysis, quays, seismic design, steel sheet pile

Procedia PDF Downloads 121
299 Monitoring and Improving Performance of Soil Aquifer Treatment System and Infiltration Basins Performance: North Gaza Emergency Sewage Treatment Plant as Case Study

Authors: Sadi Ali, Yaser Kishawi

Abstract:

As part of Palestine, the Gaza Strip (365 km2, 1.8 million inhabitants) is considered a semi-arid zone that relies solely on the Coastal Aquifer. The coastal aquifer is the only source of water, with only 5-10% of it suitable for human use, which barely covers the domestic and agricultural needs of the Gaza Strip. The Palestinian Water Authority's strategy is to develop a non-conventional water resource from treated wastewater to irrigate 1500 hectares and serve over 100,000 inhabitants. A new WWTP project is to replace the old, overloaded Biet Lahia WWTP. The project consists of three parts: phase A (pressure line and 9 infiltration basins - IBs), phase B (a new WWTP) and phase C (Recovery and Reuse Scheme - RRS - to capture the spreading plume). Phase A has been functioning since April 2009, and since then a monitoring plan has been conducted to track the infiltration rate (I.R.) of the 9 basins. Nearly 23 million m3 of partially treated wastewater were infiltrated up to June 2014. It is important to maintain an acceptable rate to allow the basins to handle the incoming quantities (currently 10,000 m3 are pumped and infiltrated daily). The methodology applied was to review and analyze the collected data, including the I.R.s, the wastewater quality and the drying-wetting schedule of the basins. One of the main findings is the relation between the Total Suspended Solids (TSS) at BLWWTP and the I.R. at the basins. Starting from April 2009, the basins scored an average I.R. of about 2.5 m/day. The records then showed a decreasing pattern of the average rate until it reached its lowest value of 0.42 m/day in June 2013. This was accompanied by an increase in the TSS concentration (mg/l) at the source, reaching above 200 mg/l. Reducing the TSS concentration (by cleaning the wastewater source ponds at the Biet Lahia WWTP site) directly improved the I.R., reflected in an improvement over the following 6 months from 0.42 m/day to 0.66 m/day and then to nearly 1.0 m/day as the average of the last 3 months of 2013. The wetting-drying scheme of the basins was observed (3 days wetting and 7 days drying), alongside the rainfall rates. Despite the difficulty of applying this scheme accurately, the flow to each basin was controlled to improve the I.R. The drying-wetting system affected the I.R. of individual basins and thus the overall system rate, which was recorded and assessed. Ploughing of the infiltration basins was also recommended at certain times to retain a certain infiltration level, as it breaks the confined clogging layer which prevents infiltration. It is recommended to maintain proper quality of the infiltrated wastewater to ensure acceptable performance of the IBs. Continual maintenance of the settling ponds at BLWWTP, continual ploughing of the basins and the application of soil treatment techniques at the IBs will improve the I.R.s. When the new WWTP starts functioning, effluent of a high standard (TSS 20 mg/l, BOD 20 mg/l and TN 15 mg/l) will be infiltrated, which will enhance the I.R.s of the IBs due to the lower organic load.
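
The TSS/infiltration relation reported above can be checked with a simple correlation; a minimal sketch on a hypothetical monitoring table (the values are illustrative, echoing the figures quoted in the abstract, not the actual dataset):

```python
import pandas as pd

# Hypothetical monthly monitoring records: average infiltration rate (m/day)
# and TSS at the source ponds (mg/l).
records = pd.DataFrame({
    "month": ["2013-04", "2013-06", "2013-09", "2013-12"],
    "infiltration_rate": [0.55, 0.42, 0.66, 1.00],
    "tss": [210.0, 230.0, 180.0, 150.0],
})

# A strongly negative coefficient would support the observed TSS/I.R. link.
print(records["infiltration_rate"].corr(records["tss"]))
```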

Keywords: SAT, wastewater quality, soil remediation, North Gaza

Procedia PDF Downloads 219
298 Reliability and Availability Analysis of Satellite Data Reception System using Reliability Modeling

Authors: Ch. Sridevi, S. P. Shailender Kumar, B. Gurudayal, A. Chalapathi Rao, K. Koteswara Rao, P. Srinivasulu

Abstract:

System reliability and system availability evaluation play a crucial role in ensuring the seamless operation of complex satellite data reception systems with consistent performance over long periods. This paper presents a novel approach to this evaluation using a case study on one of the antenna systems at a satellite data reception ground station in India. The methodology involves analyzing the system's components, their failure rates and the system's architecture, generating a logical reliability block diagram model, and estimating the reliability of the system from the component-level mean time between failures, assuming an exponential distribution, to derive a baseline estimate of the system's reliability. The model is then validated against system-level field failure data collected from the operational satellite data reception systems, including the failures that occurred, failure times, failure criticality and repair times, using statistical techniques such as median ranks, regression and Weibull analysis to extract meaningful insights regarding failure patterns and the practical reliability of the system and to assess the accuracy of the developed reliability model. The study mainly focused on the identification of critical units within the system, which are prone to failures and have a significant impact on overall performance, and produced a reliability model of the identified critical unit. This model takes into account the interdependencies among system components and their impact on overall system reliability, provides valuable insights into the improvement or degradation of the system over time, and will be a vital input for arriving at an optimized design for future development. It also provides a plug-and-play framework to understand the effect on system performance of any upgrades or new unit designs, and it helps in effective planning and in formulating contingency plans to address potential system failures, ensuring continuity of operations. Furthermore, to instill confidence in system users, the duration for which the system can operate continuously with the desired level of 3-sigma reliability was estimated, which turned out to be a vital input to the maintenance plan. System availability and station availability were also assessed by considering clash and non-clash scenarios to determine the overall system performance and potential bottlenecks. Overall, this paper establishes a comprehensive methodology for the reliability and availability analysis of complex satellite data reception systems. The results derived from this approach facilitate the effective planning of contingency measures, provide users with confidence in system performance, and enable decision-makers to make informed choices about system maintenance, upgrades and replacements. The approach also aids in identifying critical units, assessing system availability in various scenarios, minimizing downtime and optimizing resource allocation.
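
A minimal sketch of the exponential, MTBF-based reliability block diagram calculation described above, for a series chain of units; the unit list and MTBF values are hypothetical:

```python
import numpy as np

def component_reliability(t_hours, mtbf_hours):
    """Exponential-model reliability R(t) = exp(-t / MTBF)."""
    return np.exp(-t_hours / np.asarray(mtbf_hours, dtype=float))

def series_system_reliability(t_hours, mtbfs):
    """Series reliability block diagram: the system works only if every
    block works, so the block reliabilities multiply."""
    return float(np.prod(component_reliability(t_hours, mtbfs)))

# Hypothetical antenna chain: feed, LNA, down-converter, demodulator (hours)
mtbfs = [50000.0, 30000.0, 40000.0, 25000.0]
print(series_system_reliability(1000.0, mtbfs))  # mission time of 1000 h
```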

Keywords: exponential distribution, reliability modeling, reliability block diagram, satellite data reception system, system availability, weibull analysis

Procedia PDF Downloads 60
297 Global Service-Learning: Lessons Learned from Teacher Candidates

Authors: Miranda Lin

Abstract:

This project examined the impact of a globally focused service-learning project implemented in a multicultural education course at a Midwestern university. The project facilitated critical self-reflection and built cross-cultural competence while nurturing a partnership with two schools that serve students with disabilities in Vietnam. Through the service-learning project, pre-service teachers connected via Skype with the principals and teachers at the schools in Vietnam to identify and subsequently develop needed instructional materials for students with mild, moderate, and severe disabilities. Qualitative data sources include students’ intercultural competence self-reflection surveys (pre-test and post-test), reflections, discussions, the service project, and lesson plans. Literature review: Global service-learning is a teaching strategy that encompasses service experiences both in the local community and abroad. Drawing on elements of global learning and international service-learning, global service-learning experiences are guided by a framework designed to support global learning outcomes and involve direct engagement with difference. By engaging in real-world challenges, global service-learning experiences can support the achievement of learning outcomes such as civic knowledge and intercultural knowledge and competence. Intercultural competence development is considered essential for cooperative and reciprocal engagement with community partners. Method: Participants (n=27) were mostly elementary and early childhood pre-service teachers enrolled in a multicultural education course. All but one were female. Among the pre-service teachers, one was Asian American, two were Latinas, and the rest were White. Two pre-service teachers identified themselves as coming from low-socioeconomic families, and the rest were from the middle to upper middle class. The global service-learning project was implemented in the spring of 2018. Two Vietnamese schools that served students with disabilities agreed to be the global service-learning sites; both schools were located in an urban area. Systematic collection of data coincided with the course schedule as follows: an initial intercultural competence self-reflection survey completed in week 1, guided reflections submitted in weeks 1, 9, and 16, written lesson plans and supporting materials for the service project submitted in week 16, and a final intercultural competence self-reflection survey completed in week 16. Significance: This global service-learning project helped participants meet Merryfield’s goals to various degrees. They 1) learned knowledge and skills in the basics of instructional planning, 2) used a variety of instructional methods that encourage active learning, meet the different learning styles of students, and are congruent with content and educational goals, 3) gained awareness and support of their students as individuals and as learners, 4) developed questioning techniques that build higher-level thinking skills, and 5) made progress in critically reflecting on and improving their own teaching and learning as professional educators as a result of this project.

Keywords: global service-learning, teacher education, intercultural competence, diversity

Procedia PDF Downloads 93
296 Seafloor and Sea Surface Modelling in the East Coast Region of North America

Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk

Abstract:

Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships are used to emit a hydroacoustic signal from transducers and reproduce the topography of the seabed. This solution provides relevant accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed remains unmapped, as there are still many gaps to be explored between ship survey tracks; moreover, such measurements are very expensive and time-consuming. A solution is offered by the raster bathymetric models shared by The General Bathymetric Chart of the Oceans, whose products are compilations of different sets of data, raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models. Some forms of seafloor relief (e.g., seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. Based on satellite altimetry data, Sea Surface Height and marine gravity anomalies can be estimated, and from the anomalies it is possible to infer the structure of the seabed. The main goal of this work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America, a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms applied, densification of the models, and the creation of grid models. The data used are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis, and the results were visualized with Geographic Information System tools. The result is an extension of the state of knowledge regarding the quality and usefulness of the data used for seabed and sea surface modeling, and regarding the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.); its changes, along with knowledge of the topography of the ocean floor, inform us indirectly about the volume of the entire ocean. The true shape of the ocean surface is further varied by such phenomena as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, and phases of ocean circulation. Depending on the location of a point, the greater the depth, the weaker the trend of sea-level change. Studies show that combining data sets from different sources, with different accuracies, can affect the quality of sea surface and seafloor topography models.
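
A minimal sketch of the kind of scattered-data gridding step used when building raster bathymetric models from soundings; the coordinates and depths are hypothetical, and the choice of linear interpolation is an assumption:

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical soundings along survey tracks: lon, lat, depth (m, negative down)
lon = np.array([-65.2, -65.1, -64.9, -64.8, -64.7])
lat = np.array([38.1, 38.3, 38.2, 38.4, 38.1])
depth = np.array([-4200.0, -3900.0, -4100.0, -3500.0, -4300.0])

# Regular grid onto which the scattered soundings are interpolated.
grid_lon, grid_lat = np.meshgrid(np.linspace(-65.2, -64.7, 50),
                                 np.linspace(38.1, 38.4, 50))
bathy = griddata((lon, lat), depth, (grid_lon, grid_lat), method="linear")
# Cells outside the data's convex hull stay NaN: the "gaps between tracks".
```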

Keywords: seafloor, sea surface height, bathymetry, satellite altimetry

Procedia PDF Downloads 58
295 Giving Children with Osteogenesis Imperfecta a Voice: Overview of a Participatory Approach for the Development of an Interactive Communication Tool

Authors: M. Siedlikowski, F. Rauch, A. Tsimicalis

Abstract:

Osteogenesis Imperfecta (OI) is a genetic disorder of childhood onset that causes frequent fractures after minimal physical stress. To date, OI research has focused on medically and surgically oriented outcomes, with little attention to the perspective of the affected child. It is a challenge to elicit the child’s voice in health care, in other words, their own perspective on their symptoms, but software development offers a way forward. Sisom (a Norwegian acronym derived from ‘Si det som det er’, meaning ‘Tell it as it is’) is an award-winning, rigorously tested, interactive, computerized tool that helps children with chronic illnesses express their symptoms to their clinicians. The successful Sisom software tool, which addresses the child directly, has not yet been adapted to attend to symptoms unique to children with OI. The purpose of this study was to develop a Sisom paper prototype for children with OI by seeking the perspectives of end users, particularly children with OI and clinicians. Our descriptive qualitative study was conducted at Shriners Hospitals for Children® – Canada, which follows the largest cohort of children with OI in North America. Purposive sampling was used to recruit 12 children with OI over three cycles. Nine clinicians oversaw the development process, which involved determining the relevance of current Sisom symptoms, vignettes, and avatars, as well as generating new Sisom OI components. Data, including field notes, transcribed audio-recordings, and drawings, were deductively analyzed using content analysis techniques. Guided by the following framework, data pertaining to symptoms, vignettes, and avatars were coded into five categories: a) relevant; b) irrelevant; c) to modify; d) to add; e) unsure. Overall, 70.8% of Sisom symptoms were deemed relevant for inclusion, with 49.4% directly incorporated and 21.3% incorporated with changes to syntax, vignette, and/or location. Three additions were made to the ‘Avatar’ island, allowing children to celebrate their uniqueness: ‘Makes you feel like you’re not like everybody else.’ One new island, ‘About Me’, was added to capture children’s worldviews. One new sub-island, ‘Getting Around’, was added to reflect accessibility issues related to the children’s independence, their social lives, and the perceptions of others. In being consulted as experts throughout the co-creation of the Sisom OI paper prototype, children coded the Sisom symptoms and provided sound rationales for their chosen codes. In rationalizing their codes, all children shared personal stories about themselves and their relationships, insights about their OI, and an understanding of the strengths and challenges they experience on a day-to-day basis. The child’s perspective on their health is a basic right, and allowing it to be heard is the next frontier in the care of children with genetic diseases. Sisom OI, a methodological breakthrough within OI research, will offer clinicians an innovative and child-centered approach to capture this neglected perspective, providing a tool for the delivery of health care in the center that established the worldwide standard of care for children with OI.

Keywords: child health, interactive computerized communication tool, participatory approach, symptom management

Procedia PDF Downloads 136
294 Selfie: Redefining Culture of Narcissism

Authors: Junali Deka

Abstract:

“Pictures speak more than a thousand words.” It is the power of the image that can carry multiple meanings, depending on how it is read by viewers. This research article is the outcome of an extensive study of the phenomenon of ‘selfie culture’ and the dire need for self-constructed virtual identity among youths. In recent times, there has been a revolutionary change in the concept of photography in terms of both techniques and applications. The popularity of ‘self-portraits’ depends mainly on the temporal space and time created on social networking sites like Facebook and Instagram. With reference to Stuart Hall’s encoding and decoding process, the article studies the behavior of users who post photographs online. The photographic messages (Roland Barthes) are interpreted differently by different viewers. The notions of ‘self’ and ‘self-love’, the practice of looking (Marita Sturken), and ways of seeing (John Berger) have acquired new definitions and dimensions. After Oscars night, show host Ellen DeGeneres’s selfie created the biggest buzz and hype in social media. The term was judged the word of 2013 and has earned its place in the dictionary: in November 2013, the word “selfie” was announced as the “word of the year” by the Oxford English Dictionary. By the end of 2012, Time magazine had considered selfie one of the “top 10 buzzwords” of that year; although selfies had existed long before, it was in 2012 that the term, of Australian origin, “really hit the big time”. The present study was carried out to understand the concept of the ‘selfie bug’ and the phenomenon it has created among youth (especially students) at large in developing a pseudo-image of their own. The topic is relevant and provides a platform to discuss the cultural, psychological, and sociological implications of the selfie in the age of digital technology. At the first level, content analysis of primary and secondary sources, including newspaper articles and online resources, was carried out, followed by a small online survey conducted with the help of a questionnaire to find out students’ views on the selfie and its social and psychological effects. The newspaper reports and online resources confirmed that the selfie is a new trend in digital media and that it has redefined the notions of beauty and self-love. Facebook and Instagram are the major platforms used for self-expression and the creation of virtual identity. The findings clearly reflected the more active participation of female students in comparison to male students, and the study of the photographs of a few selected respondents revealed differences in attitude and image-building between male and female users. The study underlines some basic questions about the desire for the reconstruction of identity among the young generation: are they becoming culturally narcissistic; what factors are responsible for cultural, social, and moral changes in society; and what psychological and technological effects are caused by the smartphone, all culminating in the larger question of whether the selfie is a social signifier of identity construction.

Keywords: culture, narcissist, photographs, selfie

Procedia PDF Downloads 382
293 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines

Authors: Alexander Guzman Urbina, Atsushi Aoyama

Abstract:

The sustainability of traditional technologies employed in energy and chemical infrastructure poses a major challenge for our society. In making decisions related to the safety of industrial infrastructure, accidental risk values are becoming relevant points of discussion. However, the challenge lies in the reliability of the models employed to obtain the risk data; such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome these problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as neuro-fuzzy algorithms. This paper therefore aims to introduce a hybrid algorithm for risk assessment trained on near-miss accident data. As mentioned above, the sustainability of traditional technologies related to energy and chemical infrastructure constitutes one of the major challenges that today's societies and firms are facing. Besides that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. Regarding this issue, the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition to the social consequences described above, and considering the industrial sector as critical infrastructure due to its large economic impact in the event of a failure, industrial safety has become a critical issue for contemporary society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in an attempt to accurately evaluate the probabilities of failure of the infrastructure and the consequences associated with those failures. However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper aims to introduce a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with this complexity and uncertainty. The advantage of deep learning on near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of a near-miss deep learning approach for neuro-fuzzy risk assessment in pipelines is to improve the validity of the risk values by learning from near-miss accidents and imitating human expertise in scoring risks and setting tolerance levels. In summary, the method involves a regression analysis called the group method of data handling (GMDH), which consists of determining the optimal configuration of the risk assessment model and its parameters using polynomial theory.
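
A minimal sketch of the GMDH building block named above: one "partial description", a quadratic polynomial of two inputs fitted by least squares; GMDH then builds layers of such units, keeping the best-performing pairs. The function name and variables are illustrative, not from the paper:

```python
import numpy as np

def gmdh_neuron(x1, x2, y):
    """Fit one GMDH partial description:
    y ~ a0 + a1*x1 + a2*x2 + a3*x1^2 + a4*x2^2 + a5*x1*x2."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs, A @ coeffs  # fitted coefficients and predictions

# Hypothetical inputs: two risk indicators vs. expert-scored risk values
x1 = np.random.rand(100)          # e.g., corrosion indicator
x2 = np.random.rand(100)          # e.g., near-miss frequency indicator
y = 0.4 * x1 + 0.3 * x2**2 + 0.05 * np.random.randn(100)
coeffs, y_hat = gmdh_neuron(x1, x2, y)
```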

Keywords: deep learning, risk assessment, neuro fuzzy, pipelines

Procedia PDF Downloads 269
292 Unveiling Drought Dynamics in the Cuneo District, Italy: A Machine Learning-Enhanced Hydrological Modelling Approach

Authors: Mohammadamin Hashemi, Mohammadreza Kashizadeh

Abstract:

Droughts pose a significant threat to sustainable water resource management, agriculture, and socioeconomic sectors, particularly in the context of climate change. This study investigates drought simulation using rainfall-runoff modelling in the Cuneo district, Italy, over the past 60 years. The study leverages the TUW model, a lumped conceptual rainfall-runoff model with semi-distributed operation capability. Similar in structure to the widely used Hydrologiska Byråns Vattenbalansavdelning (HBV) model, the TUW model operates on daily timesteps for the input and output data of each catchment and incorporates the essential routines for snow accumulation and melt, soil moisture storage, and streamflow generation. Discharge data from multiple catchments within the Cuneo district form the basis for a thorough model calibration employing the Kling-Gupta Efficiency (KGE) metric. A crucial requirement for reliable drought analysis is a metric that can accurately represent low-flow events during drought periods, ensuring that the model provides a realistic picture of water availability during these critical times. Subsequent validation of the monthly discharge simulations thoroughly evaluates overall model performance. Beyond model development, the investigation delves into drought analysis using the robust Standardized Runoff Index (SRI), which allows for precise characterization of drought occurrences within the study area. A meticulous comparison of observed and simulated discharge data is conducted, with particular focus on the low-flow events that characterize droughts. Additionally, the study explores the complex interplay between land characteristics (e.g., soil type, vegetation cover) and climate variables (e.g., precipitation, temperature) that influence the severity and duration of hydrological droughts. The findings demonstrate successful calibration of the TUW model across most catchments, achieving commendable model efficiency. Comparative analysis between simulated and observed discharge data reveals significant agreement, especially during critical low-flow periods; this agreement is further supported by the Pareto coefficient, a statistical measure of goodness-of-fit. The drought analysis provides critical insights into the duration, intensity, and severity of drought events within the Cuneo district. This newfound understanding of spatial and temporal drought dynamics offers valuable information for water resource management strategies and drought mitigation efforts, and deepens our understanding of drought dynamics in the Cuneo region. Future research directions include refining the hydrological modelling techniques and exploring future drought projections under various climate change scenarios.
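
For reference, a minimal sketch of the Kling-Gupta Efficiency used for calibration above, in its standard 2009 formulation; the arrays are hypothetical:

```python
import numpy as np

def kling_gupta_efficiency(sim, obs):
    """KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2), where r is the
    correlation, alpha the ratio of standard deviations, and beta the bias
    ratio between simulated and observed discharge. KGE = 1 is a perfect fit."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = np.std(sim) / np.std(obs)
    beta = np.mean(sim) / np.mean(obs)
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Hypothetical daily discharge series (m3/s)
obs = np.array([12.0, 10.5, 8.2, 5.1, 4.0, 3.6, 4.4])
sim = np.array([11.4, 10.9, 7.8, 5.5, 3.9, 3.2, 4.8])
print(kling_gupta_efficiency(sim, obs))
```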

Keywords: hydrologic extremes, hydrological drought, hydrological modelling, machine learning, rainfall-runoff modelling

Procedia PDF Downloads 20
291 Reading and Writing Memories in Artificial and Human Reasoning

Authors: Ian O'Loughlin

Abstract:

Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order to perform, for example, question-and-answer tasks that parse real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains, wide-context cues remain elusive in parsing words and sentences, and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable 'memory' elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons: first, it addresses one of the difficulties that standard machine learning techniques face by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion. In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory, as well as considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science, researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided: memory is modeled with a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the arrays of long-term memory elements in memory networks seem psychologically appropriate for reasoning systems, they may actually be incurring difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.
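
The attractor-style account of memory sketched above is usually illustrated with a Hopfield-type network, where "remembering" is relaxation to a stable minimum of an energy function E(x) = -1/2 x'Wx rather than retrieval of a stored element. The following is a generic textbook sketch, not one of the specific models cited in the abstract:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weights that make each +/-1 pattern a low-energy attractor."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def energy(W, x):
    return -0.5 * x @ W @ x

def recall(W, x, steps=20):
    """Asynchronous updates descend the energy until a stable pattern."""
    x = x.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(x)):
            x[i] = 1 if W[i] @ x >= 0 else -1
    return x

rng = np.random.default_rng(0)
stored = rng.choice([-1, 1], size=(3, 64)).astype(float)  # three +/-1 patterns
W = train_hopfield(stored)
noisy = stored[0].copy()
noisy[rng.choice(64, size=10, replace=False)] *= -1       # corrupt 10 of 64 bits
out = recall(W, noisy)
print(np.array_equal(out, stored[0]), energy(W, out) <= energy(W, noisy))
```

Note that the pattern is never "looked up"; it re-emerges as the network settles, which is the sense in which these models avoid storage and retrieval.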

Keywords: artificial reasoning, human memory, machine learning, neural networks

Procedia PDF Downloads 247
290 Quality Characteristics of Road Runoff in Coastal Zones: A Case Study in A25 Highway, Portugal

Authors: Pedro B. Antunes, Paulo J. Ramísio

Abstract:

Road runoff is a linear source of diffuse pollution that can cause significant environmental impacts. During rainfall events, pollutants from both stationary and mobile sources, which have accumulated on the road surface, are carried away by the surface runoff. Road runoff in coastal zones may present high levels of salinity and chlorides due to the proximity of the sea and transported marine aerosols; organic matter concentration, which appears to be correlated with this process, may also be significant. This study assesses this phenomenon with the purpose of identifying the relationships between monitored water quality parameters and intrinsic site variables. To achieve this objective, an extensive monitoring program was conducted on a Portuguese coastal highway. The study included thirty rainfall events, under different weather, traffic and salt deposition conditions, over a three-year period. Various water quality parameters were evaluated in over 200 samples. In addition, meteorological, hydrological and traffic parameters were continuously measured. The salt deposition rates (SDR) were determined by means of a wet candle device, an innovative feature of the monitoring program. The SDR, variable throughout the year, shows a high correlation with wind speed and direction, but mostly with wave propagation, so that it is lower in the summer in spite of the favorable wind direction in the case study. The distance to the sea, topography, ground obstacles and the platform altitude also appear to be relevant. The high salinity of the runoff was confirmed, increasing the concentrations of the water quality parameters analyzed, with significant amounts of seawater features. In order to estimate the correlations and patterns of different water quality parameters and of variables related to weather, road section and salt deposition, the study included exploratory data analysis using different techniques (e.g., Pearson correlation coefficients, Cluster Analysis and Principal Component Analysis), confirming some specific features of the investigated road runoff. Significant correlations among pollutants were observed. Organic matter was highlighted as strongly dependent on salinity; indeed, data analysis showed that some important water quality parameters could be divided into two major clusters based on their correlations with salinity (including organic-matter-associated parameters) and total suspended solids (including some heavy metals). Furthermore, the concentrations of the most relevant pollutants seemed to depend strongly on some meteorological variables, particularly the duration of the antecedent dry period prior to each rainfall event and the average wind speed. Based on the results of this monitoring case study in a coastal zone, it was shown that SDR, associated with the hydrological characteristics of road runoff, can contribute to a better knowledge of the runoff characteristics and help to estimate the specific nature of the runoff and related water quality parameters.
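
As an illustration of the exploratory analysis described above, the sketch below computes Pearson correlations, a PCA projection, and a two-cluster grouping of parameters by correlation distance. The parameter names and the random placeholder data are hypothetical stand-ins for the monitored runoff-quality table:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Placeholder data: rows = runoff samples, columns = monitored parameters
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 6)),
                  columns=["salinity", "chlorides", "COD", "TSS", "Zn", "Cu"])

corr = df.corr(method="pearson")                 # pairwise Pearson coefficients

scores = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(df))          # standardize, then project

# Group parameters by correlation distance (1 - |r|), e.g. a salinity-linked
# cluster versus a TSS-linked cluster as reported in the abstract
dist = 1.0 - corr.abs().to_numpy()
np.fill_diagonal(dist, 0.0)
labels = fcluster(linkage(squareform(dist, checks=False), method="average"),
                  t=2, criterion="maxclust")
print(dict(zip(df.columns, labels)))
```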

Keywords: coastal zones, monitoring, road runoff pollution, salt deposition

Procedia PDF Downloads 215
289 Performance Evaluation of Various Displaced Left Turn Intersection Designs

Authors: Hatem Abou-Senna, Essam Radwan

Abstract:

With increasing traffic and limited resources, accommodating left-turning traffic has been a challenge for traffic engineers as they seek a balance between intersection capacity and safety, two conflicting goals in the operation of a signalized intersection that are mitigated through signal phasing techniques. Hence, to increase the left-turn capacity and reduce delay at intersections, the Florida Department of Transportation (FDOT) moves forward with a vision of optimizing intersection control using innovative intersection designs through the Transportation Systems Management & Operations (TSM&O) program. These alternative designs successfully eliminate the left-turn phase, which otherwise reduces the conventional intersection's (CI) efficiency considerably, and divide the intersection into smaller networks that operate in a one-way fashion. This study focused on Crossover Displaced Left-turn intersections (XDL), also known as Continuous Flow Intersections (CFI). The XDL concept is best suited for intersections with moderate to high overall traffic volumes, especially those with very high or unbalanced left-turn volumes. There is little guidance on determining whether a partial XDL intersection is adequate to mitigate the overall intersection condition or whether a full XDL is always required. The primary objective of this paper was to evaluate overall intersection performance for different partial XDL designs compared to a full XDL. The XDL alternative was investigated for four different scenarios: partial XDL on the east-west approaches, partial XDL on the north-south approaches, partial XDL on the north and east approaches, and full XDL on all four approaches. Also, the impact of increasing volume on intersection performance was considered by modeling the unbalanced volumes in 10% increments, resulting in five different traffic scenarios. The study intersection, located in Orlando, Florida, experiences recurring congestion in the PM peak hour and operates near capacity, with a volume-to-capacity ratio close to 1.00, due to the presence of two heavy conflicting movements: southbound and westbound. The results showed that the partial EN XDL alternative proved effective and compared favorably to the full XDL alternative, followed by the partial EW XDL alternative. The analysis also showed that the Full, EW and EN XDL alternatives outperformed the NS XDL and CI alternatives with respect to throughput, delay and queue lengths. Throughput improvements were most pronounced at the higher volume levels, with a 25% increase in capacity. The percent reduction in delay for the critical movements in the XDL scenarios compared to the CI scenario ranged from 30-45%. Similarly, queue lengths in the XDL scenarios showed percent reductions ranging from 25-40%. The analysis revealed how a partial XDL design can improve overall intersection performance at various demands and reduce the costs associated with a full XDL, while outperforming the conventional intersection. However, a partial XDL serving low volumes, or only one of the critical movements while other critical movements operate near or above capacity, does not provide significant benefits compared to the conventional intersection.

Keywords: continuous flow intersections, crossover displaced left-turn, microscopic traffic simulation, transportation system management and operations, VISSIM simulation model

Procedia PDF Downloads 288
288 Monitoring and Improving Performance of Soil Aquifer Treatment System and Infiltration Basins of North Gaza Emergency Sewage Treatment Plant as Case Study

Authors: Sadi Ali, Yaser Kishawi

Abstract:

As part of Palestine, the Gaza Strip (365 km² and 1.8 million inhabitants) is a semi-arid zone that relies solely on the Coastal Aquifer. The coastal aquifer is the only source of water, with only 5-10% of it suitable for human use; this barely covers the domestic and agricultural needs of the Gaza Strip. The Palestinian Water Authority's strategy is to develop a non-conventional water resource from treated wastewater to irrigate 1500 hectares and serve over 100,000 inhabitants. A new WWTP project is to replace the old, overloaded Biet Lahia WWTP. The project consists of three parts: phase A (pressure line and 9 infiltration basins, IBs), phase B (a new WWTP) and phase C (Recovery and Reuse Scheme, RRS, to capture the spreading plume). Phase A has been functioning since April 2009, and since then a monitoring plan has been conducted to track the infiltration rate (I.R.) of the 9 basins. Nearly 23 million m³ of partially treated wastewater were infiltrated up to June 2014. It is important to maintain an acceptable rate to allow the basins to handle the incoming quantities (currently 10,000 m³ are pumped and infiltrated daily). The methodology applied was to review and analyze the collected data, including the I.R.s, the wastewater quality and the drying-wetting schedule of the basins. One of the main findings is the relation between the Total Suspended Solids (TSS) at BLWWTP and the I.R. at the basins. From April 2009, the basins scored an average I.R. of about 2.5 m/day; the records then showed a decreasing pattern of the average rate until it reached a low of 0.42 m/day in June 2013. This was accompanied by an increase of the TSS concentration at the source, reaching above 200 mg/L. Reducing the TSS concentration (by cleaning the wastewater source ponds at the Biet Lahia WWTP site) directly improved the I.R.; this was reflected in an improvement over the following 6 months from 0.42 m/day to 0.66 m/day and then to nearly 1.0 m/day as the average of the last 3 months of 2013. The wetting-drying scheme of the basins was observed (3 days wetting and 7 days drying), alongside the rainfall rates. Despite the difficulty of applying this scheme accurately, the flow to each basin was controlled to improve the I.R. The drying-wetting system affected the I.R. of individual basins and thus the overall system rate, which was recorded and assessed. Ploughing activities at the infiltration basins were also recommended at certain times to retain a certain infiltration level, as ploughing breaks the confined clogging layer which prevents infiltration. It is recommended to maintain a proper quality of infiltrated wastewater to ensure an acceptable performance of the IBs. Continual maintenance of the settling ponds at BLWWTP, continual ploughing of the basins and applying soil treatment techniques at the IBs will improve the I.R.s. When the new WWTP becomes operational, a high-standard effluent quality (TSS 20 mg/l, BOD 20 mg/l, and TN 15 mg/l) will be infiltrated, which will enhance the I.R.s of the IBs due to the lower organic load.

Keywords: soil aquifer treatment, recovery and reuse scheme, infiltration basins, North Gaza

Procedia PDF Downloads 221
287 Design of Ultra-Light and Ultra-Stiff Lattice Structure for Performance Improvement of Robotic Knee Exoskeleton

Authors: Bing Chen, Xiang Ni, Eric Li

Abstract:

With the population ageing, the number of patients suffering from chronic diseases is increasing, among which stroke has a high incidence in the elderly. In addition, there is a gradual increase in the number of patients with orthopedic or neurological conditions such as spinal cord injuries, nerve injuries, and other knee injuries. These diseases are chronic, with high recurrence and complications, and normal walking is difficult for such patients. Robotic knee exoskeletons have been developed for individuals with knee impairments. However, the currently available robotic knee exoskeletons are generally heavy, which makes them uncomfortable to wear, causes wearing fatigue, shortens the wearing time, and reduces the efficiency of the exoskeleton. Some lightweight materials, such as carbon fiber and titanium alloy, have been used in the development of robotic knee exoskeletons, but this increases their cost. This paper presents the design of a new ultra-light and ultra-stiff truss-type lattice structure. The lattice structures are arranged in a fan shape, which fits well with circular arc surfaces such as circular holes, and can be utilized in the design of rods, brackets, and other parts of a robotic knee exoskeleton to reduce weight. The metamaterial is formed by continuous arrangement and combination of small truss-structure unit cells, varying the diameter of the pillar section, the geometrical size, and the relative density of each unit cell. It can be fabricated quickly through additive manufacturing techniques such as metal 3D printing. The unit cell of the truss structure is small, and the machined parts of the robotic knee exoskeleton, such as connectors, rods, and bearing brackets, can be filled and replaced by gradient arrangement and non-uniform distribution. While satisfying the mechanical requirements of the robotic knee exoskeleton, the weight of the exoskeleton is reduced; hence, the patient's wearing fatigue is relieved and the wearing time of the exoskeleton is increased, improving the efficiency, wearing comfort, and safety of the exoskeleton. In this paper, a brief description of the hardware design of the prototype robotic knee exoskeleton is first presented. Next, the design of the ultra-light and ultra-stiff truss-type lattice structures is proposed, and the mechanical analysis of the single-cell unit is performed by establishing a theoretical model. Additionally, simulations are performed to evaluate the maximum stress-bearing capacity and compressive performance of uniform and gradient arrangements of the cells. Finally, static analysis is performed for the cell-filled rod and the unmodified rod, respectively, and the simulation results demonstrate the effectiveness and feasibility of the designed ultra-light and ultra-stiff truss-type lattice structures. In future studies, experiments will be conducted to further evaluate the performance of the designed lattice structures.
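
A back-of-the-envelope sketch of the governing design quantities is given below: the first-order relative density of an octet-style truss cell and the Gibson-Ashby stiffness scaling. The scaling constant C, the strut dimensions, and the Ti-6Al-4V modulus are assumed example values, not parameters from the paper:

```python
import numpy as np

def octet_relative_density(r, L):
    """First-order relative density of an octet-truss cell with strut radius r
    and strut length L: rho ~ 6*sqrt(2)*pi*(r/L)^2 (slender struts; node
    overlap neglected)."""
    return 6.0 * np.sqrt(2.0) * np.pi * (r / L) ** 2

def lattice_modulus(rho_rel, E_solid, C=1/3, n=1):
    """Gibson-Ashby scaling E = C * rho^n * E_s: n = 1 for stretch-dominated
    trusses such as the octet (C is an assumed, geometry- and direction-
    dependent constant), n = 2 for bending-dominated topologies."""
    return C * rho_rel**n * E_solid

rho = octet_relative_density(r=0.4e-3, L=5e-3)       # 0.4 mm struts, 5 mm cells
E_eff = lattice_modulus(rho, E_solid=110e9)          # e.g. Ti-6Al-4V, ~110 GPa
print(f"relative density {rho:.3f}, effective modulus {E_eff/1e9:.1f} GPa")
```

The linear (n = 1) scaling is what makes stretch-dominated lattices attractive here: stiffness falls off much more slowly with weight reduction than in bending-dominated foams.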

Keywords: additive manufacturing, lattice structures, metamaterial, robotic knee exoskeleton

Procedia PDF Downloads 79
286 Role of HIV-Support Groups in Mitigating Adverse Sexual Health Outcomes among HIV Positive Adolescents in Uganda

Authors: Lilian Nantume Wampande

Abstract:

Group-based strategies in the delivery of HIV care have opened up new avenues not only for meaningful participation of HIV-positive people but also platforms for the deconstruction and reconstruction of knowledge about living with the virus. Yet the contributions of such strategies among patients who live in high-risk areas remain unexplored. This case study research assessed the impact of HIV support networks on sexual health outcomes of HIV-positive out-of-school adolescents residing in the fishing islands of Kalangala in Uganda. The study population was out-of-school adolescents living with HIV and their sexual partners (n=269), members of their households (n=80) and their health service providers (n=15). Data were collected via structured interviews, observations and focus group discussions between August 2016 and March 2017, and then analyzed inductively to extract key themes related to the approaches and outcomes of the groups' activities. The study findings indicate that support groups unite HIV-positive adolescents in a bid for social renegotiation to achieve change, but individual constraints surpass the groups' intentions. Some adolescents, for example, reported increased fear, which led to failure to cope, sexual violence, self-harm and denial of status as a result of the high expectations placed on them as members of the support groups. Further investigation of this phenomenon shows that HIV networks play a monotonous role as information sources for HIV-positive out-of-school adolescents, which limits their creativity in seeking information elsewhere. Results also indicate that HIV adolescent groups recognize the complexity of long-term treatment and retention in care, leading to improved immunity for the majority; yet there is still only scattered evidence about how effective they are among adolescents at different phases of the disease trajectory. Nevertheless, the primary focus on developing adolescent self-efficacy and coping skills significantly addresses a range of disclosure difficulties and supports autonomy. Moreover, the peer techniques utilized, in addition to the almost homogeneous group characteristics, foster confidence, hope and belongingness. Adolescent HIV-support groups therefore have the capacity both to improve and to worsen sexual health outcomes for a young adolescent who is out of school. Communication interventions that seek to increase awareness about 'self' should therefore be emphasized beyond merely fostering collective action; such interventions should be sensitive to context and gender. In addition, facilitative support supervision by close and trusted health care providers, preferably Village Health Teams (who are often community-elected volunteers), would help to follow up, mentor, encourage and advise these young adolescents in matters involving sexuality and health outcomes. HIV/AIDS prevention programs have extended their efforts beyond an individual focus to efforts that foster collective action, but programs should rekindle interpersonal-level strategies to address the complexity of individual behavior.

Keywords: adolescent, HIV, support groups, Uganda

Procedia PDF Downloads 114
285 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays

Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín

Abstract:

Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Efficient hardware alternatives are now used more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability requirements of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile detection systems except in the force reconstruction process, a stage in which they have been applied less. This work presents a hardware implementation of a model-driven method reported in the literature for the contact force reconstruction of flat and rigid tactile sensor arrays from normal stress data. From the analysis of a software implementation of this model, the proposed implementation parallelizes the tasks that facilitate the execution of matrix operations and a two-dimensional optimization function to obtain a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and the possibility of applying appropriate algorithm parallelization techniques, guided by the rules of generalization, efficiency, and scalability in the tactile decoding process, and considering low latency, low power consumption, and real-time execution as the main design parameters. The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to simulation by the Finite Element Modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform that allows the reconstruction of force vectors following a scalable approach, from information captured by tactile sensor arrays composed of up to 48×48 taxels using various transduction technologies. The proposed implementation demonstrates a roughly 180-fold reduction in estimation time compared to software implementations. Despite the relatively high estimation errors, the information provided by this implementation on the tangential and normal tractions and the triaxial reconstruction of forces allows the tactile properties of the touched object to be adequately reconstructed, similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
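
The reconstruction stage can be framed as a linear inverse problem: given a calibration matrix G mapping taxel force components to normal-stress readings, estimate the forces by regularized least squares. This is a generic, hedged formulation for illustration, not the specific model-driven algorithm the authors implemented on the FPGA:

```python
import numpy as np

def reconstruct_forces(G, s, lam=1e-3):
    """Tikhonov-regularized least-squares estimate of the taxel force vector f
    from normal-stress readings s, assuming a linear sensing model s = G @ f.
    Regularization (lam > 0) keeps the underdetermined system solvable."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ s)

# Hypothetical 10x10 array with 3 force components (Fx, Fy, Fz) per taxel
rng = np.random.default_rng(1)
G = rng.normal(size=(100, 300))    # calibration/model matrix (assumed known)
s = rng.normal(size=100)           # one frame of normal-stress readings
f = reconstruct_forces(G, s).reshape(10, 10, 3)
```

On hardware, the matrix products in this normal-equation form are exactly the operations that parallelize well across FPGA DSP resources.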

Keywords: contact force reconstruction, force estimation, tactile sensor array, hardware implementation

Procedia PDF Downloads 171
284 Adolescents’ Reports of Dating Abuse: Mothers’ Responses

Authors: Beverly Black

Abstract:

Background: Adolescent dating abuse (ADA) is widespread throughout the world and negatively impacts many adolescents. ADA is associated with lower self-esteem, poorer school performance, lower employment opportunities, higher rates of depression, absenteeism from school, substance abuse, bullying, smoking, suicide, pregnancy, eating disorders, risky sexual behaviors, and experiencing domestic violence later in life. ADA prevention is sometimes addressed through school programming; yet parental responses to ADA can also be an important vehicle for its prevention. In this exploratory study, the author examined how mothers, including abused mothers, responded to scenarios of ADA involving their children. Methods: Six focus groups were conducted between December 2013 and June 2014 with mothers (n=31) in the southern part of the United States. Three of the focus groups were comprised of mothers (n=17) who had been abused by their partners. Mothers were recruited from local community family agencies. Participants were presented with a series of four scenarios about ADA and asked to explain how they would respond. Focus groups lasted approximately 45 minutes. All participants were given a gift card to a major retailer as a thank-you. Using QSR-N10, two researchers analyzed the focus group data, first using open and axial coding techniques to find overarching themes. The researchers triangulated the coded data to ensure accurate interpretation of the participants' messages and used the scenario questions to structure the coded results. Results: Almost 30% of the 699 comments coded as mothers' recommendations for responding to ADA focused on the importance of providing advice to their children. Advice included breaking up, going to the police, ignoring or avoiding the abusive partner, and setting boundaries in relationships. About 22% of comments focused on the need to educate teens about healthy and unhealthy relationships and to seek additional information. About 13% of the comments reflected the view that parents should confront the abuser and/or the abuser's parents, and less than 2% noted the need to take their child to counseling. Mothers who had been abused offered responses similar to those of parents who had not experienced abuse; however, their responses were more likely to draw on their own experience and to exercise caution, as they knew from their own experiences that authoritarian responses were ineffective. Over half of the comments indicated that parents would react more strongly, quickly, and angrily if a girl was being abused by a boy than vice versa; parents expressed greater fear for their daughters than for their sons involved in ADA. Conclusions: Results suggest that mothers have ideas about how to respond to ADA. Mothers who have been abused draw from their experiences and are aware that responding in an authoritarian manner may not be helpful. Because parental influence on teens is critical in their development, it is important for all parents to respond to ADA in a helpful manner to break the cycle of violence. Understanding responses to ADA can inform prevention programming that works with parents on responding to ADA.

Keywords: abused mothers' responses to dating abuse, adolescent dating abuse, mothers' responses to dating abuse, teen dating violence

Procedia PDF Downloads 199
283 Characterization and Evaluation of the Dissolution Increase of Molecular Solid Dispersions of Efavirenz

Authors: Leslie Raphael de M. Ferraz, Salvana Priscylla M. Costa, Tarcyla de A. Gomes, Giovanna Christinne R. M. Schver, Cristóvão R. da Silva, Magaly Andreza M. de Lyra, Danilo Augusto F. Fontes, Larissa A. Rolim, Amanda Carla Q. M. Vieira, Miracy M. de Albuquerque, Pedro J. Rolim-Neto

Abstract:

Efavirenz (EFV) is a drug used as first-line treatment of AIDS. However, it has poor aqueous solubility and wettability, causing problems in gastrointestinal absorption and bioavailability. One of the most promising strategies to improve solubility is the use of solid dispersions (SD). Therefore, this study aimed to characterize SDs of EFV with the polymers PVP-K30, PVPVA 64 and Soluplus® in order to find an optimal formulation for a future pharmaceutical product for AIDS therapy. Initially, physical mixtures (PM) and SDs with the polymers were obtained containing 10, 20, 50 and 80% of drug (w/w) by the solvent method. The best formulation among the SDs was selected by in vitro dissolution testing. Finally, the chosen drug-carrier systems, in all ratios obtained, were analyzed by the following techniques: Differential Scanning Calorimetry (DSC), polarization microscopy, Scanning Electron Microscopy (SEM) and absorption spectrophotometry in the infrared region (IR). From the dissolution profiles of EFV, PM and SD, the values of the Area Under the Curve (AUC) were calculated. The data showed that the AUC of all PMs is greater than that of isolated EFV; this result derives from the hydrophilic properties of the polymers, which decrease the surface tension between the drug and the dissolution medium and thereby increase the wettability of the drug. In parallel, it was found that the SDs with the highest AUC values were those with the greatest amount of polymer (only 10% drug); as the amount of drug increases, these results either decrease or remain statistically similar. The AUC values of the SDs with the three different polymers followed this decreasing order: SD PVPVA 64-EFV 10% > SD PVP-K30-EFV 10% > SD Soluplus®-EFV 10%. The DSC curves of the SDs did not show the endothermic event characteristic of the drug melting process, suggesting that EFV was converted to its amorphous state. Polarized light microscopy showed significant birefringence in the PMs, but this was not observed in films of the SDs, also suggesting conversion of the drug from the crystalline to the amorphous state. In electron micrographs of all PMs, independently of the percentage of drug, the crystal structure of EFV was clearly detectable. Moreover, in electron micrographs of the SDs in the different ratios investigated, we observed particles with irregular size and morphology and an extensive change in the appearance of the polymer, making it impossible to differentiate the two components. IR spectra of the PMs correspond to the overlap of the polymer and EFV bands, thereby indicating no interaction between them, unlike the spectra of all SDs, which showed the complete disappearance of the band related to the axial deformation of the NH group of EFV. Therefore, this study obtained a suitable formulation to overcome the solubility limitations of EFV, since SD PVPVA 64-EFV 10% was chosen as the best system for delaying crystallization of the drug, reaching the highest levels of supersaturation.
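
For reference, the AUC ranking of the dissolution profiles follows from a simple trapezoidal integration of percent-dissolved versus time. The time points and percentages in this sketch are hypothetical, not values from the study:

```python
import numpy as np

# Hypothetical dissolution profile: % drug released at each sampling time
t = np.array([0, 5, 10, 15, 30, 45, 60], dtype=float)          # minutes
released = np.array([0, 18, 35, 48, 66, 74, 79], dtype=float)  # percent

# Trapezoidal area under the dissolution curve (units: %*min);
# a higher AUC indicates faster and greater overall dissolution
auc = float(np.sum(0.5 * (released[1:] + released[:-1]) * np.diff(t)))
print(f"AUC = {auc:.0f} %*min")
```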

Keywords: characterization, dissolution, Efavirenz, solid dispersions

Procedia PDF Downloads 611
282 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperature, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such a radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between the simplicity and the accuracy of the solution. Recently, solutions of complicated mathematical problems with statistical methods based on the randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique to solve radiative transfer problems in complicated geometries with an arbitrary participating medium. The method increases the accuracy of estimation on the one hand, and the computational cost on the other. The participating media, generally gases such as CO₂, CO, and H₂O, present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences; Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with participating media was formulated. The history of randomly sampled photon bundles was recorded to train an Artificial Neural Network (ANN) back-propagation model, with the flux calculated using the standard quasi-PMC taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and the PMC model with the Line-by-Line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and total flux in both cases. A significant reduction in variance as well as a faster rate of convergence was observed for the QMC method over the standard PMC method. However, the results obtained with the ANN method showed greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to further reduce the computational cost once trained successfully. Multiple ways of selecting the input data as well as various architectures will be tried so that the problem environment can be fully represented to the ANN model, and better results can be achieved in this unexplored domain.
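
A small sketch of the quasi-random ingredient: a Halton (van der Corput) generator and a toy comparison against pseudo-random Monte Carlo on a known integral. The photon-sampling use of the 2-D points is indicative only; the wavelength and direction sampling in the paper is more elaborate (importance-sampled spectra):

```python
import numpy as np

def halton(n, base):
    """First n points of the van der Corput/Halton sequence in a given base."""
    seq = np.empty(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

# Toy comparison on the integral of e^u over [0, 1], exact value e - 1
f = lambda u: np.exp(u)
mc = f(np.random.default_rng(0).random(1000)).mean()   # pseudo-random MC
qmc = f(halton(1000, 2)).mean()                        # quasi-random (QMC)
print(abs(mc - (np.e - 1)), abs(qmc - (np.e - 1)))     # QMC error typically smaller

# 2-D low-discrepancy points (bases 2 and 3), e.g. for sampling a photon
# bundle's emission wavelength and direction cosine
pts = np.column_stack([halton(1000, 2), halton(1000, 3)])
```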

Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks

Procedia PDF Downloads 199
281 Reduction of Specific Energy Consumption in Microfiltration of Bacillus velezensis Broth by Air Sparging and Turbulence Promoter

Authors: Jovana Grahovac, Ivana Pajcin, Natasa Lukic, Jelena Dodic, Aleksandar Jokic

Abstract:

To obtain purified biomass for use in plant pathogen biocontrol or as a soil biofertilizer, it is necessary to eliminate residual broth components at the end of the fermentation process. The main drawback of membrane separation techniques is permeate flux decline due to membrane fouling. Fouling mitigation measures increase the pressure drop along the membrane channel due to the increased resistance to flow of the feed suspension, thus increasing the hydraulic power drop. At the same time, these measures increase the permeate flux due to the reduced resistance of the filtration cake on the membrane surface. Because of these opposing effects, the energy efficiency of fouling mitigation measures is limited, and their application is justified by showing a reduction in specific energy consumption compared to the case without any mitigation. In this study, the influence of a static mixer (Kenics) and air sparging (two-phase flow) on the reduction of specific energy consumption (ER) was investigated. Cultivation of Bacillus velezensis was carried out in a 3-L bioreactor (Biostat® Aplus) with 2 L working volume, equipped with two parallel Rushton turbines and without internal baffles, at 28 °C and 150 rpm with an aeration rate of 0.75 vvm for 96 h. The experiments were carried out in a conventional cross-flow microfiltration unit. During the experiments, permeate and retentate were recycled back to the broth vessel to simulate a continuous process. The single-channel ceramic membrane (TAMI Deutschland) used had a nominal pore size of 200 nm, a length of 250 mm and an inner/external diameter of 6/10 mm. The useful membrane channel surface was 4.33×10⁻³ m². Air sparging was provided by pressurized air connected via a three-way valve to the feed tube by a simple T-connector without a diffuser. The different approaches to flux improvement are compared in terms of energy consumption. The reduction of specific energy consumption compared to microfiltration without fouling mitigation is around 49% and 63% for the two-phase flow and the static mixer, respectively. For the combination of these two fouling mitigation methods, ER is 60%, i.e., slightly lower than with the turbulence promoter alone. The reason for this result is that the flux increase is driven mainly by the presence of the Kenics static mixer, while sparging increases the energy used during microfiltration. Comparing the combined method with the turbulence promoter alone, ER is negative (-7%), which can be explained by the increased power consumption for air flow with only a moderate contribution to the flux increase. Another confirmation of this can be found by comparing the combined method with two-phase flow alone: in this case the energy reduction (ER) is 22%, demonstrating that the turbulence promoter is more efficient than two-phase flow. The antimicrobial activity of the Bacillus velezensis biomass against phytopathogenic isolates of Xanthomonas campestris was preserved under the different fouling reduction methods.
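
A simplified sketch of the energy bookkeeping behind ER is given below; the definitions (hydraulic power = pressure drop × feed flow, plus sparging power) and all the numbers are illustrative assumptions, with only the membrane area taken from the abstract:

```python
def specific_energy(delta_p, q_feed, flux, area, p_air=0.0):
    """Simplified specific energy consumption in J per m^3 of permeate:
    (hydraulic power + sparging power) / permeate flow, all SI units.
    delta_p: pressure drop along the module (Pa); q_feed: feed flow (m^3/s);
    flux: permeate flux (m^3/m^2/s); area: membrane area (m^2)."""
    return (delta_p * q_feed + p_air) / (flux * area)

def energy_reduction(e_ref, e_mit):
    """ER relative to microfiltration without fouling mitigation."""
    return (e_ref - e_mit) / e_ref

# Toy numbers only: mitigation raises the flux but also the pumping/air power
e0 = specific_energy(delta_p=4e4, q_feed=1e-5, flux=2e-5, area=4.33e-3)
e1 = specific_energy(delta_p=9e4, q_feed=1e-5, flux=6e-5, area=4.33e-3, p_air=0.2)
print(f"ER = {energy_reduction(e0, e1):.0%}")
```

The sign of ER captures exactly the trade-off the abstract reports: a mitigation measure pays off only when its flux gain outweighs the extra pumping or sparging power it demands.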

Keywords: Bacillus velezensis, microfiltration, static mixer, two-phase flow

Procedia PDF Downloads 98
280 Radiofrequency and Near-Infrared Responsive Core-Shell Multifunctional Nanostructures Using Lipid Templates for Cancer Theranostics

Authors: Animesh Pan, Geoffrey D. Bothun

Abstract:

With the development of nanotechnology, research in multifunctional delivery systems has gained new pace and dimension. An incipient challenge is to design an all-in-one delivery system that can be used for multiple purposes, including tumor-targeting therapy, radio-frequency (RF)-, near-infrared (NIR)-, light-, or pH-induced controlled release, photothermal therapy (PTT), photodynamic therapy (PDT), and medical diagnosis. In this regard, various inorganic nanoparticles (NPs) are known to show great potential as the 'functional components' because of their fascinating and tunable physicochemical properties and the possibility of multiple theranostic modalities from individual NPs. Magnetic, luminescent, and plasmonic properties are the three most extensively studied and, more importantly, biomedically exploitable properties of inorganic NPs. Although successful attempts at combining any two of the above-mentioned functionalities have been made, integrating all of them in one system has remained a challenge. With that in mind, the controlled design of complex colloidal nanoparticle systems is one of the most significant challenges in nanoscience and nanotechnology, and systematic and planned studies providing better insight are needed. We report a multifunctional liposome-based delivery platform loaded with a drug, iron-oxide magnetic nanoparticles (MNPs), and a gold shell on the liposome surface, synthesized using a lipid-with-polyelectrolyte (layersome) templating technique. MNPs and the anti-cancer drug doxorubicin (DOX) were co-encapsulated inside liposomes composed of zwitterionic phosphatidylcholine and anionic phosphatidylglycerol using the reverse-phase evaporation (REV) method. The liposomes were coated with a positively charged polyelectrolyte (poly-L-lysine) to enrich the interface with gold anions, exposed to a reducing agent to form a gold nanoshell, and then capped with thiol-terminated polyethylene glycol (SH-PEG2000). The core-shell nanostructures were characterized by different techniques: UV-Vis/NIR scanning spectrophotometry, dynamic light scattering (DLS), and transmission electron microscopy (TEM). This multifunctional system achieves a variety of functions, such as radiofrequency (RF)-triggered release, chemo-hyperthermia, and NIR laser-triggered photothermal therapy. Herein, we highlight some of the remaining major design challenges in combination with preliminary studies assessing therapeutic objectives. We demonstrate an efficient loading and delivery system with therapeutic capabilities, achieving significant cell death of human cancer cells (A549). Coupling RF and NIR excitation to the doxorubicin-loaded core-shell nanostructure helped secure targeted and controlled drug release to the cancer cells. The present core-shell multifunctional system, with its multimodal imaging and therapeutic capabilities, would be an eminent candidate for cancer theranostics.

Keywords: cancer theranostics, multifunctional nanostructure, photothermal therapy, radiofrequency targeting

Procedia PDF Downloads 106
279 Cognitive Radio in Aeronautics: Comparison of Some Spectrum Sensing Techniques

Authors: Abdelkhalek Bouchikhi, Elyes Benmokhtar, Sebastien Saletzki

Abstract:

The aeronautical field is experiencing RF spectrum congestion due to the constant increase in the number of flights, aircraft and telecom systems on board; in addition, these on-board systems are bulky in size, weight and energy consumption. Cognitive radio helps solve the spectrum congestion issue in particular through its capacity to detect idle frequency channels, allowing opportunistic exploitation of the RF spectrum. The present work aims to propose a new use case for aeronautical spectrum sharing and to study the performance of three different detection techniques within this use case: the energy detector, the matched filter and the cyclostationary detector. The spectrum in the proposed cognitive radio is allocated dynamically, with each cognitive radio following a cognitive cycle. Spectrum sensing is a crucial step whose goal is to gather data about the surrounding environment; a cognitive radio can use different sensors: antennas, cameras, accelerometers, thermometers, etc. In the IEEE 802.22 standard, for example, a primary user (PU) always has priority to communicate, and when a frequency channel used by the primary user is idle, the secondary user (SU) is allowed to transmit in this channel. The Distance Measuring Equipment (DME) is composed of a UHF transmitter/receiver (interrogator) in the aircraft and a UHF receiver/transmitter on the ground, while future cognitive radios will be used jointly to alleviate the spectrum congestion issue in the aeronautical field. LDACS, for example, is a good candidate; it provides two isolated data links: ground-to-air and air-to-ground. The first contribution of the present work is a strategy for sharing the L-band. The adopted spectrum sharing strategy is as follows: the DME plays the role of the PU, the licensed user, and the LDACS1 systems are the SUs. The SUs may use the L-band channels opportunistically as long as they do not cause harmful interference affecting the QoS of the DME system. Spectrum sensing is a key step: it detects spectrum holes by determining whether the primary signal is present in a given frequency channel. A missed detection of primary user presence creates interference between PU and SU and seriously affects the QoS of the legacy radio. In this study, brief definitions, concepts and the state of the art of cognitive radio are first presented. Then, a study of three communication channel detection algorithms in a cognitive radio context is carried out, from the point of view of functionality, hardware requirements and signal detection capability in the aeronautical field. The detection problem is then modeled with the three methods (energy, matched filter, and cyclostationary), and an algorithmic description of these detectors is given, after which we study and compare the performance of the algorithms. Simulations were carried out using MATLAB software, and the results were analyzed based on ROC curves for SNR between -10 dB and 20 dB. The three detectors were tested with both synthetic and real-world signals.
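
Of the three detectors compared, the energy detector is the simplest to sketch: compare the average received energy against a threshold set from the noise variance and a target false-alarm probability (Gaussian large-N approximation). The signal model below is a toy illustration, not the DME/LDACS waveforms used in the study:

```python
import numpy as np
from scipy.stats import norm

def energy_detector(x, noise_var, pfa):
    """Energy detection with the threshold set for a target false-alarm
    probability Pfa, using the Gaussian approximation valid for large N."""
    N = len(x)
    T = np.sum(np.abs(x) ** 2) / N                         # test statistic
    thresh = noise_var * (1.0 + norm.isf(pfa) * np.sqrt(2.0 / N))
    return T > thresh                                       # True: PU present

# Toy check at SNR = -10 dB with unit-variance noise
rng = np.random.default_rng(0)
N, snr = 4096, 10 ** (-10 / 10)
x = rng.normal(0, 1, N) + np.sqrt(snr) * rng.normal(0, 1, N)
print(energy_detector(x, noise_var=1.0, pfa=0.1))
```

Sweeping the SNR and threshold in such a model is how the ROC curves mentioned above are traced; the matched filter and cyclostationary detectors trade this simplicity for better low-SNR performance at the cost of prior signal knowledge and computation.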

Keywords: aeronautic, communication, navigation, surveillance systems, cognitive radio, spectrum sensing, software defined radio

Procedia PDF Downloads 146